\section{Introduction}
The investigation of in-medium properties of hadrons has
found widespread interest during the last decade. This interest was
triggered by two aspects.
The first aspect was a QCD sum-rule based prediction by
Hatsuda and Lee\cite{HL} in 1992 that the masses of vector mesons
should drop dramatically as a function of nuclear density. It was widely
felt that an
experimental verification of this prediction would establish a
long-sought direct link between quark degrees of freedom and nuclear
hadronic interactions. In the same category fall the predictions of
Brown and Rho that argued for a general scaling for hadron masses with
density\cite{Brown-Rho}.
The second aspect is that even in ultrarelativistic heavy-ion reactions,
which search for
observables of a quark-gluon plasma phase of nuclear matter, many
relatively low-energy ($\sqrt{s} \approx 2 - 4 \:
\mbox{GeV}$) final state interactions inevitably take place. These interactions
involve collisions between many mesons for which the
cross-sections and meson self-energies in the nuclear medium are not
known, but may influence the interpretation of the experimental
results.
Hadron properties in medium involve masses, widths and coupling
strengths of these hadrons. In lowest order in the nuclear density all
of these are linked by the $t \rho$ approximation that assumes that the
interaction of a hadron with many nucleons is simply given by the
elementary $t$-matrix of the hadron-nucleon interaction multiplied with
the nuclear density $\rho$. For vector mesons this approximation reads
\begin{equation}\label{Vself}
\Pi_{\rm V} = - 4 \pi f_{\rm VN} (0) \rho
\end{equation}
where $f_{\rm VN}$ is the forward-scattering amplitude of the vector
meson (V) nucleon (N) interaction. Approximation (\ref{Vself}) is good
for low densities ($\Pi_{\rm V}$ is linear in $\rho$) and/or large
relative momenta where the vector meson `sees' only one nucleon at a
time. Relation (\ref{Vself}) also neglects the Fermi-motion of the
nucleons although this could easily be included.
Simple collision theory\cite{EJ,Kon} then gives the shift of mass and
width of a meson in nuclear matter as
\begin{eqnarray}\label{masswidth}
\delta m_{\rm V} &=& - \gamma v \sigma_{\rm VN} \eta \rho \nonumber \\
\delta \Gamma_{\rm V} &=& \gamma v \sigma_{\rm VN} \rho ~.
\end{eqnarray}
Here, according to the optical theorem, $\eta$ is given by the ratio of
real to imaginary part of the forward scattering amplitude
\begin{equation}
\eta = \frac{\Re{f_{\rm VN}(0)}}{\Im{f_{\rm VN}(0)}} ~.
\end{equation}
The expressions (\ref{masswidth}) are interesting since an
experimental observation of these mass- and width-changes could give
valuable information on the free cross sections $\sigma_{\rm VN}$ which
may not be available otherwise. The more fundamental question, however, is if
there is more to in-medium properties than just the simple collisional
broadening predictions of (\ref{masswidth}).
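As an orientation, the collisional estimate (\ref{masswidth}) can be evaluated numerically. The cross section $\sigma_{\rm VN}$ and the ratio $\eta$ used below are illustrative assumptions, not values taken from the text:

```python
# Hedged estimate of the collisional mass shift and broadening,
# delta_m = -gamma*v*sigma*eta*rho and delta_Gamma = gamma*v*sigma*rho.
# sigma_rhoN ~ 25 mb and eta ~ 0.2 are assumptions for illustration.
HBARC = 197.3          # MeV fm, converts fm^-1 to MeV

def collisional_shift(m, p, sigma, eta, rho):
    """m, p in MeV; sigma in fm^2; rho in fm^-3. Returns (dm, dGamma) in MeV."""
    gamma_v = p / m                        # gamma * v = p / m
    dGamma = gamma_v * sigma * rho * HBARC
    dm = -eta * dGamma
    return dm, dGamma

# rho meson at p = 1 GeV/c in normal nuclear matter (rho_0 = 0.16 fm^-3):
dm, dGamma = collisional_shift(m=770.0, p=1000.0, sigma=2.5, eta=0.2, rho=0.16)
print(f"delta_m ~ {dm:.0f} MeV, delta_Gamma ~ {dGamma:.0f} MeV")
```

With these inputs the broadening comes out at roughly 100 MeV at normal nuclear density, which sets the scale for the discussions below.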
\section{Fundamentals of Dilepton Production}
From QED it is well known that vacuum polarization, i.e. the virtual
excitation of electron-positron pairs, can dress the photon. Because
the quarks are charged, quark-antiquark loops can also dress the photon.
These virtual
quark-antiquark pairs have to carry the quantum numbers of the photon,
i.e.\ $J^\pi = 1^-$. The $q \bar q$ pairs can thus be viewed as vector
mesons which have the same quantum numbers; this is the basis of
Vector Meson Dominance (VMD).
The vacuum polarization tensor is then, in complete analogy to QED,
given by
\begin{equation}\label{jjcorr}
\Pi^{\mu \nu} = \int d^4x \, e^{i q x}
\langle 0 | T[j^\mu (x) j^\nu (0)] | 0 \rangle
= \left( g^{\mu \nu} - \frac{q^\mu q^\nu}{q^2} \right) \Pi(q^2)
\end{equation}
where $T$ is the time ordering operator.
Here the tensor structure has been exhibited
explicitly. This so-called current-current correlator contains the
currents $j^\mu$ with the correct charges of the vector mesons in
question. Simple VMD\cite{Sak} relates these currents to the vector
meson fields
\begin{equation}\label{VMD}
j^\mu(x) = \frac{{m_{\rm V}^0}^2}{g_{\rm V}} V^\mu(x)~.
\end{equation}
Using this equation one immediately sees that the current-current
correlator (\ref{jjcorr}) is nothing else but the vector meson
propagator $D_{\rm V}$
\begin{equation}\label{Vprop}
\Pi(q^2) = \left( \frac{{m_{\rm V}^0}^2}{g_{\rm V}} \right)^2
D_{\rm V} (q^2) ~.
\end{equation}
The scalar part of the vector meson propagator is given by
\begin{equation}\label{Vprop1}
D_{\rm V}(q^2) = \frac{1}{q^2 - {m_{\rm V}^0}^2 - \Pi_{\rm V}(q^2)}~.
\end{equation}
Here $\Pi_{\rm V}$ is the selfenergy of the vector meson.
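A minimal numerical sketch of this propagator uses the simplest vacuum ansatz $\Pi_{\rm V}(q^2) = -i \sqrt{q^2}\, \Gamma_{\rm V}(q^2)$ with a running $p$-wave $2\pi$ width; this is an illustrative assumption, the in-medium $\Pi_{\rm V}$ discussed below is more involved:

```python
# Spectral function -Im D_V / pi of the scalar propagator
# D_V = 1 / (s - m^2 + i*sqrt(s)*Gamma(s)), with an illustrative
# p-wave two-pion width scaled from the pole value.
import math

M_RHO, M_PI = 0.770, 0.140   # GeV

def gamma_rho(s, gamma0=0.150):
    """p-wave 2pi width (GeV), scaled from the pole value (illustrative)."""
    if s <= 4 * M_PI**2:
        return 0.0
    p  = math.sqrt(s / 4 - M_PI**2)
    p0 = math.sqrt(M_RHO**2 / 4 - M_PI**2)
    return gamma0 * (p / p0) ** 3 * M_RHO / math.sqrt(s)

def spectral(s):
    """-Im D_V / pi for s = q^2 in GeV^2."""
    g = math.sqrt(s) * gamma_rho(s)
    return (g / math.pi) / ((s - M_RHO**2) ** 2 + g ** 2)

# The spectral function peaks close to s = m_rho^2 ~ 0.59 GeV^2:
peak_s = max((s / 1000 for s in range(80, 1600)), key=spectral)
```

In-medium modifications then amount to replacing this free self-energy by a density-dependent one, which shifts and reshapes exactly this distribution.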
For the free $\rho$ meson information about $\Pi(q^2)$ can be obtained
from hadron production in $e^+ e^-$ annihilation reactions\cite{PS}
\begin{equation}\label{hadprod}
R(s) = \frac{\sigma (e^+ e^- \rightarrow \mbox{hadrons})}
{\sigma(e^+ e^- \rightarrow \mu^+ \mu^-)}
= - \frac{12 \pi}{s} \Im \Pi(s) ~
\end{equation}
with $s = q^2$. This determines the imaginary part of $\Pi$ and,
invoking vector meson dominance, also of $\Pi_{\rm V}$. The data (see,
e.g. Fig.\ 18.8 in\cite{PS}, or Fig.\ 1 in\cite{KW}) clearly show at
small $\sqrt{s}$ the vector meson peaks, followed by a flat plateau
starting at $\sqrt{s} \approx 1.5 \: \mbox{GeV}$
described by perturbative QCD.
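The plateau value itself follows from the parton-level result: at leading order $R = N_{\rm c} \sum_q e_q^2$, which for the three light flavours gives $R = 2$, a short check:

```python
# Leading-order pQCD plateau of R(s) for three active light flavours:
# R = N_c * sum of squared quark charges.
from fractions import Fraction

N_C = 3
CHARGES = {"u": Fraction(2, 3), "d": Fraction(-1, 3), "s": Fraction(-1, 3)}

R_plateau = N_C * sum(e ** 2 for e in CHARGES.values())
print(R_plateau)   # -> 2
```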
In order to get the in-medium properties of the vector mesons, i.e.\
their selfenergy $\Pi_{\rm V}$, we now have two ways to proceed: We
can, \emph{first}, try to determine the current-current correlator by
using QCD sum rules\cite{HL}; from this correlator we can then
determine the self-energy of the vector meson following
eqs.\ (\ref{Vprop}),(\ref{Vprop1}). The \emph{second} approach consists
in setting up a hadronic model and calculating the selfenergy of the
vector meson by simply dressing its propagators with appropriate
hadronic loops. In the following sections I will discuss both of these
approaches.
\subsection{QCD sum rules and in-medium masses}
The QCD sum rule for the current-current correlator is obtained by
evaluating the function $R(s)$, and thus $\Im \Pi(s)$ (see
(\ref{hadprod})), in a hadronic model on one hand and in a QCD-based
model on the other. The latter, QCD based, calculation uses the fact
that the current-current correlator (\ref{jjcorr}) can be Taylor
expanded in the space-time distance $x$ for small space-like distances
between $x$ and $0$; this is nothing else than the Operator Product
Expansion (OPE)\cite{PS}. In this way we obtain for the free meson
\begin{equation}
R^{\rm OPE}(M^2) = \frac{1}{8 \pi^2} \left(1 + \frac{\alpha_{\rm
S}}{\pi} \right) + \frac{1}{M^4} m_{\rm q} \langle \bar{q} q \rangle +
\frac{1}{24 M^4} \langle \frac{\alpha_{\rm S}}{\pi} G^2 \rangle -
\frac{56}{81 M^6} \pi \alpha_{\rm S} \kappa \langle \bar{q} q \rangle^2
~.
\end{equation}
Here $M$ denotes the so-called Borel mass. The expectation values
appearing here are the quark- and gluon-condensates. The last term here
contains the mean field approximation
$
\langle (\bar{q} q)^2 \rangle = \kappa \langle \bar{q} q \rangle^2
$.
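For orientation, the OPE side can be evaluated at a typical Borel mass. All condensate values below are standard vacuum estimates inserted as assumptions for illustration, not numbers taken from the text:

```python
# Hedged evaluation of R^OPE(M^2) at M = 1 GeV with illustrative inputs:
# alpha_s ~ 0.36, m_q<qq> from GMOR (-f_pi^2 m_pi^2 / 2),
# <(alpha_s/pi)G^2> ~ 0.012 GeV^4, <qq> ~ (-0.25 GeV)^3, kappa ~ 2.
import math

ALPHA_S = 0.36
MQ_QQ   = -0.5 * 0.0924**2 * 0.138**2   # GeV^4 (negative)
G2      = 0.012                          # GeV^4
QQ      = (-0.25) ** 3                   # GeV^3
KAPPA   = 2.0

def r_ope(M2):
    """R^OPE as a function of the squared Borel mass M2 (GeV^2)."""
    return (1 / (8 * math.pi**2) * (1 + ALPHA_S / math.pi)
            + MQ_QQ / M2**2
            + G2 / (24 * M2**2)
            - 56 / 81 * math.pi * ALPHA_S * KAPPA * QQ**2 / M2**3)

R1 = r_ope(1.0)   # dominated by the perturbative term 1/(8 pi^2)
```

With these inputs the power corrections stay at the few-percent level for $M \gtrsim 1$ GeV, which is what makes the expansion useful there.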
The other representation of $R$ in the space-like region can be
obtained by analytically continuing $\Im \Pi(s)$ from the time-like to
the space-like region by means of a twice subtracted dispersion
relation. This finally gives
\begin{equation}
R^{\rm HAD} (M^2) = \frac{\Re{\Pi^{\rm HAD}(0)}}{M^2}
- \frac{1}{\pi M^2} \int_0^\infty
ds \, \Im \Pi^{\rm HAD}(s) \frac{s}{s^2 + \epsilon^2}
\exp\left(-s/M^2\right)~.
\end{equation}
Here $\Pi^{\rm HAD}$ represents a phenomenological hadronic spectral
function. Since for the vector mesons this spectral
function is dominated by resonances in the low-energy part it
is usually parametrized in terms of a resonance part
with parameters such as strength, mass and width with a
connection to the QCD perturbative result for the quark structure for
the current-current correlator at higher energies (for details
see Leupold et al.\cite{Leupoldbar} in these
proceedings and refs.\cite{Leup1,Leup2,Lee}).
The QCD sum rule is then obtained by setting
\begin{equation}
R^{\rm OPE}(M^2) = R^{\rm HAD}(M^2)~.
\end{equation}
Knowing the lhs of this equation then allows one to determine the
parameters in the spectral function appearing in $R^{\rm HAD}$ on the
rhs. If the vector meson moves in the nuclear medium, then $R$ depends also
on its momentum. However, detailed studies\cite{Leup2,Lee} find only a very
weak momentum dependence.
The first applications\cite{HL} of the QCDSR have used a simplified
spectral function, represented by a $\delta$-function at the meson mass
and a perturbative QCD continuum. Such an analysis gives a value for the free
meson mass that agrees with experiment. On this basis the QCDSR has been
applied to the
prediction of in-medium masses of vector mesons by making the
condensates density-dependent (for details see\cite{HL,Leup1,Leup2}).
This then leads to a lowering of the vector meson mass in nuclear
matter.
This analysis has recently been repeated with a spectral function that
uses a Breit-Wigner parametrization with finite width. In this
study\cite{Leup1} it turns out that QCD sum rules are compatible with a
wide range of masses and widths (see also ref.\cite{KW}). Only if the
width is -- artificially -- kept at zero does the mass of the vector meson
have to drop with
nuclear density\cite{HL}. However, the opposite scenario, i.e.\ a
significant broadening of the meson at nearly constant pole position,
is also compatible with the QCDSR.
\subsection{Hadronic models}
Hadronic models for the in-medium properties of hadrons start from
known interactions between the hadrons and the nucleons. In principle,
these then allow one to calculate the forward scattering amplitude
$f_{\rm VN}$ for vector meson interactions. Many such
models have been developed over the last few
years\cite{KW,HFN,AK,FP,Rapp}.
The model of Friman and Pirner\cite{FP} was taken up by Peters et
al.\cite{Peters} who also included
$s$-wave nucleon resonances. It turned out that in this analysis the
$D_{13}\: N(1520)$ resonance plays an overwhelming role. This resonance
has a significant $\rho$ decay branch of about 20 \%. Since at the pole
energy of 1520 MeV the $\rho$ decay channel is not yet energetically
open this decay can
only take place through the tails of the mass distributions of
resonance and meson. The relatively large relative decay branch then
translates into a very strong $N^* N \rho$ coupling constant (see
also\cite{Rapp,PE}).
The main result of this $N^* h$ model for the $\rho$ spectral
function is a considerable broadening for the latter. This
is primarily so for the transverse vector mesons (see Fig. \ref{Figrhot}),
whereas the longitudinal degree of freedom gets only a little broader with
only a slight change of strength downwards\cite{Peters}.
\begin{figure}
\centerline{\includegraphics[height=5cm]{tspect1.eps}}
\caption{Spectral function of transverse $\rho$ mesons as a function of
their three-momentum and invariant mass (from \protect\cite{Peters}).}
\label{Figrhot}
\end{figure}
The results shown in Fig.\ \ref{Figrhot} actually go beyond the simple ``$t\rho$''
approximation discussed earlier (see (\ref{Vself})) in that they
contain higher order density effects: a lowering of the $\rho$ meson
strength leads to a strong
increase of the $N(1520)$ $\rho$-decay width which in turn affects the
spectral function. The result is the very broad, rather featureless spectral function for the
transverse $\rho$ shown in Fig.\ \ref{Figrhot}.
\section{Experimental Observables}
In this section I will now discuss various possibilities to verify
experimentally the predicted changes of the $\rho$ meson properties in
medium.
\subsection{Heavy-Ion Reactions}
Early work\cite{Wolf,XK} on an experimental verification of the
predicted broadening of the $\rho$ meson spectral function has
concentrated on the dilepton spectra measured at relativistic energies
(about 1 -- 4 A GeV) at the BEVALAC, whereas more recently many
analyses have been performed for the CERES and HELIOS data obtained at
ultrarelativistic energies (150 -- 200 A GeV) at the SPS. In such collisions
nuclear densities of about 2 - 3 $\rho_0$ can
already be reached in the relativistic domain; in the ultrarelativistic
energy range baryon densities of up to $10 \rho_0$ are predicted (for a
recent review see\cite{BC}). Since the selfenergies of produced vector
mesons are at least proportional to the density $\rho$ (see
(\ref{Vself})) heavy-ion reactions seem to offer a natural
enhancement factor for any in-medium changes.
The CERES data\cite{CERES} indeed seem to confirm this expectation.
The present situation is -- independent of
the special model used for the description of the data -- that agreement
with the measured dilepton mass spectrum in the mass range between
about 300 and 700 MeV for the 200 A GeV $S + Au$ and $S + W$ reactions
can only be obtained if $\rho$-meson strength is shifted downwards (for
a more detailed discussion see\cite{BC,QM}). For the recently measured 158
A GeV $Pb + Au$ reaction the situation is less clear: here the
calculations employing `free' hadron properties lie at the lower end of
the experimental error bars\cite{BC}.
However, all the
predictions are based on equilibrium models in which the properties of
a $\rho$ meson embedded in nuclear matter with infinite space-time
extensions are calculated. An ultrarelativistic heavy-ion collision is
certainly far away from this idealized scenario. In addition, heavy-ion
collisions necessarily average over the polarization degrees of freedom.
The two physically
quite different scenarios, broadening the spectral
function or shifting simply the $\rho$ meson mass downwards while keeping
its free width, thus lead to indistinguishable observable consequences
in such collisions.
This can be understood by observing that even in an ultrarelativistic
heavy-ion collision, in which very high baryonic densities are reached,
a large part of the observed dileptons is actually produced at rather
low densities (see Fig. 3 in\cite{Cass}).
\section{$\pi + A$ Reactions}
Motivated by this observation we have performed calculations of the
dilepton invariant mass spectra in $\pi^-$ induced reactions on
nuclei\cite{Weidmann}; the experimental study of such reactions will be
possible in the near future at GSI. The calculations are based on a
semiclassical transport theory, the so-called Coupled Channel BUU
method (for details see\cite{Teis}) in which the nucleons, their
resonances up to 2 GeV mass and the relevant mesons are propagated from
the initial contact of projectile and target until the final stage of
the collision. This method allows one to
describe strongly-coupled, inclusive processes without any a priori
assumption on the equilibrium or preequilibrium nature of the process.
In these reactions the dominant emission channels are the same as in
ultrarelativistic heavy-ion collisions; this can be seen in Fig.
\ref{pidilept}
\begin{figure}
\centerline{\includegraphics[height=8cm]{piafig51.eps}}
\caption{Invariant mass yield of dileptons produced in pion-induced
reactions at 1.3 GeV on various nuclei. The topmost part shows results
of calculations employing free hadron properties, the results shown
in the middle part
are based on simple mass-shifts\protect\cite{HL}, and the curves
in the lowest part show results of a calculation using the spectral function from
\protect\cite{Peters} for the $\rho$ meson and a
mass-shift and collisional broadening for the $\omega$ meson (from
\protect\cite{Weidmann}).} \label{pidilept}
\end{figure}
where I show the results for the dilepton spectra produced by
bombarding Pb nuclei with 1.3 GeV pions.
In the top picture in Fig. \ref{pidilept} I show the results of a
calculation that assumes free hadronic properties for all radiation
sources. The middle picture shows the results of a calculation with
lowered in-medium masses plus collisional broadening of the vector
mesons, calculated dynamically according to (\ref{masswidth}), and the
bottom figure shows the results of a calculation employing the $\rho$
spectral function calculated in the resonance-hole model discussed
above\cite{Peters}. In the lower two pictures the most relevant change
takes place for the $\rho$, which is significantly widened, and the
$\omega$, which develops a shoulder on its low mass tail. The latter is
primarily due to the nuclear density profile: $\omega$'s are produced
at various densities from $\rho_0$ down to 0.
The dilepton yield in the range of about 600 - 700 MeV is increased by
the in-medium effects by up to a factor of 2.5, comparable to
the ultrarelativistic heavy-ion collisions discussed
above. In particular the $\omega$ shoulder should clearly be visible.
\section{Photonuclear Reactions}
Pion induced reactions have the disadvantage that the projectile
already experiences strong initial state interactions so that many
produced vector mesons are located in the surface where the
densities are low. A projectile that is free of this undesirable
behavior is the photon.
In addition, the calculated strong differences in the in-medium
properties of longitudinal and transverse vector mesons can probably
only be verified in photon (real or virtual) induced reactions, where the
incoming
polarization can be controlled. Another approach
would be to measure the coherent photoproduction of vector mesons; here
the first calculation available so far\cite{PE} shows a distinct
difference in the production cross sections of transverse and
longitudinal vector mesons.
\subsection{Dilepton Production}
We have therefore -- also in view of a corresponding proposal for such
an experiment at CEBAF\cite{Preed} -- performed calculations for the
dilepton yield expected from $\gamma + A$ collisions. In these calculations
we have used the same transport-theoretical method as above for the
pion-induced dilepton emission, but now employ a better description
of the population of broad resonances\cite{BratEffe}.
Results of these calculations are shown in Fig. \ref{gammadil}.
\begin{figure}
\centerline{\includegraphics[height=9cm]{gamdil.eps}}
\caption{Invariant mass spectra of dileptons produced in $\gamma +
\mbox{}^{208}Pb$ reactions at 2.2 GeV. The top figure shows the
various radiation sources, the middle figure the total yield from the
top together with the Bethe-Heitler contributions, and the bottom part
shows the expected in-medium effects (from \protect\cite{BratEffe}).}\label{gammadil}
\end{figure}
In the top figure the various sources of
dilepton radiation are shown. The dominant sources are again the same
as those in pion- and heavy-ion induced reactions, but the (uncertain)
$\pi N$ bremsstrahlung does not contribute in this reaction. The middle
part of this figure shows both the Bethe-Heitler (BH)
contribution and the contribution from all the hadronic sources. In the
lowest (dot-dashed) curve we have chosen a cut on the product of the
four-momenta of incoming photon ($k$) and lepton ($p$) in order to
suppress the BH contribution.
It is seen that even without BH subtraction the vector meson signal
surpasses that of the BH process.
The lowest
figure, finally, shows the expected in-medium effects\cite{BratEffe}: the
sensitivity in the region between about 300 and 700 MeV amounts to a
factor of about 3 and is thus in the same order of magnitude as in the
ultrarelativistic heavy-ion collisions.
\subsection{Photoabsorption}
Earlier in this paper I have discussed that a strong change of the
$\rho$ meson properties comes about because of its coupling to $N^* h$
excitations and that this coupling -- through a higher-order effect --
in particular leads to a very strong increase of the $\rho$ decay width
of the $N(1520) \: D_{13}$ resonance.
This increase may provide a reason for the observed disappearance of
the higher nucleon resonances in the photoabsorption cross sections on
nuclei\cite{Bianchi}. The
disappearance in the third region is a consequence of
the Fermi-motion. The disappearance of the second resonance
region, i.e. in particular of the $N(1520)$ resonance, however, presents
an interesting problem; it is obviously a typical in-medium effect.
First explanations\cite{Kon} assumed a very strong collisional
broadening, but in ref.\cite{Effeabs} it has been shown that this
broadening is not strong enough to explain the observed disappearance
of the $D_{13}$ resonance. Since at the energy around 1500 MeV also the
$2 \pi$ channel opens it is natural to look for a possible connection
with the $\rho$ properties in medium. Fig.\ \ref{photabs}
\begin{figure}
\centerline{\includegraphics[height=6cm]{abs.eps}}
\caption{Photoabsorption cross section in the second resonance region.
Shown are the data from ref.\protect\cite{Bianchi}, the free
absorption cross section on the proton, a Breit-Wigner fit with a total
width of 300 MeV (dashed curve) and the result of a
transport-theoretical calculation\protect\cite{Effeabs} with a medium
broadened $\rho$ decay width of the $N(1520)$ (solid curve).}\label{photabs}
\end{figure}
shows the results of such an analysis (see also\cite{PE}). It is
clear that the opening of the phase space for $\rho$ decay of this
resonance provides enough broadening to explain its disappearance.
The talk by N. Bianchi at this conference\cite{Bianchitalk} covers this
topic in much more detail.
\section{Rho-meson properties at higher energies}
The calculations of spectral functions of $\rho$-meson properties in
medium so far have focussed on properties of rather slow mesons.
However, the relations (\ref{masswidth}) can be used to obtain some
information on the in-medium changes to be expected, because the cross
sections entering here contain all the possible couplings.
A first attempt in this direction was made by Eletsky and
Ioffe\cite{EJ} who obtained a \emph{positive} mass shift for $\rho$
mesons with momenta in the GeV region. This analysis has been extended
to the low-energy region by Kondratyuk et al.\cite{KondrCass}, who
included all the relevant nucleon resonances with $\rho$ meson decay
branches. These authors indeed find that at momenta of about
100 MeV/c the mass shift according to (\ref{masswidth}) turns from
negative to positive; the width stays remarkably constant as a function
of $\rho$ momentum at a value of about 250 MeV, corresponding to a
collisional broadening width $\delta \Gamma \approx 100$ MeV (see
Fig.\ \ref{mgam}).
\begin{figure}
\centerline{\includegraphics[height=6cm]{mgam.eps}}
\caption{$\rho$ meson mass and width as a function of momentum (from
\protect\cite{KondrCass}).}\label{mgam}
\end{figure}
This number allows an easy estimate for the onset of shadowing in
high-energy photon-induced particle production experiments. For
example, in high-energy vector meson production\cite{Oneill} a crucial
scale is given by the so-called coherence length
\begin{equation}\label{coh}
l_{\rm c} = \frac{2 \nu }{M_{\rm v}^2 + Q^2}
\end{equation}
where $Q$ is the four-momentum transfer, $\nu$ the energy transfer and
$M_{\rm v}$ the vector meson mass.
This length gives a measure for the distance over which the photon
appears as a $\rho$ meson. An obvious condition for the onset of shadowing
is then
$
l_{\rm c} \ge \lambda_{\rho}
$
where $\lambda_\rho$ is the mean free path of the $\rho$ meson in
nuclear matter. The value of $\delta \Gamma_\rho$ of about 100 MeV
calculated in ref.\cite{KondrCass} translates into a value of
$\lambda_\rho$ of about 2 fm at high momenta. Inserting this value into
the condition above yields for the `shadowing threshold'
\begin{equation}\label{shad1}
\frac{2 \nu}{M_{\rm v}^2 + Q^2} \ge 2 \, \mbox{fm} ~.
\end{equation}
For example, at an energy transfer of 10 GeV, corresponding to the
Hermes regime, we obtain as condition for the shadowing regime
\begin{equation}\label{shadcond}
Q^2 \le 1.5 \, \mbox{GeV}^2 ~.
\end{equation}
Increasing $Q^2$ (at fixed $\nu$) beyond this value yields a larger
transparency of the photon. This effect, which also shows up in the
calculations of Kopeliovich and Huefner\cite{KH}, must clearly be taken
into account in experiments of this sort searching for color
transparency; for further discussions of this exciting topic I refer to
the talk of T. O'Neill at this conference\cite{Oneill}.
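The arithmetic behind this threshold estimate is easily checked, taking $\lambda_\rho \approx 2$ fm and the $\rho$ mass (rounded input values, so the result matches the quoted scale only approximately):

```python
# Shadowing-threshold check: demand l_c >= lambda_rho with the coherence
# length l_c = 2*nu/(M_v^2 + Q^2), converted to fm via hbar*c.
HBARC = 0.1973   # GeV fm
M_V   = 0.770    # GeV, rho meson mass

def q2_max(nu, lam=2.0):
    """Largest Q^2 (GeV^2) with l_c >= lam (fm) at energy transfer nu (GeV)."""
    return 2.0 * nu * HBARC / lam - M_V**2

# At nu = 10 GeV (Hermes regime) this reproduces the ~1.5 GeV^2 scale:
Q2 = q2_max(10.0)
print(f"Q^2 <= {Q2:.1f} GeV^2")
```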
\section{Summary}
In this talk I have concentrated on a discussion of the in-medium
properties of the vector mesons. I have shown that the scattering
amplitudes of these mesons on nucleons determine their in-medium
spectral functions, at least in lowest order in the
nuclear density. The dilepton channel can give information on the
properties of the $\rho$ and $\omega$ deep inside nuclear matter
whereas the $\pi$ decay channels --
because of their strong final state interactions -- can give only
information about the vector mesons in the low-density region.
The original QCD sum rule predictions of a lowered
vector meson mass have turned out to be too naive, because they were based on
the assumption of a sharp resonance state. In contrast, all specific
hadronic models yield very broad spectral functions for the $\rho$
meson with a distinctly different behavior of longitudinal and transverse
$\rho$'s. Recent QCD sum rule analyses indeed do not predict a lowering
of the mass, but only yield --
rather wide -- constraints on the mass and width of the vector mesons.
I have also discussed that hadronic models that include the coupling of
the $\rho$ meson to nucleon resonances and a corresponding shift of
vector meson strength to lower masses give a natural backreaction on
the width of these resonances. In particular, the $N(1520) D_{13}$
resonance is significantly broadened because of its very large coupling
constant to the $\rho N$ channel. Since the $\rho$ decay
of this resonance is
deduced only from a partial wave analysis of $2 \pi$ production\cite{Manley},
it would be very important to have new, better data (and a new analysis of
these) for this channel.
A large part of the unexpected surplus of dileptons
produced in ultrarelativistic heavy-ion collisions can
presently be reproduced only by including a shift of $\rho$ strength to
lower masses. However, heavy-ion reactions are quite insensitive to
details of in-medium changes.
Motivated by this observation I have then discussed predictions for
experiments using pion and photon beams as incoming particles. In both
cases the in-medium sensitivity of the dilepton mass spectra is
comparable to that in heavy-ion experiments. In addition, such experiments
take place much closer to equilibrium, an assumption on which all
predictions of in-medium properties are based. Furthermore, only in such
experiments will it be possible to look for polarization effects. I have
also discussed the intriguing suggestion that the observed disappearance
of the second resonance region in the
photoabsorption cross section is due to the broadening of the $N(1520)$
resonance caused by the shift of $\rho$ strength to lower masses.
At high energies, finally, the in-medium broadening of the $\rho$ leads
to a mean free path of about 2 fm in nuclear matter. Shadowing will
thus only be essential if the coherence length is larger than this mean
free path. Increasing the four-momentum transfer $Q^2$ at fixed energy
transfer $\nu$ leads to smaller coherence length and thus leads to a larger
transparency. It is essential to verify this effect experimentally; it is
superimposed on the color-transparency effect that is still being
looked for.
\section{Introduction}
The Circinus galaxy (A1409-65) is a nearby ($\simeq$4 Mpc) gas rich spiral
lying close to the galactic plane in a region of relatively low
(\mbox{${A_{\rm V}}$}$\simeq$1.5 mag) interstellar extinction (Freeman et al. \cite{freeman}).
Several observed characteristics indicate that this galaxy hosts the nearest
Seyfert 2 nucleus known.
These include optical images showing a spectacular
[OIII] cone (Marconi et al. 1994 hereafter \cite{M94});
optical/IR spectra rich
in prominent and narrow coronal lines (Oliva et al. 1994,
hereafter \cite{O94},
Moorwood et al. 1996, hereafter \cite{M96});
X-ray spectra displaying a very
prominent Fe-K fluorescent line (Matt et al. \cite{matt96}) and optical
spectropolarimetric data which reveal relatively
broad H$\alpha$ emission in polarized light (Oliva et al. \cite{oliva98}).
Complementary to these is observational evidence that this galaxy
has recently experienced a powerful nuclear starburst which is now
traced by the near IR emission of red supergiants (Oliva et al.
\cite{oliva95}, Maiolino et al. \cite{maiolino})
and which may have propagated outwards
igniting the bright ring of O stars and
HII regions visible in the \mbox{H$\alpha$}\ image (\cite{M94}).
Such a starburst could have been triggered by gas moving toward
the nuclear region and eventually falling onto the accretion
disk around the black hole powering the AGN.
A debated issue is whether nuclear starbursts are common
features of AGNs and if they are more common in type 2
than in type 1 Seyferts, as suggested by e.g. 10$\mu$m observations
(Maiolino et al. \cite{maiolino95}) and studies of the stellar
mass to light ratios (\cite{O94}).
Since starbursts are predicted and observed to deeply modify the
chemical abundances of the host galaxy
(e.g. Matteucci \& Padovani \cite{matteucci93}), such an effect
should also be evident in this and other Seyferts.
However, to the best
of our knowledge, no reliable measurement of metallicity for
the narrow line region clouds of Seyfert 2's
exists in the literature. In particular,
although it has long been known that the large [NII]/H$\alpha$ ratio
typical of Seyferts cannot be easily explained using simple models
with normal
nitrogen abundances (e.g.
Osterbrock \cite{osterbrock89},
Komossa \& Schulz \cite{komossa97}),
the question of whether its absolute (N/H) or relative (e.g. N/O)
abundance is truly different from solar
is still open.
Finding a reliable method to derive metallicities and, therefore, to
trace and put constraints on past starburst activity is the main aim
of this paper.
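For reference, the dereddened intensities $I$ in Table \ref{tab_obs1} follow from the observed fluxes $F$ via the standard extinction correction $I_\lambda = F_\lambda \, 10^{0.4 A_\lambda}$, normalized to $I(\mbox{H}\beta)=100$; a minimal sketch using the $A_\lambda/A_V$ ratios from the table's last column (the exact tabulated values differ at the percent level from this rounded reconstruction):

```python
# Extinction correction behind Table 1, expressed relative to H-beta:
# I/I_Hb = (F/F_Hb) * 10**(0.4 * A_V * (A_lambda/A_V - A_Hbeta/A_V)).
def deredden(F, ratio, F_hbeta=100.0, ratio_hbeta=1.18, A_V=4.5):
    """Dereddened intensity, normalized to I(H-beta) = 100."""
    return 100.0 * (F / F_hbeta) * 10 ** (0.4 * A_V * (ratio - ratio_hbeta))

# H-alpha towards the nucleus: F = 1390, A_lambda/A_V = 0.81
I_ha = deredden(1390.0, 0.81)   # the table lists I = 298
```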
\begin{table}
\caption{Observed and dereddened line fluxes}
\label{tab_obs1}
\vskip-9pt
\def\SKIP#1{\noalign{\vskip#1pt}}
\def\rlap{:}{\rlap{:}}
\def\ \rlap{$^{(1)}$}{\ \rlap{$^{(1)}$}}
\def\ \rlap{$^{(2)}$}{\ \rlap{$^{(2)}$}}
\def\rlap{$^{(3)}$}{\rlap{$^{(3)}$}}
\begin{flushleft}
\begin{tabular}{lrrcrrcc}
\hline
\hline\SKIP{1}
& \multicolumn{3}{c}{Nucleus} & \multicolumn{3}{c}{KnC} \\
& \multicolumn{3}{c}{(4.6"x2")} & \multicolumn{3}{c}{(4.1"x2")} \\
& \multicolumn{3}{c}{\mbox{${A_{\rm V}}$}=4.5} &
\multicolumn{3}{c}{\mbox{${A_{\rm V}}$}=1.9} \\
\SKIP{3}
& F\ \rlap{$^{(1)}$} & I\ \rlap{$^{(2)}$} & & F\ \rlap{$^{(1)}$} & I\ \rlap{$^{(2)}$} & &
${\rm A_\lambda /A_V}$\rlap{$^{(3)}$} \\
\SKIP{1}
\hline
\SKIP{3}
${\rm [OII] }\,\lambda\lambda$3727
& 77 & 270
&
& 78 & 133
&
&1.49 \\
${\rm [NeIII] }\,\lambda$3869
& 45 & 136
&
& 41 & 66
&
&1.45 \\
${\rm [SII] }\,\lambda\lambda$4073
& &
&
& 4\rlap{:} & 7\rlap{:}
&
&1.40 \\
${\rm \mbox{H$\delta$} }\,\lambda$4102
& &
&
& 17 & 24
&
&1.40 \\
${\rm \mbox{H$\gamma$} }\,\lambda$4340
& &
&
& 35 & 46
&
&1.34 \\
${\rm [OIII] }\,\lambda$4363
& &
&
& 16 & 21
&
&1.34 \\
${\rm HeII }\,\lambda$4686
& 32 & 41
&
& 54 & 60
&
&1.24 \\
${\rm [ArIV] }\,\lambda$4711
& &
&
& 9 & 10
&
&1.23 \\
${\rm [ArIV] }\,\lambda$4740
& &
&
& 9 & 10
&
&1.22 \\
${\rm \mbox{H$\beta$} }\,\lambda$4861
& 100 & 100
&
& 100 & 100
&
&1.18 \\
${\rm [OIII] }\,\lambda$4959
& 365 & 320
&
& 317 & 300
&
&1.15 \\
${\rm [OIII] }\,\lambda$5007
& 1245 & 1025
&
& 1048 & 965
&
&1.14 \\
${\rm [FeVI] }\,\lambda$5146
& &
&
& $<$8 & $<$7
&
&1.09 \\
${\rm [NI] }\,\lambda\lambda$5199
& 28 & 18
&
& 9 & 7
&
&1.08 \\
${\rm [FeXIV] }\,\lambda$5303
& $<$17 & $<$10
&
& $<$7 & $<$5
&
&1.05 \\
${\rm HeII }\,\lambda$5411
& &
&
& 5.9 & 4.4
&
&1.02 \\
${\rm [FeVII] }\,\lambda$5721
& 23 & 9
&
& 8.9 & 6.0
&
&0.95 \\
${\rm [NII] }\,\lambda$5755
& &
&
& 7.2 & 4.7
&
&0.95 \\
${\rm HeI }\,\lambda$5876
& 35\rlap{:} & 12\rlap{:}
&
& 15 & 9.7
&
&0.92 \\
${\rm [FeVII] }\,\lambda$6087
& 36 & 11
&
& 16 & 9.7
&
&0.89 \\
${\rm [OI] }\,\lambda$6300
& 165 & 42
&
& 46 & 26
&
&0.85 \\
${\rm [SIII] }\,\lambda$6312
& 20\rlap{:} & 5\rlap{:}
&
& 11 & 6
&
&0.85 \\
${\rm [OI] }\,\lambda$6364
& 56 & 14
&
& 16 & 8.6
&
&0.84 \\
${\rm [FeX] }\,\lambda$6374
& 55 & 14
&
& $<$4
& $<$2
&
&0.84 \\
${\rm [ArV] }\,\lambda$6435
& &
&
& 5.4 & 2.9
&
&0.83 \\
${\rm [NII] }\,\lambda$6548
& 535 & 116
&
& 154 & 81
&
&0.81 \\
${\rm \mbox{H$\alpha$} }\,\lambda$6563
& 1390 & 298
&
& 565 & 295
&
&0.81 \\
${\rm [NII] }\,\lambda$6583
& 1620 & 343
&
& 457 & 237
&
&0.81 \\
${\rm HeI }\,\lambda$6678
& &
&
& 5.2 & 2.6
&
&0.80 \\
${\rm [SII] }\,\lambda$6716
& 490 & 96
&
& 128 & 64
&
&0.79 \\
${\rm [SII] }\,\lambda$6731
& 496 & 96
&
& 113 & 57
&
&0.79 \\
${\rm [ArV] }\,\lambda$7006
& 10\rlap{:} & 2\rlap{:}
&
& 14 & 6.4
&
&0.75 \\
${\rm [ArIII] }\,\lambda$7136
& 148 & 22
&
& 61 & 27
&
&0.73 \\
${\rm [CaII] }\,\lambda$7291
& $<$20 & $<$3
&
& $<$4 & $<$2
&
&0.70 \\
${\rm [OII] }\,\lambda\lambda$7319
& 45 & 6
&
& 12 & 5.2
&
&0.70 \\
${\rm [OII] }\,\lambda\lambda$7330
& 40 & 5
&
& 10 & 4.5
&
&0.70 \\
${\rm [ArIII] }\,\lambda$7751
& 38 & 4
&
& 12 & 4.8
&
&0.64 \\
${\rm [FeXI] }\,\lambda$7892
& 112 & 11
&
& $<$9
& $<$3
&
&0.62 \\
${\rm [FeII] }\,\lambda$8617
& $<$40
& $<$3
&
& $<$13
& $<$4
&
&0.52 \\
${\rm [SIII] }\,\lambda$9069
& 980 & 53
&
& 220 & 65
&
&0.48 \\
${\rm [SIII] }\,\lambda$9531
& 2900 & 134
&
& 570 & 155
&
&0.44 \\
${\rm [CI] }\,\lambda$9850
& 70 & 2.9
&
& $<$33
& $<$9
&
&0.42 \\
${\rm [SVIII] }\,\lambda$9913
& 100 & 4.1
&
& &
&
&0.41 \\
${\rm Pa7 }\,\lambda$10049
& 90\rlap{:} & 4\rlap{:}
&
& &
&
&0.40 \\
${\rm [FeII] }\,\lambda$16435
& 470 & 7.1
&
& 30\rlap{:} & 5\rlap{:}
&
&0.17 \\
${\rm [SiVI] }\,\lambda$19629
& 640 & 8
&
& $<$90 & $<$15
&
&0.12 \\
${\rm H_2 }\,\lambda$21213
& 400 & 4.7
&
& 50\rlap{:} & 8\rlap{:}
&
&0.11 \\
${\rm \mbox{Br$\gamma$} }\,\lambda$21655
& 230 & 2.7
&
& 18\rlap{:} & 3\rlap{:}
&
&0.11 \\
\SKIP{2}
\mbox{H$\beta$}\ flux$^a$
& 10 & 1350
&
& 5 & 36
&
& \\
\SKIP{1}
\hline
\end{tabular}
\def\NOTA#1#2{\hbox{\vtop{\hbox{\hsize=0.030\hsize\vtop{\centerline{#1}}}}
\vtop{\hbox{\hsize=0.97\hsize\vtop{#2}}}}}
\vskip1pt
\NOTA{ $^{(1)}$ }{ Observed line flux, relative to \mbox{H$\beta$}=100}
\NOTA{ $^{(2)}$ }{ Dereddened flux (\mbox{H$\beta$}=100), a colon denotes uncertain
values. Blank entries are undetected lines with non-significant upper limits
}
\NOTA{ $^{(3)}$ }{ From Savage \& Mathis
(\cite{savage79}) and Mathis (\cite{mathis90})}
\NOTA{ $^a$}{ Units of $10^{-15}$ erg cm$^{-2}$ s$^{-1}$}
\end{flushleft}
\end{table}
\begin{figure}
\centerline{\resizebox{\hsize}{!}{\includegraphics{showslit.ps}}}
\caption{
Slit positions overlaid on grey-scale line images. The positions
of the various knots are marked.
}
\label{showslit}
\end{figure}
\begin{figure}
\centerline{\resizebox{8.8cm}{!}{\includegraphics{spec_nuc.ps}}}
\caption{
Spectrum of the nucleus,
i.e. the central region
at PA=318$^\circ$ (cf. Figs. 1, 4).
Fluxes are in units of 10$^{-16}$ erg cm$^{-2}$ s$^{-1}$ \AA$^{-1}$.
}
\label{spec_nuc}
\end{figure}
\begin{figure}
\centerline{\resizebox{8.8cm}{!}{\includegraphics{spec_knc.ps}}}
\caption{
Spectrum of Knot C,
i.e. a region 15.5\arcsec from
the nucleus at PA=318$^\circ$ (cf. Figs.~1, 4).
Flux units are as in Fig.~\ref{spec_nuc}
}
\label{spec_knc}
\end{figure}
We chose the Circinus galaxy as a benchmark
because its emission line spectrum is characterized by remarkably narrow
($\la$150 km/s, \cite{O94}) emission lines which
are particularly easy to measure and which indicate relatively low
dynamical activity. This last aspect may be used to put tight constraints on
the possible contribution of shock excitation, which could complicate
the modelling of the observed spectrum and the determination of metallicities.
This paper presents new optical and infrared spectroscopic data
and is structured as follows.
Observations and data reduction are described in Sect. 2 and the results
are analyzed in Sect. 3.
In Sect. 4 we constrain the excitation conditions of the gas and
model the observed spectra in terms of photoionization from the AGN.
The derived chemical abundances are discussed in Sect. 5 and
in Sect. 6 we draw our conclusions.
\section {Observations and Data Reduction}
Long slit optical spectra were collected
at the ESO NTT telescope in March 1995 using a dichroic beam
splitter feeding both blue (1024$^2$ Tek-CCD with 0.37\arcsec/pix)
and red (2048$^2$ Tek-CCD with 0.27\arcsec/pix) arms of EMMI.
Simultaneous blue
(3700--5000 \AA\ with $\simeq$1.7 \AA/pix) and red (5000--10000 \AA\ with
$\simeq$1.2 \AA/pix)
spectra were obtained through a 2\arcsec\ slit centered on the optical
nucleus at two position angles (cf. Fig.~\ref{showslit}).
The first was at PA=318$^\circ$, roughly aligned with the [OIII]
cone axis and including the brightest [OIII] emitting knots,
while the second was at PA=243$^\circ$ and along the low excitation
rim visible in the [SII] and
``true colour'' images shown in Fig.~5 and Fig.~10 of
\cite{M94}\footnote{These images are available at
http://\-www.arcetri.astro.it/$\sim$oliva}.
Several exposures were averaged and
the total integration times of the spectra at PA=318$^\circ$
were 100, 75, 25 minutes over the 3700--5000, 5000--7300 and 7700--10080 \AA\
wavelength ranges, respectively.
The spectra at PA=243$^\circ$ covered 3700--5000 and 5000--7300 \AA\
with a total integration time of 50 minutes per wavelength interval.
Data were flux calibrated using observations of LTT3218
and reduced with MIDAS using standard procedures.
Infrared spectra were collected in March 1995, also
at the ESO NTT using the spectrometer IRSPEC equipped with a 62x58
SBRC InSb array whose pixel size was 2.2\arcsec along the slit
and $\simeq$5 \AA\ along the dispersion direction.
Spectra of [FeII]1.644 \mbox{$\mu$m}, H$_2$ 2.121 \mbox{$\mu$m}, \mbox{Br$\gamma$}\ and
[SiVI]1.962 \mbox{$\mu$m}\ were
obtained using a 4.4\arcsec slit at PA=318$^\circ$.
Each long-slit spectrum consisted of 4 ABBA cycles (A=source, B='sky',
i.e. a region 300\arcsec E) with 2x60 sec integrations per position.
The data were reduced using the IRSPEC context of MIDAS which was developed
by one of us (EO) and which also allows the accurate subtraction of
time variable OH sky lines. The same airglow emission was used as
wavelength reference (Oliva \& Origlia \cite{oliva92}) and the spectra were
corrected for instrumental and atmospheric transmission using spectra
of featureless early O stars and flux calibrated using measurements
of photometric standard stars.
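The ABBA nodding and sky-subtraction scheme described above can be sketched as follows. This is a minimal illustration: the array size, noise values and frame generation are invented for the example and are not taken from the actual IRSPEC reduction pipeline.

```python
import numpy as np

def abba_sky_subtract(frames):
    """Combine long-slit frames taken in ABBA cycles (A=source, B=sky).

    `frames` is a list of 2D arrays ordered A,B,B,A per cycle; each
    (A - B) pair removes the time-variable sky emission and the pairs
    are then averaged.
    """
    pairs = []
    for i in range(0, len(frames), 4):
        a1, b1, b2, a2 = frames[i:i + 4]
        pairs.append(a1 - b1)
        pairs.append(a2 - b2)
    return np.mean(pairs, axis=0)

# toy data: a constant source on two slit rows plus a sky level that
# varies slightly between exposures (mimicking variable OH airglow)
rng = np.random.default_rng(0)
src = np.zeros((58, 62))
src[28:30, :] = 100.0
frames = []
for _ in range(4):                       # 4 ABBA cycles, as in Sect. 2
    for kind in "ABBA":
        sky = 50.0 + rng.normal(0.0, 0.5)
        base = src if kind == "A" else np.zeros_like(src)
        frames.append(base + sky)
result = abba_sky_subtract(frames)
```

The real reduction is more elaborate (the OH airglow lines also serve as the wavelength reference), but the pair subtraction above is the core of the nodding scheme.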
\begin{table*}
\caption{Fluxes of significant lines
in the various knots$^{(1)}$ }
\label{tab_obs2}
\def\SKIP#1{\noalign{\vskip#1pt}}
\begin{flushleft}
\begin{tabular}{lrrrrrrrrrrr}
\hline\hline
& \multicolumn{1}{c}{Nucleus} & \multicolumn{1}{c}{KnA}
& \multicolumn{1}{c}{KnB} & \multicolumn{1}{c}{KnC}
& \multicolumn{1}{c}{KnD} & \multicolumn{1}{c}{KnE}
& \multicolumn{1}{c}{KnF$^a$} & \multicolumn{1}{c}{KnG$^a$}
& \multicolumn{1}{c}{KnH$^a$} & \multicolumn{1}{c}{KnI$^a$}
& \multicolumn{1}{c}{KnL$^a$} \\
& \multicolumn{1}{c}{4.6"x2"} & \multicolumn{1}{c}{4.6"x2"}
& \multicolumn{1}{c}{4.1"x2"} & \multicolumn{1}{c}{4.1"x2"}
& \multicolumn{1}{c}{4.1"x2"} & \multicolumn{1}{c}{5.4"x2"}
& \multicolumn{1}{c}{5.7"x2"} & \multicolumn{1}{c}{4.1"x2"}
& \multicolumn{1}{c}{6.5"x2"} & \multicolumn{1}{c}{8.4"x2"}
& \multicolumn{1}{c}{4.3"x2"} \\
& \multicolumn{1}{c}{\llap{A}$_{\rm V}$=4.5}
& \multicolumn{1}{c}{\llap{A}$_{\rm V}$=4.4}
& \multicolumn{1}{c}{\llap{A}$_{\rm V}$=3.8}
& \multicolumn{1}{c}{\llap{A}$_{\rm V}$=1.9}
& \multicolumn{1}{c}{\llap{A}$_{\rm V}$=2.0}
& \multicolumn{1}{c}{\llap{A}$_{\rm V}$=5.0}
& \multicolumn{1}{c}{\llap{A}$_{\rm V}\!\simeq\!4$}
& \multicolumn{1}{c}{\llap{A}$_{\rm V}\!\simeq\!4$}
& \multicolumn{1}{c}{\llap{A}$_{\rm V}\!\simeq\!1.5$}
& \multicolumn{1}{c}{\llap{A}$_{\rm V}\!\simeq\!1.5$}
& \multicolumn{1}{c}{\llap{A}$_{\rm V}\!\simeq\!6$}
\\
\hline
\SKIP{1}
${\rm [OII] }\,\lambda\lambda$3727
& 270\ \ \ \ \
& 740\rlap{$^b$}\ \ \ \ \
& 312\ \ \ \ \
& 133\ \ \ \ \
& 112\ \ \ \ \
& $<$200\ \ \ \ \
& 650\rlap{$^b$}\ \ \ \ \
& 400\ \ \ \ \
& 410\ \ \ \ \
& 380\ \ \ \ \
& 290\rlap{:}\ \ \ \ \
\\
${\rm HeII }\,\lambda$4686
& 41\ \ \ \ \
& 40\rlap{:}\ \ \ \ \
& 30\rlap{:}\ \ \ \ \
& 60\ \ \ \ \
& 53\ \ \ \ \
& $<$25\ \ \ \ \
&
&
&
&
&
\\
${\rm [OIII] }\,\lambda$5007
& 1025\ \ \ \ \
& 900\ \ \ \ \
& 690\ \ \ \ \
& 965\ \ \ \ \
& 815\ \ \ \ \
& 20\rlap{:}\ \ \ \ \
& 400\ \ \ \ \
& 230\ \ \ \ \
& 260\ \ \ \ \
& 250\ \ \ \ \
&
\\
${\rm [FeVII] }\,\lambda$\rlap{6087}
& 11\ \ \ \ \
& $<$11\ \ \ \ \
& $<$6\ \ \ \ \
& 9.7\ \ \ \ \
& 19\ \ \ \ \
&
& $<$15\ \ \ \ \
& $<$10\ \ \ \ \
&
&
&
\\
${\rm [OI] }\,\lambda$6300
& 42\ \ \ \ \
& 51\ \ \ \ \
& 24\ \ \ \ \
& 26\ \ \ \ \
& 13\rlap{:}\ \ \ \ \
& 5\ \ \ \ \
& 62\ \ \ \ \
& 31\ \ \ \ \
& 100\ \ \ \ \
& 95\ \ \ \ \
& 13\ \ \ \ \
\\
${\rm [NII] }\,\lambda$6583
& 343\ \ \ \ \
& 407\ \ \ \ \
& 223\ \ \ \ \
& 237\ \ \ \ \
& 133\ \ \ \ \
& 128\ \ \ \ \
& 425\ \ \ \ \
& 265\ \ \ \ \
& 585\ \ \ \ \
& 586\ \ \ \ \
& 159\ \ \ \ \
\\
${\rm [SII] }\,\lambda$6716
& 96\ \ \ \ \
& 130\ \ \ \ \
& 75\ \ \ \ \
& 64\ \ \ \ \
& 37\ \ \ \ \
& 45\ \ \ \ \
& 162\ \ \ \ \
& 105\ \ \ \ \
& 285\ \ \ \ \
& 342\ \ \ \ \
& 59\ \ \ \ \
\\
${\rm [SII] }\,\lambda$6731
& 96\ \ \ \ \
& 105\ \ \ \ \
& 60\ \ \ \ \
& 57\ \ \ \ \
& 32\ \ \ \ \
& 35\ \ \ \ \
& 124\ \ \ \ \
& 83\ \ \ \ \
& 210\ \ \ \ \
& 238\ \ \ \ \
& 51\ \ \ \ \
\\
${\rm [ArIII] }\,\lambda$\rlap{7136}
& 22\ \ \ \ \
& 14\rlap{:}\ \ \ \ \
& 11\ \ \ \ \
& 27\ \ \ \ \
& 22\ \ \ \ \
& $<$10\ \ \ \ \
& 10\rlap{:}\ \ \ \ \
& $<$15\ \ \ \ \
&
&
& $<$10\ \ \ \ \
\\
${\rm [SIII] }\,\lambda$9531
& 134\ \ \ \ \
& 68\ \ \ \ \
& 61\ \ \ \ \
& 155\ \ \ \ \
& 139\ \ \ \ \
& 29\ \ \ \ \
& \multicolumn{5}{c}{\dotfill\ not measured\ \dotfill }
\\
\SKIP{2}
\mbox{H$\alpha$}\ flux$^c$
& 4000\ \ \ \ \
& 590\ \ \ \ \
& 430\ \ \ \ \
& 110\ \ \ \ \
& 32\ \ \ \ \
& 820\ \ \ \ \
& 330\ \ \ \ \
& 260\ \ \ \ \
& 9\ \ \ \ \
& 9\ \ \ \ \
& 1500\ \ \ \ \
\\
\SKIP{2}
W$_\lambda$(\mbox{H$\alpha$})\rlap{$^d$}
& 23\ \ \ \ \
& 10\ \ \ \ \
& 37\ \ \ \ \
& 72\ \ \ \ \
& 30\ \ \ \ \
& 59\ \ \ \ \
& 8\ \ \ \ \
& 21\ \ \ \ \
& 5\ \ \ \ \
& 5\ \ \ \ \
& 24\ \ \ \ \
\\
W$_\lambda$([OIII])\rlap{$^d$}
& 50\ \ \ \ \
& 20\ \ \ \ \
& 60\ \ \ \ \
& 300\ \ \ \ \
& 100\ \ \ \ \
& $<$3\ \ \ \ \
& 8\ \ \ \ \
& 12\ \ \ \ \
& 7\ \ \ \ \
& 7\ \ \ \ \
&
\\
\hline
\end{tabular}
\def\NOTA#1#2{\hbox{\vtop{\hbox{\hsize=0.015\hsize\vtop{\centerline{#1}}}}
\vtop{\hbox{\hsize=0.97\hsize\vtop{#2}}}}}
\vskip1pt
\NOTA{$^{(1)}$}
{ Dereddened fluxes relative to \mbox{H$\beta$}=100, extinctions are computed
imposing \mbox{H$\alpha$}=290 and the adopted visual extinctions are given
at the top of each column. Blank entries are undetected lines
with non-significant upper limits}
\NOTA{$^a$}{ \mbox{H$\beta$}\ is weak and the derived extinction is therefore
uncertain.}
\NOTA{$^b$}{ Possibly overestimated due to contamination by foreground
gas with lower extinction. }
\NOTA{$^c$}{ Dereddened flux, units of $10^{-15}$ erg cm$^{-2}$ s$^{-1}$}
\NOTA{$^d$}{ Equivalent widths in \AA\ of nebular emission lines. }
\end{flushleft}
\vskip-16pt
\end{table*}
\section{ Results }
\subsection{ Line fluxes and ratios }
The quasi-complete spectra of the nucleus and of knot C are shown
in Figs.~\ref{spec_nuc} and \ref{spec_knc}, respectively, and
the derived line fluxes are summarized in Table~\ref{tab_obs1}.
Dilution by a stellar continuum is particularly strong in the nuclear
spectrum where the equivalent width of [OIII]$\lambda$5007
is only 50~\AA\ (cf. Table~\ref{tab_obs2}) and a factor
of $\sim$10 lower than found in typical Seyfert 2's.
The stellar contribution is normally estimated and subtracted
using either off--nuclear spectra extracted from the same 2D long
slit frames, or a suitable combination of spectra of non--active
galaxies used as templates (e.g. Ho \cite{ho96}, Ho et al. \cite{ho97}).
However, neither method proved particularly useful because
line emission contaminates the stellar emission all along the slit,
and we could not find any template which
accurately reproduces the prominent stellar absorption features
typical of quite young stellar populations.
The fluxes of weak lines ($<$5\% of the continuum) in the
nucleus are therefore uncertain and, in a few cases, quite different
from those reported in \cite{O94}, the largest
discrepancy being for [NI] which is a factor of 2 fainter here.
The spectrum of knot C has a much more
favourable line/\-continuum ratio and shows many faint lines which
are particularly useful for the modelling described in
Sect.~\ref{photion_model}.
\subsection{Spatial distribution of emission lines}
\label{spatial_line_distribution}
The spatial variation of the most important lines is visualized
in Figs.~\ref{velcont}, \ref{spec_all}
which show contour plots of the continuum
subtracted long slit spectra and selected spectral sections of the
various knots respectively.
The fluxes are summarized in Table~\ref{tab_obs2} together
with the extinctions which were derived from hydrogen recombination lines
assuming standard case-B ratios (Hummer \& Storey \cite{hummer}).
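The extinction estimate reduces to simple arithmetic: the observed Balmer decrement is compared with the imposed case-B value (\mbox{H$\alpha$}=290 for \mbox{H$\beta$}=100, i.e. 2.90) and scaled by the differential extinction coefficients of Table~\ref{tab_obs1}. A minimal sketch, using the nuclear fluxes and coefficients from Table~\ref{tab_obs1}:

```python
import math

# A_lambda/A_V at Hbeta and Halpha, from Table 1
K_HBETA, K_HALPHA = 1.18, 0.81

def av_from_balmer(f_halpha, f_hbeta, intrinsic=2.90):
    """Visual extinction A_V from the observed Balmer decrement,
    assuming the case-B intrinsic ratio Halpha/Hbeta = 2.90."""
    excess = (f_halpha / f_hbeta) / intrinsic
    return 2.5 * math.log10(excess) / (K_HBETA - K_HALPHA)

def deredden(flux, k_lambda, a_v):
    """Correct an observed flux for extinction A_lambda = k_lambda * A_V."""
    return flux * 10.0 ** (0.4 * k_lambda * a_v)

# nucleus: observed Halpha = 1390 for Hbeta = 100 (Table 1)
a_v = av_from_balmer(1390.0, 100.0)
# dereddened [OII]3727 relative to Hbeta = 100; close (within the
# rounding of the tabulated A_V) to the listed value of 270
oii = deredden(77.0, 1.49, a_v) / deredden(100.0, 1.18, a_v) * 100.0
```
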
A remarkable result is the large variation of the
typical diagnostic line ratios
[OIII]/\mbox{H$\beta$}, [OI]/\mbox{H$\alpha$}, [NII]/\mbox{H$\alpha$}\ and [SII]/\mbox{H$\alpha$}, which are plotted
in Fig.~\ref{knotdiag} and range
from values typical of high excitation Seyferts
(nucleus, knots A, B, C, D) to
low excitation LINERs (knots H, I) and normal HII regions (knots E, L).
Another interesting result is the steep extinction gradient between
the regions outside (knots C, D, H, I) and those close to the
galactic disk (nucleus and knots A, E, L). However,
a comparison between the \mbox{Br$\gamma$}\ map
(Moorwood \& Oliva \cite{invited94}), the \mbox{H$\alpha$}\ images
(\cite{M94}) and the observed Br$\alpha$ flux from the whole
galaxy (\cite{M96}) does not show evidence
of more obscured ($\mbox{${A_{\rm V}}$}\!\sim\!10\!-\!30$) ionized regions
such as those observed in NGC4945 and other starburst galaxies
(e.g. Moorwood \& Oliva \cite{moorwood88}). Nevertheless,
these data cannot exclude the presence of deeply embedded ionized
gas which is obscured even at 4$\mu$m (i.e. \mbox{${A_{\rm V}}$}$>$50 mag).
\begin{figure}
\centerline{\resizebox{8.78cm}{!}{\includegraphics{velcont.ps}}}
\caption{
Intensity contour plots in the position--\hbox{$\lambda$}\
plane.
The long slit spectra are continuum
subtracted and the levels are logarithmically spaced by 0.2 dex.
The ordinate is arcsec from the
\mbox{H$\alpha$}\ peak along the two slit orientations (cf. Fig.~\ref{showslit}).
The dashed lines show the regions where the spectra displayed in
Figs.~\ref{spec_nuc}, \ref{spec_knc}, \ref{spec_all} were extracted.
}
\label{velcont}
\end{figure}
\begin{figure}
\centerline{\resizebox{\hsize}{!}{\includegraphics{spec_all.ps}}}
\caption{
Spectra of the various knots (cf. Figs.~\ref{showslit}, \ref{velcont})
at selected wavelength ranges including [OIII], \mbox{H$\beta$}, HeII (left panels) and
[FeVII], [OI], [NII], \mbox{H$\alpha$}, [SII] (right hand panels).
Fluxes are in units of $10^{-16}$ erg cm$^{-2}$ s$^{-1}$ \AA$^{-1}$ and
\hbox{$\lambda$}'s are in \AA.
The spectra are also scaled by a factor given in the plots
to show faint features.
}
\label{spec_all}
\end{figure}
Particularly interesting is the variation of the line
ratios between the adjacent knots C and D.
The ratio [FeVII]/\mbox{H$\beta$}\ is a factor of
2 larger in knot D than in C, but this most probably reflects
variations of the iron gas phase abundance (see also Sect.~\ref{iron}).
Much more puzzling is the spatial variation of
the low excitation lines [OI], [SII], [NII], which
drop by a factor of 1.8, while the high excitation lines HeII, [OIII],
[NeIII], [ArIII], together with the [SII] density sensitive ratio and
[OIII]/[OII], vary by much smaller amounts
(cf. Table~\ref{tab_obs2}). This behaviour cannot therefore be explained
by variations of the ionization parameter, which should first of all
affect the [OIII]/[OII] ratio.
A possible explanation for this is discussed in Sect.~\ref{model_others}.
\begin{figure}
\centerline{\resizebox{8.8cm}{!}{\includegraphics{knotdiag.ps}}}
\caption{
Spectral classification diagrams (from
Veilleux \& Osterbrock \cite{veilleux87} and Baldwin et al. \cite{baldwin81})
using the line ratios observed in the various knots.
Note that ``N'' refers to the nuclear spectrum while ``I'' is the average
of the H and I knots which have similar line ratios (cf. Table 2).
}
\label{knotdiag}
\end{figure}
\subsection{Diagnostic line ratios}
\label{diagnostic}
Temperature and density sensitive line ratios are summarized in
Table~\ref{tab_diagnostic}.
Note that only a few values are available for the nucleus because of
the strong stellar continuum which prevents the measurement of
faint lines.
\subsubsection{ Electron temperature }
The relatively large \mbox{$T_e$}(OIII), slightly lower \mbox{$T_e$}(SIII) and much cooler
[NII], [SII] temperatures are typical of gas photoionized
by a ``typical AGN'', i.e. a spectrum characterized by
a power law continuum with a super--imposed
UV bump peaked at $\approx$50--100 eV
(e.g. Mathews \& Ferland \cite{mathews87}).
As \mbox{$T_e$}(OIII) mainly depends on the
average energy of $h\nu\!<\!54$ eV ionizing photons, ``bumpy'' spectra,
which are quite flat between 13 and 54 eV, yield hot [OIII].
The lower ionization species, such as [NII] and [SII], mostly form in
the partially ionized region,
heated by soft X--rays, whose temperature
cannot exceed 10$^4$ K due to the powerful cooling by collisionally
excited Ly$\alpha$ and 2--photon emission.
The contrast between the OIII and NII temperatures can be further
increased if sub--solar metallicities are adopted, because [OIII]
is a major coolant while [NII] only plays a secondary role in
the cooling of the partially ionized region.
An alternative explanation for the OIII, NII etc. temperature differences
is to assume that part of the line emission arises from density bounded
clouds. In this case there is no need to adopt a ``bumpy'' AGN spectrum
and detailed models, assuming a pure power law ionizing continuum,
were developed by Binette et al. (1996, hereafter \cite{B96}).
However, it should be
kept in mind that \mbox{$T_e$}(OIII)$>$\-\mbox{$T_e$}(NII) does not necessarily
indicate the presence of density bounded clouds.
\subsubsection{ Electron density}
There is a clear trend between \mbox{$n_e$}\ and excitation of the species
used to determine the density.
The [SII] red doublet yields densities lower than [OII], which
is nevertheless compatible with a single-density cloud for the following reason.
If the flux of soft X-rays (200-500 eV)
is strong enough, then [SII] lines are mostly produced in the X-ray
heated region where the average hydrogen ionization fraction is quite
low ($\la$0.1). The lines of
[OII] on the contrary can only be produced in the transition region,
where the ionization degree is close to unity, because of the very
rapid O--H charge exchange reactions.
Hence \mbox{$n_e$}(SII)$<$\mbox{$n_e$}(OII) most probably indicates the presence of a strong
soft X--ray flux, as one indeed expects for an AGN spectrum.
This is also confirmed by the detailed modelling described below.
The higher densities in the fully ionized region are new
results because the blue [ArIV] doublet is usually too weak in AGN
spectra and the FIR [NeV] lines are only
accessible with the ISO-SWS spectrometer (\cite{M96}).
The [ArIV] density of knot C is equal, within the errors, to that derived
by [OII] thus indicating that no large variations of densities are
present in this cloud.
\begin{table}
\def\SKIP#1{\noalign{\vskip#1pt}}
\def\L{\hbox{$\lambda$}}
\def\LL{\hbox{$\lambda\lambda$}}
\caption{Temperature and density sensitive line ratios.}
\label{tab_diagnostic}
\begin{flushleft}
\begin{tabular}{lcc}
\hline\hline
\SKIP{1}
\ \ \ \ \ Ratio & Value & Diagnostic \\
\SKIP{1} \hline \SKIP{3}
\multicolumn{3}{c}{\dotfill\ \ Nucleus\ \ \dotfill } \\
\SKIP{3}
\ [OII]\ \LL3727/\LL7325 & 25$\pm$6 & 300$<$\mbox{$n_e$}$<$6000 \\
\ [SII]\ \L6731/\L6716 & 1.00$\pm$0.07 & 400$<$\mbox{$n_e$}$<$800 \\
\ [NeV]$^a$\ 14.3$\mu$m/24.3$\mu$m\ \ \ \ \ \ \ &
1.6$\pm$0.4 & 2000$<$\mbox{$n_e$}$<$12000 \\
\SKIP{2}
\ [SIII]\ \L9531/\L6312& $\ga$25 & \mbox{$T_e$}$\la$14000 \\
\SKIP{7}
\multicolumn{3}{c}{\dotfill\ \ Knot C\ \ \dotfill } \\
\SKIP{3}
\ [OII]\ \LL3727/\LL7325 & 14$\pm$4 & 1400$<$\mbox{$n_e$}$<$11000 \\
\ [SII]\ \L6731/\L6716 & 0.89$\pm$0.06 & 200$<$\mbox{$n_e$}$<$500 \\
\ [ArIV]\ \L4711/\L4740 & 1.0$\pm$0.2 & 1500$<$\mbox{$n_e$}$<$10000 \\
\SKIP{1}
\ [NII]\ \L6583/\L5755 & 50$\pm$13 & 9000$<$\mbox{$T_e$}$<$12500 \\
\ [OIII]\ \L5007/\L4363 & 46$\pm$9 & 14000$<$\mbox{$T_e$}$<$18000 \\
\ [SII]\ \L6731/\LL4073 & $\la$8 & \mbox{$T_e$}$\la$12000 \\
\ [SIII]\ \L9531/\L6312& 26$\pm$7 & 11500$<$\mbox{$T_e$}$<$16000 \\
\SKIP{1} \hline
\end{tabular}
\vskip3pt
$^a$ [NeV] FIR lines from \cite{M96}
\end{flushleft}
\end{table}
\section{ Photoionization models }
\label{photion_model}
Several photoionization
models for the Circinus galaxy have been discussed in the literature.
These used line intensities and ratios measured from the central
regions of the galaxy and were
mostly directed to constraining the intrinsic shape of AGN
photoionizing continuum. Quite different conclusions were drawn
by \cite{M96}, who found evidence for a prominent UV bump centered
at $\simeq$70 eV, and Binette et al. (1997, hereafter \cite{B97})
who showed that equally good
(or bad) results could be obtained using a combination of radiation
and density bounded clouds photoionized by a pure power--law spectrum.
In all cases the metal abundances were assumed to be solar and no
attempt was made to constrain metallicities.
Here we mainly concentrate on knot C, an extranuclear
cloud whose rich emission line spectrum and simple geometry
(a plane parallel region) are better suited for
deriving physical parameters from photoionization modelling.
We first analyze the observational evidence against shock excitation
and then describe in some detail the new modelling procedure,
which is primarily aimed at determining the gas metal abundances
(cf. Sects.~\ref{details_photion} and \ref{knc_abund}).
We also analyze the problems in reproducing the observed
[FeVII]/[FeII] and [NII]/[NI] ratios (Sects.~\ref{iron}, \ref{NII_NI}),
discuss the role of dust, and reanalyze several crucial aspects of the
line emission from the brightest regions closest to the nucleus.
\subsection{ Arguments against shock excitation }
\label{against_shock}
Strong evidence against shock excitation comes from the very
low strength of [FeII] which indicates that iron is
strongly underabundant, i.e. hidden in grains,
within the partially ionized region
(cf. Figs.~\ref{figZ}, \ref{figZnoch}).
Shocks are very effective in destroying dust grains,
and velocities as low as 100 km/s are enough to return most of
the iron to the
gas phase (e.g. Draine \& McKee \cite{drainemckee}).
This is confirmed by observations of
nearby Herbig--Haro objects (e.g. Beck-Winchatz et al. \cite{beck94})
and supernova remnants whose near IR spectra
are dominated by prominent [FeII] IR lines
(e.g. Oliva et al. \cite{oliva89}).
Although the low Fe$^+$ gas phase abundance could,
in principle, be compatible with a
very slow shock ($\la$50 km/s), such a shock
falls short by orders of magnitude in producing
high excitation species.
Another argument
comes from the HeII/\mbox{H$\beta$}\ and HeII/HeI ratios
which are a factor $>$2 larger than those predicted by shock models with
velocities $v\!\le\!500$ km/s
(Dopita \& Sutherland \cite{dopita95}).
It should be noted that,
although stronger HeII could probably be obtained by increasing
the shock speed to $\sim$1000 km/s, these velocities are
incompatible with the observed line profiles.
More generally, the
observed line widths ($<$150 km/s, \cite{O94})
are difficult to reconcile with the fact that
only shocks faster than 300 km/s can produce prominent
high excitation lines such as [OIII]
(Dopita \& Sutherland \cite{dopita96}).
This argument becomes even stronger when interpreting the nuclear spectra
where the highest excitation lines, [FeX,XI],
do not show any evidence of large scale motions at velocities $>$200 km/s
(cf. \cite{O94} and work in preparation).
Finally, it is worth mentioning that detailed shock~+~photoionization
composite models recently developed by
Contini et al. (\cite{contini1998})
suggest that
shock excited gas does not contribute significantly
to any of the optical/IR lines from Circinus.
\subsection{ Details of the photoionization models }
\label{details_photion}
\subsubsection{ Single component, radiation bounded clouds }
We constructed a grid of models covering
a wide range of
ionization parameters ($-3.5\!<\!\log\, U\!<\!-1.5$),
densities ($2.0\!<\!\log\, n\!<\!5.0$),
metallicities ($0.08\!<\!{\rm He/H}\!<\!0.16$,
$-1.5\!<\!\log\, {\rm Z}/\mbox{$Z_\odot$}\!<\!0$) and
shape of the ionizing continuum which we parameterized as a combination
of a power law extending from 1 eV to 10 keV
with index $0.4<\alpha<2$ and a black--body\footnote{
The choice of a black--body does not have any direct physical
implications. We used it just because it provides a simple way to
parameterize the UV bump which seems to be a typical feature of
AGNs (e.g. Mathews \& Ferland \cite{mathews87}, Laor \cite{laor90}) and is
predicted by accretion disk models
(e.g. Laor \& Netzer \cite{laor_netzer89}) }
with $5\!<\!\log T_{BB}\!<\!6$, the relative fraction of the two components
being another free parameter. All the parameters were varied randomly
and about 27,000 models were produced using
Cloudy (Ferland \cite{cloudy}) which we slightly modified
to include the possibility of varying the N--H charge exchange rate.
The line intensities were also computed using the temperature
and ionization structure from Cloudy and an in--house data base
of atomic parameters. The agreement with the Cloudy output
was good for all the ``standard'' species while large discrepancies
were only found for coronal species (e.g. [FeX]) whose collision strengths
are still very uncertain and much debated.
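The continuum parameterization described above (a power law from 1 eV to 10 keV plus a black body, with their relative fraction as a free parameter) can be sketched as follows. The default index, temperature and mixing weight are illustrative values within the quoted grid ranges, not fitted parameters:

```python
import numpy as np

H = 6.626e-27      # Planck constant, erg s (cgs)
K_B = 1.381e-16    # Boltzmann constant, erg/K (cgs)
EV = 1.602e-12     # erg per eV

def planck_nu(nu, t_bb):
    """Black-body B_nu, up to an overall normalization."""
    return nu ** 3 / np.expm1(H * nu / (K_B * t_bb))

def agn_continuum(e_ev, alpha=1.0, t_bb=10 ** 5.5, bump_fraction=0.5):
    """Power law F_nu ~ nu^-alpha plus a black-body 'UV bump'.

    Each component is normalized to unit peak and the two are mixed
    with weight `bump_fraction` (a free parameter of the grid).
    """
    nu = e_ev * EV / H
    pl = nu ** (-alpha)
    bb = planck_nu(nu, t_bb)
    return (1.0 - bump_fraction) * pl / pl.max() + bump_fraction * bb / bb.max()

# energy grid from 1 eV to 10 keV, as in the model grid
e = np.logspace(0.0, 4.0, 500)
f_nu = agn_continuum(e)
```

For $T_{BB}=10^{5.5}$ K the black-body component peaks in $F_\nu$ near $2.82\,kT\simeq77$ eV, i.e. in the energy range where the UV bump is required.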
Out of this large grid we selected about 400 models whose
[OIII]/\mbox{H$\beta$}, [OIII]/[OII]/[OI], [ArV]/[ArIV]/[ArIII],
[SIII]/[SII],
[OIII]$\lambda$5007/$\lambda$4363, [OII]$\lambda\lambda$3727/$\lambda\lambda$7325,
[SII]$\lambda$6731/$\lambda$6716, [ArIV]$\lambda$4711/$\lambda$4740
line ratios
were reasonably close to those observed in knot C. These were used
as the starting point for computing the ``good models'', with adjusted
values of relative metal abundances, which minimized
the differences between predicted and observed line ratios.
Note that to reproduce both [FeVII] and [FeII] we were
in all cases forced to vary the iron gas phase abundance between the
fully and partially ionized regions (cf. also Sect.~\ref{iron}).
The results of the best model are summarized in Table~\ref{tab_modelKNC}
and discussed in Sect.~\ref{detail_knc_model}. Note the large
discrepancy for [NI], which is overpredicted by a factor of $\sim$5;
this problem is discussed in Sect.~\ref{NII_NI}.
The most important results are the abundance
histograms shown in Figs.~\ref{figZ}, \ref{figZnoch}
which were constructed by including all
the ``good'' models with $\chi^2_{red}\!<\!5$. Although
the choice of the cutoff is arbitrary, it should be
noted that variations of this parameter do not alter
the mean values, but only influence the shape of the distributions
whose widths roughly double if the $\chi^2_{red}$ cutoff is increased to
values as large as 30.
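The selection logic behind the abundance histograms can be illustrated with synthetic numbers: one abundance-like parameter per model, a made-up ``true'' value of $-0.7$ and an invented sensitivity of the line ratios (none of these numbers come from the actual grid):

```python
import numpy as np

rng = np.random.default_rng(1)

n_models, n_ratios, true_z = 27000, 8, -0.7
log_z = rng.uniform(-1.5, 0.0, n_models)   # e.g. log Z/Z_sun per model
sensitivity = 5.0                          # invented response of the ratios
residual = (log_z - true_z)[:, None] * sensitivity \
    + rng.normal(0.0, 1.0, (n_models, n_ratios))
chi2_red = (residual ** 2).sum(axis=1) / n_ratios

# the mean abundance is stable against the cutoff, while the width of
# the distribution grows as the cutoff is relaxed
stats = {}
for cutoff in (5.0, 30.0):
    good = log_z[chi2_red < cutoff]
    stats[cutoff] = (good.mean(), good.std())
```
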
\subsubsection{ Multi-density components, radiation bounded clouds }
\label{details_multidens}
The large grid described above was also organized to have
at least 4 models with the same photon flux, continuum shape and abundances
but different densities, and these were used as starting points
to construct multi-density models. We simulated
2--density clouds by coupling all available models
with different weights and selected about 300 of them,
which were used as starting points to compute the ``good models''
following the same procedure adopted for the single-density case.
We also simulated clouds with $\ge$3 density components,
but these results are not
included here because this complication had virtually no effect
on the quality of the fit.
The most important result is that single and multi-density
models are equally good (or bad) in reproducing the observed line ratios.
Obviously, this does not necessarily imply that knot C is a single-density
cloud, but rather indicates that the stratifications
which probably exist have little effect on the available density sensitive
line ratios.
\subsubsection{ Mixed models with density and radiation bounded clouds }
We constructed a grid of about 10,000
models photoionized by a power law AGN continuum with
index $0.3\!<\!\alpha\!<\!2$, which turns into
$\nu^{-0.7}$ beyond 500 eV, and covering the same range of physical
parameters as the radiation bounded clouds described above.
The column density of the radiation bounded component was always large
enough to reach a temperature $<$3000 K at its outer edge.
For each model we also computed line intensities from a density
bounded cloud which we defined as a layer with thickness $\Delta\,R$
equal to 5\% of the nominal radius of the HII Str\"omgren sphere,
numerically ($f$= filling factor)
$$ \Delta\,R\simeq 2500\; {U\over \mbox{$n_{\rm H}$} \, f} \ \ \ \ \ \ \ \ {\rm pc} $$
The relative contribution of density and radiation bounded regions
was set by imposing HeII$\lambda$4686/\mbox{H$\beta$}=0.6.
This approach is somewhat similar to that adopted by \cite{B96} apart from
the following details.
The radiation bounded component here is formed by high density gas,
a condition required
to match the observed [ArIV]$\lambda$4711/$\lambda$4740, and is concentrated
within the projected size of knot C, i.e. $<$80 pc.
In practice, the two components have the same densities and see unfiltered
radiation from the AGN.
The ``good models'' were optimized, and the best abundances derived,
using the same procedure described above.
It is worth mentioning that most of the ``good models'' have spectral slopes
in the range 1.3--1.5,
in good agreement with those used by \cite{B96}.
The most important result is that these mixed models
give results similar to, though slightly poorer than
(cf. Sect.~\ref{detail_knc_model}), those of
radiation bounded clouds.
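The thickness of the density bounded layer defined above is straightforward to evaluate; a short sketch, with illustrative values of $U$ and $n_{\rm H}$ chosen from within the grid ranges of Sect.~\ref{details_photion}:

```python
def delta_r_pc(u, n_h, f=1.0):
    """Density bounded layer thickness: 5% of the nominal Stroemgren
    radius, Delta_R ~ 2500 * U / (n_H * f) pc (f = filling factor)."""
    return 2500.0 * u / (n_h * f)

# e.g. log U = -2.5, log n = 3.0, f = 1  ->  a very thin layer
dr = delta_r_pc(u=10.0 ** -2.5, n_h=10.0 ** 3)   # ~0.008 pc
```
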
\subsubsection{ The role of dust in extranuclear knots}
\label{role_of_dust}
Dust can modify
the ionization and temperature structure because it competes
with gas in absorbing the UV photons, and because it hides
refractory elements such as iron.
Given the low ionization parameters, however,
the first effect is negligible, i.e.
the ionization structure of the fully ionized region of knot C
is not affected by dust although this
plays an important role in
modifying the heating--cooling balance of the partially ionized region
heated by X--rays.
Cooling: the refractory species hidden in the grains cannot contribute
to the line cooling and this effect is correctly computed by Cloudy.
For example, depleting Fe on grains produces higher
gas temperature and stronger [OI], [SII] etc. lines,
because it suppresses the near IR lines of [FeII]
which are among the major coolants for quasi--solar Fe/O
gas phase abundances.
Heating: the metals hidden in the dust still contribute
to the heating of the gas because the
X--rays have energies much larger than the binding energy
of the grains, and therefore cannot distinguish
metals in dust from those in the gas phase.
The major problem is that Cloudy (and probably other photoionization
models) does not include the X--ray heating from the grain metals
and therefore underestimates the temperature of the partially ionized
region and hence may predict weaker fluxes of [OI], [SII] and other
low excitation lines.
Therefore, the models for knot C, having most of iron
depleted on grains, are not fully self--consistent
and most probably require too high a flux of X--rays to reproduce e.g.
[OI]/[OIII]. This may imply that the ``true'' models
should have somewhat softer spectra than those of Fig.~\ref{agn_cont}.
\subsection{ Best photoionization models of knot C:
the shape of the AGN continuum and the role of density bounded clouds}
\label{detail_knc_model}
Table~\ref{tab_modelKNC}
lists the results of the best photoionization models, selected
from the several hundred which provided a reasonable fit
to the spectrum of knot C. Note that the results of multi-density clouds
(Sect.~\ref{details_multidens}) are not included
because they are virtually indistinguishable from those of single-density
models.
Fig.~\ref{agn_cont} shows the AGN continuum adopted for the
``best models'' of Table~\ref{tab_modelKNC}.
Radiation bounded models require a prominent UV bump
centered (in $F_\nu$) at about 100 eV.
This is slightly bluer than that found by
\cite{M96} and somewhat harder than model spectra
of more luminous accretion disks (cf. e.g.
Laor \cite{laor90}), but in qualitative agreement with the predicted
dependence of spectral shape on AGN luminosity, i.e. that
lower luminosity accretion disks should have harder spectra
(Netzer et al. \cite{netzer92}).
\begin{table}
\caption{Photoionization models for knot C}
\label{tab_modelKNC}
\def\SKIP#1{\noalign{\vskip#1pt} }
\def\MYBOX#1#2{\hbox to 100pt{#1 \hfil #2}}
\def\rlap{(!)} {\rlap{(!)} }
\def\rlap{(!!)} {\rlap{(!!)} }
\begin{flushleft}
\begin{tabular}{lccc}
\hline\hline\SKIP{1}
Adopted parameters & Model 1 & Model 2 & Model 3\\
\SKIP{4} \hline \SKIP{2}
\hglue 10pt Element & \multicolumn{3}{c}{Abundances$^{(1)}$} \\
\SKIP{1}
Helium & 11.1 & 11.1 & 11.1 \\
Nitrogen & 8.05 & 8.20 & 8.05 \\
Oxygen & 8.17 & 8.27 & 8.17 \\
Neon & 7.27 & 7.37 & 7.27 \\
Sulphur & 6.98 & 6.90 & 6.75 \\
Argon & 6.38 & 6.45 & 6.20 \\
Silicon & 6.85 & 6.95 & 6.85 \\
Fe low-ion$^a$ & 5.80 & 5.80 & 5.80 \\
Fe high-ion$^b$ & 6.90 & 7.00 & 7.00 \\
\SKIP{04} \hline \SKIP{2}
Gas density ($n_{\rm H}$) & 2000 & 3000 & 2000 \\
Ionizing continuum$^c$
& UV-bump & $\nu^{-1.5}$ & $\nu^{-1.4}$ \\
Ionization parameter$^d$ ($U$)
& 3.5 $10^{-3}$ & 4.7 $10^{-3}$ & 2.0 $10^{-3}$ \\
Fraction radiation bounded$^e$
& 100\% & 4\% & 3\% \\
$F$(\mbox{H$\beta$})/$F_{\rm RB}$(\mbox{H$\beta$})$^f$
& 100\% & 6\% & 6\% \\
\SKIP{04} \hline\SKIP{2}
\MYBOX{Line ratio$^{(2)}$}{Observed}
& Model 1 & Model 2 & Model 3\\
\SKIP{1}\hline\SKIP{3}
\MYBOX{HeII/\mbox{H$\beta$}}{ 0.60$\pm$0.08 } & 0.60 & 0.60 & 0.60 \\
\MYBOX{HeII/HeI}{ 6.2$\pm$0.9 } & 6.8 & 5.8 & 6.3 \\
\SKIP{3}
\MYBOX{[NII]/\mbox{H$\alpha$}}{ 0.80$\pm$0.08 } & 0.83 & 0.79 & 0.77 \\
\SKIP{0}
\MYBOX{[NII]/[NI]}{ 34$\pm$7 } & 8.0\rlap{(!!)} & 6.5\rlap{(!!)} & 4.7\rlap{(!!)} \\
\MYBOX{\L6583/\L5755}{ 50$\pm$13} & 41 & 57 & 39 \\
\SKIP{3}
\MYBOX{[OIII]/\mbox{H$\beta$}}{9.7$\pm$1} & 10 & 9.8 & 9.3 \\
\MYBOX{\L5007/\L4363}{46$\pm$9} & 31\rlap{(!)} & 43 & 42 \\
\MYBOX{[OII]/[OIII]}{0.14$\pm$0.02 } & 0.12 & .057\rlap{(!!)} & 0.12 \\
\MYBOX{[OI]/[OIII]}{.027$\pm$.003} & .027 & .024 & .028 \\
\MYBOX{\LL3727/\LL7325}{14$\pm$4} & 14 & 12 & 13 \\
\SKIP{3}
\MYBOX{[NeIII]/\mbox{H$\beta$}}{0.66$\pm$0.1} & 0.68 & 0.70 & 0.73 \\
\SKIP{3}
\MYBOX{[SiVI]/\mbox{Br$\gamma$}}{ $<$5} & 2.2 & 1.5 & 0.12 \\
\SKIP{3}
\MYBOX{[SII]/\mbox{H$\alpha$}}{ 0.19$\pm$0.02} & 0.20 & 0.20 & 0.19 \\
\MYBOX{\L6731/\L6716 }{ 0.89$\pm$0.06} & 0.95 & 0.88 & 0.91 \\
\MYBOX{[SIII]/[SII] }{ 2.8$\pm$0.4} & 3.0 & 2.4 & 2.6 \\
\MYBOX{\L9531/\L6312 }{ 26$\pm$7} & 20 & 22 & 21 \\
\SKIP{3}
\MYBOX{[ArIII]/\mbox{H$\alpha$} }{ .092$\pm$.011} & .089 & .098 & .092 \\
\MYBOX{[ArIV]/[ArIII] }{ 0.37$\pm$0.1} & 0.56 & 0.57 & 0.25 \\
\MYBOX{[ArV]/[ArIII] }{ 0.24$\pm$0.03} & 0.17 & 0.17 & .032\rlap{(!!)} \\
\MYBOX{\L4711/\L4740 }{ 1.0$\pm$0.2} & 0.88 & 0.96 & 0.88 \\
\SKIP{3}
\MYBOX{[FeVII]/\mbox{H$\alpha$} }{ .033$\pm$.005} & .031 & .030 & .005\rlap{(!!)} \\
\MYBOX{[FeVI]/[FeVII] }{ $<$0.7} & 0.70 & 1.2 & 4.1\rlap{(!!)} \\
\MYBOX{[FeX]/[FeVII] }{ $<$0.2} & .045 & .030 & .005 \\
\MYBOX{[FeII]/\mbox{Br$\gamma$} }{ $\approx$1.5} & 1.6 & 1.8 & 1.8 \\
\SKIP{2} \hline
\end{tabular}
\def\NOTA#1#2{\hbox{\vtop{\hbox{\hsize=0.030\hsize\vtop{\centerline{#1}}}}
\vtop{\hbox{\hsize=0.97\hsize\vtop{#2}}}}}
\NOTA{ $^{(1)}$ }{ 12+log(X/H) where X/H is the absolute abundance by number }
\NOTA{ $^{(2)}$ }{ Abbreviation used: HeII=\L4686, HeI=\L5876,
[NII]=\L6583, [NI]=\LL5200, [OIII]=\L5007,
[OII]=\LL3727, [OI]=\L6300, [NeIII]=\L3869, [NeV]=\L3426,
[SiVI]=\L19629, [SiVII]=\L24827,
[SII]=\L6731, [SIII]=\L9531, [ArIII]=\L7136,
[ArIV]=\L4740, [ArV]=\L7006, [FeVII]=\L6087,
[FeVI]=\L5146, [FeII]=\L16435}
\NOTA{ $^a$}{ Gas phase abundance in the partially ionized region }
\NOTA{ $^b$}{ Gas phase abundance in the fully ionized region }
\NOTA{ $^c$}{ UV--bump has $T_{BB}\!=\!4\,10^5$ K and $\alpha$=1.4,
cf. Fig.~\ref{agn_cont} }
\NOTA{ $^d$}{ $U=Q(H)/4\pi R^2 n_{\rm H} c$}
\NOTA{ $^e$}{ Models 2,3 include density bounded clouds with the
same density and $U$ as radiation bounded regions, cf. Sect. 4.6.2 }
\NOTA{ $^f$}{ Flux relative to that produced by a radiation bounded model }
\end{flushleft}
\end{table}
\begin{figure}
\centerline{\resizebox{8.2cm}{!}{\includegraphics{agn_cont.ps}}}
\caption{
The AGN ionizing continua
used to compute the best models
(Table~\ref{tab_modelKNC}) are plotted together with
the observed X and IR continua.
The solid line refers to a single density, radiation bounded component
which provides a good fit for all lines but [NI].
The broken curves are for combinations of density and
radiation bounded clouds, but these models cannot
simultaneously
reproduce the [OII]/[OIII] and [ArV]/[ArIII] ratios.
The dashed curve provides the best fit for
high excitations lines while the dotted curve best reproduces the
OI/OII/OIII ionization balance
(see Sect.~\ref{detail_knc_model} for details).
Note that the curves show the spectra seen by knot C, and should
be scaled by a coefficient which takes into account the ``intrinsic beaming''
of the AGN ionizing radiation (a factor of 2 for an optically thick disk)
before being compared with the observed points. See Sect.~\ref{agn_budget}
for a discussion of the AGN energy budget.
}
\label{agn_cont}
\end{figure}
The amplitude of the UV bump
could be significantly decreased by relaxing the assumption
on the gas density distribution and, with a properly tuned combination
of density and radiation bounded clouds, one may probably get
a similarly good fit with a power law.
It is nevertheless instructive to analyze why our mixed models provide
a worse fit to the data and, in particular,
are unable to simultaneously reproduce the OII/OIII balance and
the [ArV]/[ArIII] and [FeVII]/[FeVI] ratios
(cf. last two columns of Table~\ref{tab_modelKNC}).
Model \#2 has a high ionization parameter and correctly
predicts high excitation lines but underestimates all [OII] lines while
model \#3, with a lower value of U, correctly reproduces the
[OII]/[OIII] ratio but predicts very faint high excitation lines.
The main reason for the above difference is that, for a given gas density,
the [OII]/[OIII] ratio is a measure of the flux of soft ionizing
photons ($h\nu$=13--54 eV), while [ArV]/[ArIII] and [FeVII]/[FeVI]
depend on the flux of harder radiation ($h\nu\!\ga\!60$--80 eV).
The spectrum of knot C is characterized by a quite large [OII]/[OIII] ratio,
which points towards low fluxes of soft photons, and strong high
excitation lines, which require large fluxes of hard photons. These
somewhat contradictory
requirements can be easily satisfied by a spectrum which steeply rises
beyond the Lyman edge, and peaks at $\approx$100 eV, i.e. a spectrum
similar to that of Model \#1. \\
Independently of the detailed results of the models, the following
arguments indicate that density bounded clouds may indeed play
an important role.\\
-- Ionizing spectra with pronounced
UV--bumps tend to produce too high [OIII] temperatures and
model \#1 is indeed quite hot, though still compatible (within 1.5$\sigma$)
with the somewhat noisy measurement of [OIII]\L4363
(cf. Fig.~\ref{spec_knc} and Tab.~\ref{tab_modelKNC}).
Curiously, though, the result of Model \#1 is the opposite of that obtained
by most previous models in the literature which failed to
produce hot enough [OIII].
Therefore, the classical ``[OIII] temperature problem'' may just reflect
the fact that past models were mostly biased toward
high metallicities and rather flat (i.e. without strong bumps) continua. \\
-- The \mbox{H$\beta$}\ flux from knot C is only $\simeq$5\% of that expected
if the gas absorbed all the ionizing photons impinging on
the 2\arcsec~x~2\arcsec (40~x~40 pc$^2$) geometrical cross--section
of the cloud.
Such a small ``effective covering factor''
could, in principle, be obtained by assuming a suitable distribution of
radiation bounded, geometrically thin clouds or filaments. However,
density bounded clouds seem to provide a more self--consistent interpretation
because the measured value is remarkably close to the 6\% predicted by models
\#2 and \#3 (cf. note $f$ of Tab.~\ref{tab_modelKNC}).\\
-- The variation of line ratios between the different knots cannot be
explained by radiation bounded clouds illuminated by the same ionizing
continuum, but requires e.g. intrinsic beaming of the AGN continuum and/or
filtering by matter bounded clouds closer to the nucleus.
Mixed models could give a more natural
explanation to the spatial variation of line ratios (see also \cite{B96}).
\subsection{Energy budget of the AGN }
\label{agn_budget}
\begin{table}
\def\SKIP#1{\noalign{\vskip#1pt}}
\def\MYBOX#1#2{\hbox to 155pt{#1 \hfil (#2)}}
\caption{Global properties of the AGN ionizing continuum$^{(1)}$}
\label{tab_globalAGN}
\begin{flushleft}
\begin{tabular}{lr}
\hline\hline
\SKIP{1}
\MYBOX{Ionizing flux seen by knot C$^a$}{cm$^{-2}$\ s$^{-1}$} &
$(1\!-\!4) \times 10^{11}$ \\
Effective covering factor of knot C$^b$ & $\sim$5\% \\
\MYBOX{$Q$(H)$_{\rm AGN}^c$}{s$^{-1}$} & $(0.5\!-\!2.0) \times 10^{54}$ \\
\MYBOX{Observed recombination rate$^d$}{s$^{-1}$} & $2\cdot10^{52}$ \\
Fraction of $Q$(H)$_{\rm AGN}$ intercepted by gas$^e$ & $\sim$1\% \\
\MYBOX{Total AGN ionizing luminosity$^f$}{\mbox{$L_\odot$}} & $\sim2\cdot10^{10}$ \\
\MYBOX{Observed FIR luminosity (\mbox{$L_{\rm FIR}$})$^g$}{\mbox{$L_\odot$}} & $\simeq1.2\cdot10^{10}$ \\
\SKIP{1}
\hline
\end{tabular}
\vskip3pt
\def\NOTA#1#2{
\hbox{\vbox{\hbox{\hsize=0.030\hsize\vtop{\centerline{#1}}}}
\vbox{\hbox{\hsize=0.97\hsize\vtop{\baselineskip2pt #2}}}}}
\NOTA{$^{(1)}$} {Assuming a distance of 4 Mpc}
\NOTA{$^a$}{ Required to reproduce the $U$--sensitive and density sensitive
line ratios measured in Knot C
(cf. Sects.~\ref{detail_knc_model},~\ref{agn_budget})
}
\NOTA{$^b$}{ Ratio between the covering factor and the
$\simeq\!40\!\times\!40$ pc$^2$ geometric cross section of the knot
}
\vskip3pt
\NOTA{$^c$}{ Total number rate of AGN ionizing photons,
assuming emission from an optically thick disk
(cf. Sect.~\ref{agn_budget})
}
\NOTA{$^d$}{ From $L$(Br$\alpha$)=$2.5\,10^5$ \mbox{$L_\odot$}\ (M96)}
\NOTA{$^e$}{ Assuming that half of the observed Br$\alpha$
emission is produced by the starburst ring}
\NOTA{$^f$}{ Assuming an average photon energy of 50 eV}
\NOTA{$^g$}{ From Siebenmorgen et al. (\cite{siebenmorgen}) }
\end{flushleft}
\end{table}
The ionizing photon flux seen by
knot~C is constrained by the $U$--sensitive and density sensitive
line ratios which yield
$$ \Phi_{ion}=U\cdot\mbox{$n_{\rm H}$}\cdot c \simeq\ (1-4) \times 10^{11} \ \ {\rm cm}^{-2}\,{\rm s}^{-1} $$
This can be translated into $Q$(H)$_{\rm AGN}$, the total number rate
of ionizing photons from the AGN, once the angular distribution of the
UV ionizing radiation is known. Assuming that it arises from a
geometrically thin, optically thick accretion disk, one finds
$Q(\theta)\approx\cos\theta$ (e.g. Laor \& Netzer \cite{laor_netzer89})
and
$$ Q({\rm H})_{\rm AGN} = 2\pi\,d^2\; \Phi_{ion} \ \simeq
\ {(0.5-2.0) \times 10^{54}\over\cos^2 i} \ \ {\rm s}^{-1}$$
where the projected distance of knot C from the nucleus is
15.5\arcsec\ ($\simeq$300 pc), i.e. its true distance is
$d$=300/$\cos i$ pc,
with $i$ the inclination angle of the cone relative to
the plane of the sky. Detailed kinematical studies indicate
$|i|\!\le\!40^\circ$ (Elmouttie et al. \cite{elmouttie}).
The AGN luminosity in the ionizing continuum is therefore
$$ L_{ion} = Q({\rm H})_{\rm AGN} <\!h\nu_{ion}\!>
\ \simeq\ {(1\!-\!4) \times 10^{10}\over\cos^2 i} \ \mbox{$L_\odot$} $$
where $<\!h\nu_{ion}\!>$ is the average photon energy which is here
assumed to be $\simeq$50 eV.
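For concreteness, the chain of estimates above can be checked numerically.
The following sketch (ours, with rounded constants, the Model \#1 parameters
$U\!=\!3.5\,10^{-3}$ and $n_{\rm H}\!=\!2000$ cm$^{-3}$, and $\cos i$ set
to 1) reproduces the quoted orders of magnitude:

```python
# Numerical check of the AGN photon budget (cos i set to 1 for simplicity).
# Inputs are the Model #1 parameters quoted in the text; constants are rounded.
import math

PC = 3.086e18      # cm per parsec
C = 2.998e10       # speed of light [cm/s]
EV = 1.602e-12     # erg per eV
L_SUN = 3.826e33   # erg/s

U = 3.5e-3                      # ionization parameter (Model #1)
n_H = 2000.0                    # gas density [cm^-3]
phi_ion = U * n_H * C           # ionizing photon flux at knot C [cm^-2 s^-1]

d = 300 * PC                    # distance of knot C for cos i = 1 [cm]
Q_H = 2 * math.pi * d**2 * phi_ion   # total ionizing photon rate [s^-1]

mean_hnu = 50 * EV              # assumed mean ionizing photon energy
L_ion = Q_H * mean_hnu / L_SUN  # ionizing luminosity [L_sun]

print(f"Phi_ion ~ {phi_ion:.1e} cm^-2 s^-1")   # within the quoted (1-4)e11
print(f"Q(H)    ~ {Q_H:.1e} s^-1")             # within the quoted (0.5-2)e54
print(f"L_ion   ~ {L_ion:.1e} L_sun")          # within the quoted (1-4)e10
```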
The ionizing luminosity is therefore very large but
compatible, within the errors, with
the observed FIR luminosity
$\mbox{$L_{\rm FIR}$}\!\simeq\!1.2\,10^{10}$ \mbox{$L_\odot$}\
(Siebenmorgen et al. \cite{siebenmorgen}). This
implies that the AGN emits most of its energy in the ionizing continuum
or, equivalently, that the AGN intrinsic spectrum has a prominent
peak or bump in the ionizing UV, and much weaker emission at lower energies.
This is in good agreement with computed models of accretion disks
which also predict that low luminosity AGNs, such as Circinus, should
be characterized by a quite hard ionizing continuum (cf.
Laor \cite{laor90}, Netzer et al. \cite{netzer92}).
The global properties of the AGN are summarized in Table~\ref{tab_globalAGN}
which also includes a comparison between $Q({\rm H})_{\rm AGN}$
and the observed recombination
rate, based on ISO observations of the Br$\alpha$ H--recombination line
at 4.05 $\mu$m (cf. \cite{M96}). The difference is remarkably large,
with only $\simeq$1\% of the AGN ionizing photons being accounted
for by emission from ``normal'' ionized gas. This indicates
that the bulk of the Lyman continuum radiation
either goes into ionizing regions which are obscured even at 4 $\mu$m
(i.e. \mbox{${A_{\rm V}}$}$>$50 mag), or is directly absorbed by grains in dusty clouds
lying very close to the AGN.
\subsection{ Iron depletion and the [FeVII]/[FeII] problem }
\label{iron}
The observed
[FeVII]\L6087/[FeII]\L16435$\ga$1 ratio cannot be
explained using the same iron gas phase abundance in the HeIII
Str\"omgren sphere, where FeVII is formed, and in the partially
ionized region, where iron is predominantly FeII due to the
rapid charge exchange recombination reactions with neutral hydrogen.
It should be noted that this is a fundamental problem unrelated to the
details of the photoionization models and primarily reflects
the fact that the [FeVII] line has a
very small collision strength ($\Upsilon/\omega_1\!\simeq\!0.1$)
which is a factor of about 10 lower than
that of the [FeII] transition.
A Fe$^{+6}$/Fe$^+\!\ga\!2$
integrated relative abundance is thus required to reproduce the observed
line ratio. This number is incompatible with the relative sizes of
the HeIII and partially ionized
regions which are constrained by e.g. the HeII and [OI] lines.
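The collision-strength argument can be made semi-quantitative with a toy
two-level emissivity estimate. The sketch below is purely illustrative: the
zone temperatures, the [FeII] collision strength, and the assumption of
comparable electron densities (with both lines below their critical
densities) are our guesses, not outputs of the models:

```python
# Toy two-level collisional emissivity, eps per ion proportional to
# T^-0.5 * (Upsilon/omega_1) * exp(-E/kT) * E_photon, used to compare
# [FeVII]6087 and [FeII]1.64um. Illustrative assumptions (NOT from the
# models): T and Upsilon/omega values, equal electron densities, and
# lines well below their critical densities.
import math

K_B = 8.617e-5  # Boltzmann constant [eV/K]

def emissivity(T, ups_over_omega, e_line_ev):
    """Relative emissivity per ion of a collisionally excited line."""
    return (ups_over_omega * math.exp(-e_line_ev / (K_B * T))
            * e_line_ev / math.sqrt(T))

# [FeVII]6087 in the fully ionized zone; Upsilon/omega_1 ~ 0.1 (see text)
eps_fe7 = emissivity(T=15000, ups_over_omega=0.1, e_line_ev=2.04)
# [FeII]1.64um in the partially ionized zone; Upsilon/omega_1 ~ 1 assumed
eps_fe2 = emissivity(T=8000, ups_over_omega=1.0, e_line_ev=0.75)

# Per ion, [FeVII] is much weaker; an observed [FeVII]/[FeII] >~ 1 thus
# requires an integrated Fe+6/Fe+ abundance ratio of order a few or more.
print(f"per-ion FeVII/FeII emissivity ratio ~ {eps_fe7 / eps_fe2:.2f}")
```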
It is also interesting to note that this problem
is exacerbated even further if the shock models
of Dopita \& Sutherland (\cite{dopita96}) are adopted
because these predict Fe$^{+6}$/Fe$^+$ ratios much smaller than
the photoionization models described above.
\begin{figure}
\centerline{\resizebox{\hsize}{!}{\includegraphics{nitrogen.ps}}}
\caption{
Effect of varying $\delta$(N), the rate coefficient of N--H charge exchange,
on the [NII]/[NI] and [NI]/\mbox{H$\beta$}\ ratios predicted by the
photoionization models discussed
in Sect.~\ref{detail_knc_model}.
The solid line refers to model \#1 while the dashed curve
is for model \#2 (cf. Table~\ref{tab_modelKNC}), and the dashed
area shows the value (with errors) measured in knot C.
The parameter $\delta_0$(N)
is the standard rate coefficient used by Cloudy.
}
\label{Nitrogen}
\end{figure}
A possible solution could be to advocate that the [FeVII] collision
strengths are underestimated by a factor of $\sim$10 which would
reconcile the observed [FeVII] and [FeII] intensities with a
low (2\% of solar) iron abundance (cf. Figs.~\ref{figZ}, \ref{figZnoch}).
However, although the collision strengths of coronal Fe lines are known
to be very uncertain, and vary by factors $>$10 depending on whether
resonances are included in the computation
(cf. Sect. 5 of Oliva \cite{oliva_china97}),
the available collision strengths for [FeVII]
increase by only 20\% between the old DW computations of Nussbaumer \&
Storey (\cite{Fe7-DW}) and the newer R--MAT (i.e. including resonances)
values of Keenan \& Norrington (\cite{Fe7-RMAT}).
More detailed modelling of the [FeVII] lines and spectroscopic studies
of nearby astrophysical laboratories (e.g. high excitation planetary
nebulae) are required to clarify this issue.
The alternative possibility is to assume that the iron depletion is
much larger in the partially than in the fully ionized region,
as already suggested by Netzer \& Laor (\cite{netzer_laor93})
and Ferguson et al. (\cite{ferguson97}). However, we
could not find any simple explanation for such a stratification
in knot C which lies far from the AGN and receives a relatively
modest flux of hard UV photons. Therefore, Fe--bearing dust cannot be
photo--evaporated and the only mechanism to destroy grains is sputtering.
The slow shock produced by the ionization front
which propagated outward when the AGN turned on was too
slow ($\le$40 km/s) to effectively return Fe to the gas phase
(e.g. Draine \& McKee \cite{drainemckee}).
Faster shocks ($\ge$100 km/s) are a natural and efficient source of
sputtering but cannot explain the observations because they are
expected, and observed, to emit prominent [FeII] lines from the
dust--free recombining gas.
A possibility to overcome this problem is a combination of
shocks and photoionization where e.g. the dust--free gas processed by the
shock is kept fully ionized by the AGN radiation. However, this
situation is short lived because, after a few thousand years, the gas piled
up behind the shock will eventually reach a
column density high enough to become radiation bounded, and
shield the recombining gas which will therefore show up in [FeII].
\subsection{ The [NII]/[NI] dilemma }
\label{NII_NI}
The photoionization models of knot C systematically overestimate
by large factors ($>$6) the strength of [NI] relative to [NII]
(cf. Table~\ref{tab_modelKNC} and Fig.~\ref{figZ}).
Although [NI]\LL5200 has a quite low critical density ($\simeq$1500
cm$^{-3}$), this error cannot be attributed to the presence
of higher density clouds because these would also affect the [SII]
density sensitive ratio. In other words, multi-density models
which correctly reproduce the high [NII]/[NI] ratio inevitably predict
too large [SII]\L6731/\L6716 ratios. Also, we can exclude observational
errors because, in knot C, the [NI] doublet
has an equivalent width of about 2.5 \AA\ and is only marginally affected
by blending with neighbouring stellar absorption lines.
A possible solution is to argue that the
rate coefficients for N--H charge exchange are overestimated,
as already suggested by e.g.
Ferland \& Netzer (\cite{ferland_netzer}).
In the partially ionized region, NII is mostly neutralized via
charge exchanges with H$^0$, and adopting lower charge
exchange efficiencies yields larger [NII]/[NI] ratios.
This is evident in Fig.~\ref{Nitrogen} where this ratio is plotted
as a function of the assumed value of $\delta$(N), the charge exchange rate
coefficient.
Assuming a N--H charge exchange a factor of $\sim$30 lower than
presently adopted yields the correct [NII]/[NI] ratio.
Noticeably, the problem we find here is exactly the opposite of that
reported by Stasinska (\cite{stasinska84}) whose models systematically
underpredicted the [NI]/[NII] ratio in objects with low [OI]/[OII].
\subsection{ Modelling the nuclear spectrum: dusty, dust--free and
diffuse components. }
\label{model_nuc}
A puzzling aspect of the line emission from regions very close to the
nucleus
is the simultaneous presence of high (e.g. [FeXI]) and low
(e.g. [SII], [OI])
ionization species. More specifically, the images of \cite{M94} clearly show
that [SII] peaks at a distance of only $\simeq$0.5\arcsec\ or 10 pc from the
nucleus (cf. Fig. 9 of \cite{M94}).
This result is incompatible with the standard idea according to which
low excitation lines are produced in clouds with low ionization parameters.
Numerically, combining
the ionizing continuum inferred from the spectrum of knot C with
$n$=1.2$\,10^4$ cm$^{-3}$, the highest density
compatible with the FIR [NeV] doublet (Table~\ref{diagnostic}),
yields $U\!\simeq\!0.5$
at $r$=10 pc from the AGN, an ionization parameter
far too large for the production of low excitation species.
Not surprisingly therefore, all the models so far developed for the
high excitation lines (\cite{O94}, \cite{M96}, \cite{B97})
predict that [SII] should peak much farther out and have a much lower
surface brightness than that observed.
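The value $U\!\simeq\!0.5$ quoted above follows directly from the definition
of $U$ in note $d$ of Table~\ref{tab_modelKNC}. A minimal sketch, taking
$Q({\rm H})_{\rm AGN}$ at the upper end of the range derived in
Sect.~\ref{agn_budget}:

```python
# U = Q(H) / (4 pi r^2 n_H c), evaluated at r = 10 pc from the AGN.
# Q(H) is taken at the upper end of the (0.5-2)e54 s^-1 range derived
# in the text; n_H is the highest density allowed by the FIR [NeV] doublet.
import math

PC = 3.086e18   # cm per parsec
C = 2.998e10    # speed of light [cm/s]

Q_H = 2e54      # ionizing photon rate [s^-1] (upper end of range, assumed)
r = 10 * PC     # distance from the AGN [cm]
n_H = 1.2e4     # gas density [cm^-3]

U = Q_H / (4 * math.pi * r**2 * n_H * C)
print(f"U(10 pc) ~ {U:.2f}")   # of order 0.5, far too high for [SII] or [OI]
```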
\begin{figure}
\centerline{\resizebox{8.8cm}{!}{\includegraphics{modelnuc.ps}}}
\caption{
Ionization structure and spatial variation of the line emission
from two nuclear clouds exposed to the AGN
continuum of ``Model \#1''
(cf. Fig.~\ref{agn_cont})
and with identical physical parameters (\mbox{$n_{\rm H}$}=$10^4$ cm$^{-3}$ and
filling factor $f$=0.025)
except for dust. The model without dust emits most of the coronal
lines, while the dusty cloud
accounts for the prominent low excitation lines observed close to the nucleus.
However, both models fail to produce enough [OIV] and
other intermediate ionization lines which therefore require a third,
more diffuse component.
See Sect.~\ref{model_nuc} for details.
}
\label{modelnuc}
\end{figure}
A simple and indeed natural solution is to assume that
the low excitation lines are formed in dusty clouds. At these large
ionization parameters dust dominates the absorption of UV ionizing
photons and, therefore, quenches the HeIII and HII Str\"om\-gren spheres.
Consequently, the X--ray dominated partially ionized region
starts at much smaller radii, and is also slightly hotter
than in the dust free model (cf. Fig~\ref{modelnuc}).
Therefore, dusty clouds have a [SII] peak much closer to the nucleus
and a $>$10 times higher surface brightness than dust--free clouds.
It should be noted, however, that the total luminosity of low excitation
optical lines is similar in the two cases, fundamentally
because the available luminosity of soft X--rays is the same.
Although the combined emission of dusty and dust--free clouds can
account for the observed emission of low ionization and
coronal species, it falls short by large factors
in producing [OIV] and other relatively
low ionization species which form within the HeIII sphere.
This is an intrinsic limitation of ``compact models'' such as those
of Fig.~\ref{modelnuc}, and can be understood as follows.
Compact models are characterized by large ``ionization parameters''
(cf. Sect. 3 of Oliva \cite{oliva_china97} for a critical
discussion on this parameter)
and therefore have large fluxes of OIV ionizing photons
which keep oxygen in higher ionization stages (mostly OVI and OVII) at all
radii inside the HeIII Str\"omgren sphere. Outside the HeIII region,
on the contrary, oxygen cannot be ionized above OIII
because most of the OIII ionizing photons have already been absorbed by HeII.
Therefore, OIV can only exist in a very narrow range of radii, just
at the edge of the HeIII sphere, and its relative abundance is therefore
very low.
In practice, we found it impossible to construct a single model which
simultaneously produces a compact coronal line region, such as that
observed in [FeXI] (\cite{O94}), and which comes anywhere close to the
[OIV]\L25.9/[OIII]\L5007$\simeq$0.3 observed ratio.
We did indeed construct many thousands of randomly
generated dusty and dust--free models, and attempted an approach similar
to that used for knot C (Sect.~\ref{details_photion})
but, in no case, could we find a model
which satisfies these contradictory constraints.
It should also be noted that \cite{B97}
independently came to a similar conclusion.
The main conclusion therefore is that, regardless of the details
of the models, the nuclear spectrum and line spatial distribution
can only be modeled by adding
a third ``diffuse'' component (e.g. with a lower filling factor)
to the dusty and dust--free clouds discussed above and depicted in
Fig.~\ref{modelnuc}.
Given the large number of free parameters, we abandoned the idea of
using photoionization models to
constrain abundances and other physical properties of the gas.
We made some attempts to verify whether a mixture of clouds exposed to the same
continuum, and with the same abundances as Model \#1 of knot C
(Table~\ref{tab_modelKNC} and Fig.~\ref{agn_cont}) could
reasonably well reproduce the observed
properties of the nuclear spectrum. However, the results are not too
encouraging and, apart from the much improved [SII] surface brightness and
[OIV]/[OIII] ratio, the solutions are
not significantly better than those already discussed
by \cite{M96} and \cite{B97},
and are not therefore discussed here.
\subsection{ Modelling other extra--nuclear knots}
\label{model_others}
An analysis similar to that used for knot C was also applied to
the other extra--nuclear knots
using the more limited number of lines available in their spectra.
The results can be summarized as follows.
The abundances derived in the Seyfert-type knots A, B, D, G, F (cf.
Fig.~\ref{knotdiag}) are similar to those found in knot C
but affected by much larger errors because of the more limited numbers
of lines available for the analysis. In particular, the density
sensitive [ArIV] doublet and the $U$--sensitive
[ArV] line are not detected in any of these knots, and
the reddening correction for [OII]$\LL3727$ could be very uncertain
in the high extinction regions (cf. note $b$ of Table~\ref{tab_obs2}).
We also attempted to verify if the observed line ratios in these
knots could be explained as photoionization by the same AGN continuum
seen by knot C but could not find any satisfactory solution using
radiation bounded clouds exposed to the same
continuum. Adding matter bounded clouds alleviates
the problem (as already stressed by \cite{B96})
but requires an {\it ad hoc} choice of their photoelectric
opacities, i.e. the radius at which the ionization structure is cut.
In particular, explaining the drop of low excitation lines
between the adjacent knots C and D (cf. Sect.~\ref{spatial_line_distribution})
requires matter bounded clouds
cut at about 1.2$\times$ the HII Str\"omgren radius, the exact position
of the cut depending on the assumed shape of the AGN continuum.
Another parameter affecting the ratio of low--to--high
excitation lines (e.g. [OI]/[OIII]) is the iron gas phase abundance
which influences the cooling of the partially ionized region
(cf. Sect.~\ref{role_of_dust}).
If iron is more abundant in knot D, as indicated by its
stronger [FeVII] line emission
(cf. Table~\ref{tab_obs2} and Sect.~\ref{spatial_line_distribution}),
then [OI], [SII] and [NII] could be depressed by the increased [FeII]
cooling.
The abundances derived for the LINER--like knots (H, I) are
very uncertain ($\pm$1.3 dex at least).
Their spectra are not compatible with illumination
from the same continuum
seen by knot C but require a harder (i.e. more X rays relative to
13--80 eV photons) spectrum which could be in principle obtained
by filtering the AGN continuum through an absorber with a
carefully tuned photoelectric opacity.
Alternatively, the weak (in surface brightness) spectrum of these
low excitation knots could be explained by shock excitation, in which
case one expects [FeII]\L16435/\mbox{H$\beta$}$\simeq$1, i.e. a factor $>$10
larger than in the case of pure photoionization.
Finally, the oxygen lines in the highly reddened HII--like knots (E, L)
are too weak to allow any reliable abundance analysis.
\begin{figure}
\centerline{\resizebox{8.8cm}{!}{\includegraphics{figz.ps}}}
\caption{
Element abundances (in log of solar units) as derived from a
comparison between the observed line strengths in knot C, and the
predictions of a large
grid of randomly generated photoionization models.
The three columns refer to clouds with different gas density distributions,
and the ``good models'' are
those coming closer to reproducing the observed line ratios.
See text, Sects.~\ref{knc_abund} and \ref{details_photion} for details.
}
\label{figZ}
\end{figure}
\begin{figure}
\centerline{\resizebox{8.8cm}{!}{\includegraphics{figznoch.ps}}}
\caption{
Same as Fig.~\ref{figZ} but using photoionization models where
the rate coefficient for N--H charge exchange reactions were arbitrarily
set to zero.
Note that this provides a much better match for the [NII]/[NI] line ratio
while having only a small effect on the nitrogen abundance derived from
[NII] lines.
See text, Sects.~\ref{knc_abund} and \ref{details_photion} for details.
}
\label{figZnoch}
\end{figure}
\section{Discussion }
\subsection{Element abundances}
\label{knc_abund}
Deriving abundances of AGN clouds from photoionization models
has generally been considered to be unreliable
because the shape of the AGN ionizing continuum and the
density distribution of the emitting regions (clouds) are both
basically unknown.
The more typical approach therefore has been
to assume a metallicity and use photoionization models
to constrain the AGN spectrum and/or the gas density distribution,
or just to demonstrate that the gas is photoionized.
Although explicit statements concerning the nitrogen abundance are often found
in the literature (e.g. Storchi-Bergmann \& Pastoriza \cite{storchi89},
Simpson \& Ward \cite{simpson96}) these are based on a very
limited choice of photoionization model parameters and in most cases
find oxygen abundances close to solar, in disagreement with what is
derived here.
The only other piece of
work which covers a model parameter range comparable to that presented
here is that by Komossa \& Schulz (\cite{komossa97}) who
analyze a much more limited number of lines (e.g. they do not include
[ArIV,V]) in a large sample of Seyferts. They find that, on average,
oxygen is underabundant by a factor of $\sim$2 and that the N/O
ratio is only a factor of 1.5--2.0 above the solar value.
We believe, however, that the analysis presented here leads to more reliable
metallicity estimates: in spite of the above uncertainties,
reliable metallicities can indeed be derived from spectra including a large
enough number of lines, such as that of knot C.
Our method has been described in Sect.~\ref{details_photion}
and, in short, is based on the computation of a large number of
photoionization models with
a minimal personal bias and preconceived ideas on
the AGN ionizing spectrum and the gas density distribution.
In particular, we also considered mixed models with combinations
of density and radiation bounded clouds (\cite{B96})
as well as models with multiple density components.
We spanned a very wide range of model parameters which were varied
randomly, and selected the relatively few (about 200) good models which
came closest to reproducing the observed line ratios.
The main results are summarized in Figs.~\ref{figZ}, \ref{figZnoch}
which show the
distribution of element abundances required to reproduce the
observed line ratios.
The most remarkable feature is that the metal abundances
are quite well constrained in spite of the
very different assumptions made for the gas density
distribution and shape of the AGN continua.
In other words, models with very different abundances fail to match
the observed line ratios in knot C regardless of the AGN spectral shape
and/or gas density distribution assumed.
Another encouraging result is that lines from different ionization
stages yield similar abundances, which simply reflects the
fact that the models reproduce the observed line
ratios reasonably well. There are, however, remarkable exceptions, such as
the [NII]/[NI] and [FeII]/[FeVII] ratios which
are both predicted too high. Possible explanations for
these differences have been discussed above
(Sects.~\ref{iron}, \ref{NII_NI}). We stress here, however, that
the uncertainties on [NI] have little effect
on the derived nitrogen abundance
because N$^+$ is the most abundant ion within the partially ionized region.
Therefore, the best--fit N abundance decreases by only
a factor of $\simeq$1.3 once the N$^+$/N$^0$ ratio is increased to match
the observed [NII]/[NI] ratio (cf. Fig.~\ref{figZnoch}).
Note also that the He/H abundance is only poorly constrained by the models,
and although models with He/H$>$0.1 are somewhat favoured, no firm
conclusion about He overabundance can be drawn from the data.\\
\begin{table}
\caption{Metal abundances in knot C$^{(1)}$}
\label{tab_Z}
\def\SKIP#1{\noalign{\vskip#1pt} }
\begin{flushleft}
\begin{tabular}{lccccc}
\hline\hline\SKIP{2}
Element & 12+log(X/H)\rlap{$^{(2)}$} &
[X/H]\rlap{$^{(3)}$} & \hglue2pt\ &
\multicolumn{2}{c}{[X/O]\rlap{$^{(4)}$}} \\
& & & & obs & model\rlap{$^{(5)}$} \\
\SKIP{2} \hline \SKIP{1}
Nitrogen${^a}$ & 8.0 & $+0.0$ & & $+0.7$ & $+0.6$ \\
Oxygen & 8.2 & $-0.7$ & & -- & -- \\
Neon & 7.3 & $-0.8$ & & $-0.1$ & $-0.2$ \\
Sulphur & 6.9 & $-0.3$ & & $+0.4$ & \ n.i.\rlap{$^b$} \\
Argon & 6.4 & $-0.2$ & & $+0.5$ & \ n.i.\rlap{$^b$} \\
\SKIP{7}
Stellar metallicity\rlap{$^c$} & -- & $-0.7$\rlap{$^d$} & & & \\
\SKIP{2} \hline
\SKIP{2}
\end{tabular}
\def\NOTA#1#2{
\hbox{\vbox{\hbox{\hsize=0.030\hsize\vtop{\centerline{#1}}}}
\vbox{\hbox{\hsize=0.97\hsize\vtop{\baselineskip2pt #2}}}}}
\NOTA{$^{(1)}$ }{ All values are $\pm$0.2 dex. Iron is not included
because its relative abundance is very uncertain (cf. Sect.~\ref{iron})}
\NOTA{$^{(2)}$ }{ Absolute abundance by number}
\NOTA{$^{(3)}$ }{ Log abundance by number relative to
H=12.0, N=7.97, O=8.87, Ne=8.07, S=7.21, Ar=6.60, Fe=7.51,
the adopted set of solar abundances}
\NOTA{$^{(4)}$ }{ [X/O]=log(X/O)-log(X/O)$_\odot$ }
\NOTA{$^{(5)}$ }{ Predicted for a $\simeq\!3\,10^8$ yr old starburst
(Fig.~4 of Matteucci \& Padovani \cite{matteucci93}), see
Sect.~\ref{N_and_starburst} for details }
\vskip1pt
\NOTA{ $^a$}{ From Fig.~\ref{figZnoch}}
\NOTA{ $^b$}{ Element not included in the chemical evolution model}
\NOTA{ $^c$}{ Derived from CO stellar absorption features
(cf. Sect.~\ref{stellar_abundance}) }
\NOTA{ $^d$}{ $\pm$0.3 dex }
\end{flushleft}
\end{table}
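The bookkeeping behind the table's notation (notes 2--4) amounts to simple differences of logarithmic abundances. A minimal sketch, using the adopted solar set from note (3) and the knot C values from the table (function names are ours):

```python
# Adopted solar abundances, 12+log(X/H) (note 3 of the table)
solar = {"N": 7.97, "O": 8.87, "Ne": 8.07, "S": 7.21, "Ar": 6.60}
# Knot C absolute abundances, 12+log(X/H) (column 2)
knot_c = {"N": 8.0, "O": 8.2, "Ne": 7.3, "S": 6.9, "Ar": 6.4}

def x_over_h(elem):
    """[X/H] = log(X/H) - log(X/H)_sun (column 3)."""
    return knot_c[elem] - solar[elem]

def x_over_o(elem):
    """[X/O] = [X/H] - [O/H] (column 4)."""
    return x_over_h(elem) - x_over_h("O")

print(round(x_over_h("O"), 1))  # -0.7
print(round(x_over_o("N"), 1))  # 0.7
```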
The derived abundances are summarized in Table~\ref{tab_Z}
where the most striking result is the large overabundance
of nitrogen relative to oxygen, +0.7 dex above the solar value, whose
implications are discussed below.
\subsection{ Comparison with other abundance estimates}
\label{stellar_abundance}
An independent estimate of metallicity can be derived from the measured
equivalent widths of CO stellar absorption features, using
the new metallicity scale proposed and successfully applied
to young LMC/SMC
clusters by Oliva \& Origlia (\cite{oliva_origlia98}).
In short, the method is based on
the strength of the CO(6,3) band--head at 1.62 $\mu$m whose
behaviour with metallicity is modelled using
synthetic spectra of red supergiants.
The equivalent width of the stellar CO lines from the central
100$\times$100 pc$^2$ of Circinus are reported in
Table 2 of Oliva et al. (\cite{oliva95}) and
yield an average metallicity of $-0.7\!\pm\!0.3$,
a value remarkably close to the
oxygen abundance derived above (cf. Table~\ref{tab_Z}).
\subsection{ Nitrogen overabundance and starburst activity }
\label{N_and_starburst}
The nitrogen overabundance is of particular interest in view of its possible
relationship with the (circum)nuclear starburst and
N--enrichment from material processed through the CNO cycle.
According to chemical evolutionary models of starburst events,
the N/O relative abundance
reaches a maximum value of [N/O]$\simeq\!+0.6$
(i.e. 4 times the solar value) at about $3\,10^8$ yr and remains
roughly constant for several $\times10^8$ years
(cf. Fig.~4 of Matteucci \& Padovani \cite{matteucci93}).
The nitrogen overabundance mostly reflects the effect of the winds from
He burning red supergiants whose surface
composition is strongly N--enriched by gas
dredged--up from the shell where hydrogen was burned through the CNO cycle.
The amount and temporal evolution of the N/O abundance depends on
model details, e.g. the shape of the IMF and the duration of the starburst,
as well as on poorly known
parameters such as the efficiency of the dredge--up
and the contribution of primary
N production by massive stars (e.g. Matteucci \cite{matteucci86}).
It is however encouraging to find that the observed N/O abundance
(Table~\ref{tab_Z})
is very close to that predicted at a time which is compatible with
the age of the starburst in Circinus (cf. Fig. 9 of Maiolino et al.
\cite{maiolino}).
It should also be noted that the observed absolute abundances
are about an order of magnitude lower than the model-predicted values,
but this
can be readily explained if the starburst transformed only
$\simeq\!10\%$ of the available gas into stars, in which case the
chemical enrichment was diluted by a similar factor.
This hypothesis is in good agreement with the fact that Circinus is
a very gas rich galaxy (e.g. Freeman et al. \cite{freeman}).
In short, the observed N/O overabundance is fully compatible with
what is expected for a quite old (several $\times10^8$ yr) starburst.
The [NII]/\mbox{H$\alpha$}\ and other line ratios measured in the
cone of Circinus are similar
to those observed in many other Sy2's and several
observational results indicate that relatively old starburst events
are common in type 2 Seyferts (e.g. Maiolino et al. \cite{maiolino95}).
This may therefore indicate that nitrogen is typically overabundant in
Sy2's due to
enrichment by the starburst associated with the AGNs.
This tantalizing conclusion should, however, be verified by detailed
spectroscopic studies and analyses of a sufficiently large number
of objects.
\section{Conclusions}
By modelling the rich spectrum of an extranuclear cloud in the ionization
cone of the Circinus galaxy
we have found that metal abundances are remarkably
well constrained, regardless of the assumptions made on the shape
of the ionizing continuum and gas distribution.
This result may open a new and interesting field of research: using
photoionization models to derive metallicities in AGNs,
which could in turn be related to the star formation
activity in the recent past, i.e. old nuclear starbursts.
In the case of Circinus, the large N/O overabundance found here is fully
compatible with what is expected from chemical evolution models of starbursts
(Sect.~\ref{N_and_starburst}).
Much less encouraging are the results on the
AGN ionizing continuum whose shape cannot be constrained by
the observed line ratios but depends on the
assumed gas density distribution. Within the limits of the model
parameters spanned here we somewhat favour an AGN spectrum with
a ``UV--bump'' but cannot exclude that, with a different and better
tuned combination of density and radiation bounded clouds,
one could achieve similarly good fits with a power law AGN continuum
(Sect.~\ref{detail_knc_model}).
We also found that photoionization models cannot reproduce the observed
[FeVII]/[FeII] and [NII]/[NI] ratios and argued that these may reflect
errors in the collision strengths for [FeVII]
and rate coefficient of N--H charge exchange reactions.
It should be noted, however, that the [NII]/[NI] problem does not
significantly influence the derived nitrogen abundance.
Finally, our data strongly indicate that shocks cannot play any important
role in exciting the gas that produces the observed line emission.
\begin{acknowledgements}
We thank Roberto Maiolino and Francesca Matteucci for useful
discussions, Luis Ho and Thaisa Storchi Bergman for providing
information on template stellar spectra, Gary Ferland for making
Cloudy available to the community, and an anonymous referee
for many comments and criticisms which have been fundamental for improving
the quality of the paper.
\end{acknowledgements}
\section{Introduction}
A primary goal of relativistic heavy-ion physics is to extract the
equation-of-state (EOS) of nuclear matter. The EOS is the relationship
between density, temperature, and pressure for nuclear matter
\cite{ck86,cs86,sg86,sto86}. Various experiments
\cite{bea85,bea86,do85,dos85,dos86,gus84,gut89,gut90,rit85,lei93,dos87} have
been undertaken in an effort to extract the EOS. In central collisions, the
overlap regions of the target and the projectile form a compression zone of
high density. Particles in this zone experience a deflection or outward flow
from the interaction region. Measurements \cite{dos86,gus84,gut89,dos87} with
the Plastic Ball-Wall detector \cite{bab82} revealed three collective flow
effects:~~the {\em bounce-off} of the spectators, the {\em side-splash} of the
participants, and the {\em squeeze-out} of the participants in directions
normal to the reaction plane. Measurements of the collective momentum flow in
the collisions, including the flow angle \cite{gus84,rit85} and the average
transverse momentum in the reaction plane \cite{bea86,do85,dos86,dos87,kea88},
were studied with a view toward extracting the nuclear EOS.
Zhang {\em et
al.} \cite{au93} investigated the collective flow of neutrons correlated to
the azimuthal distributions for semicentral Au-Au collisions at beam energies
of 150, 250, 400, and 650 MeV per nucleon. Elaasar {\em et al}.
\cite{nb93,ela94} reported the first azimuthal distributions of
triple-differential cross sections for neutrons emitted in Nb-Nb collisions
at 400A MeV and examined the maximum azimuthal anisotropy ratios of
these neutron cross sections as a probe of the nuclear equation
of state by comparison with Boltzmann-Uehling-Uhlenbeck (BUU)
calculations for {\it free} neutrons with a momentum-dependent interaction.
Madey {\it et al.}\cite{mad93} extended this comparison to 400A MeV Au-Au
collisions and found that the maximum azimuthal anisotropy ratio does not depend
on the mass of the colliding nuclei.
Welke {\em et al.} \cite{wel88} defined the
maximum azimuthal anisotropy ratio $r(\theta)$ as a ratio of the maximum
triple-differential cross section to the minimum triple-differential cross
section at each polar angle.
In this paper, we report measurements of triple-differential cross sections of
neutrons emitted in high-multiplicity La-La collisions at 250 and 400A MeV
(and Nb-Nb collisions at 400A MeV) as a function of the azimuthal angle with
respect to the reaction plane for several polar angles. From these data
and our prior data \cite{au93} for Au-Au collisions at 400A MeV,
we examined the sensitivity of the {\em maximum
azimuthal anisotropy ratio} $r(\theta)$ to the mass of three
systems [viz., Nb-Nb, La-La, and Au-Au] and to the beam energy
from 250 to 400A MeV for the La-La system;
we extracted the flow
\cite{dos86} of neutrons from the slope at mid-rapidity of the curve of the
average in-plane momentum versus the neutron center-of-mass rapidity; and
we observed the {\it out-of-plane squeeze-out} of neutrons in
three systems [viz.,
250 and 400A MeV La-La, and 400A MeV Nb-Nb].
\section{Experimental Apparatus}
Based on experience gained from experiment 848H in 1988 \cite{au93,nb93}, an
extension of experiment 848H was performed with the Bevalac at the Lawrence
Berkeley Laboratory (LBL) with the improved experimental arrangement shown in
Fig.~\ref{fig:expt}.
This experiment was improved over the original version by
adding the capability of identifying charged-particles with $Z=1$, $Z=2$, and
$Z > 2$. With this improvement, data were obtained for La-La collisions at
250 and 400A MeV and for Nb-Nb collisions at 400A MeV.
A beam telescope, consisting of two scintillators S$_{1}$ and S$_{2}$, was
used to select and count valid projectile beam ions and to provide a fiducial
timing signal for the measurement of the flight times to the time-of-flight
(TOF) wall detectors and to the neutron detectors. Scintillator S$_{1}$ was
positioned 13.04~m upstream of the target in the beam. Scintillator
S$_{2}$ was placed immediately before the target. A total of 16 mean-timed
neutron detectors \cite{bm80,cam79,mad78,mad83} were placed at angles from
3$^{\circ}$ to 80$^{\circ}$ with flight paths ranging from 6 to 8.4 m.
Table~\ref{tab:neuss} shows the width, polar angle, and flight path of each
neutron detector used in this experiment. The width of the neutron detector
and the flight path at each angle were selected to provide approximately equal
counting rates and energy resolutions for the highest energy neutrons at each
angle. To avoid detecting charged particles in neutron detectors, 9.5-mm
thick anticoincidence detectors were placed immediately in front of the
neutron detectors. The TOF of each detected neutron was determined by
measuring the time difference between the detection of a neutron in one of the
neutron detectors and the detection of a projectile ion in the beam telescope.
The azimuthal angle of the reaction plane was determined from the
information given by a time-of-flight (TOF) wall, which consisted of 184
plastic scintillators, each with a thickness of 9.5~mm. The overall dimensions
of the TOF wall were 5-m wide and 4.5-m high. The plastic wall was shaped as
an arc around the target, and covered angles of $\pm 37^{\circ}$ relative to
the beam line. The flight paths of the TOF wall detectors varied from about
4.0 to 5.0~m.
To assess the background, steel shadow shields were positioned approximately
half-way between the TOF wall and the neutron detectors in four different
configurations (see Table~\ref{tab:neuss}). The resulting spectra were then
subtracted from the ones without shadow shields. We used a method introduced
by Zhang {\em et al.} \cite{au93} to correct for an over-estimation of
backgrounds by this shadow-shield subtraction technique.
\section{Data Analysis}
In order to suppress the background arising from collisions of beam ions
and air molecules between the target and the TOF wall, an appropriate
multiplicity-cut had to be made. Figure~\ref{fig:mult} shows the
charged-particle multiplicity spectra with (solid line)
and without (dashed line) the target for La-La
and Nb-Nb collisions at 400A MeV and La-La collisions at 250A MeV. A proper
multiplicity-cut was chosen by comparing the charged-particle multiplicity
spectra with and without the target in place. To make meaningful comparisons
of the data for La-La and Nb-Nb collisions at 400A MeV and La-La collisions at
250A MeV, the multiplicity cuts chosen should correspond to approximately the
same range in normalized impact parameter while keeping the background
contaminations (from collisions of beam ions with the air or material in the
beam telescope) below 5\%. For La-La collisions at 400 and 250A MeV, the
appropriate charged-particle multiplicity cuts were found to be 34 and 30,
respectively, whereas a multiplicity cut of 25 was determined for
Nb-Nb collisions at 400A MeV.
From the relationship between the total number of interactions expected from
the geometric cross section of the system and with the well-known assumption
\cite{nb93,cav90} of a monotonic correlation between the impact
parameter and the fragment multiplicity, the maximum impact parameter
normalized to the radii of the projectile and target, $b_{max}/(R_{p}+R_{t})$,
was found to be about 0.5 for the above multiplicity cuts. The method of
extracting the maximum-normalized impact parameter $b_{max}/(R_{p}+R_{t})$ can
be found in reference~\cite{ela94}.
\subsection{Determination of Reaction Plane} \label{sec:rplane}
To study flow, it is necessary to determine the reaction plane, ({\em
i.e.},~~the $xz$-plane), where {\^x} is in the direction of the impact
parameter of the collision, and {\^z} is in beam direction. The reaction plane
was determined by a modified version \cite{fzg87} of the transverse momentum
method \cite{do85}. The charged particles emitted from the collisions between
beam ions and the target were detected by the TOF wall detectors.
The azimuthal angle $\Phi_R$ of the reaction plane was estimated from the vector
$\vec{Q}^\nu$, the weighted sum of the transverse velocity vectors $\vec{V}^t$
of all charged fragments detected in each collision:
\begin{equation}
\vec{Q}^{\nu}~=~\sum_{i}
\nu_{i} \left(
\frac{\vec{V}^{t}}{|\vec{V}^{t}|} \right)_{i}
\label{eq:qnu}
\end{equation}
where the weight $\nu_i$ depends on the pulse height of the $i$th charged
particle. The values of the $\nu_i$ are positive for $\alpha \geq \alpha_0$
and zero for $\alpha < \alpha_0$, where the quantity
$\alpha \equiv (Y/Y_{p})_{CM}$ is the rapidity
$Y$ normalized to the projectile rapidity $Y_{p}$ in the center-of-mass system,
and $\alpha_{0}$ is a threshold rapidity. The rapidity $Y$ of a charged
particle with an identified $Z$ was calculated under the assumption that its
mass is that of the proton, $^4$He, and $^7$Li, respectively, for
$Z = 1$, $Z = 2$, and $Z > 2$. The dispersion angle
$\Delta\phi_R$, the angle between the {\em estimated} and {\em true}
reaction planes, is defined \cite{zha90} as
\begin{equation}
\cos \Delta \phi_{R}~=~\frac{\langle V_{x}'\rangle}{\langle V_{x}\rangle}
\label{eq:cos}
\end{equation}
\noindent
where $\langle V_{x}'\rangle$ and $\langle V_{x}\rangle$ are projections of
unit vectors onto the {\em estimated} and the {\em true} reaction plane,
respectively. All the results presented in this report are corrected
for this dispersion.
The smaller the dispersion angle, the closer the {\em estimated} reaction
plane is to the {\em true} one.
By averaging the projection of the unit
vector $\left( \frac{\vec{V}^{t}}{|\vec{V}^{t}|} \right)_{i}$ onto the {\em
true} reaction plane over the total events, one can obtain
\begin{equation}
\langle V_{x}\rangle~=~\left[ \frac{\overline{Q^{2}-W}}
{\overline{W(W-1)}}
\right]^{\frac{1}{2}};~~~~\vec{Q}~=~\sum_{i}
\omega_{i} \left(
\frac{\vec{V}^{t}}{|\vec{V}^{t}|} \right)_{i},~~~~W = \sum_{i} \omega_{i},
\label{eq:vx}
\end{equation}
\noindent
where $i$ is the particle index.
The weight $\omega_{i}$ in the above
equation is equal to unity for $\alpha \geq \alpha_{0}$ and is equal to zero
for $\alpha < \alpha_{0}$.
The average of the normalized in-plane vector of all charged particles
projected onto the {\em estimated} reaction plane, $\langle V_{x}'\rangle$,
can be estimated by
\begin{equation}
\langle V_{x}'\rangle~=~\overline{\left( \frac{\vec{V}^{t}}{|\vec{V}^{t}|}
\right)_{i} \cdot
\frac{\vec{Q}^{\nu}_{i}}{|\vec{Q}^{\nu}_{i}|}};~~~~\vec{Q^{\nu}_{i}}~=~\sum_{j
\neq i} \nu_{j} \left(
\frac{\vec{V}^{t}}{|\vec{V}^{t}|} \right)_{j}
\label{eq:vx1}
\end{equation}
\noindent
where the weight $\nu_{j}$ depends on the pulse-height of the
charged-particles.
The weighting factors $\nu_j$ were chosen to minimize the dispersion angle
$\Delta\phi_R$. For different values of $\nu_j$, the dispersion angles for
La-La
collisions at 250 and 400A MeV and Nb-Nb collisions at 400A MeV exhibit a
broad minimum (around $\alpha_0 = 0.2$)
as a function of the threshold rapidity $\alpha_{0}$; to
minimize the dispersion, we chose $\alpha_{0} = 0.2$.
Figure~\ref{fig:wadc} shows a typical pulse-height spectrum for one of the 184
TOF wall detectors for La-La collisions at 400A MeV. The peak labelled $Z=1$
is from hydrogen isotopes; that labelled $Z=2$ is from helium isotopes.
The third and the following peaks labelled $Z>2$ represent charged
particles with $Z>2$. Based on these peaks, three different sets of
weights were tuned to estimate the reaction plane closest to the true reaction
plane. Figure~\ref{fig:wt} shows the dispersion angles obtained at
different sets of weights for La-La collisions at 400A MeV. The uncertainties
in this figure were not calculated because our main interest was to minimize
the values of the dispersion angles. In this work, ten sets of weights (as
shown in Fig.~\ref{fig:wt}) were used to calculate the dispersion angles.
The weights 1, 2, and 2.5 for $Z=1,
Z=2,$ and $ Z>2$ isotopes, respectively, yielded a minimum value for the
dispersion $\Delta \phi_{R}$ in $\phi_{R}$. With these weights, the
dispersion angles from equation~(\ref{eq:cos}) were found to be $31.3^{\circ}
\pm 2.4^{\circ}$ and $33.8^{\circ} \pm 2.3^{\circ}$ for La-La collisions at
400 and 250A MeV, respectively, and $39.8^{\circ} \pm 3.0^{\circ}$ for
Nb-Nb collisions at 400A MeV. Without the charged-particle identification
($i.e.$,~~$\nu_{j}=1$), equation~(\ref{eq:vx1}) corresponds to that in
references~\cite{au93,nb93}; the dispersion angles were about 3$^{\circ}$
larger in these three sets of data. After the azimuthal angle $\phi_{R}$ of
the reaction plane was determined, the neutron azimuthal angle relative to the
reaction plane ($\phi - \phi_{R}$) was obtained for each event.
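A minimal sketch of the reaction-plane estimate of Eq.~(\ref{eq:qnu}), assuming per-particle transverse velocities and a charge tag from the TOF-wall pulse heights; the array layout and function name are ours, not part of the experiment's analysis code:

```python
import numpy as np

def reaction_plane_phi(v_t, z, alpha, alpha0=0.2):
    """Azimuth of the estimated reaction plane:
    Q^nu = sum_i nu_i (V_t/|V_t|)_i, with nu_i = 0 below the rapidity
    threshold alpha0 and otherwise set by the particle charge
    (1, 2, 2.5 for Z=1, Z=2, Z>2, the weight set that minimized the
    dispersion for La-La at 400A MeV)."""
    v_t = np.asarray(v_t, dtype=float)            # (N, 2) transverse velocities
    nu = np.where(z == 1, 1.0, np.where(z == 2, 2.0, 2.5))
    nu = np.where(np.asarray(alpha) >= alpha0, nu, 0.0)
    unit = v_t / np.linalg.norm(v_t, axis=1, keepdims=True)
    q = (nu[:, None] * unit).sum(axis=0)          # weighted sum of unit vectors
    return np.arctan2(q[1], q[0])                 # Phi_R estimate [rad]
```

The neutron azimuth relative to the reaction plane is then simply $\phi - \Phi_R$ for each event.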
\subsection{Determination of the Flow Axis}
To study the emission patterns and the event shapes of the fragments, the
sphericity method \cite{gfs82} was used.
From a set of charged particles in the center-of-mass system for each event,
a sphericity tensor is defined as
\begin{equation}
F_{ij}~=~\sum_{\nu} \frac{1}{2m_{\nu}} {\frac{V^{\nu}_i V^{\nu}_j}{|V^{\nu}|^2}},
\label{eq:ften}
\end{equation}
\noindent
where $m_{\nu}$ is the mass of the $\nu^{th}$ fragment, which can be one of the
proton-like ($Z=1$), helium-like ($Z=2$), or ``lithium-like'' ($Z > 2$)
particles identified by pulse-heights in the TOF wall as mentioned in the
previous section. In the sphericity calculation, we omitted all tracks
in the backward rapidity range ($\alpha < 0$),
then we projected the tracks in the forward
rapidity range ($\alpha > 0$) to the backward rapidity range, $\vec{V}
\rightarrow - \vec{V}$.
Our sphericity calculations reconstructed polar flow angles, which allowed
neutron squeeze-out to become visible.
By diagonalizing the flow tensor in the center-of-mass system, the event
shape can be approximated as a prolate (or cigar-shaped) ellipsoid. The angle
between the major axis of the flow ellipsoid and the beam axis is defined as a
flow angle $\theta_{F}$.
Quantitatively, the flow angle \cite{gus84} was obtained as a polar angle
corresponding to the maximum eigenvalues of the flow tensor.
Figure~\ref{fig:flow} shows the distribution of the flow angle $\theta_{F}$
for La-La and Nb-Nb collisions at 400A MeV with the same impact
parameter. For Nb-Nb and
La-La collisions at the same energy, the
peak of the neutron flow angle distribution moves to a larger angle
as the mass of the system increases. This trend to larger flow angles with
increasing target-projectile mass was observed previously for charged particles
\cite{rit85} by the Plastic Ball Spectrometer, and predicted qualitatively
from Vlasov-Uehling-Uhlenbeck calculations \cite{mhs85}.
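The flow-angle extraction can be sketched as follows, assuming the tracks have already been mirrored into a single hemisphere as described above; the tensor is Eq.~(\ref{eq:ften}) and the flow angle is the polar angle of the eigenvector belonging to the largest eigenvalue (function name ours):

```python
import numpy as np

def flow_angle(v, m):
    """Flow angle [deg] from the sphericity tensor
    F_ij = sum_nu (1/2 m_nu) V_i V_j / |V|^2 (c.m. velocities v, masses m)."""
    v = np.asarray(v, dtype=float)                      # (N, 3) velocities
    w = 1.0 / (2.0 * np.asarray(m) * np.sum(v**2, axis=1))
    f = np.einsum("n,ni,nj->ij", w, v, v)               # sphericity tensor
    vals, vecs = np.linalg.eigh(f)
    major = vecs[:, np.argmax(vals)]                    # major axis of ellipsoid
    if major[2] < 0:                                    # fix eigenvector sign
        major = -major
    return np.degrees(np.arccos(major[2] / np.linalg.norm(major)))
```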
\section{Results}
\subsection{Neutron Triple-Differential Cross Sections}
The results of the triple-differential cross sections,
$d^{3}\sigma/d\alpha\cdot d\cos \theta \cdot d(\phi - \phi_{R})$, for neutrons
emitted at a polar angle $\theta$ with a normalized center-of-mass rapidity
$\alpha \equiv (Y/Y_{p})_{CM}$ (in units of the projectile rapidity $Y_{p}$)
are presented as a function of the azimuthal angle ($\phi - \phi_{R}$) with
respect to the reaction plane. The data were summed in four rapidity bins
($\Delta \alpha$) for each detector: backward rapidities ($-1.0 \leq \alpha <
-0.2$), mid-rapidities ($-0.2 \leq \alpha < 0.2$), intermediate-forward
rapidities ($0.2 \leq \alpha < 0.7$), and projectile-like rapidities ($0.7
\leq \alpha < 1.2$). The uncertainties in the triple-differential cross
sections include both statistical and systematic uncertainties; however,
statistical uncertainties dominate systematic uncertainties \cite{ela94}.
Figure~\ref{fig:sigy1} shows the triple-differential cross sections for
neutrons emitted in the backward rapidity bin at a polar angle of 72$^{\circ}$
for La-La and Nb-Nb collisions at 400A MeV and La-La collisions at 250A MeV.
The cross sections in this rapidity bin have a minimum at $(\phi -
\phi_{R})=0^{\circ}$ and a maximum at $(\phi - \phi_{R})=\pm 180^{\circ}$.
Similar characteristics can be found at other polar angles, ranging from
3$^{\circ}$ to 80$^{\circ}$.
Figure~\ref{fig:sigy2} shows the neutron triple-differential cross sections for
the mid-rapidity bin at one of the polar angles
({\em viz.},~~$\theta=48^{\circ}$) for La-La and Nb-Nb collisions at 400A MeV
and La-La collisions at 250A MeV. In this bin, the neutrons are aligned
perpendicular to the reaction plane at $(\phi - \phi_{R})=\pm 90^{\circ}$ (so
called {\em squeeze-out}) \cite{gut89,gut90,lei93}; however, this effect is
barely noticeable in our figures. Rotating the event onto the flow axis
enhances our ability to see the squeeze-out effect \cite{gut89,har94}.
Figure~\ref{fig:sigy4} shows the neutron triple-differential cross sections for
the projectile rapidity bin at a polar angle 16$^{\circ}$ (as an example) for
La-La and Nb-Nb collisions at 400A MeV and La-La collisions at 250A MeV. At
each polar angle, the azimuthal distribution for this rapidity bin peaks at
$(\phi - \phi_{R})=0^{\circ}$ and has a minimum at $(\phi - \phi_{R})=\pm
180^{\circ}$. The peak at 0$^{\circ}$ is the result of the side-splash and
bounce-off effects, where bounce-off cannot be separated clearly from the
side-splash. The resulting cross sections in these projectile rapidities also
have contributions from both participants and spectators like those at target
or backward rapidities; however, the side-splash dominates at the larger
polar angles where the collisions are more central because of the multiplicity
cut chosen for this analysis.
Finally, Fig.~\ref{fig:sigy3} shows the neutron triple-differential cross
sections for the intermediate-forward rapidity bin at the polar angle
20$^{\circ}$ for La-La and Nb-Nb collisions at 400A MeV and La-La collisions at
250A MeV. The distributions at small polar angles in this rapidity bin
reflect the distributions in the mid-rapidity bin. The larger the polar
angle, the more noticeable the peak at $(\phi - \phi_{R})=0^{\circ}$
becomes, and the distributions grow more like those from the projectile
rapidity regions.
\subsection{Average In-plane Momentum} \label{sec:px}
Figure~\ref{fig:px400} shows the mean transverse momentum projected into the
reaction plane, $\langle P_{x} \rangle$, for neutrons as a function of
normalized center-of-mass rapidity, $\alpha = (Y/Y_{p})_{CM}$, for
La-La collisions at an energy of 400 and
250A MeV. The technique of extracting the $\langle P_{x}\rangle$ for
neutrons can be found in reference~\cite{au93}.
The data display the typical S-shaped behavior as described by Danielewicz and
Odyniec \cite{do85}.
Neutrons in the low-energy regions (below $\approx$ 55 MeV)
were not included in order to eliminate background contamination; thus, the
curve is not completely symmetric and
the slope of the average in-plane transverse momentum at negative rapidities
is steeper than that at positive rapidities because the cut on the low energy
neutrons rejects neutrons with low transverse momenta at negative rapidities.
From
Fig.~\ref{fig:px400}, the average in-plane momentum increases with increasing
bombarding energy.
In the positive rapidity region, the $\langle P_x \rangle$ vs
$\alpha$ curves are
straight lines up to $\alpha = 0.5$. We extracted the slope at mid-rapidity
(up to $\alpha = 0.5$) with a linear fit to $\langle P_x \rangle$ in the
region
unaffected by the cut on the neutron energy; Doss {\it et al.}\cite{dos86}
defined this slope as the {\it flow} F.
Because the flow is
determined at mid-rapidity, it has contributions only from the participants.
The flows found in this analysis are $145 \pm 11$ and $104 \pm 13$ MeV/c from
La-La collisions at 400 and 250A MeV, respectively.
From these data, we see that the neutron
flow increases with increasing beam energy. Previously,
Doss {\it et al.}\cite{dos86} observed that the flow of charged particles
increases with beam energy, reaches a broad maximum at about 650A MeV, and
falls off slightly at higher energies.
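The flow F of Doss et al. is then just the mid-rapidity slope; a sketch of the fit, assuming arrays of $\langle P_x\rangle$ values binned in the normalized rapidity $\alpha$ (function name ours):

```python
import numpy as np

def flow_slope(alpha, px, alpha_max=0.5):
    """Flow F: slope of <P_x> vs alpha from a linear fit restricted to
    0 <= alpha <= alpha_max, the region unaffected by the low-energy
    neutron cut."""
    alpha, px = np.asarray(alpha), np.asarray(px)
    sel = (alpha >= 0.0) & (alpha <= alpha_max)
    slope, _ = np.polyfit(alpha[sel], px[sel], 1)   # [MeV/c per unit alpha]
    return slope
```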
\subsection{Neutron Squeeze Out} \label{sec:sqz}
One of the results of this experiment is the observation of neutrons emitted
preferentially
in a direction perpendicular to the reaction plane. In their paper reporting
this component of collective flow for charged particles,
Gutbrod {\it et al.}\cite{gut89} called this collective phenomenon {\it
out-of-plane squeeze out.}
To see neutron squeeze out, we performed two coordinate rotations:
First, we rotated the x coordinate around the beam or z-axis to align it
with the summed transverse-velocity vector $\vec{Q}$ given by
equation~(\ref{eq:vx}); second, we rotated the z-axis around the y-axis
by the flow angle $\Theta_F$. After this second rotation, the $z'$-axis
of the new $x'y'z'$ coordinates, where $y'=y$, lies along the major axis of
the flow ellipsoid. Then, for neutrons with normalized momenta
$p'_{z}$ along the flow axis in the region $-0.1 \leq p'_{z} < 0.1$, where
$p'_{z}=(P'_{z}/P'_{proj})_{CM}$, neutron squeeze-out became visible as
a peak at azimuthal angles $\phi'~=~\pm 90^{\circ}$ in the distribution
over the $x'y'$-plane.
Figure~\ref{fig:squez} shows the neutron azimuthal distributions in the
$x'y'$-coordinates or neutron squeeze-out of three systems: La-La and Nb-Nb
collisions at 400A MeV and La-La collisions at 250A MeV.
The spectator neutrons are excluded from Fig.~\ref{fig:squez} by
removing neutrons of high and low momenta at small angles.
By excluding spectator neutrons \cite{dos85} emitted from the
projectile and the target, neutrons evaporated \cite{mad85,mad90} from an
excited projectile at small polar angles ($\theta \leq 8^{\circ}$) were
excluded also.
Squeeze out of neutrons was observed previously\cite{lei93} for Au-Au
collisions at 400A MeV.
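The two passive rotations described above can be sketched as follows (the function name and argument conventions are ours):

```python
import numpy as np

def rotate_to_flow_frame(p, phi_q, theta_f):
    """Rotate a momentum vector p = (px, py, pz) into the flow frame:
    first about the beam (z) axis by the azimuth phi_q of the summed
    transverse-velocity vector Q, then about the new y axis by the flow
    angle theta_f, so that z' lies along the flow ellipsoid's major axis
    (y' = y)."""
    cz, sz = np.cos(phi_q), np.sin(phi_q)
    rz = np.array([[cz, sz, 0.0], [-sz, cz, 0.0], [0.0, 0.0, 1.0]])
    cy, sy = np.cos(theta_f), np.sin(theta_f)
    ry = np.array([[cy, 0.0, -sy], [0.0, 1.0, 0.0], [sy, 0.0, cy]])
    return ry @ rz @ np.asarray(p, dtype=float)
```

After this transformation, selecting $|p'_z| < 0.1$ and histogramming $\phi' = \arctan(p'_y/p'_x)$ exposes the squeeze-out peaks at $\pm 90^{\circ}$.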
\section{Theoretical Comparisons}
Welke {\em et al.} \cite{wel88} examined the sensitivity of $d\sigma/d\phi$ to
the nuclear matter equation of state. Because there was a net flow in the
projectile and target rapidity bins, Welke {\em et al.} described the shapes
of the azimuthal distribution by the ratio of the maximum cross section
$(d\sigma/d\phi)_{max}$ to the minimum cross section $(d\sigma/d\phi)_{min}$.
For each polar angle, the cross sections measured in the experiment are fitted
with the function, $\sigma_{3}(\phi - \phi_{R},
\theta)~=~a(\theta)~\pm~b'(\theta)~\cos (\phi - \phi_{R})$,
where the $+(-)$ sign stands for positive (negative) rapidity bin, and
$b'(\theta)=b(\theta) e^{-(\Delta \phi_{R})^{2}/2}$ is the correction
for a finite rms dispersion $\Delta \phi_{R}$.
For positive rapidity
particles, the cross sections peak at $(\phi - \phi_{R})=0^{\circ}$ and
deplete at $(\phi - \phi_{R})=\pm 180^{\circ}$, as seen in
Fig.~\ref{fig:sigy4}. The maximum azimuthal anisotropy for positive
rapidity neutrons becomes $ r(\theta)~=~\left[ a(\theta)~+~b(\theta)\right]/
\left[ a(\theta)~-~b(\theta)\right]$. Figure~\ref{fig:smr} is the
polar-angle-dependent maximum azimuthal anisotropy ratio $r(\theta)$ for the
four sets of data as indicated; the data for Au-Au collisions at 400A MeV are
from Elaasar {\em et al}. \cite{ela94}. From Fig.~\ref{fig:smr}, the maximum
azimuthal anisotropy appears to be independent of both the mass of the
colliding nuclei and the beam energy.
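A sketch of the fit and dispersion correction for the positive-rapidity case, assuming binned azimuthal cross sections; the least-squares basis $\{1, \cos\phi\}$ is our implementation choice:

```python
import numpy as np

def max_anisotropy_ratio(phi, sigma3, dphi_r):
    """Fit sigma3(phi) = a + b' cos(phi) (positive-rapidity form) and return
    r = (a + b)/(a - b), where b = b' * exp(+dphi_r^2/2) undoes the
    dispersion correction b' = b * exp(-dphi_r^2/2); dphi_r in radians."""
    A = np.column_stack([np.ones_like(phi), np.cos(phi)])
    a, bprime = np.linalg.lstsq(A, sigma3, rcond=None)[0]
    b = bprime * np.exp(dphi_r**2 / 2.0)
    return (a + b) / (a - b)
```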
For theoretical comparison, we used the Boltzmann-Uehling-Uhlenbeck (BUU)
approach \cite{bd88} with a parameterization of a momentum-dependent nuclear
mean field suggested in reference \cite{wel88}. In the calculations, the
incompressibility modulus $K$ was set to be 215 MeV, and the contributions to
the cross sections from composite fragments were subtracted by rejecting
neutrons when the distance between the neutron and another nucleon from the
same BUU ensemble \cite{bd88} was less than a critical
value $d_c$ \cite{ab85}.
Within a given BUU run, a nucleon was considered {\em free} only if no other
nucleons were found within the critical distance $d_c$. We are
justified in restricting our coalescence criterion to coordinate
space as nucleons that are far apart in momentum space will have drifted
away from each other. Our analysis is performed at the time that
the momentum space distributions begin to freeze out~\cite{zg94}.
Also we have verified quantitatively the soundness of this
argument by performing a full six-dimensional coalescence.
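A minimal sketch of this coordinate-space coalescence cut, with illustrative positions rather than actual BUU phase-space output:

```python
import numpy as np

# Coordinate-space coalescence cut: a neutron is counted as "free" only if
# no other nucleon of the same ensemble lies within the critical distance
# d_c (2.8-3.2 fm in the text). Positions below are illustrative.

def free_mask(neutron_pos, other_pos, d_c):
    """True where a neutron has no other nucleon within d_c (positions in fm)."""
    diff = neutron_pos[:, None, :] - other_pos[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))   # pairwise distances
    return (dist >= d_c).all(axis=1)

neutrons = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
others = np.array([[1.0, 0.0, 0.0], [20.0, 0.0, 0.0]])

mask = free_mask(neutrons, others, d_c=2.8)    # first neutron bound, second free
```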
It is well known from transport theory calculations that the
transverse momentum generation in heavy-ion reactions begins quite early in
the history of the reaction and then saturates \cite{gal90}. For {\em free}
neutrons, the critical distance $d_c$ was chosen to be 2.8~fm for both Nb-Nb
and La-La at 400A MeV and 3.0 fm for La-La at 250A MeV by fitting the
polar-angle dependence of the double-differential cross sections. The
double-differential cross sections in the rapidity $0.7 \leq \alpha < 1.2$ bin
for La-La and Nb-Nb collisions at beam energy 400A MeV and La-La collisions at
250A MeV are shown in Fig.~\ref{fig:dcr_buu}. The filled symbols represent
the data, and the open symbols represent the BUU calculations. The BUU
calculations (with $d_c = 0$~fm) of the double differential cross sections are
significantly higher than the data because the data do not include the
composite fragments; in other words, the data contain {\em free} neutrons
only. For $d_c = 2.8$~fm, the BUU prediction is lower than the data at
small polar angles because the BUU calculations do not treat neutron
evaporation which occurs at polar angles below $\sim 9^{\circ}$, as observed
previously by Madey {\em et al}. \cite{mad90}.
By restricting the BUU calculations to {\em free} neutrons with $K = 215$~MeV,
the triple-differential cross sections are compared with data.
Figures~\ref{fig:sigy1}~--~\ref{fig:sigy3} show triple-differential cross
sections for four rapidity regions: $-1.0 \leq \alpha < -0.2$, $-0.2 \leq
\alpha < 0.2$, $0.7 \leq \alpha < 1.2$, $0.2 \leq \alpha < 0.7$. In these
figures, the open symbols represent the BUU calculations. The solid line in
each figure represents the data corrected to zero dispersion $\Delta \phi_{R}
= 0$.
In these calculations, the negative $(\phi -
\phi_{R})$ region is symmetric with respect to the positive side. In
Fig.~\ref{fig:sigy1}, the BUU results are higher than the data beyond
$(\phi - \phi_{R}) = 90^{\circ}$. In Fig.~\ref{fig:sigy2}, the BUU results
tend to peak at $(\phi - \phi_{R}) = 90^{\circ}$; in other words, the BUU
results show the characteristics of the out-of-plane squeeze-out effect for
{\em free} neutrons in the mid-rapidity bin, but it is very hard to see the
squeeze-out phenomenon in the rapidity coordinates (see
section~\ref{sec:sqz}). In Figs.~\ref{fig:sigy4} and \ref{fig:sigy3}, the BUU
results generally agree with the data in these positive rapidity bins.
The BUU calculations of the polar-angle dependence of the maximum azimuthal
anisotropy ratio $r(\theta)$ for {\em free} neutrons emitted from Nb-Nb, La-La,
and Au-Au collisions at 400A MeV and La-La collisions at 250A MeV are shown in
Fig.~\ref{fig:r400buu} for $(b_{max}/2R) = 0.5$. The BUU results for Au-Au
collisions at 400A MeV are from Elaasar {\em et al}. \cite{ela94}. In this
figure, the BUU calculations with $K = 380,~215,~150$ MeV were carried out for
{\em free} neutrons (with $d_c = 2.8$ fm for Nb-Nb and La-La collisions and $d_c =
3.2$ fm for Au-Au collisions). Consistent with the experimental data (see
Fig.~\ref{fig:smr}), the BUU calculations in Fig.~\ref{fig:r400buu} show
little sensitivity to the mass and the beam energy. The BUU calculations of
the polar-angle-dependent maximum azimuthal anisotropy ratio $r(\theta)$ for
{\em free} neutrons emitted from La-La and Nb-Nb collisions at 400A MeV and
La-La collisions at 250A MeV are compared with the data in
Fig.~\ref{fig:smr_buu}. The multiplicity cuts indicated in this figure
correspond to the ratio of the maximum impact parameter to twice the nuclear
radius, $(b_{max}/2R) = 0.5$. The filled and open symbols in this figure represent
the data and the BUU calculations (with $K = 150,~215,~380$ MeV),
respectively. Both the experimental results and the BUU calculations were for
zero dispersion $\Delta \phi_{R} = 0$. As one can see from this figure, the
polar-angle dependence of the maximum azimuthal anisotropy ratio $r(\theta)$
is insensitive to the incompressibility modulus $K$ in the nuclear equation of
state. This insensitivity was noted also in Ref.~\cite{zg94}.
Figure~\ref{fig:px} is the comparison between data and the BUU calculations
with $K = 215$ MeV for in-plane transverse momentum $\langle P_{x} \rangle$
for {\em free} neutrons emitted from La-La and Nb-Nb collisions at 400A MeV
and La-La collisions at 250A MeV. Similar to the other figures, the filled
symbols and open symbols represent the data and the BUU calculations,
respectively. The BUU calculations generally agree with the data within their
uncertainties, especially in the mid-rapidity region, whose slope yields
the flow in units of MeV/c (see section~\ref{sec:px}).
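As a reminder of how the flow number is obtained from such a figure, here is a toy mid-rapidity slope extraction; the points are synthetic placeholders, not the measured $\langle P_{x} \rangle$ values.

```python
import numpy as np

# Toy extraction of the flow parameter: a linear fit to <P_x> versus the
# normalized c.m. rapidity near mid-rapidity.

alpha = np.array([-0.4, -0.2, 0.0, 0.2, 0.4])        # normalized rapidity bins
px_mean = np.array([-32.0, -16.0, 0.0, 16.0, 32.0])  # <P_x> in MeV/c (synthetic)

slope, intercept = np.polyfit(alpha, px_mean, 1)     # flow ~ slope, in MeV/c
```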
Another observable obtained from BUU calculations is the out-of-plane
squeeze-out of {\em free} neutrons. The comparisons are depicted in
Fig.~\ref{fig:squez_buu} for La-La and Nb-Nb collisions at 400A MeV and La-La
collisions at 250A MeV. The solid lines represent the data. The dotted,
dashed, and dot-dashed lines represent the BUU calculations with
$K = 380,~215,$ and 150~MeV, respectively. All three lines are almost on top
of each other in La-La collisions at 250A MeV. The squeeze out of
neutrons in the
normalized momentum coordinates in the center-of-mass system is compared with
the BUU model. Both the experimental results and the BUU calculations are
for zero dispersion $\Delta \phi_{R} = 0^{\circ}$.
After correcting to zero dispersion in the
experimental results, the neutron squeeze-out becomes larger.
It can be seen from
Fig.~\ref{fig:squez_buu} that the squeeze-out of {\em free} neutrons
from BUU calculations is insensitive to the incompressibility modulus $K$.
From the comparison between the BUU calculations and the neutron squeeze-out
in Fig.~\ref{fig:squez_buu}, the squeeze-out effect from the experimental data
in this work is significantly stronger than that from the BUU calculations with
$K = 380,~215,~150$ MeV. It remains to be seen whether the apparent
disagreement between the BUU squeeze-out results and the data persists as
greater statistics and accuracy are reached for the calculations. We estimate
the present uncertainties to be of the order of 25\%.
Statistically meaningful calculations of squeeze-out in BUU remain challenging
because the effect must be established at a level well above statistical
fluctuations.
\section{Conclusions}
We measured triple-differential cross sections of neutrons emitted at
several polar angles in multiplicity-selected La-La collisions at
250 and 400A MeV (and Nb-Nb collisions at 400A MeV) as a function of
the azimuthal angle with respect to the reaction plane. We compared the
measured cross sections with BUU calculations for free neutrons; the
BUU calculations (with an incompressibility modulus $K = 215$ MeV)
agree with the measured cross sections, except for the smallest polar
angle where the BUU calculations do not treat neutron evaporation.
The La-La data at 400A MeV permitted us to extend our earlier studies of
the maximum azimuthal anisotropy ratio as a probe of the nuclear equation
of state, and to conclude that the maximum azimuthal anisotropy ratio
is insensitive to the beam energy from 250 to 400A MeV for the La-La system.
The uncertainties in the measurement of the maximum azimuthal anisotropy
ratio are about 15\%, which does not allow us to investigate its dependence on the
mass of the projectile-target system. BUU calculations
confirm the lack of sensitivity of the maximum azimuthal anisotropy ratio
to the mass of the colliding system and to the beam energy. BUU calculations
suggest also that the maximum azimuthal anisotropy ratio is insensitive to
the incompressibility modulus K in the nuclear equation of state.
The flow of neutrons was extracted from the slope at mid-rapidity of the
curve of the average in-plane momentum versus the neutron center-of-mass
rapidity. The flow of neutrons emitted in La-La collisions at 250 and 400A
MeV increases with beam energy. BUU calculations with an
incompressibility modulus $K = 215$ MeV for free neutrons agree
generally with the data.
We observed the preferential emission of neutrons in a direction perpendicular
to the reaction plane in three systems [viz. 400 and 250A MeV La-La, and
400A MeV Nb-Nb]. This component of collective flow, observed first for charged
particles, is known as out-of-plane squeeze-out. BUU calculations of
out-of-plane squeeze out of free neutrons are insensitive to the
incompressibility modulus K for values of 150, 215, and 380 MeV. After
correcting the experimental results to zero dispersion, the observed
squeeze out of neutrons is significantly stronger than that from BUU theory.
\acknowledgements
This work was supported in part by the National Science Foundation under Grant
Nos.\ PHY-91-07064, PHY-88-02392, and PHY-86-11210, the U.S. Department of
Energy under Grant No.\ DE-FG89ER40531 and DE-AC03-76SF00098, and the
Natural Sciences and Engineering Research Council of Canada and by the Fonds
FCAR of the Quebec Government. \\
\newpage
\section{Introduction}
In this paper we estimate the size of the $C\!P$-violating rate
asymmetry for $\,\Omega^- \rightarrow \Xi \pi\,$ decays. As is
well known, such rate asymmetries for octet-hyperon decays
are small as a result of the
product of three small factors: a ratio of $\,|\Delta\bfm{I}|=3/2\,$
to $\,|\Delta\bfm{I}|=1/2\,$ amplitudes;
a small strong-rescattering phase;
and a small $C\!P$-violating phase~\cite{donpa}.
We find that for the decay channel $\,\Omega^-\rightarrow\Xi\pi\,$
all of these factors are larger than their counterparts for
octet-hyperon decays, and this results in a rate asymmetry
that could be as large as $\,2 \times 10^{-5}\,$ within the minimal
standard model.
Physics beyond the standard model could enhance this rate asymmetry
by a factor of up to ten.
Our calculation suffers from typical hadronic uncertainties in
the computation of matrix elements of four-quark operators and
for this reason it should be regarded as an order-of-magnitude
estimate.
\section{$\Omega^- \rightarrow \Xi\pi$ decay}
The measured decay distributions of these decays are consistent
with the amplitudes being mostly P-wave~\cite{pdb}.
We parametrize the P-wave amplitude in the form
\begin{eqnarray} \label{amplitude}
\mbox{$\rm i$} {\cal M}_{\Omega^-\rightarrow\Xi\pi}^{} \;=\;
G_{\rm F}^{} m_{\pi}^2\; \bar{u}_\Xi^{}\,
{\cal A}_{\Omega^-\Xi\pi}^{\rm (P)}\, k_\mu^{}\, u_\Omega^\mu
\;\equiv\;
G_{\rm F}^{} m_{\pi}^2\;
{\alpha_{\Omega^-\Xi}^{\rm (P)}\over \sqrt{2}\, f_{\!\pi}^{}}\,
\bar{u}_\Xi^{}\, k_\mu^{}\, u_\Omega^\mu \;,
\end{eqnarray}
where the $u$'s are baryon spinors, $k$ is the outgoing
four-momentum of the pion, and $f_{\!\pi}^{}$ is the pion-decay
constant.
The P-wave amplitude has both $\,|\Delta\bfm{I}|=1/2\,$ and
$\,|\Delta\bfm{I}|=3/2\,$ components which are, in general, complex.
We write
\begin{eqnarray}
\begin{array}{c} \displaystyle
\alpha_{\Omega^-\Xi^0}^{\rm (P)} \;=\;
\ratio{1}{\sqrt{3}} \left(
\sqrt{2}\, \alpha^{(\Omega)}_1
\mbox{$\rm e$}^{{\rm i}\delta_1^{} + {\rm i}\phi_1^{}}
\,-\, \alpha^{(\Omega)}_3 \mbox{$\rm e$}^{{\rm i}\delta_3^{} + {\rm i}\phi_3^{}}
\right) \;,
\vspace{2ex} \\ \displaystyle
\alpha_{\Omega^-\Xi^-}^{\rm (P)} \;=\;
\ratio{1}{\sqrt{3}} \left(
\alpha^{(\Omega)}_1 \mbox{$\rm e$}^{{\rm i}\delta_1^{} + {\rm i}\phi_1^{}}
\,+\, \sqrt{2}\, \alpha^{(\Omega)}_3
\mbox{$\rm e$}^{{\rm i}\delta_3^{} + {\rm i}\phi_3^{}}
\right) \;,
\end{array} \label{isolabels}
\end{eqnarray}
where $\alpha^{(\Omega)}_{1,3}$ are real quantities,
strong-rescattering phases of the $\Xi\pi$ system with $\,J=3/2$,
P-wave and $\,I=1/2, 3/2\,$ quantum numbers are denoted by
$\delta_{1}$, $\delta_{3}$, respectively, and
$C\!P$-violating weak phases are labeled $\phi_{1}$, $\phi_{3}$.
The corresponding expressions for the antiparticle decay
$\,\overline{\Omega}{}^-\rightarrow \overline{\Xi}\pi\,$ are
obtained by changing the sign of the weak phases $\phi_{1}$, $\phi_{3}$ in
Eq.~(\ref{isolabels}).
Summing over the spin of the $\Xi$ and averaging over the spin of
the $\Omega^-$, one derives from Eq.~(\ref{amplitude})
the decay width
\begin{eqnarray} \label{width''}
\Gamma(\Omega^-\rightarrow\Xi\pi) \;=\;
{|\bfm{k}|^3 m_{\Xi}^{}\over 6\pi m_{\Omega}^{}}
\Bigl| {\cal A}_{\Omega^-\Xi\pi}^{\rm (P)} \Bigr|^2 \,
G_{\rm F}^2 m_{\pi}^4 \;.
\end{eqnarray}
As was found in Ref.~\cite{jusak}, using the measured decay
rates~\cite{pdb} and ignoring all the phases, we can extract the ratio
$\, \alpha_{3}^{(\Omega)}/\alpha_{1}^{(\Omega)} =-0.07\pm 0.01.\,$
Final-state interactions enhance this value, but this enhancement
is not significant for the values of the scattering phases that we
estimate in the following section.
This ratio is higher than the corresponding ratios in other hyperon
decays~\cite{atv}, which range from $0.03$ to $0.06$ in magnitude,
and provides an enhancement factor for the $C\!P$-violating rate
asymmetry in this mode.
By comparing the hyperon and anti-hyperon decays, we can construct
$C\!P$-odd observables.
The one considered here is the rate asymmetry
\begin{eqnarray}
\Delta \bigl( \Xi^0\pi^- \bigr) &\!\!\equiv&\!\!
{ \Gamma \bigl( \Omega^-\rightarrow\Xi^0\pi^- \bigr) -
\Gamma \bigl( \overline{\Omega}{}^-\rightarrow\overline{\Xi}{}^0\pi^+ \bigr)
\over
\Gamma \bigl( \Omega^-\rightarrow\Xi^0\pi^- \bigr) +
\Gamma \bigl( \overline{\Omega}{}^-\rightarrow\overline{\Xi}{}^0\pi^+ \bigr) }
\nonumber \\
&\!\!\approx&\!\!
\sqrt{2}\; {\alpha_{3}^{(\Omega)}\over \alpha_{1}^{(\Omega)}}\;
\sin \bigl( \delta_3^{}-\delta_1^{} \bigr) \,
\sin \bigl( \phi_3^{}-\phi_1^{}\bigr) \;,
\label{cpobs}
\end{eqnarray}
where in the second line we have kept only the leading term in
$\,\alpha_{3}^{(\Omega)}/\alpha_{1}^{(\Omega)}.\,$
Similarly,
$\, \Delta \bigl( \Xi^-\pi^0 \bigr)
= -2 \Delta \bigl( \Xi^0\pi^- \bigr) .\,$ The current
experimental results indicate that any D-waves are very small in
these decays, and that the parameter $\alpha$ that describes
P-wave--D-wave interference is consistent with zero:
$\,\alpha \bigl( \Xi^0\pi^- \bigr) =0.09\pm 0.14\,$ and
$\,\alpha \bigl( \Xi^-\pi^0 \bigr) =0.05\pm 0.21\,$~\cite{pdb}. For this
reason we do not discuss the potential $C\!P$-odd asymmetry in
this parameter.
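Plugging representative magnitudes into the leading-order expression in Eq.~(\ref{cpobs}) shows the size of asymmetry at stake; the three input numbers below are the ones estimated later in this paper.

```python
import math

# Leading-order rate asymmetry, Eq. (cpobs):
#   Delta(Xi0 pi-) ~ sqrt(2) * (alpha3/alpha1) * sin(delta3-delta1) * sin(phi3-phi1)

ratio_amp = 0.07      # |alpha3/alpha1|
sin_strong = 0.24     # |sin(delta3 - delta1)|
sin_weak_lo = 3e-4    # conservative weak-phase difference
sin_weak_hi = 1e-3    # vacuum-saturation weak phase

delta_lo = math.sqrt(2) * ratio_amp * sin_strong * sin_weak_lo  # ~7e-6
delta_hi = math.sqrt(2) * ratio_amp * sin_strong * sin_weak_hi  # ~2e-5
```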
\section{$\Xi\pi$-scattering phases}
There exists no experimental information on the $\Xi\pi$-scattering
phases, and so we will estimate them at leading order
in heavy-baryon chiral perturbation theory.
The leading-order chiral Lagrangian for the strong interactions of
the octet and decuplet baryons with the pseudoscalar octet-mesons
is~\cite{manjen}
\begin{eqnarray} \label{L1strong}
{\cal L}^{\rm s} &\!\!=&\!\!
\ratio{1}{4} f^2\,
\mbox{$\,\rm Tr$} \!\left( \partial^\mu \Sigma^\dagger\, \partial_\mu \Sigma \right)
\;+\;
\mbox{$\,\rm Tr$} \!\left( \bar{B}_v^{}\, \mbox{$\rm i$} v\cdot {\cal D} B_v^{} \right)
\nonumber \\ && \!\!\! \!
+\;
2 D\, \mbox{$\,\rm Tr$} \!\left( \bar{B}_v^{}\, S_v^\mu
\left\{ {\cal A}_\mu^{}\,,\, B_v^{} \right\} \right)
+ 2 F\, \mbox{$\,\rm Tr$} \!\left( \bar{B}_v^{}\, S_v^\mu\,
\left[ {\cal A}_\mu^{}\,,\, B_v^{} \right] \right)
\nonumber \\ && \!\!\! \!
-\;
\bar{T}_v^\mu\, \mbox{$\rm i$} v\cdot {\cal D} T_{v\mu}^{}
+ \Delta m\, \bar{T}_v^\mu T_{v\mu}^{}
+ {\cal C} \left( \bar{T}_v^\mu {\cal A}_\mu^{} B_v^{}
+ \bar{B}_v^{} {\cal A}_\mu^{} T_v^\mu \right)
+ 2{\cal H}\; \bar{T}_v^\mu\, S_v^{}\cdot{\cal A}\, T_{v\mu}^{} \;,
\end{eqnarray}
where we follow the notation of Ref.~\cite{manjen}.
The scattering amplitudes for $\,\Xi^0\pi^-\rightarrow\Xi^0\pi^-\,$
and $\,\Xi^-\pi^0\rightarrow\Xi^-\pi^0\,$ are derived from
the diagrams shown in Figure~\ref{diagrams}.
Of these, the first two diagrams in Figure~\ref{diagrams}(a)
and the first one in Figure~\ref{diagrams}(b) do not contribute
to the $\,J=3/2\,$ channel.
From the rest of the diagrams, we can construct the amplitudes for the
$\,I=1/2\,$ and $\,I=3/2\,$ channels,
\begin{eqnarray}
\begin{array}{c} \displaystyle
{\cal M}_{I=1/2}^{} \;=\;
2{\cal M}_{\Xi^0\pi^-\rightarrow \Xi^0\pi^-}^{}
\,-\, {\cal M}_{\Xi^-\pi^0\rightarrow \Xi^-\pi^0}^{} \;,
\vspace{2ex} \\ \displaystyle
{\cal M}_{I=3/2}^{} \;=\;
-{\cal M}_{\Xi^0\pi^-\rightarrow \Xi^0\pi^-}^{}
\,+\, 2{\cal M}_{\Xi^-\pi^0\rightarrow \Xi^-\pi^0}^{} \;,
\end{array}
\end{eqnarray}
and project out the partial waves in the usual way.\footnote{%
See, e.g., Ref.~\cite{phases}.}
Calculating the $\,J=3/2\,$ P-wave phases, and evaluating them
at a center-of-mass energy equal to the $\Omega^-$ mass, yields
\begin{eqnarray}
\begin{array}{rcl} \displaystyle
\delta_1^{}
&\!\approx&\! \displaystyle
{-|\bfm{k}|^3 m_\Xi^{}\over 24\pi f^2 m_\Omega^{}} \left[\,
{(D-F)^2\over m_\Omega^{}-m_\Xi^{}}
\,+\,
{\ratio{1}{2}\,{\cal C}^2\over m_\Omega^{}-m_{\Xi^*}^{}}_{}^{}
\,+\,
{\ratio{1}{18}\,{\cal C}^2\over m_\Omega^{}-2m_\Xi^{}+m_{\Xi^*}^{}}
_{\vphantom{k}}^{\vphantom{k}}
\,\right] \;,
\vspace{2ex} \\ \displaystyle
\delta_3^{}
&\!\approx&\! \displaystyle
{-|\bfm{k}|^3 m_\Xi^{}\over 24\pi f^2 m_\Omega^{}} \left[\,
{-2(D-F)^2\over m_\Omega^{}-m_\Xi^{}}
\,-\,
{\ratio{1}{9}\,{\cal C}^2\over m_\Omega^{}-2m_\Xi^{}+m_{\Xi^*}^{}}
_{\vphantom{k}}^{\vphantom{k}}
\,\right] \;.
\end{array} \label{strongph}
\end{eqnarray}
\begin{figure}[t]
\hspace*{\fill}
\begin{picture}(140,50)(-70,-20)
\ArrowLine(-40,0)(0,0) \DashLine(-20,20)(0,0){3}
\DashLine(0,0)(20,20){3} \ArrowLine(0,0)(40,0) \Vertex(0,0){3}
\end{picture}
\hspace*{\fill}
\begin{picture}(140,50)(-70,-20)
\ArrowLine(-40,0)(-20,0) \DashLine(-20,20)(-20,0){3}
\ArrowLine(-20,0)(20,0) \DashLine(20,0)(20,20){3}
\ArrowLine(20,0)(40,0) \Vertex(-20,0){3} \Vertex(20,0){3}
\end{picture}
\hspace*{\fill}
\begin{picture}(140,50)(-70,-20)
\ArrowLine(-40,0)(-20,0) \Line(-20,1)(20,1) \Line(-20,-1)(20,-1)
\ArrowLine(-1,0)(1,0) \DashLine(-20,20)(-20,0){3}
\DashLine(20,0)(20,20){3} \ArrowLine(20,0)(40,0)
\Vertex(-20,0){3} \Vertex(20,0){3}
\end{picture}
\hspace*{\fill}
\\
\hspace*{\fill}
\begin{picture}(10,10)(-5,-5)
\Text(0,5)[c]{(a)}
\end{picture}
\hspace*{\fill}
\\
\hspace*{\fill}
\begin{picture}(140,50)(-70,-20)
\ArrowLine(-40,0)(-20,0) \DashLine(-20,20)(-20,0){3}
\ArrowLine(-20,0)(20,0) \DashLine(20,0)(20,20){3}
\ArrowLine(20,0)(40,0) \Vertex(-20,0){3} \Vertex(20,0){3}
\end{picture}
\hspace*{\fill}
\begin{picture}(140,50)(-70,-20)
\ArrowLine(-40,0)(-20,0) \DashLine(-20,20)(0,10){3}
\DashLine(0,10)(20,0){3} \ArrowLine(-20,0)(20,0)
\DashLine(-20,0)(0,10){3} \DashLine(0,10)(20,20){3}
\ArrowLine(20,0)(40,0) \Vertex(-20,0){3} \Vertex(20,0){3}
\end{picture}
\hspace*{\fill}
\\
\hspace*{\fill}
\begin{picture}(140,50)(-70,-20)
\ArrowLine(-40,0)(-20,0) \Line(-20,1)(20,1)
\Line(-20,-1)(20,-1) \ArrowLine(-1,0)(1,0)
\DashLine(-20,20)(-20,0){3} \DashLine(20,0)(20,20){3}
\ArrowLine(20,0)(40,0) \Vertex(-20,0){3} \Vertex(20,0){3}
\end{picture}
\hspace*{\fill}
\begin{picture}(140,50)(-70,-20)
\ArrowLine(-40,0)(-20,0) \DashLine(-20,20)(0,10){3}
\DashLine(0,10)(20,0){3} \DashLine(-20,0)(0,10){3}
\DashLine(0,10)(20,20){3} \Line(-20,1)(20,1) \Line(-20,-1)(20,-1)
\ArrowLine(-1,0)(1,0) \ArrowLine(20,0)(40,0)
\Vertex(-20,0){3} \Vertex(20,0){3}
\end{picture}
\hspace*{\fill}
\\
\hspace*{\fill}
\begin{picture}(10,10)(-5,-5)
\Text(0,5)[c]{(b)}
\end{picture}
\hspace*{\fill}
\caption{\label{diagrams}
Diagrams for (a) $\,\Xi^0\pi^-\rightarrow\Xi^0\pi^-\,$
and (b) $\,\Xi^-\pi^0\rightarrow\Xi^-\pi^0.\,$
The vertices are generated by ${\cal L}^{\rm s}$ in
Eq.~(\ref{L1strong}).
A dashed line denotes a pion field, and a single (double)
solid-line denotes a $\Xi$ ($\Xi^*$) field.}
\end{figure}
The phases are dominated by the terms proportional to ${\cal C}^2$
arising from the $\Xi^*\Xi\pi$ couplings.
For this reason, we do not use the value $\,{\cal C} \approx 1.5\,$
obtained from a fit to decuplet decays at tree-level~\cite{manjen},
nor the value $\,{\cal C}\approx 1.2\,$ obtained from a one-loop
fit~\cite{butler}.
Instead, we determine the value of ${\cal C}$ from a tree-level fit
to the width of the $\,\Xi^*\rightarrow\Xi\pi\,$ decay, which gives
$\,{\cal C}=1.4 \pm 0.1.\,$
Using $\,f=f_{\!\pi}^{}\approx 92.4\;\rm MeV,\,$
isospin-symmetric masses, and the values $\,D=0.61\,$ and
$\,F=0.40,\,$ we obtain\footnote{%
We have also computed the phases in chiral perturbation
theory without treating the baryons as heavy, and found very similar
results, $\,\delta_1^{}=-13.1^{\small\rm o}\,$ and
$\,\delta_3^{}=1.4^{\small\rm o}.\,$}
\begin{eqnarray}
\delta_1^{} \;=\; -12.8^{\small\rm o}
\;, \hspace{3em}
\delta_3^{} \;=\; 1.1^{\small\rm o} \;.
\end{eqnarray}
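For reference, the pion momentum at which these phases are evaluated follows from two-body kinematics at $\,\sqrt{s}=m_\Omega\,$; the masses below, in MeV, are approximate standard values used for illustration.

```python
import math

# c.m. momentum of the decay products in M -> m1 + m2 (two-body kinematics)
def two_body_momentum(M, m1, m2):
    lam = (M**2 - (m1 + m2)**2) * (M**2 - (m1 - m2)**2)
    return math.sqrt(lam) / (2.0 * M)

m_Omega = 1672.45   # MeV
m_Xi0 = 1314.86     # MeV
m_pi = 139.57       # MeV

k = two_body_momentum(m_Omega, m_Xi0, m_pi)   # ~294 MeV, cf. the dashed line near 0.29 GeV
```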
In Figure~\ref{plot} we plot the scattering phases as a function
of the pion momentum.
\begin{figure}[ht]
\hspace*{\fill}
\begin{picture}(300,220)(-50,-120)
\LinAxis(0,90)(0,-90)(6,5,3,0,1) \LinAxis(200,90)(200,-90)(6,5,-3,0,1)
\LinAxis(0,90)(200,90)(4,5,-3,0,1) \LinAxis(0,-90)(200,-90)(4,5,3,0,1)
\DashLine(145.878,90)(145.878,-90){1}
\SetWidth{1.0}
\Curve{(0,0)(2.5,0.000361)(5.,0.0029)(7.5,0.00985)(10.,0.0236)
(12.5,0.0466)(15.,0.0817)(17.5,0.132)(20.,0.201)(22.5,0.293)
(25.,0.413)(27.5,0.566)(30.,0.76)(32.5,1.)(35.,1.3)
(37.5,1.68)(40.,2.14)(42.5,2.71)(45.,3.42)(47.5,4.31)
(50.,5.42)(52.5,6.83)(55.,8.63)(57.5,11.)(60.,14.1)
(62.5,18.5)(65.,24.8)(67.5,34.5)(70.,50.8)(72.5,82.5)
(72.8,89.3)}
\Curve{(82.9,-89.7)(84.1,-79.2)(85.,-73.2)
(87.5,-61.5)(90.,-54.1)(92.5,-49.2)(95.,-45.7)(97.5,-43.1)
(100.,-41.2)(103.,-39.8)(105.,-38.7)(108.,-37.9)(110.,-37.3)
(113.,-36.8)(115.,-36.5)(118.,-36.3)(120.,-36.1)(123.,-36.1)
(125.,-36.1)(128.,-36.2)(130.,-36.3)(133.,-36.4)(135.,-36.6)
(138.,-36.9)(140.,-37.1)(143.,-37.4)(145.,-37.7)(148.,-38.1)
(150.,-38.4)(153.,-38.8)(155.,-39.2)(158.,-39.6)(160.,-40.)
(163.,-40.4)(165.,-40.8)(168.,-41.3)(170.,-41.7)(173.,-42.2)
(175.,-42.7)(178.,-43.1)(180.,-43.6)(183.,-44.1)(185.,-44.6)
(188.,-45.1)(190.,-45.6)(193.,-46.2)(195.,-46.7)(198.,-47.2)
(200.,-47.7)}
\DashCurve{(0,0)(2.5,0.0000381)(5.,0.000304)(7.5,0.00102)(10.,0.00241)
(12.5,0.00469)(15.,0.00806)(17.5,0.0127)(20.,0.0188)(22.5,0.0265)
(25.,0.036)(27.5,0.0475)(30.,0.0609)(32.5,0.0765)(35.,0.0943)
(37.5,0.114)(40.,0.137)(42.5,0.162)(45.,0.189)(47.5,0.219)
(50.,0.252)(52.5,0.287)(55.,0.325)(57.5,0.365)(60.,0.408)
(62.5,0.453)(65.,0.501)(67.5,0.552)(70.,0.605)(72.5,0.661)
(75.,0.719)(77.5,0.78)(80.,0.843)(82.5,0.908)(85.,0.976)
(87.5,1.05)(90.,1.12)(92.5,1.19)(95.,1.27)(97.5,1.35)
(100.,1.43)(103.,1.52)(105.,1.6)(108.,1.69)(110.,1.78)
(113.,1.88)(115.,1.97)(118.,2.07)(120.,2.17)(123.,2.27)
(125.,2.37)(128.,2.47)(130.,2.58)(133.,2.69)(135.,2.8)
(138.,2.91)(140.,3.02)(143.,3.14)(145.,3.26)(148.,3.37)
(150.,3.49)(153.,3.62)(155.,3.74)(158.,3.86)(160.,3.99)
(163.,4.12)(165.,4.25)(168.,4.38)(170.,4.51)(173.,4.64)
(175.,4.78)(178.,4.91)(180.,5.05)(183.,5.19)(185.,5.33)
(188.,5.47)(190.,5.61)(193.,5.75)(195.,5.9)(198.,6.04)
(200.,6.19)}{4}
\footnotesize
\rText(-40,0)[][l]{$\delta_{2I}^{}\;(\rm degrees)$}
\Text(100,-110)[t]{$|\bfm{k}|\;(\rm GeV)$}
\Text(-5,90)[r]{$30$} \Text(-5,60)[r]{$20$} \Text(-5,30)[r]{$10$}
\Text(-5,0)[r]{$0$} \Text(-5,-30)[r]{$-10$} \Text(-5,-60)[r]{$-20$}
\Text(-5,-90)[r]{$-30$}
\Text(0,-95)[t]{$0$} \Text(50,-95)[t]{$0.1$} \Text(100,-95)[t]{$0.2$}
\Text(150,-95)[t]{$0.3$} \Text(200,-95)[t]{$0.4$}
\end{picture}
\hspace*{\fill}
\caption{\label{plot}
Scattering phases as a function of the center-of-mass momentum of
the pion.
The solid and dashed curves denote $\delta_1^{}$ and $\delta_3^{}$,
respectively.
The vertical dotted-line marks the momentum in the
$\,\Omega^-\rightarrow\Xi\pi\,$ decay.}
\end{figure}
Our estimate indicates that the $\,I=1/2\,$ P-wave phase for the
$\Xi\pi$ scattering is larger than other baryon-pion
scattering phases.
Eq.~(\ref{strongph}) shows that this phase is dominated by
the $s$-channel $\Xi^*$-exchange diagram.
This is what one would expect from the fact that the $\Xi^*$ shares
the quantum numbers of the channel.
Notice, however, that the phase is not large due to the resonance
because it is evaluated at a center-of-mass energy equal to
the $\Omega^-$ mass, significantly above the $\Xi^*$ pole.
The phase is relatively large because the pion momentum in $\Omega^-$
decays is large.\footnote{In fact, this $\Xi\pi$-scattering phase
is much larger than the corresponding P-wave $\Lambda\pi$-scattering
phase $\,\delta_P\approx -1.7^{\small\rm o}\,$~\cite{wisa} because the pion
momentum is
much larger in the reaction $\,\Omega^-\rightarrow\Xi\pi\,$ than it is in
the reaction $\,\Xi\rightarrow\Lambda\pi$.}
\section{Estimate of the weak phases}
Within the standard model the weak phases $\phi_{1}$ and $\phi_{3}$
arise from the $C\!P$-violating phase in the CKM matrix.
The short-distance effective Hamiltonian describing
the $\,|\Delta S| =1\,$ weak interactions in the
standard model can be written as
\begin{equation}
{\cal H}_{\rm eff} \;=\;
{G_{\rm F}^{}\over\sqrt{2}}\, V^*_{ud} V_{us}^{}
\sum_i C_i^{}(\mu)\, Q_i^{}(\mu)
\;+\; {\rm h.c.} \;,
\end{equation}
where the sum is over all the $Q_i^{}(\mu)$ four-quark operators,
and the $\,C_i^{}(\mu)=z_i^{}(\mu)+\tau y_i^{}(\mu)\,$ are
the Wilson coefficients, with
$\,\tau=-V^*_{td} V_{ts}^{}/V^*_{ud} V_{us}^{}.\,$
We use the same operator basis of Ref.~\cite{steger} because our
calculation will parallel that one, but we use the latest values for
the Wilson coefficients from Ref.~\cite{buras}.
To calculate the phases, we write
\begin{equation}
\mbox{$\rm i$}{\cal M}_{\Omega^-\rightarrow\Xi\pi}^{} \;=\;
-\mbox{$\rm i$}{G_{\rm F}^{} \over \sqrt{2}} \, V^*_{ud} V_{us}^{}
\sum_i C_i^{}(\mu)\, \langle \Xi\pi| Q_i^{}(\mu) |\Omega^- \rangle \;.
\end{equation}
Unfortunately, we cannot compute the matrix elements of the
four-quark operators in a reliable way.
As a benchmark, we employ the vacuum-saturation method used
in Ref.~\cite{steger}.
For $\,\Omega^-\rightarrow\Xi^0 \pi^-,\,$ we obtain
\begin{equation}
{\cal M}_{\Omega^-\rightarrow\Xi^0 \pi^-}^{} \;=\;
-{G_{\rm F}^{}\over\sqrt{2}}\, V_{ud}^* V_{us}^{}
\left( M_1^{\rm P}+M_3^{\rm P} \right)
\langle \Xi^0 \bigl| \bar{u}\gamma^\mu\gamma_5^{} s
\bigr| \Omega^- \rangle
\langle \pi^- \bigl| \bar{d}\gamma_\mu^{}\gamma_5^{} u
\bigr| 0 \rangle \;,
\end{equation}
where we have used the notation
\begin{eqnarray}
M^{\rm P}_1 &\!\!=&\!\!
\ratio{1}{3} \bigl( C_1^{} - 2 C_2^{} \bigr) - \ratio{1}{2}\, C_7^{}
\,+\,
\xi \left[\, \ratio{1}{3} \bigl( -2 C_1^{} + C_2^{} \bigr) - C_3^{}
- \ratio{1}{2}\, C_8^{} \,\right]
\nonumber \\ && \!\!\! \!
+\;
{2 m^2_\pi\over (m_u^{}+m_d^{}) (m_u^{}+m_s^{})}
\left[ C_6^{} + \ratio{1}{2}\, C_8^{}
+ \xi \left( C_5^{} + \ratio{1}{2}\, C_7^{} \right) \right] \;,
\\
\nonumber \\
M^{\rm P}_3 &\!\!=&\!\!
-\ratio{1}{3} (1+\xi) (C_1^{}+C_2^{}) \,+\, \ratio{1}{2}\, C_7^{}
\,+\, \ratio{1}{2}\, \xi\, C_8^{}
\,+\, {m^2_\pi\over(m_u^{}+m_d^{})(m_u^{}+m_s^{})}
\left( \xi C_7^{} + C_8^{} \right) \;.
\end{eqnarray}
The current matrix-elements that we need are found from
the leading-order strong Lagrangian in Eq.~(\ref{L1strong}) to be
\begin{eqnarray}
\langle \Xi^0 \bigl| \bar{u}\gamma^\mu\gamma_5^{} s
\bigr| \Omega^- \rangle
\;=\; -{\cal C}\, \bar{u}_\Xi^{}\, u_\Omega^\mu
\;, \hspace{3em}
\langle \pi^- \bigl| \bar{d}\gamma_\mu^{}\gamma_5^{} u
\bigr| 0 \rangle
\;=\; \mbox{$\rm i$}\sqrt{2}\,f_{\!\pi}^{} k_\mu^{} \;,
\end{eqnarray}
and from these we obtain the matrix elements for pseudoscalar
densities as
\begin{eqnarray}
\begin{array}{c} \displaystyle
\langle \Xi^0 \bigl| \bar{u}\gamma_5^{} s \bigr| \Omega^- \rangle
\;=\;
{{\cal C}\over m_u^{}+m_s^{}}\, \bar{u}_\Xi^{}\,k_\mu^{}\, u_\Omega^\mu
\vspace{2ex} \\ \displaystyle
\langle \pi^- \bigl| \bar{d}\gamma_5^{} u \bigr| 0 \rangle
\;=\; \mbox{$\rm i$}\sqrt{2}\, f_{\!\pi}^{}\, {m^2_\pi \over m_u^{}+m_d^{}} \;.
\end{array}
\end{eqnarray}
Numerically, we will employ
$\,m_\pi^2 / \left[ (m_u^{}+m_d^{})(m_u^{}+m_s^{}) \right] \sim 10$,
$\,\xi=1/N_{\rm c}^{}=1/3,\,$ and the Wilson coefficients from
Ref.~\cite{buras}
that correspond to the values $\,\mu=1$~GeV, $\,\Lambda=215$~MeV,
and $\,m_t=170$~GeV.
Given the crudeness of the vacuum-insertion method,
we use the leading-order Wilson coefficients.
For the CKM angles, we use the Wolfenstein parameterization
and the numbers $\,\lambda =0.22$, $\,A=0.82$,
$\,\rho=0.16\,$ and $\,\eta=0.38\,$~\cite{mele}.
Putting all this together, we find
\begin{eqnarray}
\begin{array}{c} \displaystyle
\alpha_{3}^{(\Omega)} \mbox{$\rm e$}^{{\rm i}\phi_3^{}} \;=\;
-0.11 \,+\, 2.8\times10^{-6}\, \mbox{$\rm i$} \;,
\vspace{2ex} \\ \displaystyle
\alpha_{1}^{(\Omega)} \mbox{$\rm e$}^{{\rm i}\phi_1^{}} \;=\;
0.23 \,+\, 2.3\times 10^{-4}\, \mbox{$\rm i$} \;.
\end{array}
\end{eqnarray}
The $\,|\Delta\bfm{I}|=3/2\,$ amplitude predicted in vacuum
saturation is comparable to the one we extract from the data,
$\,\alpha_3^{(\Omega)} = -0.07\pm 0.01.\,$
To estimate the weak phase, we can obtain the real part of the
amplitude from experiment and the imaginary part of the amplitude
from the vacuum-saturation estimate to get
$\,\phi_3^{}\approx -4\times 10^{-5}.\,$
Unlike its $\,|\Delta\bfm{I}|=3/2\,$ counterpart,
the $\,|\Delta\bfm{I}|=1/2\,$ amplitude is predicted to be about
a factor of four below the fit.\footnote{%
We note here that only the relative sign between $\alpha_{1}^{(\Omega)}$
and $\alpha_{3}^{(\Omega)}$ is determined, while the overall sign
of either the predicted or experimental numbers is not.}
Taking the same approach as that in estimating $\phi_3^{}$ results in
$\,\phi_1^{}\approx 3\times 10^{-4}.\,$
We can also take the phase directly from the vacuum-saturation estimate
(assuming that both the real and imaginary parts of the amplitude
are enhanced in the same way by the physics that is missing from this
estimate) to find $\,\phi_1^{} = 0.001.\,$
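Both $\phi_1^{}$ estimates (and the $\phi_3^{}$ value above) amount to simple ratios of the imaginary to the real parts of the amplitudes; a quick numerical check follows, where the factor of four is the approximate enhancement of the fitted $\,|\Delta\bfm{I}|=1/2\,$ amplitude over the vacuum-saturation one.

```python
# Back-of-envelope check of the weak-phase estimates. The imaginary parts
# are the vacuum-saturation numbers quoted above; the real parts come either
# from the model itself or from the fit to data (about four times larger).

im_alpha3 = 2.8e-6          # Im alpha3 (vacuum saturation)
re_alpha3_exp = -0.07       # Re alpha3 extracted from data
im_alpha1 = 2.3e-4          # Im alpha1 (vacuum saturation)
re_alpha1_mod = 0.23        # Re alpha1 (vacuum saturation)

phi3 = im_alpha3 / re_alpha3_exp            # -> -4e-5
phi1_model = im_alpha1 / re_alpha1_mod      # -> 1e-3, the "model" value
phi1_exp = im_alpha1 / (4 * re_alpha1_mod)  # ~2.5e-4, of the order of the quoted 3e-4
```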
For the decay of the $\Omega^-$, it is much more difficult
to estimate the phases in quark models than it is for other
hyperon decays.
For instance, to calculate the phase of the $\,|\Delta\bfm{I}|=1/2\,$
amplitude, we would need to calculate the matrix element
$\,\langle \Xi^{*-}|H_W|\Omega^- \rangle,\,$
but this vanishes for the leading $\,|\Delta\bfm{I}|=1/2\,$ operator
because the quark-model wavefunctions of the $\Omega^-$ and
the $\Xi^{*-}$ do not contain $u$-quarks.
Considering only valence quarks, these models would then predict that
the phase is equal to the phase of the leading penguin operator,\footnote{%
Early calculations obtain the amplitude as a sum of a bag model estimate
of the penguin matrix element and factorization
contributions~\cite{quarkmodels}.} or about $\,\phi_1^{}\sim 0.006$.
\section{Results and Conclusion}
Finally, we can collect all our results to estimate the $C\!P$-violating
rate asymmetry $\Delta \bigl( \Xi^0\pi^- \bigr) $.
They are
\begin{eqnarray}
\begin{array}{rcl} \displaystyle
{\alpha_{3}^{(\Omega)}\over \alpha_{1}^{(\Omega)}} &\!\approx&\!
-0.07 \;,
\vspace{2ex} \\ \displaystyle
|\sin \bigl( \delta_3^{}-\delta_1^{} \bigr) | &\!\approx&\! 0.24 \;,
\vspace{2ex} \\ \displaystyle
|\sin(\phi_3^{}-\phi_1)| &\!\approx&\!
3\times 10^{-4} \;~{\rm or}~\; 0.001 \;,
\end{array} \label{numbers}
\end{eqnarray}
where the first number for the weak phases corresponds to the
conservative approach of taking only the imaginary part of the
amplitudes from the vacuum-saturation estimate and the second
number is the phase predicted by the model.
The difference between the resulting numbers,
$\, \bigl| \Delta \bigl( \Xi^0\pi^- \bigr) \bigr| =
7\times 10^{-6} \;~{\rm or}~\; 2\times 10^{-5} ,\,$
can be taken as a crude measure of the uncertainty in the evaluation of
the weak phases.
For comparison, estimates of rate asymmetries in the octet-hyperon
decays~\cite{donpa} result in values of less than~$\,10^{-6}.$
A model-independent study of $C\!P$ violation beyond the standard
model in hyperon decays was done in Ref.~\cite{heval}.
We can use those results to find that the $C\!P$-violating rate asymmetry
in $\,\Omega^-\rightarrow\Xi^0\pi^-\,$ could be ten times larger than
our estimate above if new physics is responsible for $C\!P$ violation.
The upper bound in this case arises from the constraint imposed on
new physics by the value of $\epsilon$ because the P-waves involved
are parity conserving.
In conclusion, we find that the $C\!P$-violating rate
asymmetry in $\,\Omega^-\rightarrow\Xi^0\pi^-\,$ is about
$\,2 \times 10^{-5}\,$ within the standard model.
Although there are significant uncertainties in our estimates,
it is probably safe to say that the rate asymmetry in
$\,\Omega^-\rightarrow\Xi\pi\,$ decays is significantly larger
than the corresponding asymmetries in other hyperon decays.
\newpage
\noindent {\bf Acknowledgments} This work was supported in
part by DOE under contract number DE-FG02-92ER40730.
We thank the Department of Energy's Institute for Nuclear Theory
at the University of Washington for its hospitality and for
partial support. We also thank John F. Donoghue for helpful discussions.
\def\thebibliography#1{\section*{\normalsize \bf References
}\list
{[\arabic{enumi}]}{\settowidth\labelwidth{[#1]}\leftmargin\labelwidth
\advance\leftmargin\labelsep
\usecounter{enumi}}%
\def\newblock{\hskip .11em plus .33em minus .07em}%
\sloppy\clubpenalty4000\widowpenalty4000
\sfcode`\.=1000\relax}
\let\endthebibliography=\endlist
\begin{document}
\twocolumn[
\mbox{}
\vspace{50mm}
\begin{center} \LARGE
Effective mass at the surface of a Fermi liquid \\
\end{center}
\begin{center} \large
M. Potthoff and W. Nolting
\end{center}
\begin{center} \small \it
Institut f\"ur Physik,
Humboldt-Universit\"at zu Berlin,
Germany
\end{center}
\vspace{10mm}
\small
---------------------------------------------------------------------------------------------------------------------------------------------------
{\bf Abstract} \\
Using the dynamical mean-field theory, we calculate the effective
electron mass in the Hubbard model on a semi-infinite lattice.
At the surface the effective mass is strongly enhanced.
Near half-filling this gives rise to a correlation-driven
one-electron surface mode.
\vspace{2mm}
---------------------------------------------------------------------------------------------------------------------------------------------------
\vspace{10mm}
]
Important characteristics of interacting electrons on a lattice
can be studied within the Hubbard model. If symmetry-broken
phases are ignored, the electron system is generally expected
to form a Fermi-liquid state in dimensions $D>1$. The effective
mass $m^\ast$ which determines the quasi-particle dispersion
close to the Fermi energy, can be substantially enhanced for
strong interaction $U$. A further enhancement may be possible at
the surface of a $D=3$ lattice since here the reduced coordination
number $z_s$ of the surface sites results in a reduced variance
$\Delta \propto z_s$ of the free local density of states. This
implies the effective interaction $U/\sqrt{\Delta}$ to be larger
and thereby a tendency to strengthen correlation effects at the
surface. Using the dynamical mean-field theory (DMFT) \cite{dmft},
we investigate the strongly correlated Hubbard model with uniform
nearest-neighbor hopping $t$ on a semi-infinite sc(100) lattice
at zero temperature. Close to half-filling, the surface effective
mass is found to be {\em strongly} enhanced compared with the bulk
value.
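The size of the surface enhancement of the effective interaction can be sketched numerically. Assuming the standard sc geometry ($z=6$ in the bulk, $z_s=5$ on the (100) surface) and the second moment $\Delta = z t^2$ of the free local density of states for nearest-neighbour hopping:

```python
import math

t = 1.0                  # nearest-neighbour hopping (arbitrary units)
z_bulk, z_surf = 6, 5    # sc bulk and sc(100) surface coordination numbers

# Variance of the free local density of states: Delta = z * t^2
delta_bulk = z_bulk * t**2
delta_surf = z_surf * t**2

U = 24.0 * t             # interaction used in the text, U = 24|t|
u_eff_bulk = U / math.sqrt(delta_bulk)
u_eff_surf = U / math.sqrt(delta_surf)
enhancement = u_eff_surf / u_eff_bulk
print(f"surface enhancement of U/sqrt(Delta): {enhancement:.3f}")
```

The bare geometric enhancement is only $\sqrt{6/5}\approx 1.10$; the much larger mass enhancement found below reflects the strong nonlinearity of the correlation effects near half-filling.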
We have generalized DMFT to film geometries \cite{PN98}. A film
consisting of $d=15$ layers with the normal along the [100]
direction turns out to be sufficient to simulate the actual sc(100)
surface, i.~e.\ bulk properties are recovered at the film center.
The lattice problem is mapped onto a set of $d$ single-impurity
Anderson models (SIAM) which are indirectly coupled via the
self-consistency condition of DMFT: $G_{0}^{(\alpha)}(E) = \left(
G_{\alpha \alpha}(E)^{-1} + \Sigma_\alpha(E) \right)^{-1}$.
The iterative solution starts with the layer-dependent local
self-energy $\Sigma_\alpha$ ($\alpha=1,...,d$) which determines
the layer-dependent on-site elements of the one-electron lattice
Green function $G_{\alpha \alpha}$ via the (lattice) Dyson equation.
The DMFT self-consistency equation then fixes the free impurity
Green function $G_{0}^{(\alpha)}$ of the $\alpha$-th SIAM.
Following the exact-diagonalization approach \cite{CK94}, we
take a finite number of $n_s$ sites in the impurity model.
The parameters of (the $\alpha$-th) SIAM are then obtained by
fitting $G_{0}^{(\alpha)}$. Finally, the exact numerical solution
of the $\alpha$-th SIAM yields a new estimate for $\Sigma_\alpha$
which is required for the next cycle. From the self-consistent
solution the effective mass is calculated as $m_\alpha^\ast =
1 - d \Sigma_\alpha(0) / dE$.
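The algebra of one iteration cycle can be illustrated at a single complex energy; this is only a toy consistency check of the two relations quoted above (the Weiss field and the effective-mass formula), not the exact-diagonalization solver, and the values of $G$ and $\Sigma$ are hypothetical:

```python
# Hypothetical local Green function and self-energy at one energy E
G = 0.4 - 0.3j
Sigma = 1.2 + 0.1j

# Self-consistency condition of DMFT: G0 = (G^{-1} + Sigma)^{-1}
G0 = 1.0 / (1.0 / G + Sigma)

# Inverting via the impurity Dyson equation must give back G
G_back = 1.0 / (1.0 / G0 - Sigma)
assert abs(G_back - G) < 1e-12

# Effective mass from the low-energy slope of a Fermi-liquid
# self-energy Sigma(E) = a + b*E:  m* = 1 - dSigma/dE|_{E=0}
def m_star(slope_b):
    return 1.0 - slope_b

print(m_star(-3.0))   # a slope of -3 corresponds to m* = 4
```

A negative slope of $\mbox{Re}\,\Sigma(E)$ at the Fermi energy, as in a Fermi liquid, therefore always gives $m^\ast > 1$.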
\begin{figure}[t]
\vspace{-3mm}
\center{\psfig{figure=fig01.eps,width=76mm,angle=0}}
\vspace{-5mm}
\parbox[]{76mm}{\small Fig.~1.
$m_\alpha^\ast$ for the top- and sub-surface layer and the bulk as
a function of the filling $n$. $U=24 |t| = 2W$ ($W$: width of the free
bulk density of states).
\label{fig:mass}
}
\end{figure}
The figure shows the bulk mass $m_b^\ast$ to increase with
increasing filling $n$. For $U=2W$ the system is a Mott-Hubbard
insulator at half-filling ($n=1$). Consequently, $m_b^\ast$ diverges
for $n \to 1$. Up to $n \approx 0.98$ all layer-dependent masses
$m_\alpha^\ast$ ($\alpha = 2,3,...$) are almost equal except for the
top-layer mass $m_s^\ast$ ($\alpha=1$). For $n \to 1$, $m_s^\ast$
is considerably enhanced, e.~g.\ $m_s^\ast/m_b^\ast=2.4$ for $n=0.99$
\cite{conv}.
There is an interesting consequence of this finding: Close to the
Fermi energy where damping effects are unimportant, the Green
function $G_{\alpha \beta}({\bf k}, E)$ can be obtained from the
low-energy expansion of the self-energy: $\Sigma_\alpha(E) =
a_\alpha + (1 - m_\alpha^\ast) E + \cdots$. For each wave vector
${\bf k}$ of the two-dimensional surface Brillouin zone (SBZ),
the poles of $G_{\alpha \beta}({\bf k}, E)$ yield the quasi-particle
energies $\eta_r({\bf k})$ ($r=1,...,d$). The $\eta_r({\bf k})$ are
the eigenvalues of the renormalized hopping matrix $T_{\alpha \beta}
({\bf k}) \equiv (m_\alpha^\ast m_\beta^\ast)^{-1/2} (\epsilon_{\alpha \beta}
({\bf k})+\delta_{\alpha\beta} (a_\alpha-\mu))$ where the non-zero
elements of the free hopping matrix are given by $\epsilon_\|({\bf k})
\equiv \epsilon_{\alpha \alpha}({\bf k}) = 2t (\cos k_x + \cos k_y)$
and $\epsilon_\perp({\bf k}) \equiv \epsilon_{\alpha\alpha \pm 1}
({\bf k}) = |t|$. For the semi-infinite system ($d \to \infty$)
the eigenvalues form a continuum of (bulk) excitations at a given
${\bf k}$. A surface excitation can split off the bulk continuum if
$m_s^\ast / m_b^\ast$ is sufficiently large. Assuming $m_\alpha^\ast
= m_b^\ast$ for $\alpha \ne 1$ and $m_{\alpha=1}^\ast = m_s^\ast$
(cf.\ Fig.~1), a simple analysis of $T_{\alpha\beta}({\bf k})$ yields
the following analytical criterion for the existence of a surface
mode in the low-energy (coherent) part of the one-electron spectrum
($r^2 \equiv m_b^\ast/ m_s^\ast < 1$):
\begin{eqnarray}
\frac{2-r^2}{1-r^2} <
\left|
\frac{\epsilon_\|({\bf k}) - \mu + a_s}{\epsilon_\perp({\bf k})}
+ \frac{a_b - a_s}{(1-r^2) \epsilon_\perp({\bf k})}
\right| \: .
\nonumber
\end{eqnarray}
The evaluation for the present case shows that indeed a surface mode
can split off at the $(0,0)$ and the $(\pi,\pi)$ points in the
SBZ for fillings $n>0.94$ and $n>0.90$, respectively.
The surface modification of
$a_\alpha=\Sigma_\alpha(0)$ (second term on the r.h.s)
turns out to be unimportant for the effect.
The excitation found is thus a correlation-driven surface excitation
which is caused by the strong specific surface renormalization of the
effective mass.
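The criterion can be evaluated directly. The sketch below specializes to $a_s = a_b$ (the surface shift of $\Sigma(0)$ is found to be unimportant) and uses a hypothetical value of $a-\mu$, so it does not reproduce the filling thresholds quoted above, only the mechanism: a large enough mass ratio $m_s^\ast/m_b^\ast$ lets a surface mode split off at the SBZ corners:

```python
import math

def surface_mode_splits(eps_par, eps_perp, a_minus_mu, r2):
    """Splitting criterion from the text with a_s = a_b;
    r2 = m_b*/m_s* < 1."""
    lhs = (2.0 - r2) / (1.0 - r2)
    rhs = abs((eps_par + a_minus_mu) / eps_perp)
    return rhs > lhs

t = -1.0                  # hopping; energies in units of |t|
r2 = 1.0 / 2.4            # from m_s*/m_b* = 2.4 at n = 0.99
a_minus_mu = -2.0         # hypothetical value of a - mu

eps_00 = 2 * t * (math.cos(0.0) + math.cos(0.0))            # -4|t|
eps_pipi = 2 * t * (math.cos(math.pi) + math.cos(math.pi))  # +4|t|

for name, ep in (("(0,0)", eps_00), ("(pi,pi)", eps_pipi)):
    print(name, surface_mode_splits(ep, abs(t), a_minus_mu, r2))
```

As $r^2 \to 1$ (equal masses) the left-hand side diverges and no mode can split off, consistent with the purely correlation-driven origin of the excitation.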
\section{Introduction}
Flavour non-conservation in charged weak current interactions allows
mixing between the \particle{B}{s}{0}\ and \anti{B}{s}{0}\ flavour states.
The proper-time probability density function of a \particle{B}{s}{0}\ meson
that is known to have mixed is oscillatory.
The oscillation frequency is proportional to \mbox{$\Delta m_{\rm s}$}, the
mass difference between the mass eigenstates.
Within the framework of the Standard Model, a measurement of
the ratio \mbox{$\Delta m_{\rm s}$}/\mbox{$\Delta m_{\rm d}$}\
(\mbox{$\Delta m_{\rm d}$}\ being the mass difference in the \oscil{B}{d}{0} system)
would allow the extraction of the ratio of
Cabibbo-Kobayashi-Maskawa (CKM) quark mixing matrix elements
$|V_{\rm ts}/V_{\rm td}|$.
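Explicitly, up to small QCD corrections, the Standard Model gives
\begin{displaymath}
\frac{\Delta m_{\rm s}}{\Delta m_{\rm d}} =
\frac{m_{{\rm B}_{\rm s}}\, f_{{\rm B}_{\rm s}}^2 B_{{\rm B}_{\rm s}}}
     {m_{{\rm B}_{\rm d}}\, f_{{\rm B}_{\rm d}}^2 B_{{\rm B}_{\rm d}}}
\left| \frac{V_{\rm ts}}{V_{\rm td}} \right|^2 ,
\end{displaymath}
where $f_{\rm B}$ and $B_{\rm B}$ denote the decay constants and bag
parameters of the corresponding neutral mesons; the hadronic ratio is the
main theoretical uncertainty in extracting $|V_{\rm ts}/V_{\rm td}|$.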
Although the slower \particle{B}{d}{0}\ oscillations are now well established,
the faster \particle{B}{s}{0}\ oscillations remain to be detected. Previous
ALEPH analyses searching for \particle{B}{s}{0}\ oscillations have either been based
on semi-exclusive selections in which a \particle{D}{s}{-}\ is fully
reconstructed~\cite{ALEPH-DS-LEPTON,ALEPH-DSHAD} or on more inclusive lepton
selections~\cite{ALEPH-DILEPTON,ALEPH-LEPTON-JET-WISCONSIN,
ALEPH-WARSAW-COMBINATION}.
Although the latter suffer from a lower \particle{B}{s}{0}\ purity and poorer
proper time resolution they have the advantage of larger statistics.
The analysis presented here is also based on an inclusive lepton sample.
Compared to the previous ALEPH inclusive lepton
analysis~\cite{ALEPH-LEPTON-JET-WISCONSIN}, the following
improvements are made to increase the sensitivity to \particle{B}{s}{0}\ mixing.
\begin{itemize}
\item {\bf Decay length resolution:}
An improved decay length resolution is
obtained by applying tight selection cuts to remove events likely to
have misassigned tracks between the primary and the \particle{B}{s}{0}\ vertex.
In addition an estimate of the decay length uncertainty is used on an event-by-event basis,
rather than assuming the same average decay length uncertainty for all events,
as used in previous analyses.
\item{\bf Boost resolution:}
A nucleated jet algorithm is used for an improved estimate of
the momentum of the \particle{b}{}{}-hadrons.
\item{\bf \boldmath \particle{B}{s}{0}\ purity classes:}
Various properties of the events, such as the charge of the reconstructed
\particle{b}{}{}-hadron vertex and the presence of kaons are used to enhance
the fraction of \particle{B}{s}{0}\ in subsamples of the data.
\item {\bf Initial and final state tagging:}
The \particle{b}{}{}-flavour tagging method previously used for the \particle{D}{s}{-}\ based analyses
\cite{ALEPH-DS-LEPTON,ALEPH-DSHAD} is applied.
In this method discriminating variables are used to construct
mistag probabilities and sample composition fractions
estimated on an event-by-event basis.
As a result, all events are tagged and the effective mistag rate is reduced.
\end{itemize}
This paper details these improvements and is organized as follows.
After a brief description of the ALEPH detector, the event
selection is described in \Sec{eventsel} and the \particle{B}{s}{0}\ purity classification
procedure in \Sec{enrichment}. The next two sections explain
the proper time reconstruction and the procedure for tagging the initial and
final state b quark charge.
The likelihood function is presented in \Sec{likelihood}
and the $\Delta m_{\mathrm s}$ results in \Sec{results}.
In \Sec{sec_syst} the systematic uncertainties
are described, and in \Sec{sec_checks} additional checks of the
analysis are presented. Finally, the combination of this analysis
with the ALEPH \particle{D}{s}{-}\ based analyses is described in \Sec{combination}.
\section{The ALEPH detector} \labs{detector}
The ALEPH detector and its performance from 1991 to 1995
are described in detail elsewhere~\cite{ALEPH-DETECTOR,ALEPH-PERFORMANCE},
and only a brief
overview of the apparatus is given here. Surrounding the beam pipe,
a high resolution vertex detector (VDET) consists of two layers of
double-sided silicon microstrip detectors,
positioned at average radii of 6.5~cm and 11.3~cm,
and covering 85\% and 69\% of the solid angle respectively.
The spatial resolution for the $r\phi$ and
$z$ projections (transverse to and along the beam axis, respectively) is
12~\mbox{$\mu$m}\ at normal incidence. The vertex detector is surrounded
by a drift chamber with eight coaxial wire layers with an outer
radius of 26~cm and by a time projection chamber
(TPC) that measures up to 21~three-dimensional points per track at radii
between 30~cm and 180~cm. These detectors are immersed
in an axial magnetic field of 1.5~T and together measure the momenta of
charged particles with a resolution
$\sigma (p)/p = 6 \times 10^{-4} \, p_{\mathrm T}
\oplus 0.005$ ($p_{\mathrm T}$ in \mbox{GeV$/c$}).
The resolution of the three-dimensional impact parameter in the
transverse and longitudinal view,
for tracks having information from all tracking detectors and two
VDET hits (a VDET ``hit'' being defined as having information
from both $r\phi$ and $z$ views), can be parametrized as
$\sigma = 25\, \mu{\mathrm m} + 95\, \mu{\mathrm m}/p$ ($p$ in \mbox{GeV$/c$}).
The TPC also provides up to 338~measurements of the specific ionization
of a charged particle. In the following, the \mbox{$dE/dx$}\ information is
considered as available if more than 50 samples are present.
Particle identification is based on the \mbox{$dE/dx$}\ estimator \mbox{$\chi_{\pi}$}\ (\mbox{$\chi_{\k}$}),
defined as the difference between the measured and expected ionization
expressed in terms of standard deviations for the $\pi$ (\particle{K}{}{}) mass hypothesis.
The TPC is surrounded by a lead/proportional-chamber electromagnetic
calorimeter segmented into $0.9^{\circ} \times 0.9^{\circ}$ projective towers
and read out in three sections in depth, with energy resolution
$\sigma (E)/E = 0.18/\sqrt{E} + 0.009 $ ($E$ in GeV). The iron return
yoke of the magnet is instrumented with streamer tubes to
form a hadron calorimeter, with a thickness of over 7
interaction lengths and is surrounded by two additional double-layers
of streamer tubes to aid muon identification.
An algorithm combines all these measurements to provide a determination
of the energy flow~\cite{ALEPH-PERFORMANCE} with an uncertainty
on the total measurable energy of
\mbox{$\sigma(E) = (0.6\sqrt{E/{\mathrm {GeV}}} + 0.6)~{\rm GeV}$.}
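The resolution parametrizations above can be evaluated directly; the $\oplus$ symbol denotes addition in quadrature, as is conventional. A small sketch for representative track momenta:

```python
import math

def momentum_resolution(pt):
    """sigma(p)/p = 6e-4 * pT (+) 0.005, with (+) addition in
    quadrature (pT in GeV/c)."""
    return math.hypot(6e-4 * pt, 0.005)

def impact_parameter_resolution(p):
    """sigma = 25 um + 95 um / p  (p in GeV/c), for tracks with
    two VDET hits."""
    return 25.0 + 95.0 / p

# A 45 GeV/c track (roughly the beam energy at the Z peak):
print(f"sigma(p)/p = {momentum_resolution(45.0):.4f}")            # ~2.7%
print(f"sigma_IP   = {impact_parameter_resolution(10.0):.1f} um") # 34.5 um
```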
\section{Event selection} \labs{eventsel}
This analysis uses approximately 4 million hadronic \particle{Z}{}{} events
recorded by the ALEPH detector from 1991 to 1995 at centre of mass energies
close to the \particle{Z}{}{} peak and selected
with the charged particle requirements described in \Ref{ALEPH-HADRONIC}.
It relies on Monte Carlo samples of fully simulated \Z{q} events.
The Monte Carlo generator is based
on JETSET 7.4~\cite{LUND} with updated branching ratios for heavy flavour
decays. Monte Carlo events are reweighted to the
physics parameters listed in \Table{phyparams}.
Events for which the cosine of the angle between the thrust
axis and the beam axis is less than 0.85 are selected.
Using the plane perpendicular to the thrust axis,
the event is split into two hemispheres.
Electrons and muons are identified using
the standard ALEPH lepton selection criteria~\cite{lepton-ID}.
Events containing at least one such lepton with
momentum above 3~\mbox{GeV$/c$}\ are kept.
The leptons are then associated to their closest jet (constructed using the
JADE algorithm~\cite{JADE} with $y_{\mbox{\scriptsize cut}}=0.004$)
and a transverse momentum $p_T$ with respect to the jet is calculated
with the lepton momentum removed from the jet.
Only leptons with $p_T >1.25$~\mbox{GeV$/c$}~are selected.
In the case that more than one lepton in an event satisfies this
requirement, only the lepton with the highest momentum is used as a
candidate for a \particle{B}{s}{0}\ decay product.
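The transverse momentum of the lepton with respect to its jet, with the lepton momentum first removed from the jet, is simple three-vector algebra; a sketch with hypothetical momenta (GeV/$c$):

```python
import math

def pt_rel(lepton, jet):
    """p_T of the lepton w.r.t. the jet axis, after removing the
    lepton momentum from the jet (toy 3-vectors)."""
    axis = [j - l for j, l in zip(jet, lepton)]      # jet minus lepton
    norm = math.sqrt(sum(a * a for a in axis))
    proj = sum(l * a for l, a in zip(lepton, axis)) / norm
    p_lep2 = sum(l * l for l in lepton)
    return math.sqrt(max(p_lep2 - proj**2, 0.0))

# Hypothetical event: jet roughly along z, lepton at a small angle
lepton = [1.5, 0.0, 6.0]
jet = [2.0, 0.0, 20.0]   # includes the lepton momentum
print(f"pT = {pt_rel(lepton, jet):.2f} GeV/c")  # 1.28, above the 1.25 cut
```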
The \particle{e}{}{+}\particle{e}{}{-} interaction point is
reconstructed on an event-by-event basis using the constraint of the
average beam spot position and envelope~\cite{1993_Rb_paper}.
A charm vertex is then reconstructed in the lepton hemisphere using the
algorithm described in~\Ref{ALEPH-DILEPTON}. Charged particles
in this hemisphere (other than the selected lepton)
are assigned to either the interaction point or a single
displaced secondary vertex. A three-dimensional grid search is performed for
the secondary vertex position to find the combination of
assignments that has the greatest reduction in $\chi^2$
as compared to the case when all tracks are assumed to come from the
interaction point. Tracks are required to come within $3\sigma$ of their
assigned vertex. The position resolution of this ``charm vertex'' is subsequently improved
by removing those tracks having a momentum below 1.5~\mbox{GeV$/c$}\ or an impact parameter
significance relative to the charm vertex larger than $1.4\sigma$. The remaining tracks are
then re-vertexed to form the reconstructed ``charm particle''.
If only one track passes the requirements, it
serves as the charm particle. The event is rejected
if no track remains, or none of the tracks assigned to the charm vertex
have at least one vertex detector hit.
The charm particle is
then combined with the lepton to form a candidate \particle{b}{}{}-hadron vertex. The
lepton is required to have at least one vertex detector hit and the $\chi^2$ per
degree of freedom of the reconstructed \particle{b}{}{}-hadron
vertex is required to be less than 25.
The energy $E_{\mathrm c}$ of the charm particle is estimated by clustering
a jet, using the JADE algorithm, around the charged tracks at the charm
vertex until a mass of $2.7~\mbox{GeV$/c^2$}$ is reached.
To reduce the influence of fragmentation particles on the estimate of $E_{\mathrm c}$,
charged and neutral particles with energies less than
0.5~GeV are excluded from the clustering~\cite{DR}.
The neutrino energy $E_{\nu}$ is estimated from the missing energy in the
lepton hemisphere taking into account the
measured mass in each hemisphere~\cite{Bs_lifetime}.
Assuming the direction of flight of the \particle{b}{}{}-hadron to be
that of its associated jet,
an estimate of the \particle{b}{}{}-hadron mass can be calculated
from the energy of the neutrino and the four-vectors of the
charm particle and the lepton.
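One simple reading of this mass estimate (an assumption on our part; the paper does not spell out the exact prescription) is to sum the charm and lepton four-vectors with a massless neutrino of energy $E_\nu$ taken along the jet direction and form the invariant mass:

```python
import math

def four_vector(E, p3):
    return [E] + list(p3)

def inv_mass(vectors):
    """Invariant mass of a sum of four-vectors [E, px, py, pz]."""
    tot = [sum(v[i] for v in vectors) for i in range(4)]
    m2 = tot[0] ** 2 - sum(c * c for c in tot[1:])
    return math.sqrt(max(m2, 0.0))

# Hypothetical reconstructed objects (GeV); jet axis taken as z
charm = four_vector(12.0, (1.5, 0.0, 11.73))   # mass ~ 2 GeV (charm-like)
lepton = four_vector(6.0, (2.0, 0.0, 5.6569))  # ~ massless
E_nu = 8.0
neutrino = four_vector(E_nu, (0.0, 0.0, E_nu))

m_b = inv_mass([charm, lepton, neutrino])
print(f"m_B estimate = {m_b:.2f} GeV/c^2")  # well below the 8 GeV/c^2 cut
```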
\begin{table}
\tabcaption{Values of the physics parameters assumed in this analysis.}
\vskip 0.1cm
\begin{center}
\begin{tabular}{|c|c|c| }
\hline
Physics parameter & Value and uncertainty & Reference \\
\hline \hline
\particle{B}{s}{0}\ lifetime & $1.49 \pm 0.06 $~ps & \cite{Schneider} \\
\particle{B}{d}{0}\ lifetime & $1.57 \pm 0.04 $~ps & \cite{Schneider} \\
\particle{B}{}{+}\ lifetime & $1.67 \pm 0.04 $~ps & \cite{Schneider} \\
\particle{b}{}{}-baryon lifetime & $1.22 \pm 0.05 $~ps & \cite{Schneider} \\
\mbox{$\Delta m_{\rm d}$} & $0.463 \pm 0.018~\mbox{ps$^{-1}$}$ & \cite{Schneider} \\
\hline \rule{0pt}{11pt}
$\mbox{$R_{\rm b}$} = \br{\Z{b}}/\br{\Z{q}}$ & $0.2170 \pm 0.0009$ & \cite{LEPEWWG} \\
$\mbox{$R_{\rm c}$} = \br{\Z{c}}/\br{\Z{q}}$ & $0.1733 \pm 0.0048$ & \cite{LEPEWWG}\\
$\mbox{$f_\bs$} = \BR{\anti{b}{}{}}{\particle{B}{s}{0}}$ & $ 0.103^{+0.016}_{-0.015}$ & \cite{Schneider} \\
$\mbox{$f_\bd$} = \mbox{$f_\bu$} = \BR{\anti{b}{}{}}{\Bd,\Bu}$ & $ 0.395^{+0.016}_{-0.020}$ &\cite{Schneider} \\
$\mbox{$f_{\mbox{\scriptsize \particle{b}{}{}-baryon}}$} = \BR{\particle{b}{}{}}{\mbox{\particle{b}{}{}-baryon}}$ & $ 0.106^{+0.037}_{-0.027}$ & \cite{Schneider}
\\[1pt]
\hline
$\BR{\particle{b}{}{}}{\ell}$ & $0.1112 \pm 0.0020$ & \cite{LEPEWWG}\\
$\br{\particle{b}{}{} \rightarrow \particle{c}{}{} \rightarrow \ell}$ & $0.0803 \pm 0.0033$ & \cite{LEPEWWG}\\
$\br{\particle{b}{}{} \rightarrow \anti{c}{}{} \rightarrow \ell}$ & $0.0013 \pm0.0005$ & \cite{LEPHF}\\
$\BR{\particle{c}{}{}}{\ell}$ & $0.098 \pm 0.005$ & \cite{LEPHF} \\
\hline
$\langle X_E \rangle$ & $0.702 \pm 0.008$ & \cite{LEPHF} \\
\hline
\end{tabular}
\end{center}
\labt{phyparams}
\end{table}
In order to improve the rejection of non-\particle{b}{}{} background
or \particle{b}{}{} events with a badly estimated decay length error,
the following additional cuts are applied~\cite{OL_thesis}:
\begin{itemize}
\item the momentum of the charm particle must be larger than 4~\mbox{GeV$/c$}; this cut is
increased to 8~\mbox{GeV$/c$}\ when the angle between the charm particle and the lepton is
less than $10^\circ$;
\item the reconstructed mass of the \particle{b}{}{}-hadron must be less than 8~\mbox{GeV$/c^2$};
\item the missing energy in the lepton hemisphere must be larger than $-2$~GeV;
\item the angle between the charm particle and the lepton must be between $5^\circ$ and
$30^\circ$;
\item the angle between the charm particle and the jet must be less than $20^\circ$.
\end{itemize}
Although the total efficiency of these additional requirements is 35\%,
the average decay length resolution of the remaining events is
improved by a factor of 2 and the amount of non-\particle{b}{}{} background
in the sample reduced by a factor close to 4. In addition the average momentum
resolution of the sample is significantly improved.
A total of 33023 events survive after all cuts.
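The additional requirements listed above can be written compactly as a single predicate; the event values below are hypothetical, chosen only to exercise the momentum cut that tightens at small charm-lepton angle:

```python
def passes_additional_cuts(ev):
    """The five additional requirements of the event selection,
    applied to a toy event (momenta/energies in GeV, angles in
    degrees)."""
    p_min = 8.0 if ev["angle_charm_lepton"] < 10.0 else 4.0
    return (ev["p_charm"] > p_min
            and ev["m_b"] < 8.0
            and ev["missing_energy"] > -2.0
            and 5.0 < ev["angle_charm_lepton"] < 30.0
            and ev["angle_charm_jet"] < 20.0)

event_pass = {"p_charm": 9.0, "m_b": 5.1, "missing_energy": 7.5,
              "angle_charm_lepton": 14.0, "angle_charm_jet": 6.0}
# Below 10 degrees the charm momentum cut tightens to 8 GeV/c
event_fail = dict(event_pass, angle_charm_lepton=8.0, p_charm=6.0)

print(passes_additional_cuts(event_pass))   # True
print(passes_additional_cuts(event_fail))   # False
```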
\begin{table}
\tabcaption{Lepton candidate sources (\%), as estimated
from Monte Carlo. Quoted uncertainties are statistical only.}
\labt{compo}
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
\particle{B}{s}{0}\ & \particle{B}{d}{0}\ & other \particle{b}{}{}-hadrons & charm & \particle{uds}{}{} \\
\hline
$10.35 \pm 0.08$ & $38.53 \pm 0.13$ & $47.86 \pm 0.14$ & $2.31 \pm 0.06$ &
$0.95\pm0.05$ \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}
\tabcaption{Definition of the eleven \particle{B}{s}{0}\ purity classes.
Column 1 gives the number of charged tracks at the charm vertex.
Column 2 shows whether the charge of these tracks
are the same ({\it S}) or opposite ({\it O}) to that of the
lepton, the tracks being ranked in order of decreasing momentum.
Column 3 indicates the subclasses based on the presence of kaon or $\phi$
candidates at the charm vertex.
Column 4 shows the fraction of data events in each class.
Column 5 gives the \particle{B}{s}{0}\ purity in each class, as estimated
from Monte Carlo. Quoted uncertainties are statistical only.}
\labt{enrichment}
\begin{center}
\begin{tabular}{|c|c|c|r@{\hspace{4.5ex}}|r@{\hspace{2ex}}|}
\hline
\begin{tabular}{@{}c@{}} Number \\[-2pt] of tracks \end{tabular} &
\begin{tabular}{@{}c@{}} Charge \\[-2pt] correlation \end{tabular} &
\begin{tabular}{@{}c@{}} Kaon \\[-2pt] requirements \end{tabular} &
\begin{tabular}{@{}c@{\hspace{-3.5ex}}} Fraction in \\[-2pt] data (\%) \end{tabular} &
\multicolumn{1}{|c|}{\particle{B}{s}{0}\ purity (\%)} \\[6pt]
\hline\hline
1 & $O$ & \multicolumn{1}{|@{}c@{}|}{\begin{tabular}{c}
1 kaon \\ 0 kaon \end{tabular}}
& \multicolumn{1}{|@{}r@{\hspace{4.5ex}}|}{\begin{tabular}{r@{}}
3.8 \\ 14.9 \end{tabular}}
& \multicolumn{1}{|@{}r@{}|}{\begin{tabular}{r@{\hspace{2ex}}}
24.0 $\pm$ 0.6 \\ 14.7 $\pm$ 0.3 \end{tabular}} \\
\hline
& $OS,SO$ & $\phi$ & 1.2 & 21.1 $\pm$ 1.0 \\
& $OS,SO$ & 0 kaon & 17.8 & 7.0 $\pm$ 0.2 \\
2 & $OS,SO$ & 1 kaon & 17.4 & 5.2 $\pm$ 0.1 \\
& $OS,SO$ & ~2 kaons & 2.3 & 8.4 $\pm$ 0.5 \\
\cline{2-5}
& $OO$ & & 8.3 & 16.7 $\pm$ 0.4 \\
\hline
& $OOS$ & & 2.9 & 19.4 $\pm$ 0.6 \\
3 & $OSO$ & & 3.8 & 18.0 $\pm$ 0.5 \\
& $SOO$ & & 3.9 & 14.5 $\pm$ 0.5 \\
\hline
\multicolumn{3}{|c|}{remainder} & 23.6 & 5.7 $\pm$ 0.1 \\
\hline
\end{tabular}
\end{center}
\end{table}
\section{\boldmath \particle{B}{s}{0}\ purity classes}\labs{enrichment}
\Table{compo} shows the composition of the final event sample obtained
assuming the physics parameters listed in \Table{phyparams}
and reconstruction efficiencies determined from Monte Carlo.
The average \particle{B}{s}{0}\ purity in the sample is estimated to be 10.35\%.
The sensitivity of the analysis
to \particle{B}{s}{0}\ mixing is increased by splitting the data into subsamples
with a \particle{B}{s}{0}\ purity larger or smaller than the average
and then making use of this knowledge in the likelihood fit.
Classes are constructed based on (i) the track
multiplicity at the charm vertex, (ii) the number of identified kaon candidates
and (iii) the charge correlation between the tracks at the charm vertex
and the lepton.
The definition of the eleven classes used in this analysis
is given in \Table{enrichment}.
As the last class contains those events which do not satisfy the
criteria of the preceding classes,
the classification procedure does not reject events.
For an odd (even) number of charged tracks assigned to the charm vertex,
the reconstructed charge of the b-hadron vertex is more likely to be zero
(non-zero), and therefore the probability for the hemisphere to contain
a neutral b-hadron is enhanced (reduced).
For events having two oppositely charged tracks at the charm vertex, the \particle{B}{s}{0}\ purity is
6.7\%, which is lower than the average purity.
For this large subsample of events, the presence of kaon candidates
and consistency with the $\phi$ mass are used to recover some
sensitivity to the \particle{B}{s}{0}.
In this procedure, kaon candidates are defined as charged tracks with
momentum above 2~\mbox{GeV$/c$}\ satisfying $\mbox{$\chi_{\pi}$}+\mbox{$\chi_{\k}$}<0$ and $|\mbox{$\chi_{\k}$}|<2$,
and a $\phi$ candidate is defined as a pair of oppositely charged tracks
with an invariant mass between 1.01 and 1.03~\mbox{GeV$/c^2$}\ (assuming kaon masses
for the two tracks).
Monte Carlo studies indicate that this classification
procedure is effectively equivalent to increasing the statistics of the
sample by $28\%$.
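The kaon and $\phi$ definitions translate directly into two predicates; a sketch with toy track values:

```python
def is_kaon_candidate(p, chi_pi, chi_k):
    """Kaon candidate: p > 2 GeV/c, chi_pi + chi_K < 0, |chi_K| < 2."""
    return p > 2.0 and (chi_pi + chi_k) < 0.0 and abs(chi_k) < 2.0

def is_phi_candidate(mass, charge1, charge2):
    """Phi candidate: oppositely charged pair with invariant mass
    between 1.01 and 1.03 GeV/c^2 (kaon mass hypothesis)."""
    return charge1 * charge2 < 0 and 1.01 < mass < 1.03

print(is_kaon_candidate(3.1, -2.4, 0.6))   # True
print(is_kaon_candidate(3.1, -0.2, 0.6))   # False: chi_pi + chi_K > 0
print(is_phi_candidate(1.02, +1, -1))      # True
```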
\section{Proper time reconstruction and resolution}\labs{proper_time}
\begin{figure}
\vspace{-1.0cm}
\begin{center}
\makebox[0cm]{\psfig{file=fig1.ps,width=1.05\textwidth,
bbllx=0pt,bblly=271pt,bburx=560pt,bbury=560pt}}
\end{center}
\vspace{-1.0cm}
\figcaption{Decay length resolution (a) and relative boost term resolution (b)
for all \particle{b}{}{}-hadrons;
the curves are the result of fits of the sum of two Gaussian functions
with relative fractions and widths as indicated.}
\labf{resol}
\end{figure}
\begin{table}[t]
\tabcaption{Double-Gaussian parametrizations of the decay length
pull and relative boost term resolution obtained from Monte Carlo.}
\labt{lres_gres}
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
\multicolumn{3}{|c|}{Parametrization of $(l-l_{0})/\sigma_l$} \\
$\alpha$ & Fraction $f_l^\alpha$ & Sigma $S_l^\alpha$ \\
\hline \hline
1 & $0.849 \pm 0.003$ & $ 1.333 \pm 0.005$ \\
2 & $0.151 \pm 0.003$ & $ 4.365 \pm 0.033$ \\
\hline
\end{tabular}
\hspace{0.5cm}
\begin{tabular}{|c|c|c|}
\hline
\multicolumn{3}{|c|}{Parametrization of $(g-g_{0})/g_{0}$} \\
$\beta$ & Fraction $f_g^\beta$ & Sigma $S_g^\beta$ \\
\hline \hline
1 & $0.723 \pm 0.004$ & $ 0.0713 \pm 0.0003$ \\
2 & $0.277 \pm 0.004$ & $ 0.2101 \pm 0.0012$ \\
\hline
\end{tabular}
\end{center}
\end{table}
An estimate, $l$, of the decay length of each \particle{b}{}{}-hadron candidate
is calculated as the distance from the
interaction point to the \particle{b}{}{}-hadron vertex projected onto the direction
of the jet associated to the lepton.
This decay length includes a global correction of $-78~\mu$m,
determined using Monte Carlo events. This small offset is due to the
vertex reconstruction algorithm, which assumes that all lepton candidates
in \particle{b}{}{} events come from direct $\particle{b}{}{} \to \ell$ decays.
\Figure{resol}a shows the Monte Carlo distribution of $l-l_{0}$
for \particle{b}{}{} events,
where $l_{0}$ is the true decay length.
An event-by-event decay length uncertainty, $\sigma_l$, is
estimated from the covariance matrices of the tracks
attached to the vertices. This
can be compared with the true error, $(l-l_{0})$, by
constructing the pull distribution, $(l-l_{0})/\sigma_l$.
A fit to this Monte Carlo distribution of the sum of two Gaussian
functions ($\alpha=1,2$) yields the
fractions, $f_l^\alpha$, and sigmas, $S_l^\alpha$, indicated in \Table{lres_gres}.
These parameters are used to describe the observed tails when constructing
the resolution function.
The true boost term is defined as $g_{0}=t_{0}/l_{0}$,
where $t_{0}$ is the true proper time.
An estimate of the boost term is formed using
$g=m_{\mathrm B}/p_{\mathrm B} + 0.36~$ps/cm.
The average \particle{b}{}{}-hadron mass, $m_{\mathrm B}$, is assumed to be 5.3~\mbox{GeV$/c^2$}\
and the reconstructed momentum is calculated as
$p_{\mathrm B}=\sqrt{(E_{\mathrm c}+E_\nu+E_\ell)^2-m_{\mathrm B}^2}$
where $E_\ell$ is the measured lepton energy.
The constant term
is an average offset correction determined using Monte Carlo events;
this results from the choice of the mass cut-off used in the nucleated
jet algorithm described in \Sec{eventsel}, which optimizes the relative boost
term resolution.
The distribution of $(g-g_{0})/g_{0}$, shown in \Fig{resol}b,
is parametrized with the sum of
two Gaussian functions; \Table{lres_gres} shows the corresponding
fractions, $f_g^\beta$, and sigmas, $S_g^\beta$, determined with
Monte Carlo events.
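The boost term and the resulting proper time $t = l\,g$ can be sketched numerically; making the implicit factor $1/c$ explicit is our reading of the units (the paper quotes $g = m_{\mathrm B}/p_{\mathrm B} + 0.36$~ps/cm directly), and the kinematic values are hypothetical but typical:

```python
C_CM_PER_PS = 0.029979   # speed of light in cm/ps

def boost_term(p_b, m_b=5.3, offset=0.36):
    """g = m_B / (p_B c) + 0.36 ps/cm, with the average b-hadron
    mass of 5.3 GeV/c^2 and the Monte Carlo offset correction."""
    return m_b / (p_b * C_CM_PER_PS) + offset

def proper_time(decay_length_cm, p_b):
    """t = l * g, as in the text."""
    return decay_length_cm * boost_term(p_b)

# A typical b hadron: p_B = 30 GeV/c, decay length l = 0.1 cm
print(f"g = {boost_term(30.0):.2f} ps/cm")
print(f"t = {proper_time(0.1, 30.0):.2f} ps")
```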
\begin{figure}[tp]
\vspace{-2cm}
\begin{center}
\makebox[0cm]{\psfig{file=fig2.ps,width=1.05\textwidth}}
\end{center}
\vspace{-1.0cm}
\figcaption{The proper time resolution for \particle{b}{}{} events in various intervals
of true proper time $t_0$ (in ps). The curves display the corresponding
resolution assumed in the likelihood as obtained from \Eq{eqres}.
The RMS values indicated are derived from the data points shown.}
\labf{tres}
\end{figure}
The proper time of each \particle{b}{}{}-hadron candidate
is computed from the estimated decay length and boost term as
\begin{equation}
t = l g \, ,
\end{equation}
and its proper time resolution function is parametrized with
the sum of four Gaussian components,
\begin{equation}
{\mathrm{Res}}(t,t_{0}) = \sum_{\alpha=1}^{2} \sum_{\beta=1}^2 f_l^{\alpha'} f_g^\beta
\frac{1}{\sqrt{2\pi} \sigma^{{\alpha\beta}} (t_{0} ) }
\exp \left [ -\frac{1}{2} \left ( \frac{t-t_{0}}{\sigma^{\alpha\beta}(t_{0} )}
\right )^2
\right ] \, ,
\labe{eqres}
\end{equation}
where $f_l^{2'}=f_l^{\mbox{\scriptsize dat}} f_l^2$ and $f_l^{1'}=1-f_l^{2'}$, and
where the event-by-event resolution $\sigma^{\alpha\beta}$
of each component, given by
\begin{equation}
\sigma^{\alpha\beta} (t_{0}) = \sqrt{
\left(g S_l^{\mbox{\scriptsize dat}} S_l^\alpha \sigma_l\right)^2
+ \left(t_{0} S_g^{\mbox{\scriptsize dat}} S_g^\beta \right)^2 } \, ,
\labe{treso}
\end{equation}
includes the explicit dependence on $t_{0}$.
This parametrization
implicitly assumes that any correlation between the decay length resolution
and the relative boost resolution is small, as confirmed by Monte Carlo
studies.
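As a consistency check, the four-component parametrization integrates to unity. The sketch below uses the Monte Carlo parameters of the table, sets the data scale factors to unity (their definition for simulated events), and assumes hypothetical but typical kinematics:

```python
import math

# Double-Gaussian parameters from the table (Monte Carlo)
F_L = (0.849, 0.151); S_L = (1.333, 4.365)
F_G = (0.723, 0.277); S_G = (0.0713, 0.2101)

def sigma_ab(t0, g, sigma_l, a, b):
    # Event-by-event width of component (a, b), scale factors = 1
    return math.hypot(g * S_L[a] * sigma_l, t0 * S_G[b])

def res(t, t0, g, sigma_l):
    # Sum of four Gaussian components
    total = 0.0
    for a in range(2):
        for b in range(2):
            s = sigma_ab(t0, g, sigma_l, a, b)
            total += (F_L[a] * F_G[b] / (math.sqrt(2 * math.pi) * s)
                      * math.exp(-0.5 * ((t - t0) / s) ** 2))
    return total

# Hypothetical kinematics: t0 = 1 ps, g = 6 ps/cm, sigma_l = 0.02 cm
t0, g, sigma_l = 1.0, 6.0, 0.02
dt = 0.001
integral = sum(res(t0 + k * dt, t0, g, sigma_l)
               for k in range(-6000, 6001)) * dt
print(f"integral = {integral:.4f}")   # ~1, i.e. properly normalized
```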
The scale factors $S_l^{\mbox{\scriptsize dat}}$ and $f_l^{\mbox{\scriptsize dat}}$
are introduced to account for a possible discrepancy between data and
Monte Carlo, both in the amount of tail in the decay length pull
($f_l^{\mbox{\scriptsize dat}}$) and in the estimate of $\sigma_l$ itself ($S_l^{\mbox{\scriptsize dat}}$).
In a similar fashion, the inclusion
of the parameter $S_g^{\mbox{\scriptsize dat}}$
allows possible systematic uncertainties due to the boost resolution
to be studied.
By definition all
these factors are set to unity when describing the resolution of simulated
events.
\Figure{tres} shows, for various intervals of true proper time, the proper
time resolution of simulated \particle{b}{}{} events together with
the parametrization obtained from \Eq{eqres}.
The parametrization is satisfactory, especially for small proper times.
\begin{figure}[t]
\begin{center}
\vspace{-0.5cm}
\psfig{file=fig3.ps,width=1.\textwidth,
bbllx=0pt,bblly=189pt,bburx=560pt,bbury=560pt}
\end{center}
\vspace{-0.7cm}
\figcaption{The reconstructed proper time distributions of the selected
events in data.
The contributions from the various components are indicated. The curve is
the result of the fit described in \Sec{proper_time}.}
\labf{fig_lifetime}
\end{figure}
In order to measure $S_l^{\mbox{\scriptsize dat}}$ and $f_l^{\mbox{\scriptsize dat}}$ in the data,
a fit is performed to the reconstructed proper time
distribution of the selected sample of real events.
This is performed using
the likelihood function described in \Sec{likelihood}, modified
to ignore tagging information.
Fixing all physics parameters to their central values given
in \Table{phyparams},
the likelihood is maximized with respect to $S_l^{\mbox{\scriptsize dat}}$ and $f_l^{\mbox{\scriptsize dat}}$.
The fit reproduces well the
negative tail of the proper time distribution (see \Fig{fig_lifetime}),
showing that the resolution is satisfactorily described by the chosen
parametrization.
The fitted values $S_l^{\mbox{\scriptsize dat}}=1.02\pm0.03$ and $f_l^{\mbox{\scriptsize dat}}=1.20\pm0.09$
indicate that the decay length resolution in the data is somewhat worse than
that suggested by the Monte Carlo simulation.
\section{Initial and final state tagging} \labs{tagging}
\defQ_{\mathrm o}{Q_{\mathrm o}}
\def\tilde{Q}_{\mathrm s}{\tilde{Q}_{\mathrm s}}
\def${p_{\rm T}^\ell}${${p_{\rm T}^\ell}$}
\def$S(\Qoppo)${$S(Q_{\mathrm o})$}
\def$S(\K)${$S(\particle{K}{}{})$}
\def$S(\ell)${$S(\ell)$}
\def$S(\ell_{\mathrm o})${$S(\ell_{\mathrm o})$}
\def$S(\ell_{\mathrm s})${$S(\ell_{\mathrm s})$}
\def${\Qsame\times S(\Qoppo)}${${\tilde{Q}_{\mathrm s}\times S(Q_{\mathrm o})}$}
\def${\Qoppo\times S(\Qoppo)}${${Q_{\mathrm o}\times S(Q_{\mathrm o})}$}
\def${\Qsame\times S(\K)}${${\tilde{Q}_{\mathrm s}\times S(\particle{K}{}{})}$}
\def${\Qoppo\times S(\K)}${${Q_{\mathrm o}\times S(\particle{K}{}{})}$}
\def${Z_{\k}}${${Z_{{\partsize \K}}}$}
\def${\Qoppo\times S(\ell)}${${Q_{\mathrm o}\times S(\ell)}$}
\def${\Qsame\times S(\ell)}${${\tilde{Q}_{\mathrm s}\times S(\ell)}$}
\defx^{\mbox{\scriptsize eff}}{x^{\mbox{\scriptsize eff}}}
The flavour state of the decaying \particle{B}{s}{0}\ candidate
is estimated from the charge of the reconstructed lepton.
This final state tag is incorrect if the
lepton is from the $\particle{b}{}{} \rightarrow \particle{c}{}{}
\rightarrow \ell$ decay
(6.1\% of the \particle{b}{}{} events in the sample) as the charge of the
lepton is reversed.
The flavour state at production is estimated using three initial state tags.
A \particle{B}{s}{0}\ candidate is ``tagged as unmixed (mixed)'' when the
reconstructed initial and final flavour states are the
same (different).
By definition, candidates from charm, \particle{uds}{}{},
or non-oscillating \particle{b}{}{}-hadron backgrounds are correctly tagged
if they are tagged as unmixed.
The tagging power
is enhanced by means of discriminating variables which
have some ability to distinguish between correctly tagged and
incorrectly tagged candidates.
This approach was first used in the ALEPH
\particle{D}{s}{-}--lepton analysis~\cite{ALEPH-DS-LEPTON} and refined for the
\particle{D}{s}{-}--hadron analysis~\cite{ALEPH-DSHAD}.
In contrast to the procedure of \Refss{ALEPH-DS-LEPTON}{ALEPH-DSHAD},
an event is considered to be mistagged if either the initial or the final
state is incorrectly tagged, but not both.
\begin{figure}
\begin{center}
\mbox{\psfig{file=fig4.ps,width=15cm}}
\end{center}
\figcaption{Schematic drawing indicating the initial and final state
tags used in this analysis.}
\labf{fig_decay}
\end{figure}
For each \particle{B}{s}{0}\ candidate, one of the tags described below is
used to determine the initial state (see also \Fig{fig_decay}).
\begin{itemize}
\item {\bf Opposite lepton tag:}
Leptons with momentum larger than 3~\mbox{GeV$/c$}\ are searched for in the
hemisphere opposite to the \particle{B}{s}{0}\ candidate.
The sign of the lepton with the highest transverse momentum $p_T(\ell_{\mathrm o})$
tags the initial state of the b quark in the opposite hemisphere.
It takes precedence over the other tags if it is available.
\item {\bf Fragmentation kaon tag:}
The fragmentation kaon candidate is defined as
the highest momentum charged track within $45^{\circ}$ of
the \particle{B}{s}{0}\ direction, identified, using the vertexing algorithm described
in Section 2, as being more likely to come
from the interaction point than the charm vertex, and
satisfying $\mbox{$\chi_{\k}$} < 0.5 $ and $\mbox{$\chi_{\k}$} - \mbox{$\chi_{\pi}$} > 0.5$.
The sign of the fragmentation kaon candidate tags the sign of the b quark
in the same hemisphere. It is used if no opposite
hemisphere lepton tag is found.
\item {\bf Opposite hemisphere charge tag:}
The opposite hemisphere charge is defined as
\begin{equation}
Q_{\mathrm o} =
\frac{\displaystyle \sum_i^{\rm oppo} q_i \, \vert p^i_{\parallel} \vert ^{\kappa}}
{\displaystyle \sum_i^{\rm oppo} \vert p^i_{\parallel} \vert ^{\kappa}} \, ,
\labe{Qoppo}
\end{equation}
where the sum is over all charged particles in the opposite hemisphere,
$p^i_{\parallel}$ is the momentum of the $i^{\rm th}$
track projected on the thrust axis, $q_i$ its charge and
$\kappa = 0.5$. The sign of $Q_{\mathrm o}$
tags the initial state of the b quark in the opposite hemisphere.
This tag is always available but has the largest mistag probability of
the three tags. It is used only if no other tag is available.
\end{itemize}
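The opposite hemisphere charge of \Eq{Qoppo} is a simple momentum-weighted average; a minimal sketch (the track list and its values are hypothetical):

```python
def opposite_hemisphere_charge(tracks, kappa=0.5):
    """Momentum-weighted hemisphere charge Q_o: tracks is a list of
    (charge, p_parallel) pairs, p_parallel being the track momentum
    projected on the thrust axis; kappa = 0.5 as in the analysis."""
    num = sum(q * abs(p) ** kappa for q, p in tracks)
    den = sum(abs(p) ** kappa for q, p in tracks)
    return num / den
```

By construction $|Q_{\mathrm o}| \le 1$, and its sign is driven by the charges of the highest-momentum tracks.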
\begin{table}
\begin{center}
\tabcaption{The tag and discriminating variables used in each class.
The quantities $S(\Qoppo)$, $S(\K)$\ and $S(\ell_{\mathrm o})$\ are the signs of the
opposite hemisphere charge, the fragmentation kaon and the opposite
side lepton. Classes 3--5 all use the sign of the opposite hemisphere
lepton as the initial state tag. For Class 3 no fragmentation kaon candidate
is identified. For Class 4 (Class 5) a fragmentation kaon candidate is found
whose charge is the same as (opposite to) the charge of the opposite
hemisphere lepton.
Purity and mistag rates are estimated from Monte Carlo.
Quoted uncertainties are statistical only.}
\labt{classes}
\vspace{-0.5cm}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
Tagging class & 1 & 2 & 3 & 4 & 5 \\
\hline
Available initial & $S(\Qoppo)$\ & $S(\Qoppo)$\ & $S(\Qoppo)$\ & $S(\Qoppo)$\ & $S(\Qoppo)$\ \\
state tags & & $S(\K)$\ & $S(\ell_{\mathrm o})$\ & $S(\ell_{\mathrm o})$=--$S(\K)$\ & $S(\ell_{\mathrm o})$=$S(\K)$\ \\
\hline\hline
Initial state tag used & $S(\Qoppo)$\ & $S(\K)$\ & $S(\ell_{\mathrm o})$\ & $S(\ell_{\mathrm o})$=--$S(\K)$\ & $S(\ell_{\mathrm o})$=$S(\K)$\ \\
\hline
& $|Q_{\mathrm o}|$ & & & & \\
& $S(Q_{\mathrm o})\tilde{Q}_{\mathrm s}$ & & & & \\
& & $S(K)Q_{\mathrm o}$ & & & \\
& & $S(K)\tilde{Q}_{\mathrm s}$ & & & \\
Discriminating& & $\chi_\pi$ & & $\chi_\pi$ & $\chi_\pi$ \\
variables & & ${Z_{\k}}$\ & & ${Z_{\k}}$\ & ${Z_{\k}}$\ \\
used & & & $S(\ell_{\mathrm o})$$Q_{\mathrm o}$ & $S(\ell_{\mathrm o})$$Q_{\mathrm o}$ & $S(\ell_{\mathrm o})$$Q_{\mathrm o}$ \\
& & & $S(\ell_{\mathrm o})$$\tilde{Q}_{\mathrm s}$ & $S(\ell_{\mathrm o})$$\tilde{Q}_{\mathrm s}$ & $S(\ell_{\mathrm o})$$\tilde{Q}_{\mathrm s}$ \\
& & & $p_T(\ell_{\mathrm o})$ & $p_T(\ell_{\mathrm o})$ & $p_T(\ell_{\mathrm o})$ \\
& & $t$ & & $t$ & $t$ \\
& $p_T(\ell_{\mathrm s})$ & $p_T(\ell_{\mathrm s})$ & $p_T(\ell_{\mathrm s})$ & $p_T(\ell_{\mathrm s})$ & $p_T(\ell_{\mathrm s})$ \\
\hline \hline
Fraction in data (\%)& 71.4 $\pm$ 0.2 & 11.9 $\pm$ 0.2 & 14.2 $\pm$ 0.2 & 1.3
$\pm$ 0.1 & 1.2 $\pm$ 0.1 \\
\hline
\particle{B}{s}{0}\ purity (\%)& 9.8 $\pm$ 0.1 & 13.1 $\pm$ 0.3 & 10.1 $\pm$ 0.2 &
15.6 $\pm$ 1.0 & 11.8 $\pm$ 0.8 \\
\particle{B}{s}{0}\ mistag (\%)& 38.6 $\pm$ 0.5 & 28.9 $\pm$ 1.0 & 34.0 $\pm$ 1.1 &
16.1 $\pm$ 2.3 & 55.9 $\pm$ 3.5 \\
$\!\!\!$ \particle{B}{s}{0}\ effective mistag (\%) $\!\!\!$& 32.4 & 24.0 & 24.5 & 12.5 & 22.3 \\
\hline
\particle{B}{d}{0}\ mistag (\%)& 38.4 $\pm$ 0.2 & 48.5 $\pm$ 0.7 & 35.4 $\pm$ 0.5 &
35.5 $\pm$ 2.0 & 39.9 $\pm$ 2.0 \\
other B mistag (\%)& 37.6 $\pm$ 0.2 & 61.4 $\pm$ 0.5 & 34.2 $\pm$ 0.5 &
43.8 $\pm$ 1.8 & 24.1 $\pm$ 1.4 \\
charm mistag (\%)& 38.2 $\pm$ 1.4 & 54.2 $\pm$ 3.2 & 14.2 $\pm$ 3.1 &
50.0 $\pm$ 50.0 & 8.6 $\pm$ 8.2 \\
\particle{uds}{}{}\ mistag (\%)& 47.8 $\pm$ 2.8 & 56.9 $\pm$ 6.0 & 46.0 $\pm$ 12.9 &
50.0 $\pm$ 50.0 & 50.0 $\pm$ 50.0 \\
\hline
\end{tabular}
\end{center}
\end{table}
The events are sorted into five exclusive classes
based on the availability and results of the three tags.
The definition of these tagging classes and the list of the discriminating
variables associated with each class are given in \Table{classes}.
The variable $\tilde{Q}_{\mathrm s}$
is the sum of the charges of all the tracks in the same hemisphere
and carries information on the initial state of the \particle{B}{s}{0}.
As the sum of charges of tracks originating from the decay of a neutral
particle is zero,
it is independent of whether the \particle{B}{s}{0}\ decays as a \particle{B}{s}{0}\ or a \anti{B}{s}{0}.
The variable ${Z_{\k}}$\ is defined as ${Z_{\k}}$$= p_{\mathrm K} / (E_{\mathrm beam} - E_{\mathrm B})$,
where $p_{\mathrm K}$ is the kaon momentum,
$E_{\mathrm beam}$ the beam energy and $E_{\mathrm B}$ the \particle{B}{s}{0}\ candidate energy.
The inclusion of the reconstructed \particle{B}{s}{0}\ proper time $t$
takes into account that the mistag probability of the fragmentation kaon tag
increases as the \particle{B}{s}{0}\ vertex approaches the primary vertex,
due to the misassignment of tracks between the primary and
secondary vertices. The use, for all classes, of the variable $p_T(\ell_{\mathrm s})$,
the transverse momentum of the lepton from the \particle{B}{s}{0}\ candidate decay,
reduces the deleterious effect of
$\particle{b}{}{} \rightarrow \particle{c}{}{} \rightarrow \ell$ on the final
state mistag.
The mistag probability, $\eta$, for the \particle{B}{s}{0}\ signal events in each class,
as well as the probability distributions
of each discriminating variable $x_i$
for correctly and incorrectly tagged signal events,
$r_i(x_i)$ and $w_i(x_i)$,
are estimated from Monte Carlo.
The various discriminating variables chosen in each class, $x_1, x_2, \ldots$,
are combined into a single effective discriminating variable $x^{\mbox{\scriptsize eff}}$,
according to the prescription developed for the
\particle{D}{s}{-}\ based analyses~\cite{ALEPH-DS-LEPTON,ALEPH-DSHAD}.
This new variable is defined as
\begin{equation}
x^{\mbox{\scriptsize eff}} = \frac{
\eta \, w_1(x_1) \, w_2(x_2) \, \cdots }{
(1-\eta) \, r_1(x_1) \, r_2(x_2) \, \cdots \, +
\eta \, w_1(x_1) \, w_2(x_2) \, \cdots } \, ,
\labe{xeff}
\end{equation}
and takes values between 0 and 1.
A small value indicates that the \particle{B}{s}{0}\ oscillation is
likely to have been correctly tagged.
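The combination prescription of \Eq{xeff} can be sketched directly; here `eta` is the class mistag probability and `r_vals`, `w_vals` are the densities $r_i(x_i)$ and $w_i(x_i)$ already evaluated for one event (the numerical values in the usage note are hypothetical):

```python
from math import prod

def x_eff(eta, r_vals, w_vals):
    """Effective discriminating variable: ratio of the 'incorrectly
    tagged' probability to the total, as in the equation above."""
    mis = eta * prod(w_vals)            # eta * w1(x1) * w2(x2) * ...
    cor = (1.0 - eta) * prod(r_vals)    # (1-eta) * r1(x1) * r2(x2) * ...
    return mis / (cor + mis)
```

For example, with $\eta = 0.3$ and a single variable for which $r(x) = 2.0$ and $w(x) = 0.5$, the event is pulled towards "correctly tagged" and `x_eff` falls below $\eta$; if all densities are equal, `x_eff` reduces to $\eta$ itself.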
To allow use of the discriminating variables in the likelihood fit,
the probability density functions $G^c_{jkl}(x^{\mbox{\scriptsize eff}})$ of $x^{\mbox{\scriptsize eff}}$ are
determined for each lepton source $j$, in each tagging class $k$ and in each
\particle{B}{s}{0}\ purity class $l$,
separately for the correctly ($c=+1$) and incorrectly ($c=-1$) tagged events.
This determination (as well as the estimation of the corresponding
mistag probabilities $\eta_{jkl}$) is based on Monte Carlo.
The enhancement of the tagging power provided by the variable $x^{\mbox{\scriptsize eff}}$
depends on the difference between the $G^+_{jkl}(x^{\mbox{\scriptsize eff}})$ and $G^-_{jkl}(x^{\mbox{\scriptsize eff}})$
distributions, and can be quantified in terms of
effective mistag rates, as described in \Ref{ALEPH-DS-LEPTON}.
The effective mistag rates for the \particle{B}{s}{0}\ signal in the five tagging classes
are given in \Table{classes}.
This table also indicates the \particle{B}{s}{0}\ purity and the mistags for
all background components.
The overall average \particle{B}{s}{0}\ effective mistag is 29\%.
\Figure{fig_xeffcomp}
displays the distribution of $x^{\mbox{\scriptsize eff}}$ in each of the tagging classes;
good agreement is observed between data and Monte Carlo.
The systematic uncertainties associated with the
tagging procedure are considered in \Sec{sec_syst}.
\begin{figure}
\begin{center}
\vspace{-2.0cm}
\mbox{\psfig{file=fig5.ps,width=15cm}}
\end{center}
\figcaption{Distribution of $x^{\mbox{\scriptsize eff}}$ in each
tagging class in data (points) and Monte Carlo (histogram).}
\labf{fig_xeffcomp}
\end{figure}
\section{Likelihood function} \labs{likelihood}
Each \particle{b}{}{}-hadron source has a different probability distribution function
for the true proper time, $t_0$, and for
the discrete variable, $\mu_0$, defined to take the value $-1$
for the mixed case or $+1$ for the unmixed case.
Assuming CP conservation and equal decay widths
for the two CP eigenstates in each neutral \particle{B}{}{}-meson system, the joint
probability distribution of $t_0$ and $\mu_0$
can be written as
\begin{equation}
p_j(\mu_0,t_0) =
\frac{e^{-t_0/\tau_j}}{2 \tau_j} \,
\left[1 + \mu_0 \cos{(\Delta m_j \, t_0)}
\right] \, ,
\labe{p}
\end{equation}
where $\tau_j$ and $\Delta m_j$ are the lifetime and oscillation frequency of
\particle{b}{}{}-hadron source $j$
(with the convention that $\Delta m_j = 0$ for non-oscillating \particle{b}{}{}-hadrons).
The efficiency for reconstructing the
\particle{b}{}{}-hadron vertex depends on the true proper time.
The stringent selection cuts described in \Sec{eventsel}
are designed to reduce the fraction
of fragmentation tracks assigned to the charm vertex,
consequently causing a loss of efficiency at small proper times.
Similarly, at large proper times the efficiency decreases because a
fragmentation track is less likely to be included at the charm vertex,
making the requirement that the charm vertex be assigned at least one track
more likely to fail.
The efficiencies $\epsilon_j (t_{0})$ are parametrized separately
for each \particle{b}{}{}-hadron component $j$. They are
independent of whether the \particle{b}{}{}-hadron candidate is tagged as mixed or unmixed.
The joint probability distribution of the reconstructed
proper time $t$ and of $\mu_0$ is obtained as the convolution of
$p_j(\mu_0,t_0)$ with the event-by-event resolution function ${\mathrm{Res}}(t,t_{0})$
(\Sec{proper_time}) and takes into account the observed dependence of
the selection efficiency on true proper time:
\begin{equation}
h_{j}(\mu_0,t) = \frac{\displaystyle
\int_0^{\infty} \epsilon_j (t_{0})
p_j(\mu_0,t_{0}) {\mathrm{Res}} (t,t_{0})
\,dt_{0} }
{\displaystyle \int_0^{\infty} \epsilon_j (t_{0})
\frac{1}{\tau_j}e^{-t_{0}/\tau_j}
\,dt_{0} } \, .
\labe{eqh}
\end{equation}
For the lighter quark backgrounds,
$h_{j}(-1,t)=0$ as these sources are unmixed by definition, and
$h_{j}(+1,t)$ are the reconstructed proper time distributions.
These distributions are determined from Monte Carlo samples
and are parametrized as the sum of three Gaussian functions.
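A numerical sketch of \Eq{p} and of the convolution in \Eq{eqh} follows; it assumes a flat efficiency $\epsilon(t_0)$ (so the normalizing denominator of \Eq{eqh} equals one) and a single Gaussian resolution of width `sigma`, whereas the analysis uses $\epsilon_j(t_0)$ and the full ${\mathrm{Res}}(t,t_0)$:

```python
import math

def p(mu0, t0, tau, dm):
    """Joint density of true proper time t0 and mixing state mu0 = +/-1:
    exponential decay modulated by the oscillation term (Eq. p)."""
    if t0 < 0.0:
        return 0.0
    return math.exp(-t0 / tau) / (2.0 * tau) * (1.0 + mu0 * math.cos(dm * t0))

def h(mu0, t, tau, dm, sigma, n=2000, tmax=15.0):
    """Reconstructed-time density: midpoint-rule convolution of p with a
    single Gaussian resolution (flat-efficiency simplification)."""
    dt0 = tmax / n
    acc = 0.0
    for i in range(n):
        t0 = (i + 0.5) * dt0  # midpoint of the i-th true-proper-time bin
        g = (math.exp(-0.5 * ((t - t0) / sigma) ** 2)
             / (math.sqrt(2.0 * math.pi) * sigma))
        acc += p(mu0, t0, tau, dm) * g * dt0
    return acc
```

Summing the two mixing states removes the oscillation term, so $p(+1,t_0)+p(-1,t_0)$ integrates to one, which provides a simple normalization check.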
The likelihood function used in this analysis is based on the values
taken by three
different variables in the selected data events.
These variables are the reconstructed proper time $t$,
the tagging result $\mu$, taking the value
$-1$ for events tagged as mixed or $+1$ for those tagged as unmixed,
and the effective discriminating variable $x^{\mbox{\scriptsize eff}}$.
The discriminating variable
$x^{\mbox{\scriptsize eff}}$ enters the likelihood function through two sets of
functions of $x^{\mbox{\scriptsize eff}}$, $X_{jkl}(x^{\mbox{\scriptsize eff}})$ and $Y_{jkl}(x^{\mbox{\scriptsize eff}})$ (described below),
whose values can be interpreted as event-by-event mistag probabilities and
fractions of the different lepton sources, respectively.
The likelihood of the total sample is written as
\begin{equation}
\mbox{$\cal L$} = C \prod_l^{\scriptsize \mbox{~11 purity~}}
\prod_k^{\scriptsize \mbox{~5 tagging~}}
\prod_i^{\scriptsize \mbox{~$N_{kl}$ events~}}
f_{kl}(x^{\mbox{\scriptsize eff}}_{ikl},\mu_{ikl},t_{ikl}) \, ,
\labe{eqlike}
\end{equation}
where $C$ is a constant independent of \particle{b}{}{}-hadron oscillation
frequencies and lifetimes, $N_{kl}$ is the number of selected candidates
from \particle{B}{s}{0}\ purity class $l$ falling in tagging class $k$, and where
\begin{equation}
f_{kl}(x^{\mbox{\scriptsize eff}},\mu,t) = \sum_j^{\scriptsize \mbox{5 sources}}
Y_{jkl}(x^{\mbox{\scriptsize eff}}) \left[
\left(1-X_{jkl}(x^{\mbox{\scriptsize eff}})\right) h_{j}(\mu,t) +
X_{jkl}(x^{\mbox{\scriptsize eff}}) h_{j}(-\mu,t) \right]
\,
\labe{pdf}
\end{equation}
sums over the five lepton sources considered to comprise the sample (see \Table{compo}).
The event-by-event quantities $X_{jkl}(x^{\mbox{\scriptsize eff}})$ and $Y_{jkl}(x^{\mbox{\scriptsize eff}})$
are computed from the
distributions $G^c_{jkl}(x^{\mbox{\scriptsize eff}})$ and mistag probabilities $\eta_{jkl}$
introduced in \Sec{tagging},
\begin{equation}
X_{jkl}(x^{\mbox{\scriptsize eff}}) = \eta_{jkl} \, \frac{G^-_{jkl}(x^{\mbox{\scriptsize eff}})}{G_{jkl}(x^{\mbox{\scriptsize eff}})} \, ,
\mbox{~~~}
Y_{jkl}(x^{\mbox{\scriptsize eff}}) = \alpha_{jkl} \, \frac{G_{jkl}(x^{\mbox{\scriptsize eff}})}{\sum_{j'}
\alpha_{j'kl} G_{j'kl}(x^{\mbox{\scriptsize eff}})} \, ,
\end{equation}
where $G_{jkl}(x^{\mbox{\scriptsize eff}}) = (1-\eta_{jkl}) G^+_{jkl}(x^{\mbox{\scriptsize eff}}) +
\eta_{jkl} G^-_{jkl}(x^{\mbox{\scriptsize eff}})$
and where $\alpha_{jkl}$ are the source fractions,
satisfying $\sum_{j=1}^{\mbox{\scriptsize 5 sources}} \alpha_{jkl} = 1$.
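The event-by-event quantities above can be computed directly once the densities are evaluated at the event's $x^{\mbox{\scriptsize eff}}$; a minimal sketch (the two-source numbers in the test are hypothetical):

```python
def mistag_and_fractions(eta, alpha, Gp, Gm):
    """Per-source event-by-event mistags X_jkl and fractions Y_jkl:
    eta and alpha are the per-source mistag probabilities and average
    fractions; Gp and Gm are the densities G^+ and G^- already evaluated
    at the event's x_eff."""
    # G_jkl = (1-eta) G^+ + eta G^-
    G = [(1.0 - e) * gp + e * gm for e, gp, gm in zip(eta, Gp, Gm)]
    # X_jkl = eta * G^- / G
    X = [e * gm / g for e, gm, g in zip(eta, Gm, G)]
    # Y_jkl = alpha * G / sum_j' alpha_j' G_j'
    norm = sum(a * g for a, g in zip(alpha, G))
    Y = [a * g / norm for a, g in zip(alpha, G)]
    return X, Y
```

When $G^+$ and $G^-$ coincide (no discriminating power), each $X_j$ reduces to $\eta_j$ and each $Y_j$ to $\alpha_j$, as expected.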
\section{Results for \boldmath \mbox{$\Delta m_{\rm s}$}} \labs{results}
\begin{figure}
\vspace{-2cm}
\begin{center}
\makebox[0cm]{\psfig{file=fig6.ps,width=1.05\textwidth,
bbllx=0pt,bblly=266pt,bburx=560pt,bbury=560pt}}
\end{center}
\figcaption{Negative log-likelihood difference with respect to the minimum
as a function of \mbox{$\Delta m_{\rm s}$}.}
\labf{likel}
\end{figure}
\begin{figure}
\vspace{-2.3cm}
\begin{center}
\makebox[0cm]{\psfig{file=fig7.ps,width=1.05\textwidth}}
\end{center}
\vspace{-0.6cm}
\figcaption{Measured \particle{B}{s}{0}\ oscillation amplitude as a function of \mbox{$\Delta m_{\rm s}$}\ for this analysis.
The error bars represent the 1$\sigma$ total uncertainties, and
the shaded bands show the one-sided \CL{95} contour, with and without
systematic effects included.}
\labf{amplitude}
\end{figure}
Assuming the values for the physics parameters given in \Table{phyparams},
the variation in the data of the log-likelihood,
as a function of the free parameter \mbox{$\Delta m_{\rm s}$}, is shown in \Fig{likel}.
The difference in log-likelihood is plotted relative to its global minimum and
remains constant for \mbox{$\Delta m_{\rm s}$}\ larger than 20~\mbox{ps$^{-1}$}.
The global minimum occurs at $\mbox{$\Delta m_{\rm s}$}=15.9\pm1.6$(stat.)~\mbox{ps$^{-1}$}\ but is not
sufficiently deep to claim a measurement.
In order to extract a lower limit on \mbox{$\Delta m_{\rm s}$}\ and to facilitate combination with
other analyses, the results are also presented in terms of
the ``amplitude'' fit. In this method~\cite{HGM_NIM} the
magnitude of \particle{B}{s}{0}\ oscillations is measured at fixed values
of the frequency \mbox{$\Delta m_{\rm s}$}, using a modified likelihood function that
depends on a new parameter, the \particle{B}{s}{0}\ oscillation amplitude ${\cal A}$.
This is achieved by replacing the probability density function of
the \particle{B}{s}{0}\ source given in \Eq{p} with
\begin{equation}
\frac{e^{-t_0/\mbox{$\tau_{\rm s}$}}}{2 \mbox{$\tau_{\rm s}$}} \,
\left[1 + \mu_0 {\cal A} \cos{(\mbox{$\Delta m_{\rm s}$} \, t_0)}
\right] \, .
\end{equation}
For each value of \mbox{$\Delta m_{\rm s}$}, the new negative log-likelihood is then minimized with
respect to ${\cal A}$, leaving all other parameters (including \mbox{$\Delta m_{\rm s}$}) fixed.
The minimum is well behaved and very close to parabolic. At each value of
\mbox{$\Delta m_{\rm s}$}\ one can thus obtain a measurement of the amplitude with Gaussian error,
${\cal A} \pm \mbox{$\sigma^{\mbox{\scriptsize stat}}_{\cal A}$}$.
If $\mbox{$\Delta m_{\rm s}$}$ is close to the true value, one expects ${\cal A} = 1 $ within
the estimated uncertainty; however, if \mbox{$\Delta m_{\rm s}$}\ is far from its
true value, a measurement consistent with ${\cal A} = 0$ is expected.
The amplitude fit results are displayed in \Fig{amplitude}
as a function of \mbox{$\Delta m_{\rm s}$}. A peak in the amplitude, corresponding to the
minimum observed in the negative log-likelihood, can be seen around
$\mbox{$\Delta m_{\rm s}$}=16~\mbox{ps$^{-1}$}$. At this value, the measured amplitude
is $2.2\,\sigma$ away from zero;
as for the likelihood,
this is not significant enough to claim a measurement of \mbox{$\Delta m_{\rm s}$}.
A value of \mbox{$\Delta m_{\rm s}$}\ can be excluded at \CL{95} if
${\cal A}+1.645~\sigma_{\cal A}<1$.
Taking into account all systematic uncertainties described
in the next section, all values of \mbox{$\Delta m_{\rm s}$}\ below 9.5~\mbox{ps$^{-1}$}\ are excluded
at \CL{95}. The sensitivity, estimated from the data as the value
of \mbox{$\Delta m_{\rm s}$}\ at which $1.645\,\sigma_{\cal A}=1$, is 9.6~\mbox{ps$^{-1}$}.
Ignoring systematic uncertainties would increase the \CL{95} lower limit
and sensitivity by 0.1~\mbox{ps$^{-1}$}\ and 0.6~\mbox{ps$^{-1}$}\ respectively.
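The exclusion and sensitivity criteria are simple one-sided conditions on the fitted amplitude and its uncertainty; a minimal sketch:

```python
def excluded_95cl(amp, sigma_amp):
    """A value of dms is excluded at 95% CL if A + 1.645 sigma_A < 1."""
    return amp + 1.645 * sigma_amp < 1.0

def below_sensitivity(sigma_amp):
    """The sensitivity is the dms value at which 1.645 sigma_A = 1;
    points with smaller amplitude uncertainty lie below it."""
    return 1.645 * sigma_amp < 1.0
```

With the values of \Table{syst}, the point at $\mbox{$\Delta m_{\rm s}$}=0~\mbox{ps$^{-1}$}$ (${\cal A}=-0.030$, $\sigma=0.099$) is excluded, while the point at $15~\mbox{ps$^{-1}$}$ (${\cal A}=2.291$, $\sigma=1.271$) is not.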
\section{Systematic uncertainties}
\labs{sec_syst}
The systematic uncertainties on the \particle{B}{s}{0}\ oscillation amplitude $\mbox{$\sigma^{\mbox{\scriptsize syst}}_{\cal A}$}$
are calculated, using the prescription of \Ref{HGM_NIM}, as
\begin{equation}
\mbox{$\sigma^{\mbox{\scriptsize syst}}_{\cal A}$} ={\cal A}^{\mbox{\scriptsize new}}-
{\cal A}^{\mbox{\scriptsize nom}}
~ + ~ (1-{\cal A}^{\mbox{\scriptsize nom}} )
\frac{\sigma^{\mbox{\scriptsize new}}_{{\cal A}}-
\sigma^{\mbox{\scriptsize nom}}_{{\cal A}}}
{\sigma^{\mbox{\scriptsize nom}}_{{\cal A}} }
\end{equation}
where the superscript ``\mbox{\small nom}''
refers to the amplitude values and statistical
uncertainties obtained
using the nominal values for the various parameters and
``\mbox{\small new}'' refers
to the new amplitude values obtained when a single parameter is changed and
the analysis repeated (including a re-evaluation of the distributions of the
discriminating variables used for the \particle{b}{}{}-flavour tagging).
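This prescription translates into a one-line function; the numerical values in the check are hypothetical:

```python
def syst_shift(A_nom, sig_nom, A_new, sig_new):
    """Systematic shift on the amplitude (prescription of Ref. HGM_NIM):
    the change in the fitted amplitude plus a term propagating the
    relative change in its statistical uncertainty, weighted by how far
    the nominal amplitude is from unity."""
    return (A_new - A_nom) + (1.0 - A_nom) * (sig_new - sig_nom) / sig_nom
```

For example, a variation that moves the amplitude from $0.30$ to $0.35$ and its uncertainty from $0.50$ to $0.55$ yields a shift of $0.05 + 0.7 \times 0.1 = 0.12$.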
The total systematic uncertainty is the quadratic sum of the
following contributions.
\begin{itemize}
\item{\bf Sample composition:}
The systematic uncertainty on the sample composition is obtained by
varying the assumed values for the \particle{b}{}{}-hadron fractions \mbox{$f_\bs$}, $\mbox{$f_{\mbox{\scriptsize \particle{b}{}{}-baryon}}$}$ and
the various lepton sources ($\particle{b}{}{} \rightarrow \ell$,
$\particle{b}{}{} \rightarrow \particle{c}{}{} \rightarrow \ell$, etc \ldots) by
the uncertainties quoted in \Table{phyparams}.
The statistical uncertainty on the purities determined from Monte Carlo
is also propagated.
A comparison of data and Monte Carlo fractions for the different \particle{B}{s}{0}\
purity classes shows small deviations, the largest relative difference of
16\% occurring in the first class of \Table{enrichment}.
The systematic uncertainty due to the \particle{B}{s}{0}\ purity classification procedure
is evaluated by shifting, in each class, all five
purities (\particle{B}{s}{0}, \particle{B}{d}{0}, \ldots)
in the direction of their respective overall averages,
$\overline{\alpha}_j$ given in \Table{compo}, by a fraction
$\gamma =\pm 20\%$ of their differences with respect to these averages:
\begin{equation}
\alpha_{jkl} \rightarrow \alpha_{jkl} + \gamma (\alpha_{jkl} - \overline{\alpha}_j) \, .
\end{equation}
As this is performed coherently in
all \particle{B}{s}{0}\ purity classes, the procedure is rather conservative and ensures
that the overall average purities remain unchanged.
Not using the \particle{B}{s}{0}\ purity classification would decrease the \mbox{$\Delta m_{\rm s}$}\ statistical
sensitivity by 0.7~\mbox{ps$^{-1}$}.
For the fraction of charm and \particle{uds}{}{}\ backgrounds a relative variation of $\pm 25\%$
is propagated, as suggested by a comparison between data and Monte Carlo
performed in \Ref{high_pt_lepton}.
\item{\bf Proper time resolution:}
For the systematic uncertainty on the proper time resolution,
the correction factors presented in Tables 4 and 5 are varied by $\pm1\sigma$.
The scale factors
($S_l^{\mbox{\scriptsize dat}}=1.02\pm0.03$ and $f_l^{\mbox{\scriptsize dat}}=1.20\pm0.09$) for the decay length resolution,
obtained from the lifetime fit to the data, are also varied by their measured
uncertainty.
In addition, a possible bias of $\pm 0.055$~ps/cm is considered on the
determination of the boost term; this value corresponds to the observed
shift between the measured and simulated boost term distributions and
represents approximately 1\% of the average boost term.
Finally the boost term resolution is given a relative variation of $\pm10\%$
($S_g^{\mbox{\scriptsize dat}}=1.0\pm0.1$), which is conservative given the close agreement between
the measured and simulated boost distributions.
\item{\bf \boldmath \particle{b}{}{}-quark fragmentation:}
The average fraction of energy taken by a \particle{b}{}{}-hadron during the
fragmentation process,
$\langle X_E \rangle =0.702\pm0.008$,
is varied by its measured uncertainty.
The corresponding effects on the
sample composition, mistags and resolutions are propagated.
\item{\bf Mistag:}
Based on data/Monte Carlo comparisons of the tagging variables,
performed for the \particle{D}{s}{-}-based analyses~\cite{ALEPH-DS-LEPTON,ALEPH-DSHAD},
absolute variations of $\pm0.8\%$ for the first tagging class
(opposite hemisphere charge)
and $\pm2\%$ for all other classes (fragmentation kaon and opposite lepton) are applied to the mistag rates.
In addition, the $\pm 1\sigma$ statistical uncertainty from Monte Carlo is
propagated.
The changes in mistag due to variation
of the $\particle{b}{}{} \rightarrow \particle{c}{}{} \rightarrow \ell$
fraction are included as part of the sample composition systematic uncertainty.
\item{\bf Lifetimes, \boldmath{\mbox{$\Delta m_{\rm d}$}, \mbox{$R_{\rm b}$}\ and \mbox{$R_{\rm c}$}}:}
The values assumed for the various \particle{b}{}{}-hadron lifetimes, \mbox{$\Delta m_{\rm d}$}, \mbox{$R_{\rm b}$}\ and \mbox{$R_{\rm c}$}\
are varied within the uncertainties quoted in \Table{phyparams}.
\item{\bf Difference in decay width:}
A possible decay width difference \mbox{$\Delta\Gamma_{\rm s}/\Gamma_{\rm s}$}\ between the two mass
eigenstates of the \particle{B}{s}{0}\ meson has been ignored in the likelihood fit. The fit
is therefore repeated with a modified likelihood assuming $\mbox{$\Delta\Gamma_{\rm s}/\Gamma_{\rm s}$} = 0.27$,
equal to the theoretical prediction of \Ref{DELTA_GAMMA},
$\mbox{$\Delta\Gamma_{\rm s}/\Gamma_{\rm s}$} = 0.16^{+0.11}_{-0.09}$, plus its quoted positive uncertainty.
\item{\bf Cascade bias:}
In the likelihood expression of \Eq{eqlike} each \particle{b}{}{}-hadron component
is treated using a single resolution function and mistag. No attempt is
made to treat separately the $\particle{b}{}{} \rightarrow \ell$ (direct)
and \mbox{$\particle{b}{}{} \rightarrow \particle{c}{}{} \rightarrow \ell$}
(cascade) decays.
While the former is characterized by a good proper time resolution and mistag,
the latter has a degraded decay length resolution and a somewhat biased
decay length because of the charm lifetime. In addition, the sign of the lepton
is changed, leading to a different total mistag.
To study the possible bias arising from the correlation between the poor decay length
resolution and degraded tagging performance of the cascade events,
two different fast Monte Carlo experiments are generated
with a true value of \mbox{$\Delta m_{\rm s}$}\ equal to $50~\mbox{ps$^{-1}$}$. In the first
the \particle{b}{}{}-hadron decays are generated using the average mistag and resolution;
in the second, the primary and cascade components are generated separately,
each with their appropriate mistag and resolution.
For both experiments, the corresponding amplitude plot is obtained
using the likelihood described in \Sec{likelihood}, i.e.\ with
average mistags and resolutions.
The fast Monte Carlo experiment
generated using the average \particle{b}{}{}-hadron properties,
yields an amplitude spectrum consistent with zero, as expected (since
the fitting function is based on the same probability distributions
as the fast Monte Carlo generator).
In contrast, the experiment in which the direct and
cascade decays are generated separately
shows a small amplitude bias at low and very large \mbox{$\Delta m_{\rm s}$}.
Since the bias is small, especially in the region where the limit is set,
and would cause the limit and sensitivity to be slightly underestimated,
no attempt is made to correct for this effect;
instead, the observed deviations of the amplitude from zero are
treated as a systematic uncertainty.
\end{itemize}
\begin{table}
\tabcaption{Measurement of the \particle{B}{s}{0}\ oscillation amplitude, ${\cal A}$, for
various oscillation frequencies together with the statistical uncertainty,
\mbox{$\sigma^{\mbox{\scriptsize stat}}_{\cal A}$}, and the total systematic uncertainty, \mbox{$\sigma^{\mbox{\scriptsize syst}}_{\cal A}$};
a breakdown of \mbox{$\sigma^{\mbox{\scriptsize syst}}_{\cal A}$}\
in several categories of systematic effects is also given. }
\labt{syst}
\begin{center}
\begin{tabular}{|l|c|c|c|c|}
\hline
\mbox{$\Delta m_{\rm s}$}\ & 0~\mbox{ps$^{-1}$}\ & 5~\mbox{ps$^{-1}$}\ & 10~\mbox{ps$^{-1}$}\ & 15~\mbox{ps$^{-1}$}\
\\
\hline
${\cal A}$ & $-0.030$ & $-0.065$ & 0.303 & 2.291 \\
\rule{0pt}{15 pt}
\mbox{$\sigma^{\mbox{\scriptsize stat}}_{\cal A}$} & $\pm 0.099 $ & $\pm0.267$ & $\pm0.590$ & $\pm1.271 $
\\
\rule{0pt}{15 pt}
\mbox{$\sigma^{\mbox{\scriptsize syst}}_{\cal A}$} & $^{+0.340}_{-0.340}$ &
$^{+0.223}_{-0.235}$ & $^{+0.232}_{-0.324}$ & $^{+0.801}_{-0.582}$ \\
\hline
Systematic contributions: & \ & \ & \ & \\
\rule{0pt}{15 pt}
-- $R_{\rm b}$, $R_{\rm c}$ & $^{+0.001}_{-0.001}$ & $^{+0.002}_{-0.001}$ & $^{+0.001}_{-0.002}$ & $^{+0.001}_{-0.005}$ \\
\rule{0pt}{15 pt}
{-- $\mbox{$f_\bs$} = \BR{\anti{b}{}{}}{\particle{B}{s}{0}}$} & { $^{+0.046}_{-0.035}$} &
{ $^{+0.146}_{-0.112}$ } & { $^{+0.133}_{-0.109}$ } & { $^{+0.217}_{-0.173}$ } \\
\rule{0pt}{15 pt}
-- $\mbox{$f_{\mbox{\scriptsize \particle{b}{}{}-baryon}}$} = \BR{\particle{b}{}{}}{\mbox{\particle{b}{}{}-baryon}}$ & $^{+0.008}_{-0.010}$ & $^{+0.026}_{-0.018}$ & $^{+0.028}_{-0.023}$ & $^{+0.007}_{-0.002}$ \\
\rule{0pt}{15 pt}
-- charm fraction & $^{+0.012}_{-0.012}$ & $^{+0.019}_{-0.016}$ & $^{+0.021}_{-0.018}$ & $^{+0.051}_{-0.043}$ \\
\rule{0pt}{15 pt}
-- \particle{uds}{}{}\ fraction & $^{+0.008}_{-0.008}$ & $^{+0.023}_{-0.026}$ & $^{+0.032}_{-0.038}$ & $^{+0.078}_{-0.091}$ \\
\rule{0pt}{15 pt}
-- $ \particle{b}{}{} \rightarrow \ell,
\particle{b}{}{} \rightarrow \particle{c}{}{} \rightarrow \ell,
\particle{b}{}{} \rightarrow \anti{c}{}{} \rightarrow \ell,
\particle{c}{}{} \rightarrow \ell$ & $^{+0.065}_{-0.013}$ & $^{+0.000}_{-0.055}$ & $^{+0.000}_{-0.121}$ & $^{+0.464}_{-0.000}$ \\
\rule{0pt}{15 pt}
-- purities (MC stat.) & $^{+0.047}_{-0.041}$ & $^{+0.078}_{-0.070}$ & $^{+0.076}_{-0.075}$ & $^{+0.104}_{-0.108}$ \\
\rule{0pt}{15 pt}
-- \particle{B}{s}{0}\ purity classes & $^{+0.017}_{-0.009}$ & $^{+0.000}_{-0.007}$ & $^{+0.010}_{-0.018}$ & $^{+0.140}_{-0.187}$ \\
\rule{0pt}{15 pt}
-- \mbox{$\Delta m_{\rm d}$}\ & $^{+0.037}_{-0.037}$ & $^{+0.002}_{-0.002}$ & $^{+0.001}_{-0.001}$ & $^{+0.000}_{-0.003}$ \\
\rule{0pt}{15 pt}
-- \particle{b}{}{}-hadron lifetimes & $^{+0.033}_{-0.000}$ & $^{+0.000}_{-0.046}$ & $^{+0.027}_{-0.037}$ & $^{+0.282}_{-0.000}$ \\
\rule{0pt}{15 pt}
-- decay length resolution & $^{+0.000}_{-0.000}$ & $^{+0.025}_{-0.025}$ &
$^{+0.054}_{-0.057}$ & $^{+0.050}_{-0.021}$ \\
\rule{0pt}{15 pt}
-- boost term resolution & $^{+0.010}_{-0.010}$ & $^{+0.030}_{-0.033}$ &
$^{+0.048}_{-0.059}$ & $^{+0.205}_{-0.191}$ \\
\rule{0pt}{15 pt}
-- \particle{b}{}{}-fragmentation & $^{+0.023}_{-0.000}$ & $^{+0.012}_{-0.070}$ & $^{+0.067}_{-0.085}$ & $^{+0.509}_{-0.403}$ \\
\rule{0pt}{15 pt}
{-- \particle{b}{}{}-flavour tagging} & { $^{+0.317}_{-0.332}$} &
{ $^{+0.138}_{-0.132}$ }& { $^{+0.132}_{-0.207}$ }&
{ $^{+0.233}_{-0.219}$ } \\
\rule{0pt}{15 pt}
-- $\mbox{$\Delta\Gamma_{\mathrm s}$}/\mbox{$\Gamma_{\mathrm s}$}$ & $^{+0.000}_{-0.002}$ &
$^{+0.012}_{-0.000}$ & $^{+0.011}_{-0.000}$ & $^{+0.018}_{-0.000}$ \\
\rule{0pt}{15 pt}
-- cascade bias & $^{+0.060}_{-0.000}$ & $^{+0.000}_{-0.087}$ & $^{+0.000}_{-0.085}$ & $^{+0.000}_{-0.069}$ \\
\hline
\end{tabular}
\end{center}
\end{table}
The relative importance of the various systematic uncertainties, as a function of
\mbox{$\Delta m_{\rm s}$}, is shown in \Table{syst}.
Except at low \mbox{$\Delta m_{\rm s}$}\ the systematic uncertainties are generally small compared to
the statistical uncertainty.
At $\Delta m_{\mathrm s}=10~\mbox{ps$^{-1}$}$, the most important contributions are from \mbox{$f_\bs$}\ and
the \particle{b}{}{}-flavour tagging.
\section{Checks}
\labs{sec_checks}
Using a fast Monte Carlo generator which takes into account all
details of the sample composition, the resolution functions,
the mistag rates and the distributions of $x^{\mbox{\scriptsize eff}}$,
the average amplitude over many fast Monte Carlo experiments is
found to be consistent with unity for $\mbox{$\Delta m_{\rm s}$}= \mbox{$\Delta m^{\rm true}_{\rm s}$}$ and with
zero for any value of \mbox{$\Delta m_{\rm s}$}\ if $\mbox{$\Delta m^{\rm true}_{\rm s}$}=\infty$.
The estimate, $\mbox{$\sigma^{\mbox{\scriptsize stat}}_{\cal A}$}$, of the statistical uncertainty
on the amplitude has also been verified by studying the distribution of
${\cal A}/\mbox{$\sigma^{\mbox{\scriptsize stat}}_{\cal A}$}$ for cases where ${\cal A}=0$ is expected.
The mean value and RMS of such a distribution obtained with fast Monte Carlo
experiments generated with $\mbox{$\Delta m^{\rm true}_{\rm s}$}=\infty$ are found to be
consistent with 0 and 1.
A likelihood fit for \mbox{$\Delta m_{\rm s}$}\ performed on a \Z{q} Monte Carlo sample having the
same statistics as the data and generated with a true value of \mbox{$\Delta m_{\rm s}$}\ of
3.33~\mbox{ps$^{-1}$}\ yields $\mbox{$\Delta m_{\rm s}$} = 3.31 \pm 0.12$(stat.)~\mbox{ps$^{-1}$}, in
agreement with the input value.
Performing an amplitude fit on the same Monte Carlo events yields the
results shown in \Fig{dms_mc}. As expected, the amplitude is consistent
with 1 at the true value of \mbox{$\Delta m_{\rm s}$}.
The sensitivity estimated from this Monte Carlo sample
(ignoring systematic uncertainties) is 10.6~\mbox{ps$^{-1}$}, a little higher
than that obtained from the data, 10.2~\mbox{ps$^{-1}$},
due to the slightly better decay length
resolution in Monte Carlo.
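The behaviour verified with the fast Monte Carlo can be reproduced with a few lines of toy simulation. The sketch below is emphatically not the ALEPH fit: it assumes perfect flavour tagging, no decay-length or boost smearing, and invented parameter values. It only illustrates the core of the amplitude method, namely that the fitted amplitude is compatible with unity at the true oscillation frequency and with zero well away from it.

```python
import numpy as np

rng = np.random.default_rng(1)
N, TAU, DM_TRUE = 50_000, 1.0, 5.0   # events, lifetime, true Dm_s (toy units)

# Toy events: exponential proper times; a tag q = +1 (unmixed) or -1 (mixed)
# drawn with probability (1 +/- cos(Dm_true t))/2, i.e. perfect tagging and
# perfect decay-time resolution.
t = rng.exponential(TAU, N)
q = np.where(rng.random(N) < 0.5 * (1.0 + np.cos(DM_TRUE * t)), 1.0, -1.0)

def fitted_amplitude(dm):
    """Maximise the likelihood prod_i (1 + q_i A cos(dm t_i))/2 over A,
    with dm held fixed, as in the amplitude method."""
    c = np.cos(dm * t)
    a_grid = np.linspace(-0.5, 1.5, 401)
    nll = [-np.sum(np.log(np.clip(1.0 + q * a * c, 1e-12, None)))
           for a in a_grid]
    return a_grid[int(np.argmin(nll))]

# A is compatible with 1 at dm = DM_TRUE and with 0 well away from it
amp_true, amp_far = fitted_amplitude(DM_TRUE), fitted_amplitude(15.0)
```

Scanning `fitted_amplitude` over a grid of frequencies reproduces the qualitative picture of the checks above: the amplitude rises to unity at the generated frequency and is consistent with zero elsewhere.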
\begin{figure}
\vspace{-2cm}
\begin{center}
\makebox[0cm]{\psfig{file=fig8.ps,width=1.05\textwidth,
bbllx=0pt,bblly=282pt,bburx=560pt,bbury=560pt}}
\end{center}
\figcaption{Measured \particle{B}{s}{0}\ oscillation amplitude as a function of \mbox{$\Delta m_{\rm s}$}\ in the \Z{q} Monte Carlo.
The error bars represent the 1$\sigma$ statistical uncertainties,
the solid curve the one-sided \CL{95} contour (systematic effects
not included). The dotted line is $1.645\,\sigma$. The generated value of \mbox{$\Delta m_{\rm s}$}\
is 3.33~\mbox{ps$^{-1}$}. }
\labf{dms_mc}
\end{figure}
As a further check of the assumed mistags and sample composition,
the analysis is used to measure \mbox{$\Delta m_{\rm d}$}\ in the data.
Fixing \mbox{$\Delta m_{\rm s}$}\ to 50~\mbox{ps$^{-1}$}\ and minimizing the
negative log-likelihood with respect to
\mbox{$\Delta m_{\rm d}$}\ gives $\mbox{$\Delta m_{\rm d}$} = 0.451 \pm 0.024$(stat.)~\mbox{ps$^{-1}$},
consistent with the latest world average of $0.463 \pm
0.018~\mbox{ps$^{-1}$}$~\cite{Schneider}.
\Figure{dmd_amp} shows that the
fitted \particle{B}{d}{0}\ oscillation amplitude is consistent with that observed in
the \Z{q} Monte Carlo and has the expected value of 1 at the
minimum of the negative log-likelihood.
To check that the sample composition and mistags assumed for each
\particle{B}{s}{0}\ purity class and tagging class are reasonable, a
fit for the \particle{B}{d}{0}\ oscillation amplitude is performed separately
in each class.
At $\mbox{$\Delta m_{\rm d}$} = 0.451~\mbox{ps$^{-1}$}$ a value of ${\cal A}$ consistent with 1 is found in
all classes, the largest deviation being
$1.5\,\sigma_{\mbox{\scriptsize stat}}$ in the last \particle{B}{s}{0}\ purity
class (``remainder'').
\begin{figure}
\vspace{-2cm}
\begin{center}
\makebox[0cm]{\psfig{file=fig9.ps,width=1.05\textwidth,
bbllx=0pt,bblly=285pt,bburx=560pt,bbury=560pt}}
\end{center}
\figcaption{Measured \particle{B}{d}{0}\ oscillation amplitude as a function of \mbox{$\Delta m_{\rm d}$}\ in (a) the data and (b)
the \Z{q} Monte Carlo.
The error bars represent the 1$\sigma$ total uncertainties and
the curves the one-sided \CL{95} contour (systematic effects not included).}
\labf{dmd_amp}
\end{figure}
\section{\boldmath Combination with \particle{D}{s}{-}\ analyses} \labs{combination}
\begin{figure}
\vspace{-1.7cm}
\begin{center}
\makebox[0cm]{\psfig{file=fig10.ps,width=1.05\textwidth}}
\end{center}
\vspace{-0.3cm}
\figcaption{Measured \particle{B}{s}{0}\ oscillation amplitude as a function of \mbox{$\Delta m_{\rm s}$}\ for the
combination of this analysis with the ALEPH \particle{D}{s}{-}\ based analyses.}
\labf{amp_comb}
\end{figure}
The amplitudes measured in this
analysis and in the two ALEPH \particle{D}{s}{-}\ analyses~\cite{ALEPH-DS-LEPTON,ALEPH-DSHAD}
are combined.
The small number of events common to both this analysis and the \particle{D}{s}{-}--lepton
analysis is removed from the inclusive lepton sample before combining the results.
The following sources of systematic uncertainty are treated as fully
correlated:
the values assumed for \mbox{$f_\bs$}, \mbox{$f_{\mbox{\scriptsize \particle{b}{}{}-baryon}}$}, \mbox{$\Delta m_{\rm d}$}\ and the various \particle{b}{}{}-hadron lifetimes,
the \particle{b}{}{} fragmentation,
the decay length resolution bias in the Monte Carlo simulation $S_l^{\mbox{\scriptsize dat}}$ and $f_l^{\mbox{\scriptsize dat}}$,
the mistag probabilities, and the use of the effective discriminating variable.
Since the physics parameters assumed in the three analyses are slightly
different, the \particle{D}{s}{-}\ results are adjusted to the more recent
set of physics parameters listed in \Table{phyparams} before averaging.
The combined amplitude plot is displayed in \Fig{amp_comb} and the corresponding
numerical values are listed in \Table{amplitude_combined}.
All values of \mbox{$\Delta m_{\rm s}$}\ below 9.6~\mbox{ps$^{-1}$}\ are excluded at \CL{95}.
The combined sensitivity is 10.6~\mbox{ps$^{-1}$}.
As the statistical correlation between this analysis and the previous
ALEPH dilepton and lepton-kaon analyses \cite{ALEPH-DILEPTON, ALEPH-WARSAW-COMBINATION}
is very large, no significant improvement in sensitivity would be expected
if these latter analyses were included in the combination.
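The way a fully correlated systematic enters such a combination can be sketched as a BLUE-style weighted average. The code below uses invented toy inputs, not the ALEPH measurements, and ignores the adjustment of the individual results to a common set of physics parameters described above.

```python
import numpy as np

def combine(amps, stat, syst_corr):
    """BLUE-style combination of amplitude measurements at one value of
    Dm_s; 'syst_corr' is a fully correlated systematic per analysis.
    Inputs here are invented toy numbers, not the ALEPH measurements."""
    amps = np.asarray(amps, float)
    V = np.diag(np.asarray(stat, float) ** 2) + np.outer(syst_corr, syst_corr)
    w = np.linalg.solve(V, np.ones(len(amps)))   # V^{-1} 1
    a = w @ amps / w.sum()
    sigma = 1.0 / np.sqrt(w.sum())
    return a, sigma
```

With the correlated part set to zero this reduces to the familiar inverse-variance weighted mean; a common systematic instead inflates the combined uncertainty rather than averaging away.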
\begin{table}
\tabcaption{Combined measurements of the \particle{B}{s}{0}\ oscillation amplitude ${\cal A}$
as a function of \mbox{$\Delta m_{\rm s}$}\ (in \mbox{ps$^{-1}$}),
together with the statistical uncertainty $\mbox{$\sigma^{\mbox{\scriptsize stat}}_{\cal A}$}$ and
the total systematic uncertainty $\mbox{$\sigma^{\mbox{\scriptsize syst}}_{\cal A}$}$.}
\labt{amplitude_combined}
\begin{center}
\begin{tabular}{|c|r@{$\,\pm$}r@{$\,\pm$}r|c|c|r@{$\,\pm$}r@{$\,\pm$}r|c|c|r@{$\,\pm$}r@{$\,\pm$}r|}
\cline{1-4} \cline{6-9} \cline{11-14}
\mbox{$\Delta m_{\rm s}$}\ & $\rule[-5pt]{0pt}{18pt} {\cal A}$ & $\mbox{$\sigma^{\mbox{\scriptsize stat}}_{\cal A}$}$ & $\mbox{$\sigma^{\mbox{\scriptsize syst}}_{\cal A}$}$ & &
\mbox{$\Delta m_{\rm s}$}\ & $\rule[-5pt]{0pt}{18pt} {\cal A}$ & $\mbox{$\sigma^{\mbox{\scriptsize stat}}_{\cal A}$}$ & $\mbox{$\sigma^{\mbox{\scriptsize syst}}_{\cal A}$}$ & &
\mbox{$\Delta m_{\rm s}$}\ & $\rule[-5pt]{0pt}{18pt} {\cal A}$ & $\mbox{$\sigma^{\mbox{\scriptsize stat}}_{\cal A}$}$ & $\mbox{$\sigma^{\mbox{\scriptsize syst}}_{\cal A}$}$ \\
\cline{1-4} \cline{6-9} \cline{11-14} \\[-12pt]
\cline{1-4} \cline{6-9} \cline{11-14}
$ 0.00$ & $+0.03$ & $ 0.08$ & $ 0.18$ & & $ 7.00$ & $-0.15$ & $ 0.30$ & $ 0.18$ & & $14.00$ & $+1.21$ & $ 0.86$ & $ 0.47$ \\
$ 1.00$ & $+0.16$ & $ 0.11$ & $ 0.18$ & & $ 8.00$ & $-0.24$ & $ 0.35$ & $ 0.21$ & & $15.00$ & $+1.98$ & $ 0.99$ & $ 0.54$ \\
$ 2.00$ & $-0.13$ & $ 0.13$ & $ 0.17$ & & $ 9.00$ & $-0.05$ & $ 0.40$ & $ 0.23$ & & $16.00$ & $+2.76$ & $ 1.16$ & $ 0.57$ \\
$ 3.00$ & $+0.13$ & $ 0.16$ & $ 0.19$ & & $10.00$ & $+0.30$ & $ 0.46$ & $ 0.27$ & & $17.00$ & $+2.86$ & $ 1.37$ & $ 0.61$ \\
$ 4.00$ & $+0.00$ & $ 0.18$ & $ 0.16$ & & $11.00$ & $+0.37$ & $ 0.54$ & $ 0.37$ & & $18.00$ & $+2.22$ & $ 1.61$ & $ 0.77$ \\
$ 5.00$ & $-0.10$ & $ 0.22$ & $ 0.18$ & & $12.00$ & $+0.47$ & $ 0.64$ & $ 0.39$ & & $19.00$ & $+1.85$ & $ 1.88$ & $ 0.98$ \\
$ 6.00$ & $-0.20$ & $ 0.25$ & $ 0.17$ & & $13.00$ & $+0.65$ & $ 0.75$ & $ 0.42$ & & $20.00$ & $+2.02$ & $ 2.19$ & $ 1.29$ \\
\cline{1-4} \cline{6-9} \cline{11-14}
\end{tabular}
\end{center}
\end{table}
\section{Conclusion}
From a sample of 33023 inclusive lepton events,
all values of \mbox{$\Delta m_{\rm s}$}\ below 9.5~\mbox{ps$^{-1}$}\ are excluded
at \CL{95} using the amplitude method. This analysis supersedes the previous
ALEPH inclusive lepton analysis \cite{ALEPH-LEPTON-JET-WISCONSIN} and provides the
highest sensitivity and highest \CL{95} lower limit on \mbox{$\Delta m_{\rm s}$}\ of any \particle{B}{s}{0}\ mixing
analysis published to date~\cite{ALEPH-DS-LEPTON,ALEPH-DSHAD,ALEPH-DILEPTON,
ALEPH-LEPTON-JET-WISCONSIN,DELPHI-DMS-COMBINATION,OPALDMS}.
Taking into account correlated systematic uncertainties the combination
with the ALEPH \particle{D}{s}{-}\ based analyses yields $\mbox{$\Delta m_{\rm s}$}>9.6~\mbox{ps$^{-1}$}$ at \CL{95}.
\section*{Acknowledgements}
It is a pleasure to thank our colleagues in the accelerator divisions of CERN
for the excellent performance of LEP.
Thanks are also due to the technical
personnel of the collaborating institutions for their support in constructing
and maintaining the ALEPH experiment. Those of us not from member
states wish to thank CERN for its hospitality.
\section{Introduction}
\zerarcounters
Even though finite rotations can be represented by a magnitude (equal
to the angle of rotation) and a direction (that of the axis of rotation),
they do not act like vectors. In particular, finite rotations do not
commute: The summation of a number of finite rotations, not about the
same axis, is dependent on the order of addition.
\newline
The noncommutativity of finite rotations is made clear
in introductory texts by showing that two successive finite
rotations, when effected in different order, produce different
final results \cite{Slater}.
\newline
However, when rotations are small -- indeed, infinitesimal -- the
combination of two or more individual rotations is unique,
regardless of the order in which they are brought about. (This fact
allows for the definition of angular velocity as the time derivative
of an angular coordinate [2, p.675].)
\newline
Here we show how the order in which rotations are carried out becomes
irrelevant -- that is, rotations become commutative -- as the
angles of rotation diminish.
\newline
In Fig. 1 we have represented two successive rotations of a rigid
body. The first rotation is around the axis {\it OZ} through an
angle $\phi$, which takes {\it OA} into {\it OB}. For simplicity,
we take the plane {\it OAB} to be the horizontal {\it XY} plane. The
second rotation is around the axis {\it OX} through an angle $\theta$,
which takes {\it OB} into {\it OC}.
\newline
Since the angle between the axis {\it OB} and the axis of rotation
{\it OX} is not $90^{\circ}$, the plane {\it OBC} cuts the plane
{\it XY} at an angle. Let this angle be $\beta$, represented in
Fig. 1 as the angle formed by the sides {\it PQ} and {\it QR} of
the triangle {\it PQR}.
\newline
After these two rotations, the initial point {\it A} is brought to
the final position {\it C}. This same final result can be accomplished
by just one rotation through an angle $(A,C)$ around an axis
perpendicular to both {\it OB} and {\it OC}.
\newline
To obtain a relation between the angles $\phi, \theta, \beta$ and
{\it (A,C)}, we have drawn in Fig. 2 four triangles, derived from
Fig. 1, which are relevant to our analysis.
\newline
From the two right triangles {\it OQP} and {\it OQR}, we have the
relations
\beq
\cos \phi=\frac{OQ}{OP}, \;\;\;\;\;\;\;\;\;\;
\sin \phi=\frac{PQ}{OP} \label{1}
\eeq
and
\beq
\cos \theta=\frac{OQ}{OR}, \;\;\;\;\;\;\;\;\;\;
\sin \theta=\frac{QR}{OR}. \label{2}
\eeq
The law of cosines applied to the triangles {\it OPR} and {\it PQR}
yields
\beq
{PR}^{2}={OP}^{2}+{OR}^{2} -2 (OP)(OR)\cos(A,C)
\label{3}
\eeq
and
\beq
{PR}^{2}={PQ}^{2}+{QR}^{2} -2 (PQ)(QR)\cos\beta.
\label{4}
\eeq
Substituting for {\it PQ} and {\it QR} their values given in
(\ref{1},\ref{2}), Eq. (\ref{4}) becomes
\beq
{PR}^{2}={OP}^{2}\sin^{2}{\phi} + {OR}^{2}\sin^{2}{\theta}
-2 (OP)(OR)\sin{\phi} \sin{\theta} \cos\beta \label{5}
\eeq
On equating expressions (\ref{3}) and (\ref{5}) for {\it PR},
using (\ref{1},\ref{2}), we get
\beq
\cos(A,C)=\cos{\phi}\cos{\theta} + \sin{\phi}\sin{\theta} \cos\beta
\label{6}
\eeq
Now we effect the rotations in the reverse order, taking the first
rotation around the axis {\it OX} through an angle $\theta$,
followed by a rotation around the axis {\it OZ} through an angle
$\phi$. In this case, the point {\it A} moves to the new final
position {\it E}, instead of {\it C}, as indicated in the sketch
accompanying Fig. 1. The same final result can again be accomplished
by just one rotation, now through an angle $(A,E)$ around an axis
perpendicular to both {\it OD} and {\it OE}.
\newline
A moment's reflection shows that the relation between the
angles now is analogous to (\ref{6}), with no need to repeat
the above procedure. The cosine of the angle $(A,E)$ is given by
$\cos(A,E)=\cos{\theta}\cos{\phi} + \sin{\theta}\sin{\phi}
\cos{\beta'}$, with the difference that $\beta'$ is the angle the
plane {\it AOD} makes with the horizontal plane {\it DOE}. Thus
we have
\beq
\cos(A,C)-\cos(A,E)=\sin{\phi}\sin{\theta}(\cos{\beta}-\cos{\beta'})
\label{8}
\eeq
If we set $\beta'=\beta+\Delta \beta$, and expand $\cos\beta'$
in (\ref{8}) we get
\beq
\cos(A,C)-\cos(A,E)=\sin{\phi}\sin{\theta}[\cos{\beta}
(1-\cos{\Delta \beta}) +\sin{\beta} \sin{\Delta \beta}]
\label{9}
\eeq
To see how commutativity is obtained when the angles involved are small,
we use that, for $x \ll 1$, $\sin{x} \approx x$, $\cos{x} \approx 1$.
Eq. (\ref{9}) becomes
\beq
\cos(A,C)-\cos(A,E) \approx \phi~ \theta~ {\Delta \beta}~ \sin{\beta}
\label{10}
\eeq
This means that the difference between the two final positions
vanishes more rapidly than either of the single rotations.
\newline
{\it Remark:} It is not necessary to assume that $\Delta \beta$ is
small. Our result holds whether it is small or not, since in
(\ref{8}) we could have simplified our analysis by using that $\mid
\cos{\beta}-\cos{\beta'} \mid \leq 2$, obtaining the same conclusion
as above.
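The limiting commutativity is also easy to check numerically with rotation matrices. The sketch below (the angle values and the choice of {\it OA} are arbitrary) verifies that the angular separation between the two final positions {\it C} and {\it E} is second order in the rotation angles, so it vanishes faster than either rotation alone.

```python
import numpy as np

def Rz(phi):    # rotation through phi about OZ
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def Rx(theta):  # rotation through theta about OX
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

a = np.array([np.cos(0.35), np.sin(0.35), 0.0])   # OA in the horizontal plane

def separation(eps):
    """Angle between C (OZ rotation first) and E (OX rotation first)
    for phi = theta = eps."""
    c_vec = Rx(eps) @ Rz(eps) @ a
    e_vec = Rz(eps) @ Rx(eps) @ a
    return np.arccos(np.clip(c_vec @ e_vec, -1.0, 1.0))

# Halving both angles reduces the discrepancy by ~4: it is second order
# in the angles, as Eq. (10) indicates.
ratio = separation(1e-2) / separation(5e-3)
```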
\section{Introduction}
In the early 1980s, the Infrared Astronomy Satellite (IRAS) detected
thermal emission from dust grains with temperatures of 50-125 K and
fractional luminosities ($L_{grains}/L_{*}$) in the range
$10^{-5}$-$10^{-3}$ around four main sequence stars: Vega, Fomalhaut,
$\beta$ Pictoris, and $\epsilon$ Eridani. Coronagraphic observations
of $\beta$ Pic confirmed that the grains do indeed lie in a disk,
perhaps associated with a young planetary system (Smith \& Terrile 1984).
Subsequent surveys of IRAS data have revealed
over 100 other main sequence stars of all spectral classes with
far-infrared excesses indicative of circumstellar disks (Aumann 1985;
Sadakane \& Nishida 1986; Cote 1987; Jascheck et al. 1991; Oudmaijer et al.
1992; Cheng et al. 1992; Backman \& Paresce 1993; Mannings \& Barlow 1998).
In most ``Vega-type'' stars, the dust grains responsible for the infrared
emission are thought to be continually replenished by
collisions and sublimation of larger bodies, because the timescales for grain
destruction by Poynting-Robertson (PR) drag and ice sublimation are much
shorter than the stellar main sequence lifetimes (Nakano 1988; Backman
\& Paresce 1993). In other words, the disks around main sequence stars are
likely to be debris or remnant disks rather than protoplanetary structures.
These debris disks contain much less dust (and gas) than the massive,
actively accreting disks frequently observed around young pre-main-sequence
stars (e.g., Strom et al. 1989; Beckwith et al. 1990). Thus, it is likely
that circumstellar disks evolve from massive, optically
thick, actively-accreting structures to low-mass optically thin structures
with inner holes, and that disk evolution is closely linked to planet
formation (For a popular review, see Jayawardhana 1998).
\section{Recent Discoveries}
For nearly 15 years, only one debris disk --that around $\beta$ Pic-- had
been imaged. Coronagraphic surveys at optical and near-infrared
wavelengths had failed to detect scattered light from other Vega-type disks
(Smith et al. 1992; Kalas \& Jewitt 1996). However, recent advances in
infrared and sub-millimeter detectors have led to a batch of dramatic
new discoveries.
Using the OSCIR mid-infrared camera on the 4-meter telescope at the Cerro
Tololo Inter-American Observatory in Chile, we recently imaged a dust disk
around the young A star HR 4796A at 18$\mu$m (Jayawardhana et al. 1998;
Koerner et al. 1998; also see Jura et al. 1998, and references therein).
HR 4796A is unique among Vega-type stars
in that its low-mass binary companion --at an apparent separation of 500AU--
shows that the system is relatively young. With a variety of constraints such
as lithium abundance, rotational velocity, H-R diagram position and coronal
activity, Stauffer et al. (1995) infer that the companion is only $8\pm3$ Myrs
old, an age comparable to the $\sim$10-Myr timescale estimated for planet
formation (Strom, Edwards, \& Skrutskie 1993; Podosek \& Cassen 1994).
Interestingly, mid-infrared images of the HR 4796A disk do indeed suggest
the presence of an inner cavity of solar system dimensions, as one would
expect if dust in that region has coagulated into planets or planetesimals.
\begin{figure}
{\psfig{figure=figure1.ps,height=3.7in,width=3.7in}}
\caption{The HR 4796A dust disk at 18$\mu$m.}
\end{figure}
Using a revolutionary new camera known as SCUBA on the James Clerk Maxwell
Telescope in Hawaii, the four prototype debris disks --those around Vega,
$\beta$ Pic, Fomalhaut and $\epsilon$ Eri-- have also been resolved for
the first time in the sub-millimeter (Holland et al. 1998;
Greaves et al. 1998). These disks, whose parent stars are likely to be
older than HR 4796A, also show inner holes, which may persist due to
planets within the central void consuming or perturbing grains inbound
under the influence of the PR drag (Holland et al. 1998; Greaves et al.
1998). A surprising result is the discovery of giant ``clumps''
within three of the disks. The origin and nature of these
apparent density enhancements remain a mystery.
Another exciting new result is the discovery of dust emission around
55 Cancri, a star with one, or possibly two, known radial-velocity planetary
companions (Butler et al. 1997). From observations made by the Infrared
Space Observatory
(ISO) at 25$\mu$m and 60$\mu$m, Dominik et al. (1998) concluded that
55 Cancri, a 3-5 billion year old G8 star, has a Vega-like disk
roughly 60 AU in diameter. Recent near-infrared coronagraphic observations
of Trilling \& Brown (1998) have resolved the scattered light from the 55
Cancri dust disk and confirm that it extends to at least 40 AU
(3.24'') from the star. Their findings suggest that a significant amount of
dust --perhaps a few tenths of an Earth mass-- may be present even in
the Kuiper Belts of mature planetary systems.
\section{What next?}
While these remarkable new findings have given us unprecedented glimpses
into circumstellar debris disks at a variety of stages and conditions,
the timescale for disk evolution is still highly uncertain, and may not even
be universal. The primary obstacle to determining evolutionary timescales is
the difficulty in assigning reliable ages to isolated main sequence stars.
Fortunately, as in the case of HR 4796, it is possible to obtain
reliable ages for Vega-type stars which have low-mass binary or common
proper motion companions (Stauffer et al. 1995; Barrado y Navascues et
al. 1997). Therefore, if we are able to image disks surrounding
a sample of Vega-type stars whose ages are known from companions, it may
be possible to place them in an evolutionary sequence, and to constrain
the timescale(s) for planet formation.
The recent identification of a group of young stars associated with TW Hydrae
offers a valuable laboratory to study disk evolution and planet formation
(Kastner et al. 1997; Webb et al. 1998). Being the nearest group of young
stars, at a distance of $\sim$55 pc, the TW Hydrae Association is ideally
suited for sensitive disk searches in the mid-infrared. Furthermore, its
estimated age of $\sim$10 Myr would provide a strong constraint on disk
evolution timescales and fill a significant gap in the age sequence between
$\sim$1-Myr-old T Tauri stars in molecular clouds like Taurus-Auriga and
Chamaeleon and the $\sim$30-Myr-old open clusters such as IC 2602 and
IC 2391. Since several of the TW Hya members are close binary
systems, it will also be possible to study the effects of companions on disk
evolution. It could well be that a companion's presence dramatically
accelerates disk depletion or causes disk asymmetries.
The current mid-infrared cameras when used on Keck should also be able to
conduct sensitive searches for thermal emission from dust associated with
Kuiper Belts of extrasolar planetary systems like 55 Cancri. In the
sub-millimeter, the beam sizes are still too large to spatially resolve
such disks, but sub-mm flux measurements would place important
constraints on the amount and nature of their dust.
We are just beginning to study the diversity of debris disks, their
evolution and their close connection to planets. New instruments like
the Sub-Millimeter Array (SMA) and upcoming space missions such as the
Space Infrared Telescope Facility (SIRTF) will no doubt assist in that
quest.
\acknowledgments
I am most grateful to my collaborators Lee Hartmann, Giovanni Fazio,
Charles Telesco, Scott Fisher and Robert Pi\~na. It is also my pleasure
to acknowledge useful discussions with David Barrado y Navascues,
Wayne Holland, Jane Greaves, Geoffrey Marcy, and David Trilling.
\section{Introduction}\label{int}
The advent of the new class of 10-m ground based telescopes is
having a strong impact on the study of galaxy evolution. For instance,
instruments as LRIS at the Keck allow observers to regularly secure
redshifts for dozens of $I\approx 24$ galaxies in several hours of exposure.
Technical advances in the instrumentation, combined with the
proliferation of similar telescopes
in the next years guarantees a vast increase in the number of
galaxies, bright and faint, for which spectroscopical redshifts will be
obtained in the near future. Notwithstanding this progress
in the sheer numbers of available spectra, the $I\approx 24$ `barrier'
(for reasonably complete samples) is likely to stand for a time,
as no dramatic improvements in telescope
area or detection techniques are foreseeable.
Despite the recent spectacular findings of very high redshift galaxies
(\cite{dey}, \cite{fra}, \cite{fry}), it is extremely difficult
to secure redshifts for such objects. On the other hand, even
moderately deep ground-based images routinely contain many high
redshift galaxies (although hidden amongst myriads of foreground ones),
not to mention the Hubble Deep Field or the images that will be
available with the upcoming Advanced Camera. To push the study of
galaxy evolution further in redshift, it is therefore very important
to develop techniques able to extract galaxy redshifts from
multicolor photometric data.
This paper applies the methods of Bayesian probability theory to
photometric redshift estimation. Despite the efforts of
Thomas Loredo, who has written stimulating reviews on the subject
(Loredo 1990, 1992), Bayesian methods are still far from being one of
the staple statistical techniques in Astrophysics. Most courses and
monographs on Statistics only include a small section on Bayes' theorem,
and perhaps as a consequence of that, Bayesian techniques are frequently
used {\it ad hoc}, as another tool from the
available panoply of statistical methods. However, as any reader of the
fundamental treatise by E.T. Jaynes (1998) can learn, Bayesian probability
theory represents a unified look at probability and statistics, which
does not intend to complement, but to fully replace, the traditional
`frequentist' statistical techniques (see also Bretthorst 1988, 1990).
One of the fundamental differences between `orthodox' statistics and Bayesian
theory, is that the probability is not defined as a frequency of occurrence,
but as a reasonable degree of belief. Bayesian probability theory is
developed as a rigorous, full-fledged alternative to traditional
probability and statistics based on this definition and
three {\it desiderata}: a) degrees of belief should be represented by
real numbers, b) one should reason consistently, and c) the theory should
reduce to Aristotelian logic when the truth values of hypotheses are known.
One of the most attractive features of Bayesian inference lies in its
simplicity. There are two basic rules to manipulate probability, the
product rule
\begin{equation}
P(A,B|C)=P(A|C)P(B|A,C)
\end{equation}
and the sum rule
\begin{equation}
P(A+B|C)=P(A|C)+P(B|C)-P(A,B|C)
\end{equation}
where ``$A,B$'' means ``$A$ and $B$ are true'', and ``$A+B$''
means ``either $A$ or $B$ or both are true''. From the product rule,
and taking into account that the propositions ``$A,B$'' and
``$B,A$'' are identical, it is straightforward to derive Bayes' theorem
\begin{equation}
P(A|B,C)={P(A|C)P(B|A,C)\over P(B|C)}
\end{equation}
If the set of propositions $B=\{ B_i\}$ is mutually exclusive and
exhaustive, using the sum rule one can write
\begin{equation}
P(A,B|C)=P(A,\{B_i\}|C)=\sum_i P(A,B_i|C)
\label{mar}
\end{equation}
which is known as Bayesian marginalization. These are the basic
tools of Bayesian inference. Properly used and combined with the
rules to assign prior probabilities, they are in principle enough
to solve most statistical problems.
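For concreteness, the two rules and Bayes' theorem can be exercised on a toy discrete problem (the priors and likelihoods below are invented for illustration): with two exclusive, exhaustive hypotheses, marginalization supplies the normalizing denominator of Bayes' theorem.

```python
from fractions import Fraction as F

# Invented toy problem: two exclusive, exhaustive hypotheses with priors
# P(B_i|C) and likelihoods P(A|B_i,C) for some observed proposition A.
prior = {"B1": F(3, 4), "B2": F(1, 4)}
likelihood = {"B1": F(1, 5), "B2": F(4, 5)}

# Marginalization: P(A|C) = sum_i P(B_i|C) P(A|B_i,C)
evidence = sum(prior[b] * likelihood[b] for b in prior)

# Bayes' theorem: P(B_i|A,C) = P(B_i|C) P(A|B_i,C) / P(A|C)
posterior = {b: prior[b] * likelihood[b] / evidence for b in prior}
# posterior == {"B1": F(3, 7), "B2": F(4, 7)}; it sums to 1 by construction
```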
There are several differences between the methodology presented in
this paper and that of \cite{kod}, the most significant being the
treatment of priors (see Sec. \ref{bpz}). The procedures developed
here offer a major improvement in the redshift estimation and based on them
it is possible to generate new statistical methods for applications
which make use of photometric redshifts (Sec. \ref{appli}).
The outline of the paper is as follows: Sec. 2 reviews the current
methods of photometric redshift estimation, with emphasis on
their main sources of error. Sec. 3 introduces an expression
for the redshift likelihood slightly different from the one used by other
groups when applying the SED--fitting technique. In Sec. 4 it is
described in detail how to apply Bayesian probability to photometric
redshift estimation; the resulting method is called BPZ. Sec 5
compares the performance of traditional statistical techniques, as
maximum likelihood, with BPZ by applying both methods to the HDF
spectroscopic sample and to a simulated catalog. Sec. 6 briefly
describes how BPZ may be developed to deal with problems in galaxy
evolution and cosmology which make use of photometric redshifts.
Sec. 7 briefly summarizes the main conclusions of the paper.
\section{Photometric redshifts: training set vs. SED fitting methods}
\label{sed}
There are two basic approaches to photometric redshift estimation.
Using the terminology of \cite{yee}, they may be termed `SED fitting'
and `empirical training set' methods. The first technique (\cite{koo},
\cite{lan}, \cite{gwy}, \cite{pel}, \cite{saw}, etc.) involves compiling
a library of template spectra, empirical or generated with population
synthesis techniques . These templates, after being redshifted and
corrected for intergalactic extinction, are compared with the galaxy
colors to determine the redshift $z$ which best fits the
observations. The training set technique (\cite{bru}, \cite{con95}, \cite{wan})
starts with a multicolor galaxy sample with apparent
magnitudes $m_0$ and colors $C$ which has been spectroscopically identified.
Using this sample, a relationship of the kind $z=z(C,m)$ is determined using
a multiparametric fit.
It should be said that these two methods are more similar than
is usually considered. To understand this, let's analyze
how the empirical training set method works. For simplicity,
let's forget about the magnitude dependence and let's suppose that only
two colors $C=(C_1,C_2)$ are enough to estimate the photometric redshifts,
that is, given a set of spectroscopic redshifts $\{z_{spec}\}$ and
colors $\{C\}$, the training set method tries to fit a surface
$z=z(C)$ to the data. It must be realized that this method makes a
very strong assumption, namely that the surface $z=z(C)$ is a
{\it function } defined on the color space: each value of $C$ is
assigned one and only one redshift. Visually this means that
the surface $z=z(C)$ does not `bend' over itself in the redshift
direction. Although this functionality of the redshift/color
relationship cannot be taken for granted in the general case
(at faint magnitudes there are numerous examples of galaxies with
very similar colors but totally different redshifts), it seems to be
a good approximation to the real picture at $z<1$ redshifts and
bright magnitudes (\cite{bru} ). A certain scatter around this surface
is allowed: galaxies with the same value of $(C)$ may have slightly
different redshifts and it seems to be assumed implicitly that this
scatter is what limits the accuracy of the method.
The SED fitting method is based on the color/redshift relationships
generated by each of the library templates $T$, $C_T=C_T(z)$.
A galaxy at the position $C$ is assigned the redshift corresponding
to the closest point of any of the $C_T$ curves in the color space.
If these $C_T$ functions are inverted, one ends up with the curves
$z_T=z_T(C_T)$, which, in general, are not functions; they
may present self--crossings (and of course they may also cross each other).
If we limit ourselves to the region in the color/redshift space
in which the training set method defines the surface $z=z(C)$,
for a realistic template set the curves $z_T=z_T(C_T)$ would be
embedded in the surface $z=z(C)$, conforming its `skeleton' and
defining its main features.
The fact that the surface $z=z(C)$ is continuous, whereas the
template-defined curves are sparsely distributed, does not have a
great practical difference. The gaps may be filled by finely interpolating
between the templates (\cite{saw}), but this is not strictly necessary:
usually the statistical procedure employed to search for the best redshift
performs its own interpolation between templates. When the colors of a
galaxy do not exactly coincide with one of the templates, $\chi^2$ or
the maximum likelihood method will assign the redshift corresponding to
the nearest template in the color space.
This is equivalent to the curves $z_T=z_T(C_T)$ having extended
`influence areas' around them, which form a sort of step-like
surface which interpolates across the gaps, and also extends beyond
the region limited by them in the color space. Therefore, the SED-fitting
method comes with a built-in interpolation (and extrapolation) procedure.
For this reason, the accuracy of the photometric redshifts does not
change dramatically when using a sparse template set such as
that of \cite{cww} (\cite{lan}) or a fine grid of template spectra
(\cite{saw}). The most crucial factor is that the template library,
even if it contains few spectra, adequately reflects the main features
of real galaxy spectra and therefore the main `geographical accidents'
of the surface $z=z(C)$.
The intrinsic similarity between both photometric redshift
methods explains their comparable performance, especially at
redshifts $z\lesssim 1$ (\cite{hog}). When the topology of the
color--redshift relationship is simple, as apparently happens at
low redshift, the training set method will probably work slightly
better than the template fitting procedure, if only because it avoids
the possible systematics due to mismatches between the predicted
template colors and the real ones, and also partially because it
includes not only the colors of the galaxies, but also their
magnitudes, which helps to break the color/redshift degeneracies
(see below). However, it must be kept in mind that although
the fits to the spectroscopic redshifts give only a dispersion
$\delta z\approx 0.06$ (\cite{con97}), there is not a
strong guarantee that the predictive capabilities of the training set
method will keep such an accuracy, even within the same magnitude
and redshift ranges. As a matter of fact, they do not seem to work
spectacularly better than the SED fitting techniques (\cite{hog}),
even at low and intermediate redshifts.
However, the main drawback of the training set method is that, due to
its empirical and {\it ad hoc} basis, in principle it can
only be reliably extended as far as the spectroscopic redshift
limit. Because of this, it may represent a cheaper method of
obtaining redshifts than the spectrograph, but one which cannot
really go much fainter than it.
Besides, it is difficult to transfer the information obtained with
a given set of filters to another survey which uses a different set.
Such an extrapolation has to be done with the help of templates, which
makes the method lose its empirical purity. And last but not least,
it is obvious that as one goes to higher redshifts/fainter magnitudes
the topology of the color-redshift distribution $z=z(C,m_0)$ displays
several nasty degeneracies, even if the near-IR information is included,
and it is impossible to fit a single functional form to the
color-redshift relationship.
Although the SED fitting method is not affected by some of these
limitations, it also comes with its own set of problems. Several
authors have analyzed in detail the main sources of errors affecting this
method (\cite{saw},\cite{fsoto}). These errors may be divided into two
broad classes:
\subsection {Color/redshift degeneracies}
\begin{figure}[h]
\epsscale{0.5}
\plotone{figcolors.eps}
\caption{a) On the left, VI vs. IK for the templates used
in Sec \ref{test} in the interval $1<z<5$. The size of the filled squares
grows with redshift, from $z=1$ to $z=5$. If these were the only colors
used for the redshift estimation every crossing of the lines
would correspond to a color/redshift degeneracy. b) To the right,
the same color--color relationships `thickened' by a $0.2$ photometric
error. The probability of color/redshift degeneracies increases greatly.}
\label{colors}
\end{figure}
Fig. \ref{colors}a shows $VI$ vs $IK$ for the morphological
types employed in Sec \ref{test} and $0<z<5$. The color/redshift
degeneracies happen when the line
corresponding to a single template intersects itself or when two
lines cross each other at points corresponding to different redshifts
for each of them (these cases correspond to ``bendings'' in the
redshift/color relationship $z=z(C)$). It is obvious that the likelihood
of such crossings increases with the extension of the considered redshift
range and the number of templates included.
It may seem that even considering a very extended redshift range, such
confusions could in principle be easily avoided by using enough
filters. However, the presence of color/redshift
degeneracies is greatly increased by random photometric errors,
which can be visualized as a blurring or thickening of the $C_T(z_T)$
relationship (fig. \ref{colors}b): each point of the curves in fig.
\ref{colors}a is expanded into a square of size $\delta C$, the
error in the measured color. The first consequence of this is a `continuous'
($\delta z \approx { \partial C \over \partial z} \delta C$) increase in
the rms of the `small-scale' errors in the redshift estimation, and,
what is worse, the overlaps in the color-color space become more
frequent, with the corresponding rise in the number of `catastrophic'
redshift errors. In addition, multicolor information may often
be degenerate, so increasing the number of filters does not break
the degeneracies; for instance, by applying a simple PCA to the
photometric data of the HDF spectroscopic sample it can be shown that
the information contained in the seven $UBVIJHK$ filters for the HDF galaxies
can be condensed using only three parameters, the coefficients of the
principal components of the flux vectors (see also \cite{con95}).
Therefore, if the photometric errors are large, it is not always possible
to get rid of the degeneracies entirely, even by increasing the number of
filters. This means that the presence of color/redshift degeneracies is
unavoidable for faint galaxy samples. The training set method somehow
alleviates this problem by introducing an additional parameter in
the estimation, the magnitude, which in some cases
may break the degeneracy. However, it is obvious that color/redshift
degeneracies also affect galaxies with the same magnitude, and the
training set method does not even contemplate the possibility of their
existence!
The SED--fitting method at least allows for the existence
of this problem, although it is not extremely efficient in dealing
with it, especially with noisy data. Its choice of redshift is
exclusively based on the goodness of fit between the observed
colors and the templates. In cases such as the one described above, where two
or more redshift/morphological type combinations have practically the
same colors, the value of the likelihood ${\mathcal L}$ would have two or more
approximately equally high maxima at different redshifts (see
fig. \ref{peaks}). Depending on the
random photometric error, one maximum would prevail over the others, and a
small change in the flux could involve a catastrophic change in
the estimated redshift (see fig. \ref{peaks}). However, in many cases
there is additional information, discarded by ML, which could potentially
help to solve such conundrums. For instance, it may be known from previous
experience that one of the possible redshift/type combinations is much
more likely than any other given the galaxy magnitude, angular size,
shape, etc. In that case, and since the likelihoods are not informative
enough, it seems clear that the more reasonable decision would be to choose
the option which is more likely {\it a priori} as the best estimate. Plain
common sense dictates that one should compare all the possible hypotheses
with the data, as ML does, but simultaneously keeping in mind the degrees of
plausibility assigned to them by previous experience. There is no
simple way of doing this within ML; at best one may remove or change the
redshift of the problematic objects by hand or devise {\it ad hoc} solutions
for each case. In contrast, Bayesian probability theory allows one to include
this additional information in a rigorous and consistent way, effectively
dealing with this kind of error (see Sec \ref{bpz}).
\subsection{Template incompleteness}
In some cases, the spectra of observed galaxies have no close
equivalents in the template library. Such galaxies will be
assigned the redshift corresponding to the nearest template in
the color/redshift space, no matter how distant from the observed color
it is in absolute terms. The solution is obvious: one has to include
enough templates in the library so that all the possible galaxy types
are considered.
As was explained above, the SED fitting techniques perform their own
`automatic' interpolation and extrapolation, so once the main spectral
types are included in the template library, the results are not greatly
affected if one finely interpolates among the main spectra. The effects
of using a correct but incomplete set of spectra are shown in Sec
\ref{test}.
Both sources of errors described above are exacerbated at high redshifts.
High redshift galaxies are usually faint, therefore with large photometric
errors, and as the color/redshift space has a very extended range in $z$,
the degeneracies are more likely; in addition, the template incompleteness
is worsened, as there are few or no empirical spectra with which to compare
the template library.
The accuracy of any photometric redshift technique is usually
established by contrasting its output with a sample of galaxies with
spectroscopic redshifts. It should be kept in mind, though, that
the results of this comparison may be misleading, as the available
spectroscopic samples are almost `by definition' especially well suited for
photometric redshift estimation: relatively bright (and thus with small
photometric errors) and often filling a privileged niche in the
color-redshift space, far from degeneracies
(e.g. Lyman-break galaxies). Thus, it is risky to extrapolate the
accuracy reached by current methods as estimated from spectroscopic
samples (and this also applies to BPZ) to fainter magnitudes.
This is especially true for the training set methods, which deliberately
minimize the difference between the spectroscopic and photometric
redshifts.
\section{Maximum likelihood (ML) redshift estimates}\label{ml}
Photometric redshift techniques based on template fitting look for the
best estimate of a galaxy redshift from the comparison of its measured
fluxes in $n_c+1$ filters $\{f_\alpha\}$, $\alpha=0,n_c$, with a set of
$n_T$ template spectra which try to represent the different morphological
types, and which have fluxes $f_{T\alpha}(z)$. These methods find their
estimate $z_{ML}$ by maximizing the likelihood ${\mathcal L}$ (or
equivalently minimizing $\chi^2$) over all the possible values of the
redshift $z$, the templates $T$ and the normalization constant $a_0$.
\begin{equation}
-\log({\mathcal L})+{\rm const} \propto \chi^2(z,T,a_0) =
\sum_\alpha{(f_\alpha-a_0f_{T\alpha})^2\over 2\sigma_{f_\alpha}^2}
\label{li}
\end{equation}
Since the normalization constant $a_0$ is considered a free parameter,
the only information relevant to the redshift determination is contained
in the ratios among the fluxes $\{f_\alpha\}$, that is, in the galaxy colors.
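Since $\chi^2(z,T,a_0)$ in eq. (\ref{li}) is quadratic in $a_0$, the
maximization over the normalization can be carried out analytically;
setting $\partial\chi^2/\partial a_0=0$ gives
\begin{equation}
a_0(z,T)={\sum_\alpha f_\alpha f_{T\alpha}/\sigma_{f_\alpha}^2 \over
\sum_\alpha f_{T\alpha}^2/\sigma_{f_\alpha}^2}
\end{equation}
so that in practice the explicit minimization only has to be performed
over $z$ and $T$.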
The definition of the likelihood in eq. (\ref{li}) is not convenient
for applying Bayesian methods, as it depends on the normalization
parameter $a_0$, for which it is not easy to define useful priors,
either theoretically or
from previous observations. Here we prefer to normalize the
total fluxes in each band by the flux in a `base' filter, e.g.
the one corresponding to the band in which the galaxy sample
was selected and is considered to be complete.
Then the `colors' $C=\{c_i\}$ are defined as $c_i=f_i/f_0$,
$i=1,n_c$, where $f_0$ is the base flux. The exact way in which the
colors are defined is not relevant, other combinations of filters are
equally valid. Hereinafter the magnitude $m_0$ (corresponding to the flux
$f_0$) will be used instead of $f_0$ in the expressions for the
priors. And so, assuming that the magnitude errors
$\{\sigma_{f_\alpha}\}$ are gaussianly distributed, the likelihood can
be defined as
\begin{equation}
{\mathcal L}(T,z)=
{1 \over \sqrt{{(2\pi)}^{n_c}|\Lambda_{ij}|}}e^{-{\chi^2 \over 2}}
\end{equation}
where
\begin{equation}
\chi^2=\sum_{i,j}\Lambda_{ij}^{-1}[c_i-c_{Ti}(z)][c_j-c_{Tj}(z)]
\end{equation}
and the matrix of moments $\Lambda_{ij}\equiv\langle\sigma_{c_i} \sigma_{c_j}\rangle$
can be expressed as
\begin{equation}
\Lambda_{ij}=f_0^{-4}(f_i f_j \sigma_{f_0}^2 + f_0^2
\delta_{ij}\sigma_{f_i}\sigma_{f_j})
\end{equation}
By normalizing by $f_0$ instead of $a_0$, one reduces the computational
burden as it is not necessary to maximize over $f_0$, which is already
the `maximum likelihood' estimate for the value of the galaxy flux in that
filter. It is obvious that this assumes that the errors in the colors are
gaussian, which in general is not the case, even if the flux errors are.
Fortunately, the practical test performed below (Sec. \ref{test})
shows that there is little change between the results using both
likelihood definitions (see fig. \ref{comparison}a).
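A minimal sketch of the above definitions (Python, with invented fluxes and errors and a hypothetical set of template colors, not real photometry) builds the color covariance matrix $\Lambda_{ij}$ and evaluates the corresponding $\chi^2$:

```python
import numpy as np

def color_covariance(f0, f, sigma_f0, sigma_f):
    """Covariance matrix Lambda_ij of the colors c_i = f_i/f_0."""
    Lam = np.outer(f, f) * sigma_f0**2      # shared base-flux error term
    Lam += f0**2 * np.diag(sigma_f**2)      # independent per-filter errors
    return Lam / f0**4

def chi2_colors(f0, f, sigma_f0, sigma_f, c_T):
    """Chi^2 between the observed colors and template colors c_T(z)."""
    d = f / f0 - c_T
    Lam = color_covariance(f0, f, sigma_f0, sigma_f)
    return d @ np.linalg.solve(Lam, d)

# Invented photometry: a base flux plus three other filters
f0, sigma_f0 = 10.0, 0.5
f = np.array([8.0, 6.0, 4.0])
sigma_f = np.array([0.4, 0.4, 0.3])
c_T = np.array([0.78, 0.62, 0.41])   # hypothetical template colors at some z

print(chi2_colors(f0, f, sigma_f0, sigma_f, c_T))
```

Note that the off-diagonal terms of $\Lambda_{ij}$, produced by the shared error in the base flux $f_0$, are what distinguish this $\chi^2$ from the naive diagonal one of eq. (\ref{li}).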
\section{Bayesian photometric redshifts (BPZ)}\label{bpz}
Within the framework of Bayesian probability, the problem of photometric
redshift estimation can be posed as finding the probability $p(z|D,I)$,
i.e., the probability of a galaxy having redshift $z$ given the data
$D=\{C,m_0\}$, {\it and} the prior information $I$. As it was mentioned
in the introduction, Bayesian theory states that {\it all} the
probabilities are conditional; they do not
represent frequencies, but states of knowledge about hypotheses, and
therefore always depend on other data or information (for a detailed
discussion of this and many other interesting issues see Jaynes, 1998).
The prior information $I$ is an ample term which in general
should include any knowledge that may be relevant to the hypothesis
under consideration and is not already included in the data $C,m_0$.
Note that in Bayesian probability the relationship between the prior
and posterior information is {\it logical}; it does not have to be temporal
or even causal. For instance, data from a new observation may
be included as prior information to estimate the photometric redshifts of
an old data set. Although some authors recommend that the $`|I'$ should not
be dropped from the expressions of probability (as a reminder of the fact
that all probabilities are conditional and especially to avoid confusions
when two probabilities based on different prior information are considered
as equal), here the rule of simplifying the mathematical notation
whenever there is no danger of confusion will be followed, and from
now $p(z)$ will stand for $p(z|I)$, $p(D|z)$ for $p(D|z,I)$ etc.
As a trivial example of the application of Bayes's theorem, let's
consider the case in which there is only one template and the
likelihood ${\mathcal L}$ only depends on the redshift
$z$. Then, applying Bayes' theorem,
\begin{equation}
p(z|C,m_0)={p(z|m_0) p(C|z) \over p(C)} \propto p(z|m_0)p(C|z)
\label{1}
\end{equation}
The expression $p(C|z)\equiv {\mathcal L}$ is simply the likelihood:
the probability of observing the colors $C$ if the galaxy
has redshift $z$ (it is assumed for simplicity that ${\mathcal L}$ only
depends on the redshift and morphological type, and not on $m_0$).
The probability $p(C)$ is a normalization constant, and usually
there is no need to calculate it.
The first factor, the {\it prior} probability $p(z|m_0)$, is the
redshift distribution for galaxies with magnitude $m_0$. This function
allows one to include information such as the existence of upper or lower limits
on the galaxy redshifts, the presence of a cluster in the field, etc.
The effect of the prior $p(z|m_0)$ on the estimation depends on how
informative it is. It is obvious that for a constant prior
(all redshifts equally likely {\it a priori}) the estimate obtained from
eq. (\ref{1}) will exactly coincide with the ML result. This is also
roughly true if the prior is `smooth' enough and does not
present significant structure. However, in other cases, values of
the redshifts which are considered very improbable from the prior
information would be ``discriminated''; they must fit the data much
better than any other redshift in order to be selected.
Note that, strictly speaking, one should write the prior in eq. (\ref{1}) as
\begin{equation}
p(z|m_0)
=\int d\hat{m_0}p(\hat{m_0})p(m_0|\hat{m_0})p(z|\hat{m_0})
\end{equation}
where $\hat{m_0}$ is the `true' value of the observed magnitude $m_0$,
$p(\hat{m_0})$ is proportional to the number counts as a function
of the magnitude $m_0$ and $p(m_0|\hat{m_0})\propto
\exp[-(m_0-\hat{m_0})^2/2\sigma_{m_0}^2]$, i.e., the probability of
observing $m_0$ if the true magnitude is $\hat{m_0}$.
The above convolution accounts for the uncertainty in
the value of the magnitude $m_0$, which has the effect of slightly
`blurring' and biasing the redshift distribution $p(z|m_0)$.
To simplify our exposition this effect will not be considered hereinafter,
and just $p(z|m_0)$ and its equivalents will be used.
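A numerical sketch of this convolution (Python; the number counts, the redshift distribution and the magnitude error are all illustrative stand-ins, not the distributions used in this paper):

```python
import numpy as np

m_grid = np.linspace(20.0, 28.0, 161)   # grid in the true magnitude m0_hat
z_grid = np.linspace(0.01, 6.0, 300)

def trapz(y, x):
    """Trapezoidal integration."""
    return np.sum((y[1:] + y[:-1]) * (x[1:] - x[:-1])) / 2.0

def counts(m):                  # p(m0_hat): rough power-law number counts
    return 10.0**(0.3 * m)

def p_z_given_m(zz, m):         # p(z|m0_hat): toy redshift distribution
    zm = 0.1 + 0.05 * (m - 20.0)    # median redshift grows with magnitude
    return zz**2 * np.exp(-(zz / zm)**1.5)

def prior_with_mag_error(m0, sigma_m=0.2):
    """p(z|m0), marginalized over the unknown true magnitude m0_hat."""
    w = counts(m_grid) * np.exp(-(m0 - m_grid)**2 / (2 * sigma_m**2))
    pz = np.array([trapz(w * p_z_given_m(zz, m_grid), m_grid)
                   for zz in z_grid])
    return pz / trapz(pz, z_grid)

pz = prior_with_mag_error(m0=26.0)
```

Because the number counts rise steeply, the weighting by $p(\hat{m_0})$ shifts the effective prior slightly toward the distribution of fainter (and typically more distant) galaxies, which is the bias mentioned above.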
\subsection{Bayesian Marginalization}
It may seem from eq. \ref{1} (and unfortunately it is quite a
widespread misconception) that the only difference between Bayesian
and ML estimates is the introduction of a prior, in this case,
$p(z|m_0)$. However, there is more to Bayesian probability than just
priors.
The galaxy under consideration may belong
to different morphological types represented by a set of $n_T$ templates.
This set is considered to be {\it exhaustive}, i.e including all possible
types, and {\it exclusive}: the galaxy cannot belong to two types at
the same time. In that case, using Bayesian marginalization (eq. \ref{mar})
the probability $p(z|D)$ can be `expanded' into a `basis' formed by the
hypotheses $p(z,T|D)$ (the probability of the galaxy redshift
being $z$ {\it and} the galaxy type being $T$). The sum over all
these `atomic' hypotheses will give the total probability $p(z|D)$.
That is,
\begin{equation}
p(z|C,m_0)=\sum_T p(z,T|C,m_0)\propto
\sum_T p(z,T|m_0)p(C|z,T)
\label{bas}
\end{equation}
$p(C|z,T)$ is the likelihood of the data $C$ given $z$ and $T$.
The prior $p(z,T|m_0)$ may be developed using the product rule.
For instance
\begin{equation}
p(z,T|m_0)=p(T|m_0)p(z|T,m_0)
\label{pri}
\end{equation}
where $p(T|m_0)$ is the galaxy type fraction as a function
of magnitude and $p(z|T,m_0)$ is the redshift distribution for
galaxies of a given spectral type and magnitude.
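The marginalization of eq. (\ref{bas}) can be sketched numerically as follows (Python; the two `templates', their Gaussian likelihoods and the toy priors are all invented for the example, loosely mimicking the situation of fig. \ref{peaks}):

```python
import numpy as np

z = np.linspace(0.01, 6.0, 600)
dz = z[1] - z[0]

# Invented Gaussian likelihoods p(C|z,T) for two hypothetical templates:
# one peaks at low z, the other produces a spurious high-z peak.
like = {
    "Irr":    np.exp(-(z - 0.3)**2 / (2 * 0.15**2)),
    "Spiral": np.exp(-(z - 2.7)**2 / (2 * 0.15**2)),
}

def p_z_T(zz, T):
    """Toy prior p(z,T|m0) = p(T|m0) p(z|T,m0), with a tiny high-z tail."""
    frac = {"Irr": 0.6, "Spiral": 0.4}[T]
    z_med = {"Irr": 0.5, "Spiral": 0.4}[T]
    return frac * zz**2 * np.exp(-(zz / z_med)**2)

# Bayesian marginalization over the templates: p(z|C,m0)
post = sum(p_z_T(z, T) * like[T] for T in like)
post /= np.sum(post) * dz

z_B = z[np.argmax(post)]
print(z_B)   # the prior has suppressed the spurious high-z maximum
```

ML applied to the same toy likelihoods would have picked the $z=2.7$ peak; after weighting by the priors the posterior mass concentrates at low redshift.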
\begin{figure}[h]
\epsscale{0.5}
\plotone{figpeaks.eps}
\caption{An example of the main probability
distributions involved in BPZ for a galaxy at $z=0.28$ with
an Irr spectral type and $I\approx 26$ to which random photometric
noise is added. From top
to bottom: a) The likelihood functions $p(C|z,T)$ for the different
templates used in Sec \ref{test}. Based on ML, the redshift chosen for
this galaxy would be $z_{ML}=2.685$ and its spectral type would correspond
to a Spiral. b) The prior probabilities $p(z,T|m_0)$ for each of the
spectral types (see text). Note that the probability of finding
a Spiral spectral type with $z>2.5$ and a magnitude $I=26$ is almost
negligible. c) The probability distributions
$p(z,T|C,m_0)\propto p(z,T|m_0)p(C|z,T)$ , that is, the likelihoods in
the top plot multiplied by the priors. The high redshift peak due to
the Spiral has disappeared, although there is still a little chance of
the galaxy being at high redshift if it has an Irr spectrum, but the
main concentration of probability is now at low redshift.
d) The final Bayesian probability $p(z|C,m_0)=\sum_T p(z,T|C,m_0)$,
which has its maximum at $z_B=0.305$. The shaded area corresponds to
the value of $p_{\Delta z}$, which estimates the reliability of $z_B$ and
yields a value of $\approx 0.91$.}
\label{peaks}
\end{figure}
Eq. (\ref{bas}) and fig. \ref{peaks} clearly illustrate the main
differences between the
Bayesian and ML methods. ML would just pick the
highest maximum over all the $p(C|z,T)$ as the best redshift estimate,
without looking at the plausibility of the corresponding values of
$z$ or $T$. On the contrary, Bayesian probability averages all these
likelihood functions after weighting them by their prior probabilities
$p(z,T|m_0)$. In this way the estimation is not affected by spurious
likelihood peaks caused by noise as it is shown in fig.
\ref{peaks} (see also the results of Sec. \ref{test}). Of course,
in an ideal situation with perfect, noiseless observations (and a
nondegenerate template space, i.e., only one $C$ for each $(z,T)$ pair)
the results obtained with ML and Bayesian inference would be the same.
Instead of a discrete set of templates, the comparison library may contain
spectra which are a function of continuous parameters. For instance,
synthetic spectral templates depend on the metallicity $Z$, the dust
content, the star formation history, etc. Even starting from a set of a
few templates, they may be expanded using the principal component analysis
(PCA) technique (\cite{sod}). In general,
if the spectra are characterized by $n_A$ possible parameters
$A=\{a_1...a_{n_A}\}$ (which may be physical characteristics of the models
or just PCA coefficients), the probability of $z$ given the data can be expressed
as
\begin{equation}
p(z|C,m_0)=\int dA p(z,A|C,m_0)
\propto\int dA p(z,A|m_0)p(C|z,A)
\label{cont}
\end{equation}
\subsection{`Bookmaker' odds}
Sometimes, instead of finding a `point' estimate for a galaxy redshift,
one needs to establish whether that redshift falls within a certain interval.
For instance, the problem may be to determine whether the galaxy has
$z>z_t$, where $z_t$ is a given threshold, or whether its redshift falls
within a given $z \pm \Delta z$, e.g. in the selection
of cluster members or background galaxies for lensing studies.
As an example, let's consider the classification of galaxies into the
background-foreground classes with respect to a certain redshift
threshold $z_{th}$.
One must choose between the hypothesis $H_{th}=\{ z > z_{th}\}$ and
its opposite, $\bar{H}_{th}=\{ z < z_{th}\}$. The corresponding
probabilities may be written as
\begin{equation}
P(H_{th}|D)=\int_{z_{th}}^{\infty}dz p(z|D)
\end{equation}
and
\begin{equation}
P(\bar{H}_{th}|D)=\int_0^{z_{th}}dz p(z|D)
\end{equation}
The (`bookmaker') odds of hypothesis $H_{th}$ are defined as the probability
of $H_{th}$ being true over the probability of $H_{th}$ being false (Jaynes 1998)
\begin{equation}
O(H_{th}|D)={P(H_{th}|D)\over P(\bar{H}_{th}|D)}
\end{equation}
When $O(H_{th}|D)\approx 1$, there is not enough information to choose
between the two hypotheses. A galaxy is considered to have $z>z_{th}$ if
$O(H_{th}|D)>O_{d}$, where $O_{d}$ is a certain decision threshold. There
are no fixed rules to choose the value of $O_{d}$, and the most
appropriate value depends on
the task at hand; for instance, to be really sure that no foreground
galaxy has sneaked into the background sample, $O_{d}$ would have to be
high, but if the main goal is selecting all the background galaxies and
one does not mind including some foreground ones, then $O_{d}$ would be
lower, etc. Basically this is a problem concerning decision theory.
In the same way, the cluster galaxies can be selected by choosing a
redshift threshold $\Delta z$ which defines whether a galaxy belongs
to the cluster. The corresponding hypothesis would be
$H_c=\{|z-z_c| < \Delta z\}$.
\begin{equation}
P(H_c|D)=\int_{z_c-\Delta z}^{z_c+\Delta z} dz p(z|D)
\end{equation}
and
\begin{equation}
P(\bar{H}_c|D)=\int_0^{z_c-\Delta z}dz p(z|D)+
\int_{z_c+\Delta z}^{\infty} dz p(z|D)
\end{equation}
Similarly, the odds of $H_c$ are defined as
\begin{equation}
O(H_c|D)={P(H_c|D)\over P(\bar{H}_c|D)}
\end{equation}
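A minimal sketch of the odds computation (Python; the posterior $p(z|D)$ is an invented Gaussian and the thresholds are arbitrary):

```python
import numpy as np

def odds(z, p_z, z_lo, z_hi):
    """Bookmaker odds that the redshift lies in [z_lo, z_hi] given p(z|D)."""
    dz = z[1] - z[0]
    p = p_z / (np.sum(p_z) * dz)                      # normalize the posterior
    inside = np.sum(p[(z >= z_lo) & (z <= z_hi)]) * dz
    return inside / (1.0 - inside)

# Invented posterior: p(z|D) is a Gaussian centered at z = 0.8
z = np.linspace(0.0, 6.0, 2000)
p_z = np.exp(-(z - 0.8)**2 / (2 * 0.1**2))

# Background selection behind a threshold z_th = 0.5: H = {z > z_th}
print(odds(z, p_z, 0.5, 6.0))   # large odds: very likely background
print(odds(z, p_z, 0.0, 0.5))   # small odds: unlikely foreground
```

The decision threshold $O_d$ is then applied to the returned value; a multimodal posterior with appreciable mass on both sides of $z_{th}$ would yield $O\approx 1$ and the galaxy would be flagged as undecidable.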
\subsection{Prior calibration}
In those cases where the prior information is vague and
does not allow one to choose a definite expression for the prior probability,
Bayesian inference offers the possibility of ``calibrating'' the prior,
if needed using the very data sample under consideration.
Let's suppose that the distribution $p(z,T,m_0)$ is parametrized using
$n_\lambda$ continuous parameters $\lambda$. They may be the coefficients
of a polynomial fit, a wavelet expansion, etc. In that case, including
$\lambda$ in eq. (\ref{bas}), the probability can be written as
\begin{equation}
p(z|C,m_0)=\int d\lambda\sum_T p(z,T,\lambda|C,m_0)\propto
\int d\lambda p(\lambda)\sum_T p(z,T,m_0|\lambda)p(C|z,T)
\label{20}
\end{equation}
where $p(\lambda)$ is the prior probability of $\lambda$, and
$p(z,T,m_0|\lambda)$ is the prior probability of $z,T$ and $m_0$
as a function of the parameters $\lambda$. The latter have not been
included in the likelihood expression since $C$ is totally determined
once the values of $z$ and $T$ are known.
Now let's suppose that the galaxy belongs to a sample
containing $n_g$ galaxies. Each $j-$th galaxy has a `base' magnitude
$m_{0j}$ and colors $C_j$. The sets ${\bf C}\equiv\{C_j\}$ and
${\bf m_0}\equiv \{m_{0j}\}, j=1,n_g$ contain respectively the colors
and magnitudes of all the galaxies in the sample.
Then, the probability of the $i-$th galaxy
having redshift $z_i$ given the full sample data
${\bf C}$ and ${\bf m_0}$ can be written as
\begin{equation}
p(z_i|{\bf C},{\bf m_0})=
\int d\lambda\sum_{T} p(z_i,T,\lambda|C_i,m_{0i},{\bf C'},{\bf m_0'})
\end{equation}
The sets ${\bf C'}\equiv \{C_j\}$ and ${\bf m_0'}\equiv\{m_{0j}\}$,
$j=1,n_g, j\neq i$ are identical to ${\bf C}$ and ${\bf m_0}$
except for the exclusion of the data $C_i$ and $m_{0i}$.
Applying Bayes' theorem, the product rule and simplifying
\begin{equation}
p(z_i|{\bf C},{\bf m_0})\propto
\int d\lambda p(\lambda|{\bf C'},{\bf m_0'})
\sum_{T} p(z_i,T,m_{0i}|\lambda)p(C_i|z_i,T)
\end{equation}
where as before it has been considered that the likelihood of $C_i$
only depends on $z_i,T$ and that the probability of $z_i$ and $T$
only depends on ${\bf C'}$ and ${\bf m_0'}$ through $\lambda$.
The expression we arrive at is very similar to eq. (\ref{20}),
only that now the shape of the prior is estimated from
the data ${\bf C'},{\bf m_0'}$. This means that even if one starts with
a very sketchy idea about the shape of the prior, the very galaxy sample
under study can be used to determine the value of the parameters $\lambda$,
and thus to provide a more accurate estimate of the individual galaxy
characteristics. Assuming that the data ${\bf C'}$ (as well as
${\bf m_0'}$) are mutually independent
\begin{equation}
p(\lambda|{\bf C'},{\bf m_0'})\propto p(\lambda)
p({\bf C'},{\bf m_0'}|\lambda)= p(\lambda)
\prod_{j,j\neq i} p(C_j,m_{0j}|\lambda)
\label{23}
\end{equation}
where
\begin{equation}
p(C_j,m_{0j}|\lambda)= \int dz_j\sum_{T_j} p(z_j,{T_j},C_j,m_{0j}|\lambda)
\propto
\int dz_j\sum_{T_j} p(z_j,{T_j},m_{0j}|\lambda)p(C_j|z_j,T_j)
\end{equation}
If the number of galaxies in our sample is large enough, it can be
reasonably assumed that the prior probability
$p(\lambda|{\bf C'},{\bf m_0'})$ will not
change appreciably with the inclusion of the data $C_i,m_{0i}$
belonging to a single galaxy. In that case, a time-saving approximation
is to use as a prior the probability $p(\lambda|{\bf C},{\bf m_0})$,
calculated using the whole data set, instead of finding
$p(\lambda|{\bf C'},{\bf m_0'})$ for each galaxy. In addition,
it should be noted that $p(\lambda|{\bf C},{\bf m_0})$ represents
the Bayesian estimate of the parameters which define the shape of
the redshift distribution (see fig. \ref{nz}).
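The calibration scheme can be sketched as follows (Python; the one-parameter prior family, the mock likelihoods and all numerical choices are assumptions of the example, and the evidence maximization is done by brute-force grid search rather than a full integration over $\lambda$):

```python
import numpy as np

z = np.linspace(0.01, 4.0, 400)
dz = z[1] - z[0]
rng = np.random.default_rng(1)

# Mock sample: 100 galaxies whose likelihoods p(C_j|z) are Gaussians
# centered on redshifts drawn from a distribution with z_med = 0.8
# (every numerical choice here is purely illustrative).
z_true = 0.8 * rng.weibull(2.0, size=100)
likelihoods = [np.exp(-(z - zt)**2 / (2 * 0.1**2)) for zt in z_true]

def prior(z_med):
    """One-parameter family of priors p(z|lambda), with lambda = z_med."""
    p = z * np.exp(-(z / z_med)**2)
    return p / (np.sum(p) * dz)

def log_evidence(z_med):
    # sum_j log p(C_j|lambda) = sum_j log int dz p(z|lambda) p(C_j|z)
    return sum(np.log(np.sum(prior(z_med) * L) * dz) for L in likelihoods)

# `Calibrate' lambda on the sample itself by a simple grid search
grid = np.linspace(0.2, 2.0, 91)
z_med_best = grid[np.argmax([log_evidence(g) for g in grid])]
print(z_med_best)
```

The recovered $\lambda$ is then used in the prior of each individual galaxy, which is the time-saving approximation described above: the whole-sample posterior $p(\lambda|{\bf C},{\bf m_0})$ is collapsed to its maximum instead of being recomputed without each galaxy.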
\subsection{Including spectroscopic information}
In some cases spectroscopic redshifts $\{z_{si}\}$
are available for a fraction of the galaxy sample. It is
straightforward to include them
in the prior
calibration procedure described above, using a delta--function
likelihood weighted by the probability of the galaxy belonging to a
morphological type, as is done to determine the priors
in Sec \ref{test}. This gives the spectroscopical subsample a
(deserved) larger weight in the determination of the redshift and
morphological priors in comparison with the rest of the galaxies, at
least within a certain color and magnitude region, but, unlike what
happens with the training set method, the information contained in the
rest of the sample is not thrown away.
If nevertheless one wants to follow the training set approach and
use only the spectroscopic sample, it is easy to develop a Bayesian
variant of this method. As before, the
goal is to find an expression of the sort $p(z|C,m_0)$, which
would give us the redshift probability for a galaxy given its colors and
magnitude. If the color/magnitude/redshift multidimensional
surface $z=z(C,m_0)$ were infinitely thin, the probability
would just be $p(z|C,m_0)\equiv \delta(z-z(C,m_0))$, where $\delta(...)$
is a delta-function. But in the real world there is always some
scatter around the surface defined by $z(C,m_0)$ (even without taking
into account the color/redshift degeneracies), and it is therefore
more appropriate to describe $p(z|C,m_0)$ as e.g. a gaussian of
width $\sigma_z$ centered on each point of the surface $z(C,m_0)$.
Let's assume that all the parameters which define the shape of this
relationship, together with $\sigma_z$, are included in the set $\lambda_z$.
Using the prior calibration method introduced above, the probability
distribution for these parameters $p(\lambda_z|D_T)$ can be
determined from the training set $D_T\equiv \{z_{si},C_i,m_{0i}\}$.
\begin{equation}
p(\lambda_z|D_T)\propto p(\lambda_z)
\prod_i p(z_{si}|C_i,m_{0i},\lambda_z)
\end{equation}
The expression for the redshift probability of a galaxy with colors
$C$ and $m_0$ would then be
\begin{equation}
\label{25}
p(z|C,m_0)=\int d\lambda_z p(\lambda_z|D_T)p(z|C,m_0,\lambda_z)
\end{equation}
The redshift probability obtained from eq. (\ref{25}) is compatible with
the one obtained in eq. (\ref{bas}) using the SED--fitting
procedure. Therefore it is possible to combine them
in a single expression. As an approximation, let's suppose that
both of them are given equal weights, then
\begin{equation}
p(z|C,m_0)\propto \sum_T p(z,T|m_0)p(C|z,T)
+\int d\lambda_z p(\lambda_z|D_T)p(z|C,m_0,\lambda_z)
\end{equation}
In fact, due to the above described redundancy
between the SED--fitting method and the training set method (Sec.
\ref{sed}), it
would be more appropriate to combine both probabilities using
weights which would take these redundancies into account in a
consistent way, roughly using eq.(\ref{25}) at brighter magnitudes,
where the galaxies are well studied spectroscopically and
leaving eq.(\ref{bas}) for fainter magnitudes. The exploration of
this combined, training set/SED-fitting approach will be left for
a future paper, and in the practical tests performed below the
procedure followed uses the SED--fitting likelihood.
\section{A practical test for BPZ}\label{test}
\begin{figure}[h]
\epsscale{0.5}
\plotone{figCWW.eps}
\caption{a) To the left, the photometric redshifts obtained by applying our
ML algorithm to the HDF spectroscopic sample using a template library which
contains only the four CWW main types, E/SO, Sbc, Scd and Irr.
These results are very similar to those of Fern\'andez-Soto,
Lanzetta \& Yahil, 1998. b)
The right plot shows the significant improvement (without using BPZ yet)
obtained by just including two of the Kinney et al. 1996 spectra of
starburst galaxies, SB2 and SB3, in the template set. One of the outliers
disappears, the `sagging' or systematic offset between $1.5<z<3.5$ is
eliminated and the general scatter of the relationship decreases from
$\Delta z/(1+z_{spec})=0.13$ to $\Delta z/(1+z_{spec})=0.10$.}
\label{comparison}
\end{figure}
The Hubble Deep Field (HDF; \cite{wil}) has become {\it the}
benchmark in the development of photometric redshift techniques.
In this section BPZ will be applied to the HDF and
its performance contrasted with the results obtained with the standard
`frequentist' (in the Bayesian terminology) method, the procedure
usually applied to the HDF (\cite{gwy},\cite{lan},
\cite{saw}, etc.). The photometry used for the HDF is that of
\cite{fsoto}, which, in addition to magnitudes in the four
HDF filters includes JHK magnitudes from the observations of \cite{dic}.
$I_{814}$ is chosen as the base magnitude. The colors are defined as
described in Sec. \ref{ml}.
The template library was selected after several tests with the HDF
subsample which has spectroscopic redshifts (108 galaxies), spanning the
range $z<6$. The set of spectra which worked best is similar to that
used by \cite{saw}. It contains four \cite{cww}
templates (E/S0, Sbc, Scd, Irr), that is the same spectral types used
by \cite{fsoto}, plus the spectra of 2 starbursting galaxies from
\cite{kin}(\cite{saw} used two very blue SEDs from GISSEL).
All the spectra were extended to the UV using a linear extrapolation
and a cutoff at 912\,\AA, and to the near--IR using GISSEL synthetic
templates. The spectra are corrected for intergalactic absorption
following \cite{mad}.
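The blueward extension of the templates can be sketched as follows. This is an illustrative reconstruction of the procedure described above, not the actual code used: the extrapolation grid and the choice of anchoring the slope on the two bluest tabulated points are assumptions.

```python
import numpy as np

def extend_sed_uv(wavelength, flux, lambda_min=250.0, cutoff=912.0):
    """Extend a template SED bluewards by linear extrapolation of its
    two bluest tabulated points, setting the flux to zero below the
    Lyman limit. Grid spacing and lambda_min are illustrative choices."""
    # slope from the two bluest tabulated points
    slope = (flux[1] - flux[0]) / (wavelength[1] - wavelength[0])
    new_wl = np.arange(lambda_min, wavelength[0], 10.0)
    new_flux = flux[0] + slope * (new_wl - wavelength[0])
    new_flux = np.clip(new_flux, 0.0, None)   # no negative fluxes
    new_flux[new_wl < cutoff] = 0.0           # cutoff at 912 A
    return (np.concatenate([new_wl, wavelength]),
            np.concatenate([new_flux, flux]))
```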
\begin{figure}[h]
\epsscale{0.5}
\plotone{figpriors.eps}
\caption{The prior in redshift $p(z|m_0)$ estimated from the HDF
data using the prior calibration procedure described in Sec. 4,
for different values of the magnitude $m_0$ ($I_{814}=21$ to
$I_{814}=28$)}
\label{priors}
\end{figure}
It could seem in principle that a synthetic template set which takes
(at least tentatively) into account galaxy evolution is more appropriate
than a `frozen' template library obtained at low redshift and then
extrapolated to very high redshifts. However, as
\cite{yee} has convincingly shown, the extended CWW set offers much
better results than the GISSEL synthetic models \cite{bc}.
I have also tried to use the RVF set of spectra, and the agreement
with the spectroscopic redshifts is considerably worse than with
the empirical template set. And if the synthetic models do not work well
within the relatively bright magnitude range corresponding to the HDF
spectroscopic sample, there is little reason to suppose that their
performance will improve at fainter magnitudes.
However, even working with empirical templates, it is important
to be sure that the template library is complete enough.
Fig. \ref{comparison} illustrates the effects of template incompleteness
in the redshift estimation. The left plot displays the results obtained
using ML (Sec \ref{ml}) redshift estimation using only the four
CWW templates (this plot is very similar to the $z_{phot}-z_{spec}$
diagram shown in \cite{fsoto}, which confirms the validity of the
expression for the likelihood introduced in Sec \ref{ml}).
On the right, the results obtained also using ML (no BPZ yet) but
including two more templates, SB2 and SB3 from \cite{kin}. It can be
seen that the new templates barely affect the low redshift range,
but the changes at $z>2$ are quite dramatic: the `sagging' of the
CWW--only diagram disappears and the general scatter of the diagram
decreases by $20\%$. This shows how important it is to include enough
galaxy types in the template library. No matter how sophisticated
the statistical treatment is, it will do little to improve the results
obtained with a deficient template set.
The first step in the application of BPZ is choosing the shape of
the priors. Due to the depth of the HDF there is little previous
information about the redshift priors, so this is a good example in
which the prior calibration procedure described in Sec. \ref{bpz} has
to be applied. It will be assumed that the early types (E/S0)
and spirals (Sbc,Scd) have a spectral type prior (eq. \ref{pri} )
of the form
\begin{equation}
p(T|m_0)=f_t e^{-k_t (m_0-20)}
\label{par}
\end{equation}
with $t=1$ for early types and $t=2$ for spirals.
The irregulars (the remaining three templates; $t=3$) complete
the galaxy mix. The fraction of early types at
$I=20$ is assumed to be $f_1=35\%$ and that of spirals $f_2=50\%$.
The parameters $k_1$ and $k_2$ are left as free.
Based on the results from redshift surveys,
the following shape for the redshift prior has been chosen:
\begin{equation}
p(z|T,m_0)\propto z^{\alpha_t}
exp
\{
-\left[{z \over z_{mt}(m_0)}\right]^{\alpha_t}
\}
\label{par1}
\end{equation}
where
\begin{equation}
z_{mt}(m_0)=z_{0t}+k_{mt} (m_0-20.)
\label{par2}
\end{equation}
and $\alpha_t$, $z_{0t}$ and $k_{mt}$ are considered free parameters.
In total, 11 parameters have to be determined using the
calibration procedure. For those objects with spectroscopic
redshifts, a `delta-function' located at the spectroscopic redshift
of the galaxy has been used instead of the likelihood $p(C|z,T)$.
Table 1 shows the `best' values of the parameters
in eq. (\ref{par1},\ref{par2}) found by maximizing the
probability in eq. (\ref{25}) using the subroutine {\it amoeba}
(\cite{nr}).
The errors roughly indicate the parameter range which encloses $66\%$
of the probability. The values of the parameters in eq. (\ref{par})
are $k_1=0.47\pm 0.02$ and $k_2=0.165\pm 0.01$. The prior in
redshift $p(z|m_0)$ can obviously be found by summing over
the `nuisance' parameter (Jaynes 1998), in this case $T$:
\begin{equation}
p(z|m_0)=\sum_T p(T|m_0)p(z|T,m_0)
\end{equation}
Fig. \ref{priors} plots this prior for different magnitudes.
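In code, the parametric priors of eqs. (\ref{par})--(\ref{par2}) and the marginalization over $T$ can be sketched as follows. The $k_1$, $k_2$ and bright-end fractions are the values quoted in the text; the $\alpha_t$, $z_{0t}$ and $k_{mt}$ values below are illustrative placeholders standing in for the calibrated Table 1 values.

```python
import numpy as np

F_T = {1: 0.35, 2: 0.50}          # early-type and spiral fractions at I=20
K_T = {1: 0.47, 2: 0.165}         # fitted k_1, k_2 quoted in the text
ALPHA = {1: 2.5, 2: 2.0, 3: 1.5}  # placeholder alpha_t
Z0 = {1: 0.40, 2: 0.40, 3: 0.30}  # placeholder z_0t
KM = {1: 0.10, 2: 0.10, 3: 0.10}  # placeholder k_mt

def p_type(t, m0):
    """p(T|m0): exponentially declining fractions for early types (t=1)
    and spirals (t=2); irregulars (t=3) complete the mix (valid for m0>=20)."""
    if t in (1, 2):
        return F_T[t] * np.exp(-K_T[t] * (m0 - 20.0))
    return 1.0 - p_type(1, m0) - p_type(2, m0)

def p_z_given_tm(z, t, m0):
    """p(z|T,m0) ~ z^alpha_t exp(-(z/z_mt)^alpha_t), normalized on the grid z."""
    zm = Z0[t] + KM[t] * (m0 - 20.0)
    p = z**ALPHA[t] * np.exp(-(z / zm)**ALPHA[t])
    return p / (p.sum() * (z[1] - z[0]))

def p_z_given_m(z, m0):
    """Marginal prior p(z|m0) = sum_T p(T|m0) p(z|T,m0)."""
    return sum(p_type(t, m0) * p_z_given_tm(z, t, m0) for t in (1, 2, 3))
```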
With the priors thus found, one can proceed with the redshift
estimation using eq. (\ref{20}). Here the multiplication by
the probability distribution $p(\lambda)$ and the integration
over $d \lambda$ will be skipped. As can be seen from Table 1,
the uncertainties in the parameters are rather small and it is obvious
that the results would not change appreciably, so the additional
computational effort of performing an 11-dimensional integral is not
justified.
There are several options to convert the continuous probability
$p(z|C,m_0)$ to a point estimate of the `best' redshift $z_{B}$.
Here the `mode' of the final probability is chosen, although
taking the `median' value of $z$, corresponding to $50\%$ of the
cumulative probability, or even the `average'
$<z>\equiv\int dz z p(z|C,m_0)$ is also valid.
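For a posterior tabulated on an evenly spaced grid, the three point estimates (mode, median, average) can be computed as in this minimal sketch:

```python
import numpy as np

def point_estimates(z, pz):
    """Mode, median and mean of a discretized posterior p(z|C,m0).
    Assumes z is an evenly spaced grid; pz need not be pre-normalized."""
    dz = z[1] - z[0]
    pz = pz / (pz.sum() * dz)            # normalize the posterior
    mode = z[np.argmax(pz)]              # highest probability peak
    cdf = np.cumsum(pz) * dz
    median = z[np.searchsorted(cdf, 0.5)]  # 50% of cumulative probability
    mean = (z * pz).sum() * dz           # <z> = int dz z p(z)
    return mode, median, mean
```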
\begin{figure}[h]
\epsscale{0.5}
\plotone{figHDF.eps}
\caption{The photometric redshifts obtained with BPZ plotted against
the spectroscopic redshifts. The difference with fig.
\ref{comparison}b is the elimination of 3 galaxies with
$p_{\Delta z}<0.99$ (see text). This removes the only outlier present
in fig. \ref{comparison}b. The rms scatter around the continuous line
is $\Delta z_B/(1+z_B)=0.08$. }
\label{hdf}
\end{figure}
It was mentioned in Sec. \ref{bpz} that Bayesian probability
offers a way to characterize the accuracy of the redshift
estimation using the odds or a similar indicator; for
instance, by analogy with the Gaussian distribution, a
`$1\sigma$' error may be defined using an interval which contains
$66\%$ of the integral of $p(z|C,m_0)$ around $z_{B}$, etc.
Here the quantity $p_{\Delta z}$, the probability that
$|z-z_B|<\Delta z$ (where $z$ is the galaxy redshift), has been
chosen as the indicator of redshift reliability. In this way, when the value of
$p_{\Delta z}$ is low, we are warned that the redshift prediction is
unreliable. As will be shown below, $p_{\Delta z}$ is extremely
efficient in picking out galaxies with `catastrophic errors' in their
redshifts.
The photometric redshifts resulting from applying BPZ to the
spectroscopic sample are plotted in fig. \ref{hdf}. Galaxies with
a probability $p_{\Delta z}<0.99$ (there are three of them) have
been discarded, where $\Delta z$ is chosen to be $0.2\times(1+z)$,
to take into account that the uncertainty grows with the redshift of
the galaxies.
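The indicator $p_{\Delta z}$ is just the integral of the normalized posterior over the interval $|z-z_B|<0.2(1+z_B)$; a minimal sketch:

```python
import numpy as np

def p_delta_z(z, pz, z_b, width=0.2):
    """Probability that the redshift lies within |z - z_B| < 0.2 (1 + z_B)
    of the best estimate, obtained by integrating the normalized posterior
    over that interval. Assumes an evenly spaced grid z."""
    dz_grid = z[1] - z[0]
    pz = pz / (pz.sum() * dz_grid)       # normalize the posterior
    half = width * (1.0 + z_b)           # interval grows with redshift
    mask = np.abs(z - z_b) < half
    return pz[mask].sum() * dz_grid
```

A sharply unimodal posterior centered on $z_B$ gives $p_{\Delta z}\approx 1$, while a bimodal posterior with a comparable second peak outside the interval gives a low value, flagging a possible catastrophic error.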
It is evident from fig. \ref{hdf} that the agreement is very good
at all redshifts. The residuals $\Delta z_B=z_{B}-z_{spec}$
have $<\Delta z_B>=0.002$. If $\Delta z_B$ is divided by a
factor $(1+z_{spec})$, as suggested in \cite{fsoto}, the rms
of the quantity $\Delta z_B/(1+z_{spec})$ is only $0.08$. There are no
appreciable systematic effects in the residuals. One of
the three objects discarded because of their having $p_{\Delta z}<0.99$
is the only clear outlier in our ML estimation, with $z_{BPZ}=0.245$ and
$z_{spec}=2.93$ (see fig. \ref{comparison}b), evidence of the
usefulness of $p_{\Delta z}$ to generate a reliable sample.
From the comparison of fig. \ref{comparison}b with fig.
\ref{hdf}, it may seem that, apart from the exclusion of the
outlier, there is not much profit in applying BPZ with respect to ML.
This is not surprising in the particular case of the HDF
spectroscopic sample, which is formed mostly by galaxies either
very bright or occupying privileged regions in the color space. The
corresponding likelihood peaks are thus rather sharp, and little
affected by smooth prior probabilities.
\begin{figure}[h]
\epsscale{0.5}
\plotone{fig4f.eps}
\caption{a) The left plot shows the results of applying ML to the HDF
spectroscopic sample using only the four HST bands. Compare with fig.
\ref{comparison}b, which uses also the near IR photometry of
Dickinson et al. 1998.
The rms of the diagram is increased, and there are several outliers.
b) The right plot shows how applying BPZ with a threshold
$p_{\Delta z}>0.99$ leaves the remaining 101 objects ($93.5\%$ of the total)
virtually free of outliers. It is noteworthy that these results are
comparable to or even better (as there are no outliers) than
those obtained in fig. \ref{comparison}a, in which the near-IR magnitudes
were included in the estimation.
}
\label{4f}
\end{figure}
To illustrate the effectiveness of BPZ under worse than ideal
conditions, the photometric redshifts for the spectroscopic
sample are estimated again using ML and BPZ but restricting the
color information to the UBVI HST filters.
The results are plotted in fig. \ref{4f}. The ML redshift diagram
displays 5 `catastrophic errors' ($\Delta z\gtrsim 1$). Note
that these are the same kind of errors pointed out by \cite{ellis}
in the first HDF photometric redshifts estimations.
BPZ with a $p_{\Delta z}>0.99$ threshold (which eliminates a total of 7
galaxies) totally eliminates those outliers. This is a clear example
of the capabilities of BPZ (combined with an adequate template set)
to obtain reliable photometric redshift estimates. Note that even using
near--IR colors, the ML estimates shown in fig. \ref{comparison} presented
outliers. This shows that applying BPZ to UV--only data may yield results
more reliable than those obtained with ML including near-IR
information! Although of course they are no more accurate: the scatter of
fig. \ref{comparison}b, once the outliers are removed, is
$\Delta z\approx 0.18$, whereas fig. \ref{4f}b has a scatter of
$\Delta z\approx 0.24$, which incidentally is approximately the scatter
of fig. \ref{comparison}a.
Another obvious way of testing the efficiency of BPZ is with a
simulated sample. The latter can be generated using the procedure
described in \cite{fsoto}. Each galaxy in the
HDF is assigned a redshift and type using
ML (this is done deliberately to avoid biasing the test
towards BPZ) and then a mock catalog is created containing the colors
corresponding to the best fitting redshifts and templates. To represent
the photometric errors present in observations, a random photometric
noise of the same amplitude as the photometric error is added to each
object. Fig. \ref{90}b shows the ML estimated redshifts for the mock
catalog ($I<28$) against the `true' redshifts; although in general
the agreement is not bad (as could be expected), there is a large number
of outliers ($10\%$), whose positions illustrate the main source
of color/redshift degeneracies: high $z$ galaxies
which are erroneously assigned $z\lesssim 1$ redshifts and vice versa.
This shortcoming of the ML method is analyzed in detail in \cite{fsoto}.
In contrast, fig. \ref{90}a shows the results of applying BPZ with
a threshold of $p_{\Delta z}>0.9$. This eliminates $20\%$ of the
initial sample (almost half of which have catastrophically wrong
redshifts), but the number of outliers is reduced to a remarkable
$1\%$.
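The mock-catalog recipe above can be sketched as follows; `model_colors` is a hypothetical stand-in for the noiseless colors predicted by the template library at the ML redshift and type, and the random seed is an arbitrary choice:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_mock(colors, color_err, z_ml, t_ml, model_colors):
    """Build a mock catalog following the recipe in the text: each galaxy
    is replaced by the noiseless colors of its best-fitting ML template
    and redshift, plus Gaussian noise of the observed amplitude.
    model_colors(z, t) -> noiseless color vector (hypothetical interface)."""
    mock = np.empty_like(colors)
    for i in range(len(colors)):
        noiseless = model_colors(z_ml[i], t_ml[i])
        # photometric noise with the same amplitude as the observed errors
        mock[i] = noiseless + rng.normal(0.0, color_err[i])
    return mock
```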
Is it possible to define some `reliability estimator', similar
to $p_{\Delta z}$ within the ML framework? The obvious choice seems to
be $\chi^2$.
Fig. \ref{odds}b plots the value of $\chi^2$ vs. the ML redshift error for
the mock catalog. It is clear that $\chi^2$ is almost useless for
picking out the outliers. The dashed line marks the upper quartile in
$\chi^2$; most of the outliers are below it, at smaller $\chi^2$
values. In stark contrast, fig. \ref{odds}a
plots the errors in the BPZ redshifts {\it vs.} the values of $p_{\Delta z}$.
The lower quartile, under the dashed line, contains practically
all the outliers. By setting an appropriate threshold one can virtually
eliminate the `catastrophic errors'.
\begin{figure}[h]
\epsscale{0.5}
\plotone{figsim_90.eps}
\caption{a) To the left, the photometric redshifts $z_B$
estimated using BPZ for the $I<28$ HDF mock catalog, plotted against
the `true' redshifts $z_t$ (see text). A threshold of $p_{\Delta z}>0.90$,
which eliminates $20\%$ of the objects has been applied.
b) The right plot shows the results obtained applying ML to the same mock
sample. The fraction of outliers is $10\%$.}
\label{90}
\end{figure}
Fig. \ref{oddsmz} shows the numbers of galaxies above a given $p_{\Delta z}$
threshold in the HDF as a function of magnitude and redshifts.
It shows how risky it is to estimate photometric redshifts using ML
for faint, $I\gtrsim 27$ objects; the fraction of objects with possible
catastrophic errors grows steadily with magnitude.
There is one caveat regarding the use of $p_{\Delta z}$ or
similar quantities as a reliability estimator. They provide a
safety check against the color/redshift degeneracies, since basically they
tell us if there are other probability peaks comparable to the highest
one, but they cannot protect us from template
incompleteness. If the template library does not contain any spectra
similar to the one corresponding to the galaxy, there is no indicator
able to warn us about the unreliability of the prediction. Because of this,
no matter how sophisticated the statistical methods become, it is
fundamental to have a good template set, which contains---even if only
approximately---all the possible galaxy types present in the sample.
\begin{figure}[h]
\epsscale{0.5}
\plotone{figsim_oddschi.eps}
\caption{a) On the left, the probability $p_{\Delta z}$ plotted
against the absolute value of the difference between the `true'
redshift ($z_t$) and the one estimated using BPZ ($z_B$) for the mock
sample described in Sec. \ref{test}. The higher the value
of $p_{\Delta z}$, the more reliable the redshift should be. The dashed
line shows the $25\%$ low quartile in the value of $p_{\Delta z}$. Most of
the outliers are at low values of $p_{\Delta z}$, which allows us to
eliminate them by setting a suitable threshold on
$p_{\Delta z}$ (see text and fig. \ref{90})
b) The right plot shows that it is not possible to do something
similar using ML redshifts and $\chi^2$ as an estimator. The
value of $\chi^2$ of the best ML fit is plotted against the error in
the ML redshift estimation $|z_t-z_{ML}|$. The dotted line shows
the upper quartile in the values of $\chi^2$. One would expect
that low values of $\chi^2$ (and therefore better fits) would correspond
to more reliable redshifts, but this obviously is not the case. This is not
surprising: the outliers in this figure are all due to color/redshift
degeneracies like the one displayed in fig. \ref{colors}, which may give an
extremely good fit to the colors $C$, but a totally wrong redshift.}
\label{odds}
\end{figure}
Finally, fig. \ref{nz} shows the redshift distributions for the HDF
galaxies with $I<27$. No objects have been removed on the basis of
$p_{\Delta z}$, so the values of the histogram bins should be taken
with care. The overplotted continuous curves are the distributions
used as priors and which simultaneously are the Bayesian fits to the
final redshift distributions. The results obtained from the HDF will
be analyzed in more detail, using a revised photometry, in a forthcoming
paper.
\begin{figure}[h]
\epsscale{0.5}
\plotone{figoddsmz.eps}
\caption{a) On the left, histograms showing the number of galaxies over
$p_{\Delta z}$ thresholds of $0.90$ and $0.99$ as a function of
magnitude. It can be seen that the reliability of the photometric
redshift estimation quickly degrades with the magnitude.
b) The same as a) but as a function of redshift.}
\label{oddsmz}
\end{figure}
\begin{figure}[h]
\epsscale{0.5}
\plotone{fignz.eps}
\caption{The $z_B$ redshift distributions for the $I<27$ HDF
galaxies divided by spectral types. The solid lines represent
the corresponding $p(z,T)$ distributions estimated using the prior
calibration method described in the text.}
\label{nz}
\end{figure}
\section{Applications}\label{appli}
As we argue above, the use of BPZ for photometric redshift
estimation offers obvious advantages over standard ML techniques.
However, quite often obtaining photometric redshifts is not an end
in itself, but an intermediate step towards measuring other
quantities, like the evolution of the star formation rate (\cite{con97}),
the galaxy--galaxy correlation function (\cite{con98},\cite{mir}),
galaxy or cluster mass distributions (\cite{hud}), etc. The usual
procedure consists in obtaining photometric redshifts for all the galaxies
in the sample, using ML or the training set method, and then working with
them as if these estimates were accurate, reliable spectroscopic redshifts.
The results of the previous sections alert us to the dangers
inherent in that approach, as it hardly takes into account the
uncertainties involved in photometric redshift estimation.
In contrast, within the Bayesian framework there is no need
to work with the discrete, point--like `best' redshift estimates.
The whole redshift probability distribution can be taken into account,
so that the uncertainties in the redshift estimation are accounted
for in the final result. To illustrate this point, let's outline
how BPZ can be applied to several problems which use photometric redshift
estimation.
\subsection{Spectral properties of a galaxy population}
If, instead of working with a discrete set of templates, one uses a
spectral library whose templates depend on parameters such as the
metallicity, the star-formation history, the initial mass function, etc.,
represented by $A$ in Sec \ref{bpz}, it is obvious from equation
(\ref{cont}) that the same technique used to estimate the redshift
can be applied to estimate any of the parameters $A$ which characterize
the galaxy spectrum. For
instance, let's suppose that one wants to estimate the parameter
$a_i$. Then defining $A'=\{ a_j\}, j\neq i$, we have
\begin{equation}
p(a_i|C,m_0)=\int dz\int dA' p(a_i,z,A'|C,m_0)
\propto
\int dz\int dA' p(a_i,z,A'|m_0)p(C|z,A)
\end{equation}
That is, the likelihoods $p(C|z,A)$ and the weights
$p(a_i,z,A'|m_0)\equiv p(z,A|m_0)$ are the same ones used for
the redshift estimation (eq. \ref{cont}), only that now the
integration is performed over the variables $z$ and $A'$ instead of
$A$. In this way, depending
on the template library which is being used, one can estimate galaxy
characteristics such as the metallicity, dust content, etc.
An important advantage of this method over ML is that the estimates
of the parameter $a_i$ automatically include the uncertainty of the
redshift estimation, which is reflected in the final value of
$p(a_i|C,m_0)$. Besides, by integrating the probability over all
the parameters $A'$, one precisely
includes the uncertainties caused by possible parameter degeneracies
in the final result for $a_i$. It should also be noted that, like many
of the results obtained in this paper, this method can be almost
straightforwardly applied to spectroscopic observations; one has
only to modify the likelihood expression which compares the observed
fluxes with the spectral template. The rest of the formalism remains
practically identical.
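On a grid, estimating $p(a_i|C,m_0)$ amounts to summing the joint posterior over the redshift axis and the nuisance-parameter axes $A'$; a minimal sketch, assuming the joint posterior has already been evaluated on a regular grid:

```python
import numpy as np

def posterior_ai(joint, nuisance_axes):
    """Marginalize a gridded joint posterior p(a_i, z, A'|C, m0) over
    z and the nuisance parameters A', leaving p(a_i|C, m0).
    `joint` is an n-dimensional array with a_i on axis 0;
    `nuisance_axes` is the tuple of axes to marginalize over."""
    p = joint.sum(axis=nuisance_axes)   # Bayesian marginalization
    return p / p.sum()                  # normalize over a_i
```

By summing over $A'$ the uncertainties caused by parameter degeneracies are automatically propagated into the final distribution for $a_i$.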
\subsection{Galaxy Clusters: member identification}
One frequent application of photometric redshift techniques is
the study of galaxy cluster fields. The goals may be the selection of
cluster galaxies to characterize their properties, especially at high
redshifts, or the identification of distant, background galaxies
to be used in gravitational lensing analysis (\cite{ben}). BPZ
offers an effective way of dealing with such problems.
To simplify the problem, the effects of gravitational lensing on
the background galaxies (magnification, number counts depletion, etc.)
will be neglected (see however the next subsection). Let's suppose
that we already have an estimate of the projected surface density
of cluster galaxies (which can roughly be obtained without any
photometric redshifts, just from the number counts surface
distribution) $n_c(m_0,\vec{ r})$, where $\vec{ r}$ is the
position with respect to the cluster center. The surface density of
`field', non--cluster galaxies is represented by $n_f(m_0)$. For
each galaxy in the sample we know its magnitude and colors $m_0,C$ and
also its position $\vec{ r}$, which is now a relevant parameter in
the redshift estimation. Following eq. (\ref{bas}) we can write
\begin{equation}
p(z|C,m_0,\vec{ r}) \propto \sum_T p(z,T|m_0,\vec{ r})p(C|z,T)
\end{equation}
A dependence on the magnitude (e.g. for the early-type
cluster sequence) could easily be included in the likelihood
$p(C|z,T)$ if needed. The prior can be divided into the sum of two
different terms:
\begin{equation}
p(z,T|m_0,\vec{ r})=p_c(z,T|m_0,\vec{ r})+p_f(z,T|m_0,\vec{ r})
\end{equation}
where $p_c$ represents the prior probability of the galaxy belonging
to the cluster, whereas $p_f$ corresponds to the prior probability of
the galaxy belonging to the general field population. The expression
for $p_c$ can be written as
\begin{equation}
p_c(z,T|m_0,\vec{ r})={n_c(m_0,\vec{ r}) \over n_c(m_0,\vec{ r}) + n_f(m_0)}
p_c(T|m_0) g(z_c,\sigma z_c)
\end{equation}
The probability $p_c(T|m_0)$ corresponds to the expected galaxy mix
fraction in the cluster, which in general will depend on the magnitude
and will be different from that of field galaxies. The function
$g(z_c,\sigma z_c)$ is the redshift profile of the cluster; a good
approximation could be a gaussian with a width corresponding to the
cluster velocity dispersion.
The second prior takes the form
\begin{equation}
p_f(z,T|m_0,\vec{ r})={n_f(m_0) \over n_c(m_0,\vec{ r}) + n_f(m_0)}
p_f(T|m_0)p_f(z|T,m_0)
\end{equation}
which uses the priors for the general field galaxy population
(Sec \ref{test}). Finally, the hypothesis that the galaxy belongs to
the cluster or not can be decided with the help of a properly
defined $p_{\Delta z}$, or with the odds introduced in Sec \ref{bpz}.
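The two-component prior above can be assembled as in this sketch, where all the ingredients ($n_c$, $n_f$, the cluster redshift profile $g$, and the field priors) are passed in as callables; the names and signatures are illustrative, not a prescribed interface:

```python
import numpy as np

def cluster_prior(z, m0, r, n_c, n_f, g, p_field_z, p_T_cluster, p_T_field, T):
    """p(z,T|m0,r) = p_c + p_f for a cluster field.
    n_c(m0, r): projected surface density of cluster galaxies
    n_f(m0):    surface density of field galaxies
    g(z):       cluster redshift profile g(z_c, sigma z_c)
    p_field_z(z, T, m0): field redshift prior; p_T_*: galaxy-mix priors."""
    w_c = n_c(m0, r) / (n_c(m0, r) + n_f(m0))  # prior weight of membership
    w_f = 1.0 - w_c
    p_c = w_c * p_T_cluster(T, m0) * g(z)
    p_f = w_f * p_T_field(T, m0) * p_field_z(z, T, m0)
    return p_c + p_f
```

Near the cluster center the membership weight dominates and the prior is concentrated around the cluster redshift; far from it the prior smoothly reverts to the field population prior.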
\subsubsection{Cluster detection}
We have assumed above that the cluster redshift and its galaxy surface
density distribution are known. However, in some cases, there is
a reasonable suspicion about the presence of a cluster at a certain
redshift, but not total certainty, and our goal is to confirm its existence.
An example using ML photometric redshift estimation is shown in
\cite{pel}. An extreme case with minimal prior information occurs in
optical cluster surveys, where galaxy catalogs covering large areas of the sky
are searched for clusters. In those cases there are no previous guesses
about the position or redshift of the cluster, and a `blind', automated
search algorithm has to be used (\cite{pos}).
The prior expression used in the previous subsection offers a way to
build such a search method. Instead of assuming that the
cluster redshift and its surface distribution are known, the redshift can be
left as a free parameter $z_c$ and the expression characterizing the
cluster galaxy surface density distribution $n_c(m_0,\vec{ r})$ can be
parametrized using the quantities $\lambda_c$. For simplicity,
let's suppose that
\begin{equation}
n_c(m_0,\vec{ r})=A_c \phi(m_0,z_c) f(\vec{ r}_c, \sigma r_c)
\end{equation}
where $A_c$ is the cluster `amplitude', $\phi(m_0,z_c)$ is the
number counts distribution expected for the cluster (which in
general will depend on the redshift $z_c$) and $f(\vec{r}_c,\sigma r_c)$
represents the cluster profile, centered on $\vec{r}_c$ and with a scale
width of $\sigma r_c$. This expression, except for the dependence
on the redshift, is very similar to that used by \cite{pos} to define
their likelihood. Then for a multicolor galaxy sample with data
${\bf m_0}$, ${\bf C}$ and ${\bf \vec{ r}}$, the probability
\begin{equation}
p(A_c,\vec{r}_c,\sigma r_c,z_c,\sigma z_c|{\bf m_0},{\bf C},{\bf \vec{ r}})
\end{equation}
can be developed analogously to how it was done in Sec. \ref{bpz}.
The probability assigned to the existence of a cluster at a certain
redshift and position may be simply defined as $p(A_c>0|z_c,\vec{r}_c)$.
\subsection{Cluster lensing}
It seems that the most obvious application of BPZ to cluster lensing
analysis is the selection of background galaxies with the technique
described in the previous subsection in order to apply the standard
cluster mass reconstruction techniques (\cite{ks}, \cite{bro},
\cite{sei}, \cite{tay}). However, using Bayesian probability it
is possible to develop a unified approach which simultaneously
considers the lensing and photometric information in an optimal way.
In a simplified fashion, the problem of determining the mass
distribution of a galaxy cluster from observables can be stated as
finding the probability
\begin{equation}
p(\lambda_M,\lambda_C|{\bf e},{\bf \vec{ r}},{\bf m_0}, {\bf C},\lambda_G)
\label{38}
\end{equation}
where $\lambda_M$ represent the parameters which describe the cluster
mass distribution; their number may range from a few, if the
cluster is described with a simplified analytical model, to as
many as wanted, if the mass distribution is characterized by e.g.
Fourier coefficients (\cite{sk}).
$\lambda_C$ represents the cosmological parameters, which
sensitively affect the lensing strength. The parameter set $\lambda_G$
represents the properties of the background galaxy population which
affect the lensing, such as its redshift distribution, number counts slope,
etc., and it is assumed to be known beforehand.
The data ${\bf e}$ correspond to the galaxy ellipticities, ${\bf\vec{ r}}$
to their angular positions. As above, ${\bf m_0}, {\bf C}$ correspond
to their colors and magnitudes. For simplicity, it will be assumed
that the cluster and foreground galaxies have been already removed
and we are dealing only with the background galaxy population.
Analogously to eq. (\ref{23}), we can develop eq. (\ref{38}) as
\begin{equation}
p(\lambda_M, \lambda_C|{\bf e},{\bf \vec{ r}},{\bf m_0}, {\bf C},\lambda_G)
\propto
p(\lambda_M) p(\lambda_C)
\prod_i^{n_g} \int dz_i
p(C_i,m_{0i},\vec{r_i},e_i,z_i|\lambda_M,\lambda_C,\lambda_G)
\end{equation}
where the last factor may be written as
\begin{equation}
p(C_i,m_{0i},\vec{r_i},e_i,z_i|\lambda_M,\lambda_C,\lambda_G)\propto
p(e_i|z_i,\vec{r_i},\lambda_M,\lambda_C)
p(\vec{r_i}|m_{0i},\lambda_M,\lambda_C,\lambda_G)
p(z_i|C,m_{0i},\vec{r_i},\lambda_M,\lambda_C,\lambda_G)
\end{equation}
The meaning of the three factors on the right side of the equation
is the following:
$p(e_i|...)$ represents the likelihood of measuring a certain
ellipticity $e_i$ in a galaxy given its redshift, position, etc.
The second factor $p(\vec{r_i}|...)$ corresponds to the so called
``Broadhurst effect'', the number counts depletion of background
galaxies caused by the cluster magnification $\mu$ (Broadhurst 1995,
Ben\'\i tez \& Broadhurst 1998). The last factor, $p(z_i|...)$ is
the redshift probability, but including a correction which takes into
account that the observed magnitude of a galaxy $m_0$ is affected by
the magnification $\mu(\vec{r})$. It is clear that the simplified method
outlined here is not the only way of applying Bayesian probability to
cluster mass reconstruction. My purpose here is to show that this can
be done by considering the photometric redshifts in an integrated way with
the rest of the information.
\subsection{Galaxy evolution and cosmological parameters}\label{gev}
As it has been shown in section (\ref{bpz}), BPZ can be used to estimate
the parameters characterizing the joint magnitude--redshift--morphological
type galaxy distribution. For small fields, this distribution may be
dominated by local perturbations, and the redshift distribution may be
`spiky', as observed in redshift surveys of small fields.
However, if one were to average over a large number of fields,
the resulting distribution would contain important
information about galaxy evolution and the fundamental cosmological
parameters. \cite{san} included galaxy counts as one of the four
fundamental tests of observational cosmology, although noting that the
number-redshift distribution is in fact more sensitive to the value of
$\Omega_0$. As \cite{gar} also notes, the color distribution of the
galaxies in a survey holds much more information about the process of
galaxy evolution than the raw number counts. However, quite often the only
method of analyzing multicolor observations is just comparing them
with the number counts model predictions, or at most, with color
distributions. There are several attempts at using photometric redshifts to
study global galaxy evolution parameters (e.g. \cite{saw},
\cite{con97}), but so far there is no integrated statistical
method which simultaneously considers all the information, magnitudes
and colors, contained in the data, and sets it against the model predictions.
It is then clear that eq. (\ref{23}) can be used to estimate these
parameters from large enough samples of multicolor data. If it is assumed
that all the galaxies belong to a few morphological types, the
joint redshift-magnitude-`type' distribution can be written as
\begin{equation}
n(z,m_0,T)\propto {dV(z) \over dz}\phi_T(m_0)
\label{5}
\end{equation}
where $V(z)$ is the comoving volume as a function of redshift, which
depends on the cosmological parameters $\Omega_0,\Lambda_0$ and $H_0$,
and $\phi_T$ is the Schechter luminosity function for each morphological type
$T$, where the absolute magnitude $M_0$ has been substituted by the
apparent magnitude $m_0$ (a transformation which depends on the redshift,
cosmological parameters and morphological type). Schechter's function
also depends on $M^{*}$, $\alpha$ and
$\phi^{*}$, and on the evolutionary parameters
$\epsilon$, such as the merging rate, the luminosity evolution, etc.
Therefore, the prior probability of $z$,$A$ and $m_{0}$ depends on the
parameters $\lambda_C=\{\Omega_0,\Lambda_0,H_0\}$,
$\lambda_*=\{M^{*},\phi^{*},\alpha\}$ and $\epsilon$.
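The joint prior of eq. (\ref{5}) can be sketched as follows. The comoving volume element is passed in as a callable, since it depends on the cosmological parameters $\Omega_0,\Lambda_0,H_0$; the Euclidean $dV/dz\propto z^2$ used in the test is only a toy choice, and the conversion from absolute to apparent magnitude is assumed to have been done beforehand.

```python
import numpy as np

def schechter(m, m_star, alpha, phi_star):
    """Schechter luminosity function written in magnitudes, with the
    absolute magnitude already converted to the apparent magnitude m
    at the redshift of interest (as in eq. 5 of the text)."""
    x = 10.0**(-0.4 * (m - m_star))
    return 0.4 * np.log(10.0) * phi_star * x**(alpha + 1.0) * np.exp(-x)

def n_z_m(z, m0, m_star, alpha, phi_star, dvdz):
    """Un-normalized joint prior n(z, m0, T) ~ (dV/dz) * phi_T(m0).
    dvdz(z) is the comoving volume element for the assumed cosmology."""
    return dvdz(z) * schechter(m0, m_star, alpha, phi_star)
```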
As an example, let's suppose that one wants to estimate $\epsilon$,
independently of the rest of the parameters, given the data
${\bf D}\equiv\{D_i\}\equiv\{C_i,m_{0i}\}$. Then
\begin{equation}
p(\epsilon|{\bf D})=\int d\lambda_C d\lambda_*
p(\epsilon,\lambda_C,\lambda_*|{\bf D})
\end{equation}
\begin{equation}
p(\epsilon|{\bf D})\propto
\int d\lambda_C d\lambda_*
p(\epsilon,\lambda_C,\lambda_*)
\prod_i\int dz_i\sum_{T} p(z_i,T,m_{0i}|\epsilon,\lambda_C,\lambda_*)
p(C_i|z_i,T)
\label{6}
\end{equation}
The prior $p(z_i,T,m_{0i}|\epsilon,\lambda_C,\lambda_*)$ can be derived
from $n(z,m_0,T)$ in eq. (\ref{5}). The
prior $p(\epsilon,\lambda_C,\lambda_*)$
allows the inclusion of the uncertainties derived from previous observations or
theory in the values of these parameters, even when they are strongly
correlated among themselves, as in the case of the Schechter function
parameters $\lambda_*$. The narrower the prior
$p(\epsilon,\lambda_C,\lambda_*)$ is, the less `diluted' the
probability of $\epsilon$ and the more accurate the estimation.
\section{Conclusions}\label{conc}
Despite the remarkable progress of faint galaxy spectroscopic
surveys, photometric redshift techniques will become increasingly
important in the future. The most frequent approaches, the
template--fitting and empirical training set methods, present several
problems which hinder their practical application. Here it is
shown that by consistently applying Bayesian probability to photometric
redshift estimation, most of those problems are efficiently solved.
The use of prior probabilities and Bayesian marginalization allows the
inclusion of valuable information as the shape of the redshift
distributions or the galaxy type fractions, which is usually ignored
by other methods. It is possible to characterize the accuracy of
the redshift estimation in a way that has no equivalent in other statistical
approaches; this property makes it possible to select galaxy samples for
which the redshift estimation is extremely reliable. In those cases when
the {\it a priori} information is insufficient, it is shown how to
`calibrate' the prior distributions, even using the data under
consideration. In this way it is possible to determine the properties
of individual galaxies more accurately and simultaneously estimate
their statistical properties in an optimal fashion.
The photometric redshifts obtained for the Hubble Deep Field using
optical and near-IR photometry show an excellent agreement with the
$\sim 100$ spectroscopic redshifts published to date in the
interval $1<z<6$, yielding an rms error $\Delta z_B/(1+z_{spec}) = 0.08$
and no outliers. Note that these results, obtained with an empirical
set of templates, have not been reached by minimizing the difference
between spectroscopic and photometric redshifts (as for
empirical training set techniques, which may lead to an overestimation
of their precision) and thus offer a reasonable estimate of the
predictive capabilities of BPZ.
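The accuracy statistic quoted above is simply the rms of $\Delta z/(1+z_{spec})$; a minimal sketch of its computation (the redshift arrays below are synthetic placeholders, not the HDF sample):

```python
import numpy as np

# rms of (z_phot - z_spec)/(1 + z_spec), the figure of merit quoted in the
# text. The arrays below are made-up placeholders, not the HDF data.
def rms_dz(z_phot, z_spec):
    z_phot, z_spec = np.asarray(z_phot, float), np.asarray(z_spec, float)
    d = (z_phot - z_spec) / (1.0 + z_spec)
    return float(np.sqrt(np.mean(d ** 2)))

z_spec = np.array([0.5, 1.2, 2.0, 3.1, 4.0])
z_phot = np.array([0.56, 1.10, 2.15, 3.00, 4.30])
print(rms_dz(z_phot, z_spec))
```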
The reliability of the method is also tested by estimating redshifts
in the HDF but restricting the color information to the UBVI filters;
the results are shown to be more reliable than those obtained with
existing techniques, even when the latter include the near-IR information.
The Bayesian formalism developed here can be generalized to deal
with a wide range of problems which make use of photometric redshifts.
Several applications are outlined, e.g. the estimation of individual
galaxy characteristics such as metallicity, dust content, etc., or the
study of galaxy evolution and the cosmological parameters from large
multicolor surveys. Finally, using Bayesian probability it is possible to
develop an integrated statistical method for cluster mass reconstruction
which simultaneously considers the information provided by gravitational
lensing and photometric redshift estimation.
\acknowledgements
I would like to thank Tom Broadhurst and Rychard Bouwens for carefully reading
the manuscript and making valuable comments. Thanks also to Alberto
Fern\'andez-Soto and collaborators for kindly providing me with the HDF
photometry and filter transmissions, and to Brenda Frye for help with
the intergalactic absorption correction. The author acknowledges a
Basque Government postdoctoral fellowship.
\section{Introduction}
Supersymmetric Yang-Mills (SYM) theory with maximally extended ($N=4$)
supersymmetry in four dimensions has long been known to have some very
interesting properties. In particular, it is ultra-violet finite and hence
superconformally invariant quantum mechanically \cite{n4}, it admits monopole
solutions which fall into spin one multiplets \cite{os} and it exhibits
Montonen-Olive duality \cite{mo,sen}. More recently, renewed interest in this
theory as a superconformal field theory (SCFT) has been stimulated by the
Maldacena conjecture via which it is related, in a certain limit, to IIB
supergravity (SG) on $AdS_5\times S^5$ \cite{mal}. In this paper we shall study
some $N=4$ SYM four-point correlation functions of gauge-invariant operators at
two loops in perturbation theory. The motivation for this study is threefold:
firstly, from a purely field-theoretic point of view, complete two-loop
four-point calculations in four-dimensional gauge theories are not commonplace;
secondly, it is of interest to see if, even qualitatively, there is any sign of
agreement with supergravity calculations \cite{hong,dzf}, and, thirdly, it
provides a check in perturbation theory of the assumption of references
\cite{hw} that correlation functions of analytic operators are indeed analytic
in the quantum theory.
The main results of the paper concern the evaluation of four-point functions of
gauge-invariant bilinears constructed from $N=2$ hypermultiplets in the adjoint
representation of the gauge group $SU(N_c)$. (We recall that the $N=4$ SYM
multiplet splits into the $N=2$ SYM multiplet and a hypermultiplet in the
adjoint representation.) The calculation of two-loop four-point amplitudes is,
by present standards, still a difficult task in any field theory, except for
quantities which depend only on the ultra-violet divergent parts of diagrams,
such as renormalisation group functions. Very few exact results have been
obtained for the finite parts of such amplitudes, the notable exception being
the case of gluon-gluon scattering in $N = 4$ SYM theory for which the two-loop
on-shell amplitude has been calculated in terms of the scalar double box
integrals \cite{beroya}. Our calculation is carried out in the $N=2$ harmonic
superspace formalism \cite{HSS,frules} and we arrive at answers expressed in
terms of a second-order differential operator acting on the standard $x$-space
scalar two-loop integral. From our results we are then able to deduce the
asymptotic behaviour of the four-point functions as two of the points approach
each other. In this limit the behaviour of the leading singularity can be found
explicitly in terms of elementary functions. We find that it has the form
$(x^2)^{-1} \ln x^2$ where $x$ is the coordinate of the difference between the
two coinciding points. This behaviour, although obtained in perturbation
theory, is in line with the behaviour reported on in references \cite{hong,dzf}
where some progress towards computing four-point functions in supergravity has
been made. We also find purely logarithmic next-to-leading terms.
For the case of three-point functions it is possible to check the Maldacena
conjecture by comparing SG results with perturbation theory for SCFT, because
the higher loop corrections to the leading terms turn out to vanish for the three-supercurrent correlator and vanish at least to first non-trivial order for other correlators
\cite{dzf2,min,hsw}. However, it is more difficult to compare SG results with
perturbative SCFT for higher point functions. One technical difficulty that one
encounters is that the easiest functions to compute in SCFT are the leading
scalar terms of the four-point function of four supercurrent operators,
whereas, on the SG side, it is easiest to compute amplitudes involving the
axion and dilaton fields. In section 2 we shall show how one may compute the
leading term of the $N=4$ four-point function from the three $N=2$
hypermultiplet four-point functions that are considered in perturbation theory
in section 4. On the grounds that there are no nilpotent superinvariants of
four points \cite{hw} this means that we can, in principle, determine the
entire $N=4$ four-point correlation function from the three $N=2$
hypermultiplet correlation functions. Furthermore, we can also compute the
four-point correlation functions of $N=2$ operators constructed as bilinears in
the $N=2$ SYM field-strength tensor superfield. The leading terms of the
non-vanishing amplitudes of this type are related to the hypermultiplet
correlation functions due to their common origin in $N=4$. In section 3 we then
use a superinvariant argument to show how the $N=2$ YM correlators can be
constructed in their entirety from the leading terms. It is the highest
components of these correlation functions in a $\theta$-expansion which
correspond to the SG amplitudes currently being investigated.
In the $N=4$ harmonic superspace approach to SYM one can construct a family of
gauge-invariant operators which are described by single-component analytic
superfields. These fields depend on only half the number of odd coordinates of
standard $N=4$ superspace and, in addition, depend analytically on the
coordinates of an auxiliary bosonic space, the coset space $S(U(2)\times
U(2))\backslash SU(4)$, which is a compact complex manifold. Moreover, this
family of operators is in one-to-one correspondence with the KK states of IIB
supergravity compactified on $AdS_5\times S^5$ \cite{af}. In references
\cite{hw} it has been argued that one might hope to get further constraints on
correlation functions of these operators by using superconformal invariance and
the assumption that analyticity is maintained in the quantum theory. However,
this assumption is difficult to check directly in the $N=4$ formalism because
it is intrinsically on-shell. In a recent paper \cite{hsw} analyticity was
checked for certain three-point functions using $N=2$ harmonic superspace
(which is an off-shell formalism). The four-point functions computed in the
current paper also preserve analyticity thereby lending further support to the
$N=4$ analyticity postulate.
The organisation of the paper is as follows: in the next section we shall show
how the leading, scalar term of the $N=4$ four-point correlation function can
be determined from three $N=2$ four-point functions; following this, we show that
knowledge of these functions is also sufficient to determine the $N=2$
four-point function with two $W^2$ operators and two $\bar W^2$ operators, $W$
being the chiral $N=2$ Yang-Mills field-strength multiplet, and we also show
that the leading scalar term of this correlation function can be used, in
principle, to determine it completely. In section 4 we present the
$N=2$ harmonic superspace calculations of the two hypermultiplet correlation
functions at two loops in some detail. We then discuss the asymptotic behaviour
of the integrals that occur in these computations in order to find out the
leading singularities that arise when two of the points approach each other.
The paper ends with some further comments. The appendix collects
some known results on massless one-loop and two-loop integrals
which we have found useful.
\section{$N=4$ in terms of $N=2$}
In this section we show how one can compute the leading scalar term of the
$N=4$ four-point function of four supercurrents from the leading scalar terms
of three $N=2$ hypermultiplet four-point functions. We recall that in standard
$N=4$ superspace the $N=4$ field strength superfield $W^{IJ}=-W^{JI},\
I,J=1\ldots 4$ transforms under the real six-dimensional representation of the
internal symmetry group $SU(4)$ (where the $I$ index labels the fundamental
representation), i.e. it is self-dual, $\bar W_{IJ}={1\over2}\epsilon_{IJKL}
W^{KL}$. This superfield satisfies the (on-shell) constraint
\begin{equation}\label{N4con}
D_{\alpha}^IW^{JK}=D_{\alpha}^{[I}W^{JK]}
\end{equation}
where $D_{\alpha}^I$ is the superspace covariant derivative. Strictly speaking
this constraint holds only for an Abelian gauge theory and in the non-Abelian
case a connection needs to be included. However, the constraints satisfied by
the gauge-invariant bilinears that we shall consider are in fact the same in
the Abelian and non-Abelian cases.
In order to discuss the energy-momentum tensor it is convenient to use an
$SO(6)$ formalism. The field-strength itself can be written as a vector of
$SO(6)$, $W^A,\ A=1,\ldots 6$,
\begin{equation}
W^A={1\over2}(\sigma^A)_{JK}W^{JK}
\end{equation}\noindent
The sigma-matrices have the following components
\begin{equation}
\begin{array}{lclc}
(\sigma^a)_{bc}&= 0 & (\sigma^a)_{b4}&=\delta^a_b \\ (\sigma^{\bar
a})_{bc}&=\epsilon_{abc} & (\sigma^{\bar a})_{b4}&= 0
\end{array}
\end{equation}\noindent
where the small Latin indices run from 1 to 3, and where the $SO(6)$ and
$SU(4)$ indices split as $A=(a,\bar a)$ (in a complex basis) and $I=(a,4)$. An
upper (lower) $(\bar a)$ index is equivalent to a lower (upper) $a$ index and
vice versa. The sigma-matrices are self-dual,
\begin{equation}
(\bar\sigma^A)^{IJ}=\frac{1}{2}\epsilon^{IJKL}(\sigma^A)_{KL}
\end{equation}\noindent
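These component rules and the self-duality relation can be verified numerically; in the sketch below the $SU(4)$ index $I=1,\ldots,4$ is mapped to Python indices $0,\ldots,3$ (a convention chosen for this check), and the dual of $\sigma^a$ is seen to reproduce the components listed for $\sigma^{\bar a}$ and vice versa:

```python
import numpy as np
from itertools import permutations

# Numerical check of the component rules and self-duality of the SO(6)
# sigma-matrices. SU(4) indices I = 1..4 are mapped to Python 0..3.
eps3 = np.zeros((3, 3, 3))
eps4 = np.zeros((4, 4, 4, 4))
for p in permutations(range(3)):
    eps3[p] = np.linalg.det(np.eye(3)[list(p)])    # 3d Levi-Civita
for p in permutations(range(4)):
    eps4[p] = np.linalg.det(np.eye(4)[list(p)])    # 4d Levi-Civita

sigma = {}                         # keys ('a', i) and ('abar', i), i = 0,1,2
for a in range(3):
    m = np.zeros((4, 4))
    m[:3, 3] = np.eye(3)[a]        # (sigma^a)_{b4} = delta^a_b
    sigma[('a', a)] = m - m.T      # antisymmetrise
    mb = np.zeros((4, 4))
    mb[:3, :3] = eps3[a]           # (sigma^{a-bar})_{bc} = eps_{abc}
    sigma[('abar', a)] = mb        # already antisymmetric

def dual(m):                       # (1/2) eps^{IJKL} m_{KL}
    return 0.5 * np.einsum('ijkl,kl->ij', eps4, m)

for a in range(3):                 # self-duality exchanges a and a-bar
    assert np.allclose(dual(sigma[('a', a)]), sigma[('abar', a)])
    assert np.allclose(dual(sigma[('abar', a)]), sigma[('a', a)])
print("sigma-matrix self-duality verified")
```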
In terms of the $SU(3)$ indices $a,\bar a$ one has the decompositions,
\begin{equation}
W^A\rightarrow (W^a, W^{\bar a}=\bar W_a)
\end{equation}\noindent
and
\begin{equation}
W^{IJ}\rightarrow \left(W^{ab}=\epsilon^{abc}\bar W_c,\ W^{a4}=W^a\right)
\end{equation}\noindent
The leading component of $W^a$ in an expansion in the fourth $\theta$ can be
identified with the $N=3$ SYM field-strength tensor. Decomposing once more to
$N=2$ one finds that $W^a$ splits into the $N=2$ field-strength $W$ and the
$N=2$ hypermultiplet $\phi^i$,
\begin{equation}
W^a\rightarrow (W^i\equiv \phi^i, W^3\equiv W)
\end{equation}\noindent
{}From (\ref{N4con}) it is easy to see that these superfields (evaluated with
both the third and fourth $\theta$ variables set equal to zero) do indeed obey
the required constraints, that is, $W$ is chiral and
\begin{eqnarray}
D_{\alpha}^{(i}\phi^{j)}&=&0\\ \nonumber
D_{\alpha}^{(i}\bar\phi^{j)}&=&0
\end{eqnarray}
In addition, at the linearised level, the SYM field-strength $W$ also satisfies
the equation of motion, $D_{\alpha}^i D^{\alpha j} W=0$, and this also follows
from (\ref{N4con}). The $N=4$ supercurrent is given by \footnote{Here, and
throughout the paper, bilinear expressions such as $WW$, $\phi\phi$ etc. are
understood to include a trace over the Yang-Mills indices.}
\begin{equation}
T^{AB}=W^A W^B -{1\over6}\delta^{AB}W^C W^C
\end{equation}\noindent
and the four-point function we are going to consider is
\begin{equation}
G^{(N=4)}=<T^{A_1B_1}T^{A_2B_2}T^{A_3B_3}T^{A_4B_4}>
\end{equation}\noindent
where the numerical subscripts indicate the point concerned. This function can
be expressed in terms of $SO(6)$ invariant tensors multiplied by scalar factors
which are functions of the coordinates. The only $SO(6)$ invariant tensor that
can arise is $\delta$, and there are two ways of hooking the indices up in
$G^{(N=4)}$, each of which can occur in three combinations, making six
independent amplitudes in all. Thus we have
\begin{eqnarray}
G^{(N=4)}&=&a_1(\delta_{12})^2(\delta_{34})^2 +
a_2(\delta_{13})^2(\delta_{24})^2
+a_3(\delta_{14})^2(\delta_{23})^2\\ \nonumber
&\phantom{=}&
+b_1\delta_{13}\delta_{14}\delta_{23}\delta_{24}+
b_2\delta_{12}\delta_{14}\delta_{32}\delta_{34}
+b_3\delta_{12}\delta_{13}\delta_{42}\delta_{43}
\label{ab}
\end{eqnarray}
where, for example,
\begin{equation}
(\delta_{12})^2(\delta_{34})^2=\delta_{\{A_1B_1\}}^{A_2B_2}
\delta_{\{A_3B_3\}}^{A_4B_4}
\end{equation}\noindent
and
\begin{equation}
\delta_{13}\delta_{14}\delta_{23}\delta_{24}=\delta_{\{A_1B_1\}}^{\{A_3\{B_4}
\delta_{\{A_2B_2\}}^{A_4\}B_3\}}
\end{equation}\noindent
and where the brackets denote tracefree symmetrisation at each point.
In $N=3$ the $N=4$ supercurrent splits into two multiplets, the $N=3$
supercurrent $T^{a\bar b}$, and a second multiplet $T^{ab}$ which contains
amongst other components the fourth supersymmetry current. The $N=3$
supercurrent transforms under the real eight-dimensional representation of
$SU(3)$ while the second multiplet transforms according to the complex
six-dimensional representation. The $N=4$ four-point function decomposes into
several $N=3$ four-point functions. Amongst them we find three which, when
further decomposed under $N=2$, will suffice to determine all the $a$ and $b$
functions in (\ref{ab}):
\begin{eqnarray}
G_1^{(N=3)}&=&<T^{a_1b_1}T^{\bar a_2\bar b_2}T^{a_3b_3}T^{\bar a_4\bar b_4}>\\
G_2^{(N=3)}&=&<T^{a_1b_1}T^{\bar a_2\bar b_2}T^{a_3\bar b_3}T^{a_4\bar b_4}>\\
G_3^{(N=3)}&=&<T^{a_1\bar b_1}T^{a_2\bar b_2}T^{a_3\bar b_3}T^{a_4\bar b_4}>
\end{eqnarray}
In the $N=2$ decomposition of $T^{ab}$ we find
\begin{equation}
T^{ij}=\phi^i\phi^j
\end{equation}\noindent
while $T^{33}=W^2$. On the other hand, the $N=3$ supercurrent
contains another $N=2$ hypermultiplet operator, also in a triplet of $SU(2)$.
This is obtained from $T^{a\bar b}$ by restricting the indices to run from 1 to
2 and then removing the trace over the $i,j$ indices. In this way we can
construct
\begin{equation}
\hat T^i{}_j=T^i{}_j-{1\over2}\delta^i_j
T^k{}_k=\phi^i\bar\phi_j-{1\over2}\delta^i_j\phi^k\bar\phi_k
\end{equation}\noindent
The $N=2$ harmonic superspace hypermultiplet $q^+$ is related to the $N=2$
superfield $\phi^i$ by
\begin{equation}\label{trivsol}
q^+= u^+_i\phi^i
\end{equation}\noindent
on-shell. Its conjugate $\widetilde q^+$ is given by
\begin{equation}
\widetilde q^+= u^{+i}\bar\phi_i
\end{equation}\noindent
The details of $N=2$ harmonic superspace are reviewed in section 4.
Restricting the indices on the three $N=3$ four-point functions $G_1^{(N=3)}$,
$G_2^{(N=3)}$ and $G_3^{(N=3)}$ to run from 1 to 2, removing the $N=2$ traces at each point where
necessary and multiplying the resulting functions by $u^+_i u^+_j$ at each
point we find three hypermultiplet four-point functions. The leading terms of
these correlation functions are given in terms of the $a$ and $b$ functions
introduced in equation (\ref{ab}) and it is a straightforward computation to show
that, in the notation of section 4,
\begin{eqnarray}
G_1^{(N=2)}&=&<\widetilde q^+\widetilde q^+|q^+q^+|\widetilde q^+
\widetilde q^+ |q^+q^+>\\ \nonumber
&=&(12)^2(34)^2 a_1 + (14)^2(23)^2 a_3 + (12)(23)(34)(41) b_2
\end{eqnarray}
while
\begin{eqnarray}
G_2^{(N=2)}&=&<\widetilde q^+\widetilde q^+|q^+q^+|q^+\widetilde q^+|q^+\widetilde
q^+> \\ \nonumber &=&{1\over4}\left((12)^2(34)^2(-2a_1 - b_3) + (14)^2(23)^2
b_1+ (12)(23)(34)(41) (b_3-b_1-b_2)\right)
\end{eqnarray}
and
\begin{eqnarray}
G_3^{(N=2)}&=&<q^+\widetilde q^+|q^+\widetilde q^+|q^+\widetilde q^+|q^+\widetilde
q^+> \\ \nonumber &=&{1\over8}\Big((12)^2(34)^2(2a_1+2a_2 +b_3) + (14)^2(23)^2
(2a_2 +2a_3 +b_1)+ \\ \nonumber
&\phantom{=}& + (12)(23)(34)(41) (b_2-b_1-b_3-4a_2)\Big)
\end{eqnarray}
where, for example,
\begin{equation}
(12)= u_1^{+i}u_2^{+j}\epsilon_{ij}
\end{equation}
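The brackets $(rs)$ obey the two-component Schouten identity $(12)(34)+(23)(14)+(31)(24)=0$, which limits the number of independent quartic harmonic structures; a quick numerical check (random complex two-vectors suffice, since the identity is purely algebraic):

```python
import numpy as np

# Numerical check of the two-component Schouten identity
# (12)(34) + (23)(14) + (31)(24) = 0, where (rs) = u_r^{+i} u_s^{+j} eps_{ij}.
eps = np.array([[0.0, 1.0], [-1.0, 0.0]])          # eps_{12} = +1

def br(u, v):                                      # (uv) = u^i eps_{ij} v^j
    return u @ eps @ v

rng = np.random.default_rng(0)
u = rng.normal(size=(4, 2)) + 1j * rng.normal(size=(4, 2))   # u_1 ... u_4
s = (br(u[0], u[1]) * br(u[2], u[3])
     + br(u[1], u[2]) * br(u[0], u[3])
     + br(u[2], u[0]) * br(u[1], u[3]))
assert abs(s) < 1e-12
print("Schouten identity holds")
```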
In section 4 we shall see that each of these four-point functions can be
written in terms of three functions of the coordinates, $A_1,A_2,A_3$, and
hence all of the $a$ and $b$ functions that appear in the $N=4$ four-point
function can be determined by computing the above $N=2$ correlators.
We now consider the $N=2$ correlation functions involving the gauge-invariant
operators $W^2$ and $\bar W^2$ constructed from the $N=2$ SYM field-strength.
The possible independent four-point functions of this type are
\begin{eqnarray}
G_4^{(N=2)}&=&<W^2\bar W^2 W^2\bar W^2> \\ \nonumber
G_5^{(N=2)}&=&<W^2 W^2 W^2\bar W^2>
\\ \nonumber
G_6^{(N=2)}&=&<W^2 W^2 W^2 W^2>
\end{eqnarray}
The leading terms of $G_5^{(N=2)}$ and $G_6^{(N=2)}$ vanish as one can easily
see by examining the leading terms of the $N=3$ correlation functions from
which they can be derived. In fact, it is to be expected that these correlation
functions should vanish at all orders in $\theta$. This can be argued from an
$N=4$ point of view from the absence of nilpotent $N=4$ superinvariants, or
directly in $N=2$. For example, using just $N=2$ superconformal invariance, it
is possible to show that all correlation functions of gauge-invariant powers of
$W$ vanish.
We remark that the $N=2$ four-point function $G_4^{(N=2)}=<W^2\bar W^2W^2\bar
W^2>$ is also obtainable from $G_2^{(N=3)}$. In terms of the $a$ and $b$
functions it is given by
\begin{equation}
<W^2\bar W^2W^2\bar W^2>=a_1 + a_3 + b_2
\end{equation}
On-shell the $(\theta)^4$ component of $W^2$ is
$F_{\alpha\beta}F^{\alpha\beta}$ where $F_{\alpha\beta}$ is the self-dual
space-time Yang-Mills field-strength tensor, so that the top $((\theta)^{16})$
component of this correlation function is directly related to the dilaton and
axion amplitudes in supergravity. Clearly any four-point function of the
operators $F_{\mu\nu}F^{\mu\nu}$ and ${1\over
2}\epsilon^{\mu\nu\rho\sigma}F_{\mu\nu}F_{\rho\sigma}$ can be obtained from the
above.
\section{The $N=2$ chiral-antichiral four-point function}
In this section we show that the complete $N=2$ four-point function
$G_4^{(N=2)}$ is determined by its leading term using only superconformal
invariance. The coordinates of $N=2$ superspace are
$(x^{\alpha\dot\alpha},\theta^{\alpha}_i,\bar\theta^{\dot\alpha i})$ and an
infinitesimal superconformal transformation in this space is given by a
superconformal Killing vector field
$V=F^{\alpha\dot\alpha}\partial_{\alpha\dot\alpha}+\varphi^{\alpha}_i
D_{\alpha}^i-\bar \varphi^{\dot \alpha i}\bar D_{\dot \alpha i}$, where
$D_{\alpha}^i$ is the usual flat space supercovariant derivative. By
definition, $V$ satisfies the equation
\begin{equation}
[D_{\alpha}^i,V]\cong D_{\alpha}^i
\end{equation}
The chiral superfield $W^2$ transforms as
\begin{equation}
\delta W^2=V W^2 + \Delta W^2
\end{equation}
where $\Delta=\partial_{\alpha\dot\alpha}F^{\alpha\dot\alpha}-D_{\alpha
}^i\varphi^{\alpha}_i$.
Since $G_4^{(N=2)}$ is chiral at points 1 and 3 and anti-chiral at points 2 and
4 it depends only on the chiral or anti-chiral coordinates at these points.
These are given by
\begin{equation}
\begin{array}{lclcll}
X^{\alpha\dot\alpha}&=&x^{\alpha\dot\alpha}+
2{i}\theta^{\alpha}_i\bar\theta^{\dot\alpha i}
&;&\theta^{\alpha}_i &{\rm chiral} \\
\bar X^{\alpha\dot\alpha}&=&x^{\alpha\dot\alpha}-2{i}\theta^{\alpha}_i
\bar\theta^{\dot\alpha i}
&;&\bar\theta^{\dot\alpha i}&{\rm anti-chiral}
\end{array}
\end{equation}
At lowest order in Grassmann variables translational invariance in $x$ implies
that $G_4^{(N=2)}$ depends only on the difference variables $x_r- x_s,\
r,s=1\ldots 4$, of which three are independent. Combining this with
$Q$-supersymmetry one finds that $G_4^{(N=2)}$ depends only on the
$Q$-supersymmetric extensions of these differences, which will be denoted
$y_{rs}$, as well as the differences $\theta_{13}=\theta_1-\theta_3$ and
$\bar\theta_{24}=\bar\theta_2-\bar\theta_4$. The supersymmetric difference
variables joining one chiral point $(r)$ with one anti-chiral point $(s)$ have
the following form:
\begin{equation}
y_{rs}=X_r-\bar X_s -4i\theta_r\bar\theta _s
\end{equation}
When the chirality of the two points is the same one has
\begin{eqnarray}
y_{13}&=&X_1-X_3 -2{i}\theta_{13}(\bar\theta_2 +\bar\theta_4)\\
\nonumber
y_{24}&=&\bar X_2-\bar X_4 +2{i}(\theta_1 +\theta_3)\bar\theta_{24}
\end{eqnarray}
It is easy to find a free solution of the Ward Identity for $G_4^{(N=2)}$; it is
given by
\begin{equation}
G_4^{o(N=2)}={1\over y_{14}^2 y_{23}^2}
\end{equation}
A general solution can be written in terms of this free solution in the form
\begin{equation}
G_4^{(N=2)}=G_4^{o(N=2)} \times F
\end{equation}
where $F$ is a function of superinvariants. At the lowest order, $F$ is a
function of two conformal invariants which may be taken to be
\begin{equation}
S\equiv{x_{12}^2x_{34}^2\over x_{14}^2x_{23}^2};\qquad
T\equiv{x_{13}^2x_{24}^2\over x_{14}^2x_{23}^2}
\end{equation}
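Since translations and rotations manifestly preserve $S$ and $T$, conformal invariance can be checked numerically by applying the inversion $x\to x/x^2$ to four random points (Euclidean signature, for illustration only):

```python
import numpy as np

# Check (illustrative, Euclidean signature) that the cross-ratios S and T
# are invariant under the conformal inversion x -> x / x^2.
def cross_ratios(x):
    d2 = lambda i, j: float(np.sum((x[i] - x[j]) ** 2))
    S = d2(0, 1) * d2(2, 3) / (d2(0, 3) * d2(1, 2))
    T = d2(0, 2) * d2(1, 3) / (d2(0, 3) * d2(1, 2))
    return S, T

rng = np.random.default_rng(1)
x = rng.normal(size=(4, 4))                        # four points in R^4
inv = x / np.sum(x ** 2, axis=1, keepdims=True)    # inversion of each point
assert np.allclose(cross_ratios(x), cross_ratios(inv))
print(cross_ratios(x))
```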
The strategy is now to show that these two conformal invariants can be extended
to superconformal invariants and furthermore that there are no further
superconformal invariants. Any new superinvariant would have to vanish at
lowest order and would thus be nilpotent.
{}From the above discussion it is clear that $F$ can only depend on the $y_{rs}$
and the odd differences $\theta_{13}$ and $\bar\theta_{24}$. Furthermore, it
has dilation weight and $R$-weight zero, the latter implying that it depends on
the odd coordinates in the combination $\theta_{13}\bar\theta_{24}$. This takes
all of the symmetries into account except for $S$-supersymmetry and conformal
symmetry. However, conformal transformations appear in the commutator of two
$S$-supersymmetry transformations and so it is sufficient to check the latter.
The $S$-supersymmetry transformations of the chiral-antichiral variables are
rather simple:
\begin{equation}
\delta y_{rs}^{\alpha\dot\alpha}=
\theta_{r i}^{\alpha}\eta_{\beta}^iy_{rs}^{\beta\dot\alpha}
\end{equation}
The transformations of the (anti-)chiral-chiral variables are slightly more
complicated and it is convenient to introduce new variables as follows:
\begin{eqnarray}
\hat y_{13}&=& y_{12}+y_{23}\\ \nonumber
\hat y_{24}&=& -y_{12}+y_{14}
\end{eqnarray}
Under $S$-supersymmetry these transform as follows:
\begin{eqnarray}
\delta\hat y_{13}^{\alpha\dot\alpha}&=&
\theta_{3\hspace{1 pt} i}^{\alpha}\eta_{\beta}^i\hat y_{13}^{\beta\dot\alpha}
+\theta_{13\hspace{1 pt} i}^{\alpha}\eta_{\beta}^i y_{12}^{\beta\dot\alpha}\nonumber\\
\delta\hat y_{24}^{\alpha\dot\alpha}&=&
\theta_{1\hspace{1 pt} i}^{\alpha}\eta_{\beta}^i\hat y_{24}^{\beta\dot\alpha}
\end{eqnarray}
The transformations of the odd variables are
\begin{eqnarray}
\delta\theta^{\alpha}_i&=&\theta^{\alpha}_j\eta_{\beta}^j\theta^{\beta}_i\\
\nonumber
\delta\bar\theta^{\dot\alpha i}&=&-{i\over4}\eta_{\alpha}^i
\bar X^{\alpha\dot\alpha}
\end{eqnarray}
Using these transformations it is easy to extend $S$ to a superinvariant $\hat S$. It is
\begin{equation}
\hat S={y_{12}^2y_{34}^2\over y_{14}^2y_{23}^2}
\end{equation}
However, the extension of the second invariant is slightly more complicated
because it involves the (anti-)chiral-chiral differences. A straightforward
computation shows that the required superinvariant is
\begin{equation}
\hat T={1\over y_{14}^2y_{23}^2}\left(\hat y_{13}^2\hat y_{24}^2
-16i\theta_{13}\cdot\bar\theta_{24}\cdot(\hat y_{13}y_{12}\hat y_{24})-
16y_{12}^2(\theta_{13}\cdot\bar\theta_{24})^2\right)
\end{equation}
Now consider the possibility of a nilpotent superinvariant. Under an
$S$-supersymmetry transformation the leading term in the variation arises from
the $\theta$-independent term in the variation of $\bar\theta$. On the
assumption that the $x$-differences are invertible, it follows immediately that
there can be no such invariants. Thus, the general form of the four-point
function $G_4^{(N=2)}$ is
\begin{equation}
<W^2\bar W^2W^2\bar W^2>={1\over y_{14}^2y_{23}^2}F(\hat S,\hat T)
\end{equation}
and so is calculable from the leading term as claimed.
\section{Computation of four-point $N=2$ correlation functions}
\subsection{$N =2$ harmonic superspace and Feynman rules}
\subsubsection{}
The $N =4$ SYM multiplet reduces to an $N =2$ SYM multiplet and a
hypermultiplet. The latter are best described in $N =2$ harmonic superspace
\cite{HSS}. In addition to the usual bosonic ($x^{\alpha\dot\alpha}$) and
fermionic ($\theta^{\alpha}_i, \bar\theta^{\dot\alpha i}$) coordinates, it
involves $SU(2)$ harmonic ones:
\begin{equation}\label{harco}
SU(2)\ni u=(u_i^+,u_i^-)\;: \qquad u^-_i =\overline{u^{+i}}\;, \quad
u^{+i}u^-_i
= 1\ .
\end{equation}
They parametrise the coset space $SU(2)/U(1)\sim S^2$ in the following sense:
the index $i$ transforms under the left $SU(2)$ and the index (``charge") $\pm$
under the right $U(1)$; further, all functions of $u^\pm$ are homogeneous under
the action of the right $U(1)$ group. The harmonic functions $f(u)$ are defined
by their harmonic expansion (in general, infinite) on $S^2$.
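A minimal numerical illustration of these defining relations (the explicit parametrisation of $u^{+i}$ below is a choice made for this sketch, not fixed by the text):

```python
import numpy as np

# Illustration of the defining harmonic relations: u^-_i = conj(u^{+i}),
# u^{+i} u^-_i = 1, and homogeneity under the right U(1).
rng = np.random.default_rng(2)
z = rng.normal(size=2) + 1j * rng.normal(size=2)
u_plus_up = z / np.sqrt(np.sum(np.abs(z) ** 2))   # u^{+i}, normalised
u_minus_dn = np.conj(u_plus_up)                   # u^-_i = conj(u^{+i})

assert np.isclose(u_plus_up @ u_minus_dn, 1.0)    # u^{+i} u^-_i = 1

# the right U(1) acts as u^+ -> e^{i phi} u^+, u^- -> e^{-i phi} u^-, so a
# monomial with n_+ factors of u^+ and n_- of u^- has charge n_+ - n_-:
phi = 0.7
up_rot = np.exp(1j * phi) * u_plus_up
dn_rot = np.exp(-1j * phi) * u_minus_dn
assert np.isclose(up_rot @ dn_rot, 1.0)           # charge 0: invariant
assert np.isclose(up_rot[0] * up_rot[1],          # charge +2 monomial
                  np.exp(2j * phi) * u_plus_up[0] * u_plus_up[1])
print("harmonic relations verified")
```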
The main advantage of harmonic superspace is that it allows us to define
Grassmann-analytic (G-analytic) superfields satisfying the constraints
\begin{equation}\label{Gan}
D^+_\alpha \Phi(x,\theta,u) = \bar D^+_{\dot\alpha} \Phi(x,\theta,u) = 0\;.
\end{equation}\noindent
Here
\begin{equation}
D^+_{\alpha,\dot\alpha} = u^+_i D^i_{\alpha,\dot\alpha}
\end{equation}\noindent
are covariant $U(1)$ harmonic projections of the spinor derivatives. The
G-analyticity condition (\ref{Gan}) is integrable, since by projecting the
spinor derivative algebra
\begin{equation}
\{D^i_\alpha,D^j_\beta\} = \{\bar D_{\dot\alpha i},\bar D_{\dot\beta j}\} = 0,
\quad
\{D^i_\alpha,\bar D_{\dot\beta j}\} = -2i
\delta^i_j \partial_{\alpha\dot\beta}
\end{equation}\noindent
one finds
\begin{equation}
\{D^+_\alpha,D^+_\beta\} = \{\bar D^+_{\dot\alpha},\bar D^+_{\dot\beta}\} =
\{D^+_\alpha,\bar D^+_{\dot\beta}\} = 0\;.
\end{equation}\noindent
Moreover, the G-analyticity condition (\ref{Gan}) can be solved explicitly. To
this end one introduces a new, analytic basis in harmonic superspace:
\begin{equation}\label{anbas}
x^{\alpha\dot\alpha}_A = x^{\alpha\dot\alpha} - 2i\theta^{\alpha (i}
\bar\theta^{\dot\alpha j)} u^+_i u^-_j\;, \quad \theta^\pm_{\alpha,\dot\alpha} =
u^\pm_i\theta^i_{\alpha,\dot\alpha}\;, \quad u^\pm_i \;.
\end{equation}\noindent
In this basis the constraints (\ref{Gan}) just imply
\begin{equation}\label{Gansupf}
\Phi^q = \Phi^q(x_A,\theta^+,\bar\theta^+,u^\pm)\;,
\end{equation}\noindent
i.e. the solution is a Grassmann-analytic function of $\theta^+,\bar\theta^+$
(in the sense that it does not depend on the complex conjugates
$\theta^-,\bar\theta^-$).
We emphasise that the superfield (\ref{Gansupf}) is a non-trivial harmonic
function carrying an external $U(1)$ charge $q=0,\pm 1,\pm 2,\ldots$. Thus,
$\Phi^q$ has an infinite harmonic expansion on the sphere $S^2$. Most of the
terms in this expansion turn out to be auxiliary (in the case of the
hypermultiplet) or pure gauge (in the SYM case) degrees of freedom. In order to
obtain an ordinary superfield with a finite number of components one has to
restrict the harmonic dependence. This is done with the help of the harmonic
derivative (covariant derivative on $S^2$)
\begin{equation}
D^{++} = u^{+i}{\partial\over\partial u^{-i}}\;,
\end{equation}\noindent
or, in the analytic basis (\ref{anbas}),
\begin{equation}\label{GanD}
D^{++} = u^{+i}{\partial\over\partial u^{-i}} -2i\theta^{+\alpha}
\bar\theta^{+\dot\alpha}
{\partial\over\partial
x^{\alpha\dot\alpha}_A}\;.
\end{equation}\noindent
Thus, for instance, one can choose the G-analytic superfield
$q^+(x_A,\theta^+,\bar\theta^+,u^\pm)$ carrying $U(1)$ charge $+1$ and impose
the harmonic analyticity condition
\begin{equation}\label{emhyp}
D^{++}q^+=0\;.
\end{equation}\noindent
Due to the presence of a space-time derivative in the analytic form
(\ref{GanD}) of $D^{++}$, equation (\ref{emhyp}) not only ``shortens'' the
harmonic superfield $q^+$, but also puts the remaining components on shell. In
fact, (\ref{emhyp}) is the equation of motion for the $N =2$ hypermultiplet. We
remark that eq. (\ref{emhyp}) is compatible with the G-analyticity conditions
(\ref{Gan}) owing to the obvious commutation relations
\begin{equation}
[D^{++}, D^+_{\alpha,\dot\alpha}] = 0\;.
\end{equation}\noindent
Note also that in the old basis $(x,\theta_i,\bar\theta^i,u)$ the harmonic
analyticity condition has the trivial solution given in (\ref{trivsol});
however, in this basis the G-analyticity condition (\ref{Gan}) on $q^+$ becomes
a non-trivial condition which in fact follows from the $N=4$ SYM on-shell
constraints (\ref{N4con}).
\subsubsection{}
A remarkable feature of harmonic superspace is that it allows us to have an
off-shell version of the hypermultiplet. To this end it is sufficient to relax
the on-shell condition (\ref{emhyp}) and subsequently obtain it as a
variational equation from the action
\begin{equation}\label{HMact}
S_{HM} = \int d^4x_A du d^4\theta^+\; \widetilde q^+ D^{++} q^+\;.
\end{equation}\noindent
Here the integral is over the G-analytic superspace: $du$ means the standard
invariant measure on $S^2$ and $d^4\theta^+\equiv (D^-)^4$. The special
conjugation $\widetilde q^+$ combines complex conjugation and the antipodal map
on $S^2$ \cite{HSS}. Its essential property is that it leaves the G-analytic
superspace invariant (unlike simple complex conjugation). The reality of the
action is established with the help of the conjugation rules $\widetilde
{\widetilde q^+} = - q^+$ and $\widetilde D^{++} = D^{++}$. Note that the
$U(1)$ charge of the Lagrangian in (\ref{HMact}) is $+4$, which exactly matches
that of the measure (otherwise the $SU(2)$ invariant harmonic integral would
vanish).
The other ingredient of the $N=4$ theory is the $N=2$ SYM multiplet. It is
described by a real G-analytic superfield
$V^{++}(x_A,\theta^+,\bar\theta^+,u^\pm) = \widetilde V^{++}$ of $U(1)$ charge
$+2$ subject to the following gauge transformation:
\begin{equation}\label{gtrv}
\delta V^{++} = -D^{++}\Lambda +ig[\Lambda,V^{++}]
\end{equation}\noindent
where $\Lambda(x_A,\theta^+,\bar\theta^+,u^\pm)$ is a G-analytic gauge
parameter. It can be shown \cite{HSS} that what remains from the superfield
$V^{++}$ in the Wess-Zumino gauge is just the (finite-component) $N=2$ SYM
multiplet. Under a gauge transformation (\ref{gtrv}), the matter
(hypermultiplet) superfield transforms in the standard way
\begin{equation}
\delta q^+ = i\Lambda q^+\; .
\end{equation}\noindent
Thus $V^{++}$ has the interpretation of the gauge connection for the harmonic
covariant derivative ${\cal D}^{++} = D^{++} + igV^{++}$. All this suggests the
standard minimal SYM-to-matter coupling which consists in covariantising the
derivative in (\ref{HMact}):
\begin{equation}\label{HMSYMact}
S_{HM/SYM} = \int d^4x_A du d^4\theta^+\; \left[\widetilde q^+_a (\delta_{ab}
D^{++} + {g\over 2}f_{abc}V^{++}_c) q^+_b\right]\;.
\end{equation}\noindent
In order to reproduce the $N=4$ theory we have chosen the matter $q^+ = q^+_a
t_a$ in the adjoint representation of the gauge group $G$ with structure
constants $f_{abc}$
\footnote{We use the definitions $[t_a,t_b] = if_{abc}t_c$, $\mbox{tr}(t_at_b) =
C(Adj)\delta_{ab}$, where the quadratic Casimir is normalised so that $C(Adj)
= N_c$ for $SU(N_c)$.}.
We shall not explain here how to construct the gauge-invariant action for
$V^{++}$ \cite{HSS}. We only present the form of the gauge-fixed kinetic term
in the Fermi-Feynman gauge \cite{frules}:
\begin{equation}\label{FF}
S_{SYM+GF} = -{1\over 2}\int d^4x_A du d^4\theta^+\; V^{++}_a\Box V^{++}_a\;.
\end{equation}\noindent
Of course, the full SYM theory includes gluon vertices of arbitrary order as
well as Faddeev-Popov ghosts, but we shall not need them here (the details can
be found in \cite{frules}).
\subsubsection{}
The off-shell description of both theories above allows us to develop
manifestly $N=2$ supersymmetric Feynman rules. We begin with the gauge
propagator. Since the corresponding kinetic operator in the FF gauge (\ref{FF})
is simply $\Box$, the propagator is given by the Green's function
$1/(4i\pi^2 x^2_{12})$:
\begin{equation}\label{invbox}
\Box_1 {1\over 4i\pi^2x^2_{12}} = \delta(x_1-x_2)
\end{equation}\noindent
combined with the appropriate Grassmann and harmonic delta functions:
\vskip 0.5 in
\begin{center}
\begin{picture}(30000,4000)(0,-2000)
\put(0,2000){\scriptsize{1a}}
\drawline\gluon[\E\REG](1500,2000)[6]
\put(8430,2000){\scriptsize{2b}}
\put(15000,2000){$\langle V^{++}_a(1)V^{++}_b(2)\rangle =$}
\put(1500,-1500){Figure 1}
\end{picture}
\end{center}
\begin{equation}\label{Vprop}
= {i\over 4\pi^2} \delta_{ab} (D^+_1)^4
\left({\delta_{12}\over
x^2_{12}}\right)
\delta^{(-2,2)}(u_1,u_2) = {i\over 4\pi^2} \delta_{ab} (D^+_2)^4
\left({\delta_{12}\over
x^2_{12}}\right)
\delta^{(2,-2)}(u_1,u_2)\;,
\end{equation}\noindent
where $\delta_{12}$ is shorthand for the Grassmann delta function
$\delta^8(\theta_1-\theta_2)$ and $\delta^{(2,-2)}(u_1,u_2)$ is a harmonic
delta function carrying $U(1)$ charges $+2$ and $-2$ with respect to its first
and second arguments. Note that the propagator is written down in the usual
basis in superspace and not in the analytic one, so $x_{1,2}$ appearing in
(\ref{Vprop}) are the ordinary space-time variables and not the shifted $x_A$
from (\ref{anbas}). The G-analyticity of the propagator is assured by the
projector
\begin{equation}
(D^+)^4 = {1\over 16} D^{+\alpha}D^+_\alpha \bar D^+_{\dot\alpha} \bar
D^{+\dot\alpha}\;.
\end{equation}\noindent
The two forms given in (\ref{Vprop}) are equivalent owing to the presence of
Grassmann and harmonic delta functions \footnote{In certain cases the harmonic
delta function needs to be regularised in order to avoid coincident harmonic
singularities. In such cases one should use an equivalent form of the
propagator (\ref{Vprop}) in which the analyticity with respect to both
arguments is manifest \cite{Ky}.}.
The matter propagator is somewhat more involved. The kinetic operator for the
hypermultiplet is a harmonic derivative, so one should expect a harmonic
distribution to appear in the propagator. Such distributions are simply given
by the inverse powers of the $SU(2)$ invariant combination $u^{+i}_1u^+_{2i}
\equiv (12)$ which vanishes when $u_1 = u_2$. One can prove the relations
\cite{frules}:
\begin{equation}\label{hdistr}
D^{++}_1 {1\over (12)^n} = {1\over (n-1)!}
(D^{--}_1)^{n-1}\delta^{(n,-n)}(u_1,u_2)
\end{equation}\noindent
which are in a way the $S^2$ analogues of eq. (\ref{invbox}). Here $D^{--} =
\overline{D^{++}}$ is the other covariant derivative
on $S^2$. The $q^+$ propagator is then given by
\vskip 0.5 in
\begin{center}
\begin{picture}(30000,4000)(0,-2000)
\put(0,2000){\scriptsize{1a}}
\drawline\fermion[\E\REG](1500,2000)[6430]
\drawarrow[\E\ATTIP](\pmidx,\pmidy)
\put(8430,2000){\scriptsize{2b}}
\put(15000,2000){$\langle \widetilde q^+_a(1) q^+_b(2)\rangle =$}
\put(1500,-1500){Figure 2}
\end{picture}
\end{center}
\begin{equation}\label{qprop}
= {i\over 4\pi^2} \delta_{ab}
{(D^+_1)^4(D^+_2)^4\over (12)^3}
\left({\delta_{12}\over x^2_{12}}\right) \equiv \Pi_{12} \delta_{ab} \;.
\end{equation}\noindent
This time we need the presence of two G-analyticity projectors
$(D^+_1)^4(D^+_2)^4$ because we do not have a harmonic delta function any more.
In order to show that (\ref{qprop}) is indeed the Green's function for the
operator $D^{++}$, one uses (\ref{hdistr}) and the identity
\begin{equation}
-{1\over 2} (D^+)^4 (D^{--})^2 \Phi = \Box \Phi
\end{equation}\noindent
on any G-analytic superfield $\Phi$.
Finally, the only vertex relevant to our two-loop calculation can be read off
from the
coupling term in (\ref{HMSYMact}):
\vskip 1 in
\begin{center}
\begin{picture}(30000,4000)(0,-4000)
\drawline\fermion[\E\REG](0,0)[3215]
\drawarrow[\E\ATTIP](\pmidx,\pmidy)
\drawline\gluon[\N\CENTRAL](\fermionbackx,\fermionbacky)[3]
\put(-1000,\fermionbacky){\scriptsize{a}}
\put(7000,\fermionbacky){\scriptsize{b}}
\put(3000,4500){\scriptsize{c}} \put(2815,-1500){\scriptsize{1}}
\drawline\fermion[\E\REG](\fermionbackx,\fermionbacky)[3215]
\drawarrow[\E\ATTIP](\pmidx,\pmidy) \put(11000,2000){${g\over 2}
f_{abc}\int d^4x_{A,1} du_1 d^4\theta_{1}^+$}
\put(1500,-3500){Figure 3}\label{vertex}
\end{picture}
\end{center}
It involves an integral over the G-analytic superspace.
Note also the following useful relations. The full superspace
Grassmann measure is related to the G-analytic one by
\begin{equation}\label{fullmeas} d^8\theta = d^4\theta^+ (D^+)^4 =
(D^-)^4(D^+)^4\;. \end{equation}\noindent
The Grassmann delta function $\delta^8(\theta_1-\theta_2) \equiv
\delta_{12}$ is defined as usual,
\begin{equation}
\int d^8\theta\; \delta^8(\theta) =\left.
(D^-)^4(D^+)^4\delta^8(\theta)\right\vert_{\theta=0} = 1\;, \end{equation}\noindent
from which it is easy to derive
\begin{equation}
\left.(D^+_1)^4(D^+_2)^4\delta_{12}\right\vert_{\theta=0} =
(12)^4\;. \end{equation}\noindent
Using this relation as a starting point we find others, e.g.:
\begin{eqnarray}
\left.(D^+_3)^2(D^+_1)^4(D^+_2)^4\delta_{12}\right\vert_{\theta=0}
&=& \left.(\bar
D^+_3)^2(D^+_1)^4(D^+_2)^4\delta_{12}\right\vert_{\theta=0} = 0\;,
\nonumber\\ \left.D^+_{3\alpha}\bar
D^+_{3\dot\alpha}(D^+_1)^4(D^+_2)^4\delta_{12}\right\vert_{\theta=0}
&= &
-2i(13)(23)(12)^3\partial_{1\alpha\dot\alpha}\;,\label{relations}
\\
\left.(D^+_3)^4(D^+_1)^4(D^+_2)^4\delta_{12}\right\vert_{\theta=0}
&=& - (13)^2(23)^2(12)^2\Box_1\;, \quad \mbox{etc.} \nonumber \end{eqnarray}
\subsection{Four-point hypermultiplet correlators}
In what follows we shall apply the above Feynman rules to compute
four-point correlators of composite gauge invariant operators made
out of two hypermultiplets. There are two types of such composite
operators: $q^+q^+$ (and its conjugate $\widetilde q^+\widetilde
q^+$) and $\widetilde q^+q^+$. The structure of the hypermultiplet
propagator (\ref{qprop}) suggests that we need equal numbers of
$q^+$'s and $\widetilde q^+$'s in order to form a closed
four-point loop. Indeed, for instance, correlators of the type
\begin{equation}\label{0corr} \langle q^+q^+\vert q^+q^+\vert q^+q^+\vert
q^+q^+\rangle \end{equation}\noindent
must vanish in the free case as well as to all orders in
perturbation theory. The reason is that the only interaction the
$q^+$'s have is given by the vertex (\ref{vertex}) and it is easy
to see that there are no possible graphs of this type. The same
applies to any configuration with unequal numbers of $q^+$'s and
$\widetilde q^+$'s. So, the non-trivial ones are
\begin{equation}\label{1corr} \langle \widetilde q^+\widetilde q^+\vert
q^+q^+\vert \widetilde q^+\widetilde q^+\vert q^+q^+\rangle\;, \end{equation}\noindent
\begin{equation}\label{3corr} \langle \widetilde q^+\widetilde q^+\vert
q^+q^+\vert \widetilde q^+ q^+\vert \widetilde q^+q^+\rangle\;,
\end{equation}\noindent
\begin{equation}\label{2corr} \langle \widetilde q^+q^+\vert \widetilde
q^+q^+\vert \widetilde q^+q^+\vert \widetilde q^+q^+\rangle\;. \end{equation}\noindent
As explained in section 2, the correlators
(\ref{1corr})-(\ref{2corr}) are sufficient to determine the full
correlator of four $N=4$ supercurrents. In fact, as we shall see
later on, it is enough to compute (\ref{1corr}); the other two can
then be obtained by permutations of the points and symmetrisation.
The relevant graph topologies for the computation of the
correlator (\ref{1corr}) are shown in Figure 4:
\vskip 2 in
\begin{center}
\begin{picture}(42000,7000)(0,-12000)
\drawline\fermion[\E\REG](0,0)[10000] \global\advance\pmidx by
-400 \global\Yone=-1500 \put(\pmidx,\Yone){a}
\global\advance\pmidx by 400
\drawline\gluon[\N\CENTRAL](\pmidx,\pmidy)[6]
\global\Xone=\gluonlengthy
\drawline\fermion[\W\REG](\gluonbackx,\gluonbacky)[5000]
\drawline\fermion[\E\REG](\gluonbackx,\gluonbacky)[5000]
\drawline\fermion[\S\REG](\pbackx,\pbacky)[\Xone]
\drawline\fermion[\N\REG](0,0)[\Xone]
\drawline\fermion[\E\REG](15000,0)[10000] \global\advance\pmidx by
-400 \put(\pmidx,\Yone){b}
\drawline\fermion[\N\REG](\pbackx,\pbacky)[\Xone]
\drawline\fermion[\W\REG](\pbackx,\pbacky)[10000]
\drawline\fermion[\S\REG](\pbackx,\pbacky)[\Xone]
\global\Xtwo=\pmidx \global\Ytwo=\pfronty \startphantom
\drawline\gluon[\NE\FLIPPED](\pmidx,\pmidy)[3] \stopphantom
\global\Ythree=\gluonlengthy \global\negate\Ythree
\global\advance\Ytwo by \Ythree
\drawline\gluon[\NE\FLIPPED](\Xtwo,\Ytwo)[3]
\startphantom \drawloop\gluon[\N 5](30000,0) \stopphantom
\global\Xfive=\loopfrontx \global\negate\Xfive
\global\advance\Xfive by \loopbackx \global\advance\Xfive by
-10000 \global\divide\Xfive by 2
\drawline\fermion[\E\REG](30000,0)[10000] \global\advance\pmidx by
-400 \put(\pmidx,\Yone){c}
\drawline\fermion[\N\REG](\pbackx,\pbacky)[\Xone]
\drawline\fermion[\W\REG](\pbackx,\pbacky)[10000]
\global\advance\pfrontx by \Xfive \drawloop\gluon[\S
5](\pfrontx,\pfronty) \drawline\fermion[\N\REG](30000,0)[\Xone]
\global\Xfour=\Xone \global\multroothalf\Xfour
\drawline\fermion[\N\REG](5000,-10000)[\Xone]
\drawline\fermion[\NE\REG](5000,-10000)[\Xfour]
\drawline\fermion[\NW\REG](\pbackx,\pbacky)[\Xfour]
\drawline\gluon[\E\REG](\pfrontx,\pfronty)[5] \global\advance\Yone
by -10000 \global\advance\pmidx by -400 \put(\pmidx,\Yone){d}
\drawline\fermion[\SE\REG](\pbackx,\pbacky)[\Xfour]
\drawline\fermion[\N\REG](\pbackx,\pbacky)[\Xone]
\drawline\fermion[\SW\REG](\pbackx,\pbacky)[\Xfour]
\drawline\fermion[\NE\REG](35000,-10000)[\Xfour]
\drawline\fermion[\NW\REG](\pbackx,\pbacky)[\Xfour]
\drawline\fermion[\SW\REG](\pbackx,\pbacky)[\Xfour]
\drawline\fermion[\SE\REG](\pbackx,\pbacky)[\Xfour]
\drawline\gluon[\W\FLIPPED](\pfrontx,\pfronty)[5]
\global\advance\pmidx by -400 \put(\pmidx,\Yone){e}
\global\Xsix=\pbackx \global\advance\Xsix by -2500
\global\Xseven=\pbackx \global\advance\Xseven by -5000
\global\Ysix=\pbacky \global\advance\Ysix by 900
\global\Yseven=\pbacky
\curve(\pbackx,\pbacky,\Xsix,\Ysix,\Xseven,\Yseven)
\global\advance\Ysix by -1800
\curve(\pbackx,\pbacky,\Xsix,\Ysix,\Xseven,\Yseven)
\global\advance\Yone by -1500 \put(15800,\Yone){Figure 4}
\end{picture}
\end{center}
The graph (c) contains a vanishing two-point insertion (see
\cite{Ky}). The graphs (d) and (e) are proportional to the
contracted structure constants $f_{abb}$ and thus vanish unless the
gauge group contains a $U(1)$ factor \footnote{The hypermultiplet
in the $N=4$ multiplet has no electric charge, therefore a $U(1)$
gauge factor corresponds to a trivial free sector of the theory.
Note, nonetheless, that if the graphs (d) and (e) are to be
considered, they contain divergent $x$-space integrals.}. Thus, we
only have to deal with the topologies (a) and (b). We shall do the
calculation in some detail for the case (a), the other one being
very similar.
Here is a detailed drawing of the configurations having the
topology of graph (a):
\vskip 0.5 in
\begin{center}
\begin{picture}(38000,10000)(0,-4000)
\startphantom \drawline\gluon[\E\CENTRAL](0,0)[14] \stopphantom
\global\Xone=\gluonlengthx \global\Xeight=\Xone
\global\divide\Xeight by 4 \drawline\fermion[\E\REG](0,0)[\Xone]
\drawarrow[\W\ATTIP](\Xeight,0) \global\multiply\Xeight by 3
\drawarrow[\W\ATTIP](\Xeight,0) \put(400,200){4}
\global\advance\pmidx by -400 \global\Ytwo=-1500
\put(\pmidx,\Ytwo){$I_1$} \global\advance\pmidx by 800
\put(\pmidx,200){6} \global\advance\pmidx by -400
\global\advance\pbackx by 400 \put(\pbackx,200){3}
\drawline\gluon[\N\CENTRAL](\pmidx,\pmidy)[8]
\global\advance\pmidx by -300 \global\advance\pmidy by 270
\global\Yone=\gluonlengthy \drawarrow[\E\ATTIP](\Xeight,\Yone)
\global\divide\Xeight by 3 \drawarrow[\E\ATTIP](\Xeight,\Yone)
\global\advance\Yone by 200 \global\advance\pbackx by 400
\put(\pbackx,\Yone){5} \global\divide\Xone by 2
\drawline\fermion[\W\REG](\gluonbackx,\gluonbacky)[\Xone]
\global\advance\pbackx by 400 \put(\pbackx,\Yone){1}
\drawline\fermion[\E\REG](\gluonbackx,\gluonbacky)[\Xone]
\global\advance\pbackx by 400 \put(\pbackx,\Yone){2}
\global\advance\Yone by -200
\drawline\fermion[\S\REG](\fermionbackx,\pbacky)[\Yone]
\drawarrow[\N\ATTIP](\pmidx,\pmidy)
\drawline\fermion[\N\REG](0,0)[\Yone]
\drawarrow[\S\ATTIP](\pmidx,\pmidy)
\global\multiply\Xone by 2
\drawline\fermion[\E\REG](22840,0)[\Xone]
\drawarrow[\W\ATTIP](\pmidx,\pmidy)
\drawarrow[\E\ATTIP](\pmidx,\Yone) \put(22900,200){4}
\global\advance\pmidx by -400 \put(\pmidx,\Ytwo){$I_2$}
\global\advance\pbackx by 400 \put(\pbackx,200){3}
\drawline\fermion[\N\REG](\fermionbackx,\pbacky)[\Yone]
\global\Yeight=\Yone \global\divide\Yeight by 4
\drawarrow[\S\ATTIP](22840,\Yeight)
\drawarrow[\N\ATTIP](\pbackx,\Yeight) \global\multiply\Yeight by 3
\drawarrow[\S\ATTIP](22840,\Yeight)
\drawarrow[\N\ATTIP](\pbackx,\Yeight) \global\advance\pmidx by 400
\global\advance\pmidy by 200 \put(\pmidx,\pmidy){5}
\global\advance\pbackx by 400 \global\advance\pbacky by 200
\put(\pbackx,\pbacky){2}
\drawline\fermion[\W\REG](\fermionbackx,\fermionbacky)[\Xone]
\global\advance\pbackx by 400 \global\advance\pbacky by 200
\put(\pbackx,\pbacky){1}
\drawline\fermion[\S\REG](\fermionbackx,\fermionbacky)[\Yone]
\drawline\gluon[\E\CENTRAL](\pmidx,\pmidy)[14]
\global\advance\pmidx by 260 \global\advance\pmidy by 200
\global\advance\pfrontx by -800 \global\advance\pfronty by 200
\put(\pfrontx,\pfronty){6}
\put(15800,-3000){Figure 5}
\end{picture}
\end{center}
The expression corresponding to the first of them is (up to a
factor containing the 't Hooft parameter $g^2N_c$):
\begin{equation}
I_1 = -\Pi_{14}\Pi_{32}\int d^4x_5 d^4x_6 du_5du_6 d^4\theta^+_5
d^4\theta^+_6 \Pi_{15}\Pi_{52}\Pi_{36}\Pi_{64} (D^+_6)^4
\left({\delta_{56}\over x^2_{56}}\right) \delta^{(2,-2)}(u_5,u_6)
\;. \nonumber \end{equation}\noindent
The technique we shall use to evaluate this graph is similar to
the usual $D$-algebra method employed in $N=1$ supergraph
calculations. First, since the propagators $\Pi$ are G-analytic,
we can use the four spinor derivatives $(D^+_6)^4$ to restore the
full Grassmann vertex $d^8\theta_6$ (see (\ref{fullmeas})). Then
we make use of the Grassmann and harmonic delta functions
$\delta_{56}\delta^{(2,-2)}(u_5,u_6)$ to do the integrals $\int
du_6 d^8\theta_6$. The next step is to pull the projector
$(D^+_1)^4$ from the propagator $\Pi_{15}$ out of the integrals
(it does not contain any integration variables and it only acts on
the first propagator). After that the remaining projector
$(D^+_5)^4$ from $\Pi_{15}$ can be used to restore the Grassmann
vertex $d^8\theta_5$ (everything else under the integral is
$D^+_5$-analytic). In this way we can free the Grassmann delta
function $\delta_{15}$ and then do the integral $\int
d^8\theta_5$. The resulting expression is (up to a numerical
factor):
\begin{eqnarray}
I_1 &=& -\Pi_{14}\Pi_{32}(D^+_1)^4\int {d^4x_5 d^4x_6 du_5\over (15)^3 x^2_{15}
x^2_{56}}\times \nonumber\\
&\times& {(D^+_5)^4(D^+_2)^4\over (52)^3}\left({\delta_{12}\over
x^2_{52}}\right) {(D^+_3)^4(D^+_5)^4\over
(35)^3}\left({\delta_{31}\over x^2_{36}}\right)
{(D^+_5)^4(D^+_4)^4\over (54)^3}\left({\delta_{14}\over
x^2_{64}}\right) \;.\nonumber
\end{eqnarray}
Here all $D^+$'s contain the same $\theta=\theta_1$ but different
$u$'s, as indicated by their index. Next we distribute the four
spinor derivatives $(D^+_1)^4$ over the three propagators and use
the identities (\ref{relations}) (remember that we are only
interested in the leading term of the correlator, therefore we set
all $\theta$'s$=0$). The result is:
\begin{eqnarray}
I_1(\theta=0) &=& -{(14)(23)\over x^2_{23}x^2_{14}}\int du_5
\times \nonumber\\ &\times& \left[ { 4i\pi^2\over x^2_{34}}
{(14)^2(52)(53)\over (51)(54)} g_3 + {4i\pi^2\over x^2_{34}}
{(13)^2(52)(54)\over (51)(53)} g_4 + {4i\pi^2\over x^2_{12}}
{(12)^2(53)(54)\over (51)(52)} g_1 \right.\nonumber\\ &+&
\left.(13)(14) {(52)\over (51)} 2\partial_3\cdot\partial_4f +
(12)(14) {(53)\over (51)} 2\partial_2\cdot\partial_4f +(12)(13)
{(54)\over (51)} 2\partial_2\cdot\partial_3f \right] \;. \nonumber
\end{eqnarray}
Here
\begin{equation}\label{2loop} f(x_1,x_2,x_3,x_4) = \int{d^4x_5 d^4x_6\over
x^2_{15}x^2_{25}x^2_{36}x^2_{46}x^2_{56}} \end{equation}\noindent
and, e.g.,
\begin{equation}\label{1loop} g_1(x_2,x_3,x_4) = {x^2_{12}\over 4i\pi^2}
\Box_2f(x_1,x_2,x_3,x_4) = \int{d^4x_5\over
x^2_{25}x^2_{35}x^2_{45}} \end{equation}\noindent
are two- and one-loop space-time integrals. The last step is to
compute the harmonic integral. The way this is done is explained
by the following example:
\begin{equation}
\int du_5 {(52)\over (51)} = \int du_5 {D^{++}_5(5^-2)\over (51)}
= - \int du_5 (5^-2) D^{++}_5 {1\over (51)} = - \int du_5
(5^-2)\delta^{(1,-1)}(5,1) = -(1^-2)\;, \nonumber \end{equation}\noindent
where $(1^-2) \equiv u^{-i}_1u^+_{2i}$ and eq. (\ref{hdistr}) has
been used. Other useful identities needed to simplify the result
are, e.g.:
\begin{equation}\label{cyclid}
(1^-2)(13) = -(23) + (1^-3)(12)
\end{equation}\noindent
(based on the cyclic property of the $SU(2)$ traces and on the
defining property $(11^-) = 1$, see (\ref{harco})) and
\begin{equation}
(\partial_1+\partial_2)^2 f = (\partial_3+\partial_4)^2 f \quad
\Rightarrow \quad 2\partial_1\cdot\partial_2 f =
2\partial_3\cdot\partial_4 f - {4i\pi^2\over x^2_{12}}(g_1+g_2) +
{4i\pi^2\over x^2_{34}}(g_3+g_4) \end{equation}\noindent
(based on the translational invariance of $f$). So, the end result
for the first graph in Figure 5 is:
\begin{eqnarray}
I_1(\theta=0) &=& {(14)(23)\over x^2_{14} x^2_{23}}\left\{
(21^-)(13)(14) {4i\pi^2 g_2\over x^2_{12}} + (12^-)(23)(24)
{4i\pi^2 g_1\over x^2_{12}} \right. \nonumber\\ && -
(23^-)(13)(34) {4i\pi^2 g_4\over x^2_{34}} + (24^-)(14)(34)
{4i\pi^2 g_3\over x^2_{34}}\label{interm}\\ && +
\left.(14)(23)2\partial_1\cdot\partial_2 f(1,2,3,4) -
(12)(34)\left[2\partial_2\cdot\partial_3 f(1,2,3,4) + {4i\pi^2
g_1\over x^2_{12}}\right]\right\}\;. \nonumber \end{eqnarray}
The second graph in Figure 5 is obtained by exchanging points 1
and 3:
\begin{equation}
I_2 = I_1(3,2,1,4)\;. \end{equation}\noindent
An important remark concerning these intermediate results is that
they do not satisfy the harmonic analyticity condition
(\ref{emhyp}), as one would expect from the properties of free
on-shell hypermultiplets. Indeed, several terms in (\ref{interm})
contain negatively charged harmonics which are not annihilated by
$D^{++}$. As we shall see below, this important property of
harmonic analyticity is only achieved after summing up all the
relevant two-loop graphs. So, let us move on to the topology (b)
in Figure 4. There are four such graphs shown in Figure 6:
\vskip 0.5 in
\begin{center}
\begin{picture}(38000,8000)(0,-4000)
\drawline\fermion[\E\REG](0,0)[6000] \global\advance\pfrontx by
400 \global\advance\pfronty by 200 \put(\pfrontx,\pfronty){4}
\global\advance\pmidx by -400 \global\Yone=-1500
\put(\pmidx,\Yone){$J_1$}
\drawline\fermion[\N\REG](\pbackx,\pbacky)[6000]
\global\advance\pfrontx by 400 \global\advance\pfronty by 200
\put(\pfrontx,\pfronty){3}
\drawline\fermion[\W\REG](\pbackx,\pbacky)[6000]
\global\advance\pfrontx by 400 \global\advance\pfronty by 200
\put(\pfrontx,\pfronty){2}
\drawline\fermion[\S\REG](\pbackx,\pbacky)[6000]
\global\advance\pfrontx by 400 \global\advance\pfronty by 200
\put(\pfrontx,\pfronty){1} \global\Xtwo=\pmidx
\global\advance\pfronty by -200 \global\Ytwo=\pfronty
\startphantom \drawline\gluon[\NE\FLIPPED](\pmidx,\pmidy)[3]
\stopphantom \global\Ythree=\gluonlengthy \global\negate\Ythree
\global\advance\Ytwo by \Ythree
\drawline\gluon[\NE\FLIPPED](\Xtwo,\Ytwo)[3]
\drawline\fermion[\E\REG](10000,0)[6000] \global\advance\pfrontx
by 400 \global\advance\pfronty by 200 \put(\pfrontx,\pfronty){4}
\global\advance\pmidx by -400 \put(\pmidx,\Yone){$J_2$}
\drawline\fermion[\N\REG](\pbackx,\pbacky)[6000]
\global\advance\pfrontx by 400 \global\advance\pfronty by 200
\put(\pfrontx,\pfronty){3}
\drawline\fermion[\W\REG](\pbackx,\pbacky)[6000]
\global\advance\pfrontx by 400 \global\advance\pfronty by 200
\put(\pfrontx,\pfronty){2} \global\advance\pfrontx by -400
\global\Xtwo=\pfrontx \global\Ytwo=\pmidy \startphantom
\drawline\gluon[\SE\FLIPPED](\pmidx,\pmidy)[3] \stopphantom
\global\Xthree=\gluonlengthx \global\negate\Xthree
\global\advance\Xtwo by \Xthree
\drawline\gluon[\SE\FLIPPED](\Xtwo,\Ytwo)[3]
\drawline\fermion[\S\REG](\fermionbackx,\fermionbacky)[6000]
\global\advance\pfrontx by 400 \global\advance\pfronty by 200
\put(\pfrontx,\pfronty){1}
\drawline\fermion[\E\REG](20000,0)[6000] \global\advance\pfrontx
by 400 \global\advance\pfronty by 200 \put(\pfrontx,\pfronty){4}
\global\advance\pmidx by -400 \put(\pmidx,\Yone){$J_3$}
\drawline\fermion[\N\REG](\pbackx,\pbacky)[6000]
\global\advance\pfrontx by 400 \global\advance\pfronty by 200
\put(\pfrontx,\pfronty){3} \global\Xtwo=\pmidx
\global\advance\pfronty by -200 \global\Ytwo=\pfronty
\startphantom \drawline\gluon[\SW\FLIPPED](\pmidx,\pmidy)[3]
\stopphantom \global\Ythree=\gluonlengthy \global\negate\Ythree
\global\advance\Ytwo by \Ythree
\drawline\gluon[\SW\FLIPPED](\Xtwo,\Ytwo)[3]
\drawline\fermion[\W\REG](\fermionbackx,\fermionbacky)[6000]
\global\advance\pfrontx by 400 \global\advance\pfronty by 200
\put(\pfrontx,\pfronty){2}
\drawline\fermion[\S\REG](\pbackx,\pbacky)[6000]
\global\advance\pfrontx by 400 \global\advance\pfronty by 200
\put(\pfrontx,\pfronty){1}
\drawline\fermion[\E\REG](30000,0)[6000] \global\advance\pfrontx
by 400 \global\advance\pfronty by 200 \put(\pfrontx,\pfronty){4}
\global\advance\pmidx by -400 \put(\pmidx,\Yone){$J_4$}
\global\advance\pfrontx by -400 \global\Xtwo=\pfrontx
\global\Ytwo=\pmidy \startphantom
\drawline\gluon[\NW\FLIPPED](\pmidx,\pmidy)[3] \stopphantom
\global\Xthree=\gluonlengthx \global\negate\Xthree
\global\advance\Xtwo by \Xthree
\drawline\gluon[\NW\FLIPPED](\Xtwo,\Ytwo)[3]
\drawline\fermion[\N\REG](\fermionbackx,\fermionbacky)[6000]
\global\advance\pfrontx by 400 \global\advance\pfronty by 200
\put(\pfrontx,\pfronty){3}
\drawline\fermion[\W\REG](\pbackx,\pbacky)[6000]
\global\advance\pfrontx by 400 \global\advance\pfronty by 200
\put(\pfrontx,\pfronty){2}
\drawline\fermion[\S\REG](\pbackx,\pbacky)[6000]
\global\advance\pfrontx by 400 \global\advance\pfronty by 200
\put(\pfrontx,\pfronty){1}
\put(15800,-3000){Figure 6}
\end{picture}
\end{center}
The calculation is very similar to the one above, so we just give
the end result:
\begin{equation}
J_1(\theta=0) = - {(23)(34)\over x^2_{23}x^2_{34}}\left[
(42^-)(12)^2 {4i\pi^2 g_3\over x^2_{12}} + (24^-)(14)^2 {4i\pi^2
g_3\over x^2_{14}} + (12)(14) 2\partial_2\cdot\partial_4
f(1,2,1,4)\right]\;.\nonumber \end{equation}\noindent
Notice the appearance of the two-loop integral $f$ (\ref{2loop})
with two points identified, $x_1=x_3$. The other three graphs
$J_{2,3,4}$ in Figure 6 are obtained by cyclic permutation.
Finally, putting everything together, using cyclic harmonic
identities of the type (\ref{cyclid}) as well as the identity (see
the Appendix)
\begin{equation}
\Box_1f(1,2,1,3) = {x^2_{23}\over x^2_{12}x^2_{13}}4i\pi^2 g_4\;,
\end{equation}\noindent
we arrive at the following final result:
\begin{eqnarray}\label{answer}
&&\langle \widetilde q^+\widetilde q^+\vert q^+q^+\vert
\widetilde q^+\widetilde q^+\vert
q^+q^+\rangle \nonumber \\
&&\qquad = I_1 + I_2 + J_1 + J_2 + J_3 + J_4 \nonumber \\
&&\qquad = \left[ (14)^2(23)^2 A_1 + (12)^2(34)^2 A_2 +
(12)(23)(34)(41) A_3\right]\;,
\end{eqnarray}
where
\begin{equation}\label{A12} A_1={(\partial_1+\partial_2)^2 f(1,2,3,4)\over
x^2_{14}x^2_{23}}\;, \qquad A_2 ={(\partial_1+\partial_4)^2
f(1,4,2,3)\over x^2_{12}x^2_{34}}\;, \end{equation}\noindent
$$
A_3 = 4i\pi^2 {(x^2_{24} - x^2_{12} -x^2_{14}) g_3 + (x^2_{13} -
x^2_{12} -x^2_{23}) g_4 + (x^2_{24} - x^2_{23} -x^2_{34}) g_1 +
(x^2_{13} - x^2_{14} -x^2_{34}) g_2 \over
x^2_{12}x^2_{14}x^2_{23}x^2_{34}}
$$
\begin{equation}\label{A3}
+ {(\partial_2+\partial_3)^2 f(1,2,3,4)\over x^2_{14}x^2_{23}} +
{(\partial_1+\partial_2)^2 f(1,4,2,3)\over x^2_{12}x^2_{34}}\;.
\end{equation}\noindent
As pointed out earlier, this result is manifestly harmonic
analytic (there are only positively charged harmonics in
(\ref{answer})). It is also easy to see that the correlator is
symmetric under the permutations $1\leftrightarrow 3$ or
$2\leftrightarrow 4$ corresponding to exchanging the two
$\widetilde q^+\widetilde q^+$ or $q^+q^+$ vertices.
Finally, we turn to the two other correlators (\ref{3corr}) and
(\ref{2corr}). The difference in the graph structure is the change
of flow along some of the hypermultiplet lines. By examining the
graphs in Figures 5 and 6 it is easy to see that this amounts to
an overall change of sign in the case (\ref{3corr}) (due to an odd
number of reversals of $q^+$ propagators and SYM-to-matter
vertices) or to no change at all in the case (\ref{2corr}) (an
even number of reversals). Then one has to take into account the
different symmetry of the new configurations which means that the
three harmonic structures in (\ref{answer}) have to be symmetrised
accordingly. This is done with the help of cyclic identities like
\begin{equation}\label{cycid}
(14)(23) \ {\stackrel{3\leftrightarrow 4}{\rightarrow}}\ (13)(24) = (12)(34)
+ (14)(23)\;.
\end{equation}
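Harmonic identities such as (\ref{cyclid}) and (\ref{cycid}) are
Schouten identities for two-component objects and can be spot-checked
numerically. A small sketch (with our own convention
$\epsilon_{12}=+1$ for the bracket, and random complex harmonics
normalised so that $(11^-)=1$):

```python
import numpy as np

rng = np.random.default_rng(1)

def br(x, y):
    # (xy) = x^i eps_{ij} y^j with eps_{12} = +1
    return x[0] * y[1] - x[1] * y[0]

def rand2():
    return rng.normal(size=2) + 1j * rng.normal(size=2)

u1p, u1m, u2p, u3p, u4p = rand2(), rand2(), rand2(), rand2(), rand2()
u1m = u1m / br(u1p, u1m)          # enforce the normalisation (1 1^-) = 1

# eq. (cyclid): (1^-2)(13) = -(23) + (1^-3)(12)
lhs1 = br(u1m, u2p) * br(u1p, u3p)
rhs1 = -br(u2p, u3p) + br(u1m, u3p) * br(u1p, u2p)

# eq. (cycid): (13)(24) = (12)(34) + (14)(23)
lhs2 = br(u1p, u3p) * br(u2p, u4p)
rhs2 = br(u1p, u2p) * br(u3p, u4p) + br(u1p, u4p) * br(u2p, u3p)
```

Both identities hold to machine precision for arbitrary harmonics,
which is all the symmetrisation procedure above relies on.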
\section{Discussion of the results and asymptotic behaviour}
An interesting technical feature of our calculation is that the
space-time integrals resulting from the Grassmann and harmonic
integrations can be written in terms of a second-order
differential operator acting on the basic scalar two-loop
integral. Ordinarily, a gauge theory calculation of this type would
produce a set of tensor integrals, which one would first need to
reduce algebraically to scalar integrals, using algorithms such as
those of \cite{tarasov}. The reason for this unusual property can be
traced back to an alternative form for the $q^+$ propagator which
is obtained as follows. Given two points in $x$ space, $x_{1}$ and
$x_2$, one can define the supersymmetry-invariant difference
\begin{equation}\label{6.5}
\hat x_{12} =x_{12} + {2i\over (12)} \left[(1^-2) \theta^+_1 \bar\theta^+_1 -
(12^-) \theta^+_2 \bar\theta^+_2 + \theta^+_1 \bar\theta^+_2 + \theta^+_2
\bar\theta^+_1\right] \ .
\end{equation}
Note the manifest G-analyticity of this expression with respect to
both arguments. Now, with the help of (\ref{6.5}) one can rewrite
the propagator (\ref{qprop}) in the equivalent form
\begin{equation}\label{6.12}
\langle \widetilde q^{+}(1) q^+(2)\rangle =
{(12)\over \hat x^2_{12}}\ .
\end{equation}
One sees that the whole $\theta$ dependence of the propagator is
concentrated in the shift (\ref{6.5}). Thus, doing the $\theta$
integrals in the above graph calculation effectively amounts to
taking a few terms of the Taylor expansion of the scalar
propagators. This explains the general structure of the resulting
space-time integrals.
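Schematically (a sketch of the mechanism just described, with
$\delta x_{12}\equiv \hat x_{12}-x_{12} = O(\theta^+\bar\theta^+)$ the
nilpotent shift from (\ref{6.5})):
\begin{equation}
{(12)\over \hat x^2_{12}} = {(12)\over x^2_{12}} + (12)\,\delta
x^{\alpha\dot\alpha}_{12}\, {\partial\over\partial
x^{\alpha\dot\alpha}_{12}}\, {1\over x^2_{12}} + \ldots\;,
\end{equation}\noindent
so each Grassmann integration picks out a fixed term of this finite
series, i.e. a derivative of the scalar propagator, rather than an
independent tensor integral.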
We now discuss the explicit space-time dependence of the
correlation functions. The basic integral we encounter is the
two-loop one (\ref{2loop}). In principle, it could be obtained by
Fourier transformation from the known result for the momentum
space double box (see eq. (\ref{D2}) in the Appendix).
Unfortunately, this appears to be very difficult in practice. It is
more useful to note that, rewritten as a momentum-space diagram, the
same integral is identical to the ``diagonal box'' diagram shown
in Figure 7.
\begin{center}
\begin{picture}(38000,13300)(-4000,-6000)
\put (17000,3500) {\line(0,-1){7000}} \put (17000,0)
{\circle{10000}} \put (15000,0) {\line(-1,0){1000}} \put
(19000,0) {\line(1,0){1000}} \put (16500,4500) {$x_{41}$} \put
(16500,-4500) {$x_{23}$} \put (13100,-1200) {$x_{12}$} \put
(20000,-1200) {$x_{34}$} \put (10000,-6400) {Figure 7: Diagonal
box diagram.}
\end{picture}
\end{center}
This diagram has not yet been calculated for the general off-shell
case. However, in the special case where either $x_{23}=0$ or
$x_{41}=0$ it is known to be expressible in terms of the function
$\Phi^{(2)}$ defined in eq. (\ref{Phiexplicit}) in the Appendix.
For example, for $x_{41}=0$ one has \cite{davuss2loop3point}
\begin{eqnarray} f(x_1,x_2,x_3,x_1) &=& {(i\pi^2)^2\over x_{23}^2} \Phi^{(2)}
\Bigl({x_{12}^2\over x_{23}^2}, {x_{13}^2\over x_{23}^2}\Bigr)\;.
\label{fspecial} \end{eqnarray}\noindent
In the same way, the one-loop integral $g$ can, by eq.
(\ref{C1=Phi1}), be expressed in terms of another function
$\Phi^{(1)}$ defined in eq. (\ref{Phi1explicit}) in the Appendix:
\begin{eqnarray} g(x_1,x_2,x_3) &\equiv& \int {dx_4 \over
x_{14}^2x_{24}^2x_{34}^2} = - {i\pi^2\over x_{12}^2} \Phi^{(1)}
\Bigl( {x_{23}^2\over x_{12}^2} , {x_{31}^2\over x_{12}^2}
\Bigr)\;. \label{calcg3} \end{eqnarray}\noindent
A further explicit function can be found for a particular
combination of derivatives acting on the two-loop integral
$f$. Exploiting the translation invariance of the
$x_5$-subintegral, we can manipulate the integral as follows,
e.g.:
\begin{eqnarray} (\partial_1 +\partial_2)^2 f(1,2,3,4) &=& \int {dx_6 \over
x_{36}^2 x_{46}^2 } (\partial_1 +\partial_2)^2 \int {dx_5 \over
x_{15}^2 x_{25}^2 x_{56}^2 } \nonumber\\ &=& \int {dx_6 \over x_{36}^2
x_{46}^2 }
\partial_6^2
\int {dx_5 \over x_{15}^2 x_{25}^2 x_{56}^2 } \nonumber\\ &=& 4i\pi^2
\int {dx_6 \over x_{36}^2 x_{46}^2 } \int {dx_5 \over x_{15}^2
x_{25}^2 } \delta(x_{56}) \nonumber\\ &=&
4i\pi^2
\int {dx_5 \over
x_{15}^2x_{25}^2x_{35}^2x_{45}^2 }\;. \label{calcA1} \end{eqnarray}\noindent
This 4-point one-loop function is given by
\begin{eqnarray} h(x_1,x_2,x_3,x_4) &\equiv& \int {dx_5 \over
x_{15}^2x_{25}^2x_{35}^2x_{45}^2 }
=
-
{i\pi^2\over x_{13}^2x_{24}^2} \Phi^{(1)} \Bigl(
{x_{12}^2x_{34}^2\over x_{13}^2x_{24}^2} , {x_{23}^2x_{41}^2\over
x_{13}^2x_{24}^2} \Bigr)\;. \label{calcg4} \end{eqnarray}\noindent
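The delta-function step in (\ref{calcA1}) relies on $1/x^2$ being a Green
function of the four-dimensional Laplacian. As an illustrative numerical
sketch (in Euclidean signature, where $\partial^2 (1/x^2)=-4\pi^2\,\delta^4(x)$;
the factor $4i\pi^2$ in the text reflects the Minkowski conventions of this
paper), one can check both that $1/x^2$ is harmonic away from the origin and
that the Gauss-law flux reproduces the coefficient $-4\pi^2$:

```python
import math

def laplacian4(f, p, h=1e-4):
    """Central finite-difference Laplacian of f at the 4D point p."""
    total = 0.0
    for i in range(4):
        q_plus = list(p); q_plus[i] += h
        q_minus = list(p); q_minus[i] -= h
        total += (f(q_plus) - 2.0 * f(p) + f(q_minus)) / h**2
    return total

inv_x2 = lambda p: 1.0 / sum(c * c for c in p)

# 1/x^2 is harmonic away from the origin in four dimensions
assert abs(laplacian4(inv_x2, [0.3, -0.5, 0.7, 0.2])) < 1e-3

# Gauss law: the flux of grad(1/r^2) through a 3-sphere of radius r is
# d/dr(1/r^2) times the sphere area 2*pi^2*r^3, independent of r
r = 1.7
flux = (-2.0 / r**3) * (2.0 * math.pi**2 * r**3)
assert abs(flux + 4.0 * math.pi**2) < 1e-9
```

The $r$-independent flux is precisely the coefficient of the delta function,
up to the signature-dependent factor of $i$.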
Unfortunately, no such trick exists for a combination of
derivatives such as $(\partial_1 +\partial_3)^2 f(1,2,3,4)$, and
we do not know the corresponding explicit function. This technical
problem prevents us from demonstrating the manifest conformal
invariance of the result. Indeed, while the integrals appearing
in the coefficients $A_1$ and $A_2$ (\ref{A12}) can be reduced to
the form (\ref{calcg4}), where the explicit dependence on the two
conformal cross-ratios is visible, the same is not obvious for the
third coefficient $A_3$ (\ref{A3}). The property
(\ref{fspecial}) indicates that $f$ itself does not have the form
of a function of the conformal cross-ratios times propagator factors.
On the other hand, without further information on $f$ we cannot
completely exclude the possibility that this particular
combination of derivatives of $f$ also reduces to conformally
invariant one-loop quantities, albeit in a less obvious way than
for the other coefficients.

The only qualitative information we can obtain about the
correlation functions concerns their asymptotic behaviour when two
points approach each other. We extract it in several steps. Firstly, eq.
(\ref{Phi1asympt}) gives us information on the coincidence limits
of $g$, e.g., for $x_3\to x_1$ one has
\begin{eqnarray} g(x_1,x_2,x_3) \quad {\stackrel{\scriptscriptstyle x_{31} \to 0}{\sim}}
\,\, {i\pi^2\over x_{12}^2} \ln x_{31}^2\; . \label{g3asympt}
\end{eqnarray}\noindent
Similarly, for the function $h$ we find:
\begin{eqnarray} h(x_1,x_2,x_3,x_4) \quad {\stackrel{\scriptscriptstyle x_{41} \to 0}{\sim}}
\,\, {i\pi^2\over x_{13}^2x_{21}^2} \ln x_{14}^2\;.
\label{g4asympt} \end{eqnarray}\noindent
This then allows us to determine the asymptotic behaviour of $A_1$
and $A_2$:
\begin{eqnarray} A_1 \quad {\stackrel{\scriptscriptstyle x_{41} \to 0}{\sim}} \,\,
-
4\pi^4 {\ln x_{14}^2 \over x_{14}^2x_{23}^2x_{13}^2x_{12}^2 }\;;
\label{A1asympt} \end{eqnarray}\noindent
\begin{eqnarray} A_2 &=& 4i\pi^2 \, { h(x_1,x_2,x_3,x_4) \over
x_{12}^2x_{34}^2 } \quad {\stackrel{\scriptscriptstyle x_{41} \to 0}{\sim}} \,\,
-4\pi^4 {\ln x_{14}^2 \over x_{13}^4 x_{12}^4 }\;.
\label{A2asympt} \end{eqnarray}\noindent
The case of $A_3$ requires more work, since the derivatives of $f$
appearing here cannot be used to get rid of one integration.
However, in the case of $(\partial_2 +\partial_3)^2f(1,2,3,4)$ the
limit $x_4\to x_1$ is finite. We can therefore take this limit
before differentiation. By a similar argument as above one can
show that
\begin{eqnarray} (\partial_2 +\partial_3)^2 f(1,2,3,1) &=& 4i\pi^2
{x_{23}^2\over x_{12}^2x_{13}^2} g_4 \label{A3trick} \end{eqnarray}\noindent (this
identity can also be derived using eq. (\ref{fspecial}) and
differentiating under the integral in eq. (\ref{C2})). Thus we
find
\begin{eqnarray} { (\partial_2 +\partial_3)^2f(1,2,3,4) \over
x_{14}^2x_{23}^2 } \quad {\stackrel{\scriptscriptstyle x_{41} \to 0}{\sim}} \,\,
4i\pi^2 {g_4\over x_{14}^2x_{12}^2x_{13}^2 } \label{A3easyterm}
\end{eqnarray}\noindent
This term is a pure pole term, without logarithmic corrections.
The same procedure does not work for the last term in $A_3$, since
here the limit is divergent. We evaluate this term by first
symmetrising and then differentiating under the integral:
\begin{eqnarray} (\partial_1+\partial_2)^2f(1,4,2,3) &=& {1\over 2} \Bigl[
(\partial_1+\partial_2)^2 + (\partial_3+\partial_4)^2 \Bigr]
f(1,4,2,3) \nonumber\\ &=& 2i\pi^2 \Bigl[ {g_4\over x_{14}^2} +
{g_3\over x_{23}^2} + {g_2\over x_{23}^2} + {g_1\over x_{14}^2}
\Bigr] \nonumber\\ &&\!\!\!\!\!\!\!\! + 4 \biggl\lbrace \int { dx_5dx_6
\,\,\, x_{15}\cdot x_{26} \over x_{45}^2 x_{36}^2 x_{56}^2
x_{15}^4 x_{26}^4 } + \int { dx_5dx_6 \,\,\, x_{36}\cdot x_{45}
\over x_{26}^2 x_{15}^2 x_{56}^2 x_{36}^4 x_{45}^4 }
\biggr\rbrace\;. \nonumber\\ \label{A3diff1} \end{eqnarray}\noindent\noindent
The remaining integrals are still singular in the limit $x_4\to
x_1$ but do not contribute to the $1/x_{14}^2$ pole.
After combining the terms involving $g_4$ and $g_1$ with eq.
(\ref{A3easyterm}) and the explicit $g_i$ terms appearing in
$A_3$, one finds that the leading $1/x_{14}^2$ pole
cancels out, leaving a subleading logarithmic singularity for
$A_3$.
\section{Conclusions}
We have seen that both the $A_1$ and $A_2$ terms appearing in the
four-point function calculations turn out to be reducible to
one-loop quantities by various manipulations; they can be
evaluated explicitly and are clearly conformally invariant.
However, we have not succeeded in reducing the $A_3$ term to a
one-loop form (although it is not ruled out that such a reduction
may be possible) and it is consequently more difficult to verify
conformal invariance for this term since the function
$f(x_1,x_2,x_3,x_4)$ is not known explicitly.

Even though $A_3$ is not known explicitly as a function, we have
seen that it is possible to evaluate its leading behaviour in the
coincidence limit $x_{14}\sim 0$. The three different tensor
structures in the correlation function have different
singularities with the strongest being the one with the known
coefficient $A_1$. This is given by $(x^2)^{-1}\ln x^2$. In
section 2 we have shown how one may calculate the leading term (in
a $\theta$ expansion) of the $N=4$ supercurrent four-point
function from $A_1$, $A_2$ and $A_3$, and in particular, that we
can calculate the leading term of the $N=2$ four-point function
involving two $W^2$ operators and two $\bar W^2$ operators. The
leading behaviour of the $\theta$-independent term of this
four-point function in the coincidence limit is therefore
determined by the leading behaviour of $A_1$. If this were to
remain true for the higher-order terms in the $\theta$-expansion
then, since $A_1$ is a known function of invariants, we could use
the argument of section 3 to compute the asymptotic behaviour of
four-point functions of $F^2$ and $F\tilde F$; this would put
us in a better position to make a comparison with the SG
computations when they are complete. This point is currently under
investigation.

It is interesting to note that some of the qualitative features we
have found here, such as one-loop box integrals and logarithmic
asymptotic behaviour, have also been found in instanton
calculations \cite{inst}.

In the case of three-point functions it is believed that the
corrections to the free term in perturbation theory cancel, at
least at leading order in $1/N_c$. This is certainly not the case
for four-point functions, and it is not clear precisely what
relation the perturbative results reported on here should have to
SG computations. It would be interesting to see what happens at
three loops, and whether one gets similar asymptotic behaviour in
the coincidence limit.

Finally, we note that the four-point functions computed here
exhibit harmonic analyticity even though the underlying fields,
the hypermultiplet and $N=2$ SYM gauge field in harmonic
superspace, are only Grassmann analytic. This is a more stringent
check of the analyticity postulate of the $N=4$ harmonic
superspace formalism than the previous three-point check and
obtaining a positive result on this point is encouraging.
\vspace{20pt} {\bf Acknowledgements:} We would like to thank A.
Davydychev for useful information concerning refs.
\cite{davussladder,davuss2loop3point}. This work was supported in
part by the EU network on Integrability, non-perturbative effects,
and symmetry in quantum field theory (FMRX-CT96-0012) and by the
British-French scientific programme Alliance (project 98074).
\vskip 10pt {\bf Note added:} In a recent e-print \cite{last}, a
special case of the amplitude considered here has been calculated
using $N=1$ superspace Feynman rules. Their result corresponds to
our term $A_2$. \vskip 10pt {\bf Note added in proof:} We would
like to emphasise that the three components of the amplitude
$A_1,A_2,A_3$ are {\sl not} trivially related to each other by the
$SU(4)$ invariance of the $N=4$ theory (see Section 2), as has
been erroneously assumed in the first reference in \cite{inst}.
\section{Introduction}
Recently, different authors
\cite{sten82,sten86,sten88,sten89,sten91,sab90,sab91,sab94,dg92,g92,dg94}
discovered an interesting multiparametric nonlinear
homogeneous modification of the Schr\"odinger equation in the coordinate
representation.
In the most general form this
{\it Stenflo--Sabatier--Doebner-Goldin\/} (SSDG) equation
can be written as
(we confine ourselves to the case of a free motion and assume $\hbar=m=1$)
\begin{equation}
i \frac{\partial\psi}{\partial t}=-\frac{1}{2}\nabla^{2}\psi+
\Omega\{\psi\}\psi \lb{1}
\end{equation}
where the local nonlinear functional $\Omega\{\psi\}$
is a linear combination of the terms
$\Delta\psi/\psi$, $(\nabla\psi/\psi)^2$, $|\nabla\psi/\psi|^2$,
and their complex-conjugate counterparts, so that $\Omega\{\psi\}$
satisfies the {\it homogeneity condition\/}
$\Omega\{\gamma\psi\}
=\Omega\{\psi\}$ for an arbitrary complex constant $\gamma$.
The specific choices of the
{\it complex\/} coefficients in the linear combination correspond to the
equations describing waves in plasmas with sharp boundaries and in
nonlinear media \cite{sten82,sten86,sten88,sten89,sten91}.

However, trying to interpret \rf{1} as a {\it quantum mechanical\/}
equation, one must worry about the conservation of probability.
For this reason, the functional $\Omega\{\psi\}$ was chosen in
\cite{sab90,sab91,sab94} in an explicit real form:
$\Omega\{\psi\}= \hat{\cal D} \ln|\psi|$,
where $\hat{\cal D}$ is the second order differential operator
$
\hat{\cal D}f = a\Delta f +{\bf b}\cdot\nabla f +c\nabla f\cdot\nabla f
$,
with real parameters $a,{\bf b},c$.
However, it was shown in \cite{dg92,g92} that the normalization
can be preserved even in the presence of
{\it imaginary\/} (antihermitian) nonlinear corrections of a special
kind. The most general parametrization was proposed in \cite{dg94},
where $\Omega\{\psi\}$ was written in terms of real and imaginary parts,
$\Omega\{\psi\}=R\{\psi\}+iI\{\psi\}$, as follows,
\begin{equation}
I\{\psi\}=\frac{1}{2} D\frac{\nabla^{2}(\psi^*\psi)}{\psi^*\psi}\,,
\lb{imag}
\end{equation}
\begin{equation}
R\{\psi\}= \tilde D\sum_{j=1}^5 \lambda_j\Lambda_j[\psi]=
\tilde D\sum_{j=1}^5 c_j R_j[\psi]\,,
\lb{paramdg}
\lb{real}
\end{equation}
where all the coefficients $\lambda_j$ and $c_j$ are real, and the functionals
$\Lambda_j[\psi]$ or $R_j[\psi]$ are expressed in terms of the derivatives of the
wave function or in terms of the probability density $\rho=\psi^*\psi$ and
the probability current
${\bf j}=\left(\psi^*\nabla\psi-\psi\nabla\psi^*\right)/2i$:
\begin{tabular}{ll}
$\Lambda_{1}[\psi]=\displaystyle{\mbox{Re}\left(\frac{\nabla^{2}\psi}{\psi}
\right)}$&\quad
$R_1[\psi] =\displaystyle{\frac{\nabla\cdot{\bf j}}{\rho}}$\\[3mm]
$\Lambda_{2}[\psi]=\displaystyle{\mbox{Im}\left(\frac{\nabla^{2}\psi}{\psi}
\right)}$&\quad
$R_2[\psi] =\displaystyle{\frac{\nabla^2\rho}{\rho}}$\\[3mm]
$\Lambda_{3}[\psi]=\displaystyle{\mbox{Re}\left(\frac{\nabla\psi}{\psi}
\right)^{2}}$&\quad
$R_3[\psi] =\displaystyle{\frac{{\bf j}^2}{\rho^2}}$\\[3mm]
$\Lambda_{4}[\psi]=\displaystyle{\mbox{Im}\left(\frac{\nabla\psi}{\psi}
\right)^{2}}$&\quad
$R_4[\psi] =\displaystyle{\frac{{\bf j}\cdot\nabla\rho}{\rho^2}}$\\[3mm]
$\Lambda_{5}[\psi]=\displaystyle{\frac{|\nabla\psi|^{2}}{|\psi|^{2}}}$&\quad
$R_5[\psi] =\displaystyle{\frac{(\nabla\rho)^2}{\rho^2}}$\\[3mm]
\end{tabular}
\noindent
The coefficients $\lambda_j$ and $c_j$ are related as follows,
\vspace{3mm}
\begin{tabular}{ll}
$\lambda_1=2c_2$ &\quad $c_1=\lambda_2$\\
$\lambda_2=c_1$ &\quad $c_2=\frac12\lambda_1$\\
$\lambda_3=2c_5-\frac12 c_3$ &\quad $c_3=\lambda_5-\lambda_1-\lambda_3$\\
$\lambda_4=c_4$ &\quad $c_4=\lambda_4$\\
$\lambda_5=2c_2+2c_5+\frac12 c_3$ &\quad $c_5=\frac14(\lambda_5+\lambda_3-\lambda_1)$
\end{tabular}
\vspace{3mm}
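The two parametrizations are equivalent. As a quick consistency check (an
illustrative sketch, not part of the original derivation), one can verify
numerically that composing the two linear maps of the table gives back the
identity:

```python
def lam_from_c(c1, c2, c3, c4, c5):
    """The lambda_j coefficients in terms of the c_j, as in the table."""
    return (2*c2, c1, 2*c5 - 0.5*c3, c4, 2*c2 + 2*c5 + 0.5*c3)

def c_from_lam(l1, l2, l3, l4, l5):
    """The c_j coefficients in terms of the lambda_j, as in the table."""
    return (l2, 0.5*l1, l5 - l1 - l3, l4, 0.25*(l5 + l3 - l1))

c = (0.7, -1.2, 0.5, 2.0, -0.3)   # arbitrary test values
round_trip = c_from_lam(*lam_from_c(*c))
assert all(abs(a - b) < 1e-12 for a, b in zip(c, round_trip))
```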
\noindent
More general homogeneous nonlinear functionals, which include as special cases
the nonlocal terms proposed by Gisin \cite{gis} and by
Weinberg \cite{wein}, were given in \cite{dm93,dm95,grig,dm98}.

It is not yet clear whether nonlinear corrections to the
Schr\"odinger equation of the SSDG type have a physical meaning from the
point of view of quantum mechanics (possible experiments which could
verify the existence of such corrections were proposed in \cite{dmpla,dmnew},
and the relations between the SSDG-equation and the master equation for
mixed quantum states were studied in \cite{dm95,dmnew,dmcla}).
Nonetheless, the mathematical structure of the new family of nonlinear
equations appears rather rich. In particular, the study of this
family recently led to the discovery of the nonlinear gauge
transformations \cite{g95,dg96,g97}.

The aim of our article is to show another remarkable property of the SSDG
equation, namely, the existence of a {\it new type of soliton solutions\/},
which are different from zero {\it in a finite space domain\/} even for
{\it arbitrarily small\/} nonlinear coefficients. As far as we know,
solitons of this kind have not been discussed before.
\section{Soliton solutions with linear phase}
Looking for a {\it shape invariant\/} solution to the SSDG equation
\rf{1}--\rf{real} with a {\it linear phase\/},
\begin{equation}
\psi({\bf x},t)=g({\bf x}-{\bf v}t)e^{i({\bf k}{\bf x}-\omega t)},
\lb{sol1}
\end{equation}
we obtain the following two equations for the {\it real\/} function
$g({\bf x})$:
\begin{equation}
({\bf k}-{\bf v})\frac{\nabla g}{g} =D\left[\frac{\nabla^2 g}{g} +
\left(\frac{\nabla g}{g}\right)^2\right]
\lb{eqim}
\end{equation}
\begin{equation}
(1-\sigma) \frac{\nabla^2 g}{g} -\xi \left(\frac{\nabla g}{g}\right)^2
-2\mu {\bf k} \cdot \frac{\nabla g}{g}
={\bf k}^2(1+\eta) -2\omega \,,
\lb{eqre}
\end{equation}
where the new coefficients are defined as
\[
\sigma=2\tilde{D}\lambda_1 \equiv 4\tilde{D}c_2,
\quad \xi=2\tilde{D}\left(\lambda_3+\lambda_5\right)
\equiv 4\tilde{D}\left(c_2+2c_5\right),
\]
\[
\eta =2\tilde{D}\left(\lambda_5 -\lambda_3-\lambda_1\right)\equiv 2\tilde{D}c_3\,,
\quad \mu= 2\tilde{D}\left(\lambda_2+\lambda_4\right)
\equiv 2\tilde{D}\left(c_1+c_4\right).
\]
A general solution to eq. \rf{eqim} in the one-dimensional case is
\[
g_D(x)=\left\{C_1 + C_2 \exp[(k-v)x/D]\right\}^{1/2},
\]
with arbitrary constants $C_1$ and $C_2$. However, the function
$g_D(x)$ is not normalizable, so in order to guarantee
normalizability we must impose
\[
{\bf k}={\bf v}, \quad D=0,
\]
i.e. soliton solutions can only exist in the absence of dissipative terms
in the Hamiltonian.
The substitution
\begin{equation}
g({\bf x})=[f({\bf x})]^{\alpha}, \quad
\alpha=\frac{1-\sigma}{1-\sigma-\xi} \, ,
\lb{alfa}
\end{equation}
eliminates the nonlinear term $(\nabla g/g)^2$ in eq. \rf{eqre}, such that
\begin{equation}
\nabla^2 f -2\mbox{\boldmath{$\kappa$}}\cdot \nabla f +\gamma^2 f=0 \; ,
\lb{lineq}
\end{equation}
where
\begin{equation}
\gamma^2 =\left[ 2\omega -{\bf k}^2(1+\eta)\right]
\frac{1-\sigma-\xi}{(1-\sigma)^2}\,,
\quad \mbox{\boldmath{$\kappa$}}=\frac{2\mu{\bf k}}{1-\sigma}\,.
\lb{defgam}
\end{equation}
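The effect of the substitution \rf{alfa} can be verified numerically (an
illustrative finite-difference sketch with arbitrarily chosen parameter
values): for $g=f^{\alpha}$ with $f''=-\gamma^2 f$, the nonlinear
$(\nabla g/g)^2$ term cancels and the left-hand side of \rf{eqre} (taken here
with $\mu=0$) reduces to the constant $-(1-\sigma)\alpha\gamma^2$, which by
\rf{defgam} equals ${\bf k}^2(1+\eta)-2\omega$:

```python
import math

sigma, xi, gam = 0.2, 0.3, 1.5           # illustrative parameter values
alpha = (1 - sigma) / (1 - sigma - xi)    # exponent of the substitution

def g(x):
    # g = f^alpha with f(x) = cos(gam*x), a solution of f'' = -gam^2 f
    return math.cos(gam * x) ** alpha

x0, h = 0.2, 1e-5
g0 = g(x0)
g1 = (g(x0 + h) - g(x0 - h)) / (2 * h)        # g'
g2 = (g(x0 + h) - 2 * g0 + g(x0 - h)) / h**2  # g''

lhs = (1 - sigma) * g2 / g0 - xi * (g1 / g0) ** 2
rhs = -(1 - sigma) * alpha * gam**2
assert abs(lhs - rhs) < 1e-3   # constant, independent of x0
```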
Note that $\gamma^2$ is a free parameter, which may assume both positive and
negative values, depending on the packet average energy
\[
\langle E\rangle \equiv i\int_{-\infty}^{\infty}\psi^{*}({\bf x},t)
\frac{\partial \psi({\bf x},t)}{\partial t}\,d{\bf x} =\omega.
\]
A general solution to eq. \rf{lineq} in the one-dimensional case reads
\begin{equation}
f(x)=e^{\kappa x}\left(C_1 e^{sx} +C_2 e^{-sx}\right),
\quad s=\sqrt{\kappa^2-\gamma^2}.
\lb{sols}
\end{equation}
In particular,
\noindent ({\em a})
For $\gamma^2<0$, function \rf{sols} goes to infinity when
$x\to\pm\infty$ (if both constants $C_1$ and $C_2$ are positive),
so a normalizable solution $g(x)$ (eq. \rf{alfa}) exists only under the
condition $\alpha<0$,
i.e., for parameters $\sigma$ and $\xi$ satisfying the inequalities
$\sigma<1,\; \xi>1-\sigma$ or
$\sigma>1,\; \xi<1-\sigma$ (in other words, these parameters must be
located between the straight lines
$\sigma=1$ and $\sigma+\xi=1$ in the $\sigma \xi$-plane).
This means that only
{\it strong nonlinearity\/} can give ``usual'' soliton solutions with
exponentially decreasing tails, whose simplest representative ($\mu =0$)
reads
\begin{equation}
g_*(x)= \left[ \cosh(\beta x)\right]^{-|\alpha|}.
\lb{sol2}
\end{equation}
This conclusion agrees with the results of studies \cite{sab90,sab91,sab94},
where exponentially confined solitons were found for the nonlinear
functionals like $\Omega\{\psi\}= a\Delta (\ln|\psi|)$.
Similar solutions to the special cases of the
SSDG equation with complex coefficients were studied in
\cite{sten88,sten89,sten91}.
A large family of exact solutions corresponding to the most general
Doebner--Goldin parametrization \rf{imag}-\rf{real} was found in
\cite{natt94,natt95,natt96}. However, that family
does not contain the solitons with a linear phase. For example,
the solution given in \cite{natt95} has the same amplitude factor as in
\rf{sol2}, but its phase is proportional to $\ln[g(x-kt)]$, so it does
not converge to the plane-wave solution of the linear Schr\"odinger
equation when the nonlinear coefficients $D$ and $\tilde{D}$ go to zero.
\noindent ({\em b}) For $ 0 < \gamma^2<\kappa^2$, function
\rf{sols} goes to infinity only for $x\to+\infty$, while for $x\to-\infty$
it goes to zero (or vice versa). In this case, we cannot obtain
a normalizable solution in the form \rf{alfa}
for any value of $\alpha$.
\noindent ({\em c}) A quite different situation arises when
$\gamma^2>\kappa^2$: expression \rf{sols} then contains
{\it trigonometric\/} functions, and (making a
shift of the origin, if necessary) we arrive at a solution to eq.
\rf{eqre} in the form
\begin{equation}
g_{\delta}(x)=\left[e^{\kappa x}\cos(\tilde\gamma x)\right]^{1+\delta}
\lb{FLS}
\end{equation}
with $\tilde\gamma=\sqrt{\gamma^2-\kappa^2}\ge 0$ and
\begin{equation}
\delta=\frac{\xi}{1-\sigma-\xi}\,.
\lb{defdelt}
\end{equation}
At first glance, we have a problem when $f<0$, since the function
$f^{\alpha}$ is ill-defined in this case (unless the exponent $\alpha$
is an integer). But we notice that if $\alpha>1$ (i.e. $\delta>0$),
then the function $g(x)=[f(x)]^{\alpha}$ vanishes
{\it together with its derivative\/} $g'(x)$ at $\tilde{\gamma} x = \pi /2$.
This means that there exists an
integrable solution with a {\it continuous first derivative\/},
which is {\it localized completely\/} inside a {\it finite\/} domain:
\[
\psi_{\delta}(x,t)= \left\{ \begin{array}{cl}
\left\{\cos\left[\tilde\gamma(x-kt)\right]\right\}^{1+\delta}
\exp\left[(1+\delta)\kappa(x-kt) + i(kx-\omega t)\right]
&\mbox{if} \quad |\tilde\gamma(x-kt)| < \pi/2\\[3mm]
0 & \mbox{if} \quad |\tilde\gamma(x-kt)|\ge \pi/2
\end{array} \right.
\]
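One can also check numerically that the matching at the boundary of the
support is indeed $C^1$ for $\delta>0$ (an illustrative sketch with
hypothetical parameter values): both the amplitude and its one-sided
derivative vanish as $|\tilde\gamma x|\to\pi/2$ from inside, so the piecewise
definition joins smoothly onto the zero solution:

```python
import math

delta, gtil = 0.5, 1.3   # hypothetical values with delta > 0

def amp(x):
    """Amplitude factor of the FLS solution in the co-moving frame."""
    u = gtil * x
    return math.cos(u) ** (1 + delta) if abs(u) < math.pi / 2 else 0.0

edge = math.pi / (2 * gtil)
h = 1e-8
assert amp(edge + h) == 0.0                  # identically zero outside
assert amp(edge - h) < 1e-10                 # amplitude vanishes at the edge
slope = (amp(edge + h) - amp(edge - h)) / (2 * h)
assert abs(slope) < 1e-3                     # ... and so does the derivative
```

For $\delta\le 0$ the derivative would instead blow up at the edge, which is
why the condition $\delta>0$ is essential.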
It is remarkable that such a ``finite-length soliton'' (FLS) exists for an
{\it arbitrarily weak\/} nonlinearity, since the requirement $\delta>0$ implies
the inequalities
\begin{equation}
0<\xi < 1-\sigma
\lb{conds}
\end{equation}
which can be satisfied for small values of $\xi$ and $\sigma$
(another possibility is $0>\xi > 1-\sigma$, but it requires $\sigma>1$,
i.e. a stronger nonlinearity).
In terms of the coefficients $\lambda_j$ and $c_j$ , condition \rf{conds} reads
\begin{eqnarray*}
& \tilde{D}\left(\lambda_3+\lambda_5\right)>0,&
\quad 2\tilde{D}\left(\lambda_1+\lambda_3+\lambda_5\right)<1\\
& \tilde{D}\left(c_2+2c_5\right)>0, & \quad 8\tilde{D}\left(c_2+c_5\right)<1 \; .
\end{eqnarray*}
It was shown in \cite{dg94} that the SSDG equation is Galilean invariant
provided that (i) $c_1+c_4=0$ and (ii) $c_3=0$. In our notation this
means $\mu=\eta=0$. Thus we arrive at the 3-parameter family of homogeneous
local nonlinear functionals admitting Galilean-invariant and spatially
confined soliton solutions to eq. \rf{1}:
\begin{eqnarray}
\Omega\{\psi\}&=&\frac12\left\{
\sigma\,\mbox{Re}\frac{\nabla^{2}\psi}{\psi}+
\nu\,\mbox{Im}\left[\nabla\cdot\left(\frac{\nabla\psi}{\psi}\right)\right]+
\xi\left[\mbox{Re}\frac{\nabla\psi}{\psi}\right]^2 +
\sigma\,\left[\mbox{Im}\frac{\nabla\psi}{\psi}\right]^2
\right\}
\lb{finlam}\\
&=&\frac18\left\{
2\sigma\,\frac{\nabla^{2}\rho}{\rho}+
4\nu\,\nabla\cdot\left(\frac{{\bf j}}{\rho}\right)+
(\xi-\sigma)\left(\frac{\nabla\rho}{\rho}\right)^{2}
\right\}.
\lb{finc}
\end{eqnarray}
Note that the parameter $\nu=2\tilde{D}\lambda_2=2\tilde{D}c_1$ has no
influence on the solutions discussed here; thus only derivatives of the
density $\rho$, and not of the current density ${\bf j}$, are important for
soliton solutions. The only crucial parameter is
$\xi$, so the simplest 1-parameter nonlinear functional admitting
an FLS solution reads ($\nu = \sigma =0$),
\begin{equation}
\Omega\{\psi\}=\frac{\xi}{2}\left[\mbox{Re}
\left(\frac{\nabla\psi}{\psi}\right)\right]^{2}=
\frac{\xi}{8}\left(\frac{\nabla\rho}{\rho}\right)^{2}, \quad 0<\xi<1\, .
\lb{simpl}
\end{equation}
The explicit form of all FLS-solutions is as follows,
\begin{equation}
\psi_{{\bf k}\gamma}({\bf x},t)=\left\{
\begin{array}{cl}
\left[f_{\gamma}({\bf x}-{\bf k}t)\right]^{1+\delta}
e^{i({\bf k}{\bf x}-\omega_{{\bf k}\gamma} t)} &
\mbox{if} \quad |{\bf x}-{\bf k}t|\in {\cal R}^{(+)}(f_{\gamma})\\[3mm]
0 & \mbox{if} \quad |{\bf x}-{\bf k}t|\notin {\cal R}^{(+)}(f_{\gamma})
\end{array}
\right.,
\lb{solFLS}
\end{equation}
where $f_{\gamma}({\bf x})$ is any {\it positive\/} solution to the
Helmholtz equation $\left(\nabla^2 +\gamma^2\right)f=0$ with an arbitrary
real constant $\gamma$, and ${\cal R}^{(+)}(f_{\gamma})$ is the internal
part of a space region bounded by a closed surface (in 3 dimensions) or a
closed curve (in 2 dimensions) determined by the equation
$f_{\gamma}({\bf x})=0$ (in principle, this region may be multi-connected).
To avoid any ambiguity, we define the nonlinear functional $\Omega\{\psi\}$
for $\psi=0$ by setting $\Omega\{\psi\}\psi=0$ at such points.
Although the solution \rf{solFLS} is non-analytic, it has continuous first
derivatives at all points of space.
Under the Galilean invariance symmetry ($\eta=0$), the frequency
$\omega_{{\bf k}\gamma} $ (eq. \rf{defgam}) equals
\begin{equation}
\omega_{{\bf k}\gamma}=\frac12{\bf k}^2 +
\frac12\gamma^2\frac{(1-\sigma)^2}{1-\sigma-\xi}\,, \lb{galinvfreq}
\end{equation}
so, the usual dispersion relation of linear Quantum Mechanics
($\omega_{{\bf k}}=\frac12 {\bf k}^2$) is modified
by an additional constant term (proportional to $\gamma^2$) that may be
interpreted as an ``internal energy'' of the wave packet \rf{solFLS}
due to its
confinement, whereas ${\bf k}^2/2$ is the energy of the ``center-of-mass''
motion. For $\gamma=0$ we have $\nabla^2 f_0 = 0$, and taking $f_{0}=1$
we recover the plane-wave solution of the linear Schr\"odinger equation
with wave vector ${\bf k}$.
The concrete shapes of the FLS-packets in 2 and 3 dimensions may be quite
diverse. The most symmetric solutions are given by eq. \rf{solFLS} with
$f_{\gamma}({\bf x})$ in the form of
$J_0(\gamma |{\bf x}|)$ or $\sin(\gamma|{\bf x}|)/|{\bf x}|$
(in 2 and 3 dimensions, respectively). However, there exist also asymmetric
packets with functions $f_{\gamma}({\bf x})$ proportional to
$J_m(\gamma |{\bf x}|)\cos(m\varphi)$ or
$j_l(\gamma |{\bf x}|)Y_{lm}(\vartheta,\varphi)$, where $J_m(x)$ is the
Bessel function, $j_l(x)$ is the spherical Bessel function (proportional to
the Bessel function of half-integer index), and
$Y_{lm}(\vartheta,\varphi)$ is a real-valued analog of the spherical harmonics,
$\vartheta,\varphi$ being the usual angular variables.
\section{Discussion}
Although the substitution \rf{alfa} was already used in \cite{sab94,dg94},
the existence of the FLS solutions was not noticed before, perhaps because
the authors of the cited papers were looking for solutions of the
stationary Schr\"odinger equation or for usual exponentially confined
solitons.
It should be noted that the substitution \rf{alfa} linearizes only the
equation \rf{eqre} for the {\it real\/} amplitude of the special form of
solution \rf{sol1}, but not eq. \rf{1} as a whole.
As was shown in \cite{g95,dg96}, the {\it nonlinear gauge transformation\/}
(NGT)
\begin{equation}
\psi \mapsto \psi' =|\psi|\exp\left[ i\left( z^* \ln\psi+ z \ln\psi^*
\right) \right]
\lb{gauge}
\end{equation}
(where $z$ is a complex parameter) transforms any SSDG equation into an
equation of the same kind, but with another set of coefficients, and all
equations can be classified in accordance with the possible values of 5
invariants of the NGT family. In the case of eq. \rf{1} with the
functional \rf{simpl} the invariants are as follows (we use the same notation
as in \cite{dg96}):
\[
\tau_1=\tau_4=0, \quad \tau_2=\frac18, \quad \tau_3=-1, \quad
\imath_5=-\,\frac{\xi}{16}
\,.
\]
If we had $\imath_5=0$, then the whole SSDG equation could be linearized
by means of a
suitable NGT. It is the nonzero value of parameter $\xi$ that prevents the
linearization and makes possible the existence of the FLS solutions.
It is interesting to note, in this connection, that various special cases
of the general SSDG equation were considered before its general structure
was established in \cite{sab94,dg94}, but the coefficients
were chosen in such a way that the parameter $\xi$ was almost always
equal to zero
\cite{guer,smol,vig}. The only exception is Ref. \cite{kib}, where the
only nonzero coefficient in the $\lambda_j$-parametrization is $\lambda_5$; however,
in this case not only $\xi\neq 0$ but also $\eta\neq 0$, so the
Galilean invariance symmetry is absent.

Turning to a possible physical meaning of the FLS solutions, we may say that
they realize the ``dream of De Broglie'', in the sense that they permit
one to identify a quantum particle with a nonspreading wave packet of finite
length travelling with constant velocity in free space.
Earlier, the only nonlinear equation known to yield a
nonspreading wave-packet solution for
a free particle was the one proposed by Bialynicki-Birula and
Mycielski \cite{bbm} (BBM), with the nonlinear term
$\Omega\{\psi\}=-b\ln|\psi|^2$ (it was shown recently that this term can
arise if one applies to the SSDG equation a nonlinear gauge transformation
with {\it time-dependent coefficients\/} \cite{dg96}).
The solitons of the BBM-equation are Gaussian wave packets ({\em gaussons})
whose constant width is inversely proportional to the nonlinear coefficient
$b$. In contrast to the {\em gaussons}, the FLS of the SSDG equation have
the width $\gamma^{-1}$ as a free parameter, independent of the nonlinear
coefficients. The most attractive feature of FLS solutions is that they
exist for an arbitrarily weak nonlinearity. Consequently, the
superposition principle of quantum mechanics, which is experimentally
verified only with limited accuracy, does not immediately rule out
nonlinear terms like \rf{simpl}. On the contrary, new
experiments on the verification of (non?)linearity of quantum mechanics
could be proposed, which would take into account the FLS phenomenon.
\subsection*{Acknowledgements}
This research was supported by FAPESP (Brazil) project 96/05437-0.
S.S.Mizrahi thanks CNPq and FINEP, Brasil, for partial financial support.
\newpage
\section{Introduction}
The study of groups of galaxies as dynamical systems is interesting not
only {\it per se}, but also because groups can be used to set constraints
on cosmological models (e.g. Frenk \etal 1990; Weinberg \& Cole 1992,
Zabludoff \etal 1993;
Zabludoff \& Geller 1994, Nolthenius \etal 1994, 1997) and on models of
galaxy formation (Frenk \etal 1996; Kaufmann \etal 1997). Groups are
also interesting sites where to look for interactions of galaxies with
their environment, in order to obtain information on galaxy evolution
processes (Postman \& Geller 1984, Allington-Smith \etal 1993).
Group catalogs identified in redshift space are increasing both in
number and size (CfA2N, RPG; SSRS2, Ramella \etal 1998;
Perseus-Pisces, Trasarti Battistoni 1998; LCRS, Tucker \etal 1997). At the same time, cosmological n-body simulations are
reaching the resolution required to allow replication of the
observational techniques for the identification of groups. In
particular, Frederic (1995a,b) uses n-body simulations to evaluate and
compare the performances of commonly used group finding algorithms.
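Most of the group finders compared in such studies are variants of the
friends-of-friends percolation method. The following toy sketch
(illustrative only: actual redshift-space finders use separate transverse
and line-of-sight linking lengths, scaled with the selection function,
whereas this version links points in a single metric space) shows the core
of the algorithm:

```python
def friends_of_friends(points, link_length):
    """Minimal friends-of-friends grouping: single-linkage clusters of
    points whose pairwise separation is below link_length."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            d = sum((a - b) ** 2 for a, b in zip(points[i], points[j])) ** 0.5
            if d < link_length:
                parent[find(i)] = find(j)

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return sorted(sorted(g) for g in groups.values())

pts = [(0.0, 0.0), (0.4, 0.1), (5.0, 5.0), (5.3, 5.0), (10.0, 10.0)]
assert friends_of_friends(pts, 1.0) == [[0, 1], [2, 3], [4]]
```

Two galaxies belong to the same group if they are connected by a chain of
``friends''; the linking length fixes the density contrast of the resulting
groups.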
Among the main properties of groups, the velocity dispersion
is of particular interest. It is easy to measure and it is
well suited for comparison with the predictions of cosmological n-body
models (Frenk \etal 1990; Moore \etal 1993; Zabludoff \etal 1993).
Distributions of velocity dispersions of nearby groups are now well
determined with rather small statistical uncertainties given the
large size of the samples. Ramella \etal 1995 and Ramella \etal 1996
survey the redshifts of candidate faint members of a
selection of nearby groups and
find that the velocity dispersion
of groups is stable against inclusion of
fainter members. In other words, the velocity dispersion estimated
on the basis of the fewer original bright members is a good indicator of the
velocity dispersion obtained from a better sampling of the same group.
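For orientation, the velocity dispersion referred to here is essentially the
standard deviation of the member radial velocities. A minimal sketch (with
invented $cz$ values; real estimates also correct for redshift measurement
errors and cosmological factors, and robust estimators are preferred for
small memberships):

```python
import math

def velocity_dispersion(cz):
    """Unbiased estimate of the line-of-sight velocity dispersion (km/s)."""
    n = len(cz)
    mean = sum(cz) / n
    return math.sqrt(sum((v - mean) ** 2 for v in cz) / (n - 1))

members = [9100.0, 9350.0, 9240.0, 9410.0, 9180.0]  # invented cz values, km/s
sigma_v = velocity_dispersion(members)
assert 100.0 < sigma_v < 200.0   # a typical group-scale dispersion
```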
In this paper we identify and analyze groups of galaxies in
the recently completed ESP survey (Vettolani \etal 1997).
The ESP group catalog is interesting because of
its depth ($b_J \le 19.4$)
and because it samples a new independent region of the universe.
ESP is a nearly bi-dimensional survey (the declination range is
much smaller than the right ascension range), five times deeper than either
CfA2 (Geller \& Huchra 1989) or SSRS2 (da Costa \etal 1994).
The volume of the survey is $\sim 5 \times 10^4$ ~$h^{-3}$ Mpc$^3$~ at
the sensitivity peak of the survey,
$z \sim 0.1$, and $\sim 1.9\times 10^5$
~$h^{-3}$ Mpc$^3$~ at the effective depth of the sample, $z \sim 0.16$.
Even though the volume of ESP is of the
same order of magnitude as the
volumes explored by the individual CfA2, SSRS2, and Perseus-Pisces samples,
it intercepts a larger number of structures. In fact, the strip geometry
is very efficient for the detection of large scale structures
within redshift surveys (de Lapparent \etal 1988).
In particular we determine the distribution of the velocity dispersions
of groups and show that our result is reliable in spite of the
particular geometry of the ESP survey (two rows of adjacent circular
fields of radius $\theta = 16$ arcmin, see Figure 1 of
Vettolani \etal 1997).
An important aspect that distinguishes the ESP group catalog from the
other shallower catalogs is that we have the spectra of all galaxies with
measured redshift.
It is already well known that emission line galaxies
are rarer in rich clusters than in the field (Biviano \etal 1997). The
relation between the fraction of emission line galaxies and the local
density is a manifestation of the morphology--density relationship
observed for clusters (Dressler 1980), a useful tool in the study of galaxy evolution.
With the ESP catalog we explore the extent of the morphology--density
relationship in the intermediate range of densities that are typical of
groups, at a larger depth than in previous studies.
We note that preliminary
results of a search for groups in the Las Campanas Redshift Survey
(Shectman \etal 1996) have
been presented by Tucker \etal (1997). The properties of these groups,
which are as distant as ours, are difficult to compare with those of our ESP
groups and with those of shallower surveys because LCRS a) is a red
band survey (ESP and shallower surveys are selected
in the blue band), b) it is
not simply magnitude limited, and c) it does not uniformly sample
structures in redshift space. In particular, the different
selection criteria could have a strong impact
on the results concerning the morphology-density relation,
the luminosity segregation, and the possible differences between the
luminosity functions of member and non-member galaxies.
In section 2) we briefly describe the data; in section 3) we analyze
the effect of the ESP geometry on the estimate of the velocity dispersion
of groups; in section 4)
we summarize the group identification procedure; in section 5) we present
the ESP group catalog; in section 6) we analyze properties of groups
that are relevant to a characterization of the Large Scale Structure (LSS);
in section 7) we analyze the properties of
galaxies in groups and compare them to the properties of ``field''
galaxies ({\it i.e. } galaxies that have not been assigned to groups) and ``cluster'' galaxies; in section 8) we identify ESP counterparts
of ACO and/or EDCC clusters (Lumsden \etal 1992).
Finally, we summarize our results in section 9).
\section{The Data}
The ESO Slice Project (ESP) galaxy redshift survey is described
in Vettolani \etal (1997). The data of the full sample together
with a detailed description of the instrumental set-up and of the
data reduction can be found in Vettolani \etal (1998). Here
we only briefly describe the survey.
The ESP survey extends over a strip of $\alpha \times \delta = 22^o \times
1^o$ (strip A), plus a nearby area of $5^o \times 1^o$ (strip B), five degrees
West of the main strip, in the South Galactic Pole region (
$23^{h} 23^m \le \alpha_{1950} \le 01^{h} 20^m $ and
$22^{h} 30^m \le \alpha_{1950} \le
22^{h} 52^m $ respectively; $ -40^o 45' \le \delta_{1950} \le -39^o 45'$).
Each of the two strips is covered with two rows
of slightly overlapping circular fields of angular radius $\theta =
16$ arcmin, the separation between the centers of neighboring
circles being 30 arcmin.
Each field corresponds to the field of view of the multifiber
spectrograph OPTOPUS at the 3.6m ESO telescope that was used
to obtain almost all of the spectra (the MEFOS spectrograph
was used in the last ESP run).
Throughout this paper we will assume that the circular fields are
tangent, with an angular radius of 15 arcmin:
this simplification has no consequences on the galaxy sample.
The total solid angle of the spectroscopic survey is 23.2 square degrees.
The galaxy catalog consists of all (candidate) galaxies
brighter than the limiting magnitude $b_{J,lim} = 19.4$ listed
in the Edinburgh--Durham Southern Galaxy Catalogue (Heydon--Dumbleton
et al. 1988, 1989).
The spectra cover the wavelength range 3730\AA~to 6050\AA, with
an average pixel size of 4.5\AA.
Galaxy redshifts are measured by cross--correlating
sky-subtracted spectra with a set of 8
template stars observed with
the same instrumental set-up used to obtain the galaxy spectra.
In this paper we use emission line redshifts only for
galaxies with no reliable absorption line redshift.
The median internal velocity error is $\sim 60$ km s$^{-1}$\ .
From a comparison of our 8 templates with three SAO radial velocity
standard stars we estimate that the zero--point error should be smaller
than $\sim 10$ km s$^{-1}$\ .
The total number of confirmed galaxies with reliable redshift measurement
is 3342. The completeness of strip A and strip B are estimated to be
91\% and 67\% respectively.
\section{ESP Geometry and the Measure of Velocity Dispersions}
For all practical purposes, the projection of the ESP survey on the sky
consists of two rows of adjacent
circular OPTOPUS fields of radius 15 arcmin and a separation
of 30 arcmin between adjacent centers. The
angular extent of groups and clusters at the typical depth of the
survey ($z \simeq 0.1$) is comparable to, or even larger than, the size
of the OPTOPUS fields. Therefore, most systems falling into the
survey's volume are only partially surveyed.
The main effect of the ``mask'' of OPTOPUS fields is to hide a fraction
of group members that lie within or close to the strip containing the
mask (the OPTOPUS fields cover 78\% of the area of the ``un-masked''
strip). Because of the hidden members, several poor groups may not
appear at all in our catalog. Conversely, our catalog might
include parts of groups that are centered outside the ESP strip. These
problems notwithstanding, we expect to derive useful information on the
most important physical parameter of groups, the velocity dispersion,
$\sigma_{cz}$\ .
Our estimate of
the parent velocity dispersion, $\sigma_p$, is based upon
the sample standard deviation $\sigma_{cz}(N_{mem})$. The sample standard
deviation, defined as $ \sigma_{cz}(N_{mem}) = \sqrt{\sum_i (cz_i - \langle cz \rangle)^2/
(N_{mem}-3/2)}$, is a nearly unbiased estimate of the velocity
dispersion (Ledermann, 1984), independent of the size $N_{mem}$ of
the sample. We make the standard assumptions that
a) barycentric velocities of members are not correlated with
their real 3D positions within groups, and that b)
in each group the distribution of barycentric
velocities is approximately
gaussian. Because the position on the sky of the OPTOPUS mask is not
related to the positions of groups, its only effect is to
reduce at random $N_{mem}$. Therefore, using an unbiased estimate
of the velocity dispersion, the mask
has no effect on our determination of the average velocity dispersions of
groups.
The effect of the mask is to broaden the distribution of the sample
standard deviations. The variance of the distribution of sample
standard deviations varies with $N_{mem}$ approximately as
$\sigma_{cz}^2/2N_{mem}$ (Ledermann, 1984).
This distribution, proportional to the
$\chi^2$ distribution, is skewed: even if the mean of the distribution
is unbiased, $\sigma_{cz}$\ is more frequently underestimated than overestimated.
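In sketch form, the nearly unbiased estimator above reads as follows (a minimal Python rendering; the function name and the use of NumPy are our own, not part of the original analysis):

```python
import numpy as np

def sigma_cz(cz):
    """Nearly unbiased estimate of the velocity dispersion
    (Ledermann 1984): the usual sum of squared residuals about the
    mean, but divided by N_mem - 3/2 instead of N_mem - 1."""
    cz = np.asarray(cz, dtype=float)
    if cz.size < 3:
        raise ValueError("need at least 3 member velocities")
    return np.sqrt(np.sum((cz - cz.mean()) ** 2) / (cz.size - 1.5))
```

For a triplet with velocities 29900, 30000, 30100 km s$^{-1}$ this gives $\sqrt{20000/1.5} \simeq 115$ km s$^{-1}$, slightly larger than the usual $N-1$ value of 100 km s$^{-1}$.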
While it is easy to predict the effect of the mask on the determination
of the velocity dispersion of a single group, it is rather difficult to
predict the effect of the mask on the observed distribution of velocity
dispersions of a sample of groups with different ``true'' velocity
dispersions and different number of members. In order to estimate
qualitatively the effect of the mask on the shape of the distribution
of velocity dispersions, we perform a simple Monte Carlo simulation.
We simulate a group by placing uniformly at random $N_{mem}$ points
within a circle of angular radius $\theta_{gr}$ corresponding, at the
redshift of the group, to the linear projected radius $R_{gr}$ = 0.5
$h^{-1}$ Mpc. This radius is the typical size of groups observed in shallow
surveys (e.g. RPG). We select the redshift of the
group, $z_{gr}$, by random sampling the observed distribution of ESP
galaxy redshifts.
In order to start from reasonably realistic distributions,
we set $N_{mem}$ and the velocity dispersion, $\sigma_{cz}$\ , by random sampling the
relative histograms obtained from our ESP catalog. We limit the range
of $N_{mem}$\ to 3 $\le$ $N_{mem}$\ $\le $ 18 and the range of $\sigma_{cz}$\ to 0 $\le $
$\sigma_{cz}$\ $\le$ 1000 km s$^{-1}$\ .
We lay down at random the center of the simulated group within the region
of the sky defined by extending 15 arcmin northward and southward the
``un-masked'' limits of the ESP survey. We then assign to each of the
$N_{mem}$\ points a barycentric velocity randomly sampled from a gaussian
with dispersion $\sigma_{cz}$\ centered on $z_{gr}$. We compute the velocity
dispersion, $\sigma_{no-mask}$, of the $N_{mem}$\ velocities. Finally, we
apply the mask and discard the points that fall outside the mask.
We discard the whole group if there
are fewer than 3 points left within the mask ($N_{mem,mask} < 3$). If
the group ``survives'' the mask, we compute the dispersion
$\sigma_{mask}$ of the $N_{mem,mask}$ members. On average, 78\% of the
groups survive the mask (this fraction corresponds to the ratio between
the area covered by the mask and the area of the ``un-masked'' strip).
The percentage of surviving groups depends on
the exact limits of the region where we lay down at random groups and
on the projected distribution of members within $R_{gr}$. For the
purpose of the simulation, the fraction of surviving groups is not
critical.
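The simulation loop can be sketched as follows (Python; all angles in degrees, field radius $0.25^o$, a $22^o$ strip; the geometry simplifications, names, and default redshift are our own illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
R_FIELD = 0.25  # 15 arcmin, in degrees

def in_mask(x, y):
    """True where (x, y) falls inside the OPTOPUS mask: two rows of
    tangent circular fields, centers every 30' in x, rows at y = +/-15'."""
    cy = np.where(y >= 0.0, R_FIELD, -R_FIELD)
    cx = np.round(x / (2.0 * R_FIELD)) * (2.0 * R_FIELD)
    return (x - cx) ** 2 + (y - cy) ** 2 <= R_FIELD ** 2

def sigma_unbiased(v):
    return np.sqrt(np.sum((v - v.mean()) ** 2) / (v.size - 1.5))

def one_group(n_mem, sigma_cz, theta_gr, cz_gr=30000.0):
    """Simulate one group; return (sigma_no_mask, sigma_mask or None)."""
    # members uniform within a circle of angular radius theta_gr ...
    r = theta_gr * np.sqrt(rng.uniform(size=n_mem))
    phi = rng.uniform(0.0, 2.0 * np.pi, size=n_mem)
    # ... around a center laid down at random in the strip extended
    # 15 arcmin northward and southward of its un-masked limits
    x0 = rng.uniform(0.0, 22.0)
    y0 = rng.uniform(-3.0 * R_FIELD, 3.0 * R_FIELD)
    x, y = x0 + r * np.cos(phi), y0 + r * np.sin(phi)
    # gaussian barycentric velocities centered on the group redshift
    cz = rng.normal(cz_gr, sigma_cz, size=n_mem)
    keep = in_mask(x, y)
    # discard the group if fewer than 3 members survive the mask
    sig_mask = sigma_unbiased(cz[keep]) if keep.sum() >= 3 else None
    return sigma_unbiased(cz), sig_mask
```

Averaging `sig_mask` over many simulated groups recovers the input dispersion, while its scatter grows as members are hidden, consistent with the broadening discussed above.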
We repeat the procedure 100 times for $n_{gr}$ = 231 simulated groups
(the number of groups identified within ESP). At each run we compute
the histograms N($N_{mem,mask}$) and N($\sigma_{mask}$).
In Figure 1 we plot the input distribution N($N_{mem}$) --thin line--
together with the average output distribution,
$<$N($N_{mem,mask}$)$> n_{gr}/n_{mask}$
-- thick line--. Errorbars
represent $\pm$ one standard deviation derived in each bin from the
distribution of the 100 histograms N($N_{mem,mask}$); for clarity we omit
the similar errorbars of N($N_{mem}$).
The factor $n_{gr}/n_{mask}$ normalizes the output distribution
to the number of input groups. The two
histograms in Figure 1 are clearly very similar since N($N_{mem}$) is
within one sigma from $<$N($N_{mem,mask}$)$> n_{gr}/n_{mask}$ for
all values of $N_{mem}$\ . We point out here that the similarity between the input and
output histograms does not mean that the surviving groups have not
changed. In fact only about 63\% of the triplets survive the mask
while, for example, 88\% of the groups with 5 members and 98\% of
those with 10 members ``survive'' the mask.
\begin{figure}
\epsfysize=9cm
\epsfbox{figure1.ps}
\caption[]{Effect of the ``OPTOPUS mask'' on N($N_{mem}$):
the thin histogram is the ``true'' distribution,
the thick histogram is the average distribution ``observed'' through the
``OPTOPUS mask'', normalized to the number of input groups.
Errorbars represent $\pm$ one standard deviation.
}
\end{figure}
Figure 2 shows the results of our simple simulation for the velocity
dispersion. The thin histogram is the input ``true'' distribution
N($\sigma_{gr}$). The dotted histogram is
the average ``observed'' distribution obtained without
dropping galaxies that lie outside the OPTOPUS mask, N($\sigma_{no-mask}$).
This is the distribution we would observe if the
geometry of the survey were a simple strip.
The third histogram (thick line) is the average output
distribution in presence of the OPTOPUS mask,
$<$N($\sigma_{mask}$)$> n_{gr}/n_{mask}$
(errorbars are $\pm$ one-sigma). The input
distribution N($\sigma_{gr}$), the distribution N($\sigma_{no-mask}$),
and the distribution $<$N($\sigma_{mask}$)$> n_{gr}/n_{mask}$
are all within one-sigma from each other.
In particular the two distributions we observe with and without the
OPTOPUS mask are indistinguishable (at the 99.9\% confidence
level, according to the KS test). The low velocity dispersion
bins are slightly more populated in the ``observed'' histograms
because the estimate of the ``true'' $\sigma_{cz}$\ is based on small $N_{mem}$\ .
Note that in the case of real observations, some
groups in the lowest $\sigma_{cz}$\ bin will be shifted again to the next higher bin
because of measurement errors.
Our results do not change if we take into account the slight dependence
of $\sigma_{cz}$\ on $N_{mem}$\ observed within our ESP catalog: also in this case
the effect of the mask is negligible.
In conclusion, the simulation confirms our expectation that the OPTOPUS
mask has no significant effect on the shape of the distribution of
velocity dispersions.
\begin{figure}
\epsfysize=9cm
\epsfbox{figure2.ps}
\caption[]{Effect of the ``OPTOPUS mask'' on N($\sigma_{gr}$).
The thin histogram is the ``real'' distribution, the dotted
histogram shows the effect of
sampling on the input distribution, and
the thick histogram is the average distribution
``observed'' through the ``OPTOPUS mask'',
normalized to the number of input groups.
Errorbars represent $\pm$ one standard deviation.
}
\end{figure}
\medskip \medskip
\section{Group Identification}
We identify groups with the so-called friends-of-friends algorithm (FOFA;
Huchra \& Geller, 1982) as described in RPG.
We implement here the cosmological corrections required
by the depth of the sample ($z \le 0.2$). Throughout this paper we
use H$_o$ = 100 km s$^{-1}$\ Mpc$^{-1}$ and $q_0 = 0.5$.
For each galaxy in the magnitude limited ESP catalog,
the FOFA identifies all other galaxies
with a projected comoving separation $$D_{12} \le D_L(V_1,V_2) \eqno (1)$$
and a line-of-sight velocity difference
$$V_{12} \le V_L(V_1,V_2). \eqno (2)$$
Here $V_1 = cz_1$ and $V_2 = cz_2$ are the velocities of the
two galaxies in the pair.
All pairs linked by a common galaxy form a ``group''.
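The linking step can be sketched with a union-find over galaxy pairs (our own minimal Python rendering; the input arrays and the triplet threshold are illustrative assumptions, not the actual ESP code):

```python
def fof_groups(cz, sep, d_link, v_link, min_mem=3):
    """Friends-of-friends: link galaxies i, j when their projected
    separation sep[i][j] <= d_link[i][j] and |cz[i] - cz[j]| <=
    v_link[i][j]; groups are the connected components (union-find)."""
    n = len(cz)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if sep[i][j] <= d_link[i][j] and abs(cz[i] - cz[j]) <= v_link[i][j]:
                parent[find(i)] = find(j)  # merge the two components

    comps = {}
    for i in range(n):
        comps.setdefault(find(i), []).append(i)
    # keep only systems with at least min_mem members (triplets and up)
    return [g for g in comps.values() if len(g) >= min_mem]
```

Any galaxy linked to any member of a component joins that component, so chains of pairs sharing a common galaxy indeed form a single ``group''.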
The two parameters D$_L$\ , V$_L$\ are scaled with distance in order to take
into account the decrease of the magnitude range of the luminosity
function sampled at increasing distance. The scaling is
$$D_L=D_o R \eqno (3)$$
and
$$V_L = V_o R, \eqno (4)$$
where
$$R=\left[\int_{-\infty}^{M_{lim}} \Phi(M)
{dM}/\int_{-\infty}^{M_{12}} \Phi(M) {dM}\right]^{1/3}, \eqno (5)$$
$$M_{12}= b_{J,lim}-25-5 \log(d_{L}(\bar{z}_{12})) - <K(\bar{z}_{12})>, \eqno (6) $$
and $M_{lim}$ is the absolute magnitude corresponding to $b_{J,lim}$
at a fiducial velocity $V_f$. We
compute $d_L(\bar{z}_{12})$ with the Mattig (1958) expression, where
$\bar{z}_{12} = 0.5(z_1 + z_2)$. Finally, $<K(\bar{z}_{12})>$ is the
$K-$correction ``weighted'' with the expected morphological mix at
each redshift as in Zucca \etal (1997).
The scaling is the same for both D$_L$\ and V$_L$\ and it is normalized
at the fiducial velocity $V_f$ = 5000 km s$^{-1}$\ ,
where D$_0$\ = $D_L(V_f$) and V$_0$\ = $V_L(V_f$). In particular,
a given value of D$_0$\ corresponds to a minimum number overdensity
threshold for groups, $\delta\rho/\rho$\ . The luminosity function we use is the
Schechter parametrization with $M^* = -19.61$,
$\alpha = -1.22$, and $\phi^* = 0.020$ Mpc$^{-3}$
computed for ESP galaxies by Zucca \etal (1997).
We do not consider galaxies with velocities
$cz \le V_f$ because the linear extension of the survey in the
direction of the declination is smaller than the typical size
of a group for $cz \le $5000 km s$^{-1}$\ .
We also limit the maximum depth of our group catalog to $cz \le 60000$
km s$^{-1}$\ . Beyond this limit the accessible part of the luminosity function
becomes very small and the scaling of the FOFA linking parameters
excessively large.
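The scaling factor of Eqs. (3)--(5) can be evaluated numerically; a sketch with the ESP Schechter parameters of Zucca \etal (1997) (SciPy quadrature; the function names and the bright-end cutoff at $M=-30$ are our own choices):

```python
import numpy as np
from scipy.integrate import quad

LN10 = np.log(10.0)

def schechter_mag(M, Mstar=-19.61, alpha=-1.22, phistar=0.020):
    """Schechter (1976) luminosity function per unit absolute magnitude."""
    x = 10.0 ** (0.4 * (Mstar - M))  # L / L*
    return 0.4 * LN10 * phistar * x ** (alpha + 1.0) * np.exp(-x)

def n_brighter(M):
    """Number density of galaxies brighter than absolute magnitude M."""
    val, _ = quad(schechter_mag, -30.0, M)
    return val

def scaling_R(M12, M_lim):
    """Eq. (5): R = [N(<M_lim) / N(<M12)]^(1/3).
    The linking lengths then scale as D_L = D0 * R, V_L = V0 * R."""
    return (n_brighter(M_lim) / n_brighter(M12)) ** (1.0 / 3.0)
```

At the fiducial velocity $M_{12} = M_{lim}$ and $R = 1$; at larger distances $M_{12}$ becomes brighter, the accessible part of the luminosity function shrinks, and $R$ grows, which is why we cut the catalog at $cz \le 60000$ km s$^{-1}$.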
The main characteristics of the distribution of galaxies within the
volume of the universe surveyed by ESP (Vettolani \etal 1997) are very
similar to those observed within shallower, wider angle magnitude
limited redshift surveys. For this reason we expect that the
conclusions on the fine-tuning of FOFA reached by Ramella \etal 1989,
Frederic 1995a, and RPG will hold true also for ESP.
In particular, RPG show that within the CfA2N redshift survey the
choice of the FOFA parameters is not critical in a wide region of the
parameter space around ($\delta\rho/\rho$\ = 80, V$_0$\ = 350 km s$^{-1}$\ ). With our
luminosity function and fiducial velocity, we obtain $\delta\rho/\rho$\ = 80 for D$_0$\
= 0.26 $h^{-1}$ Mpc, a value comparable to the D$_0$\ value used for CfA2N. It is
therefore reasonable to expect that the same results of the exploration
of the parameter space will hold also for the ESP survey. In order to
verify our expectation, we run FOFA with the following five pairs of
values of the linking parameters selected among those used by RPG:
($\delta\rho/\rho$\ = 80, V$_0$\ = 250 km s$^{-1}$\ ), ($\delta\rho/\rho$\ = 80, V$_0$\ = 350 km s$^{-1}$\ ), ($\delta\rho/\rho$\ =
80, V$_0$\ = 600 km s$^{-1}$\ ), ($\delta\rho/\rho$\ = 60, V$_0$\ = 350 km s$^{-1}$\ ), ($\delta\rho/\rho$\ = 100, V$_0$\ =
350 km s$^{-1}$\ ). Based on RPG, these pairs of values are sufficient to give
an indication of the stability of the group catalogs in the parameter
space (D$_L$\ ,V$_L$\ ). The number of groups in the five cases is
N$_{groups}$ = 217, 231, 253, 239, and 217 respectively.
\begin{figure}
\epsfysize=9cm
\epsfbox{figure3.ps}
\caption[]{The distribution of velocity dispersions, N($\sigma_{cz}$),
of different ESP group catalogs obtained for a grid of values of
the search parameters $\delta\rho/\rho$\ and $V_0$.
Errorbars represent $\pm$ one standard deviation.
}
\end{figure}
We plot in Figure 3 the observed distributions of the velocity
dispersions of the five group catalogs.
We compare the distribution obtained for ($\delta\rho/\rho$\ = 80, V$_0$\ = 350 km s$^{-1}$\ )
--thick histogram in Figure 3 -- with the other four
distributions and find that the only significant
difference (99.9 \% level, according to the KS test) occurs with
the distribution obtained using the largest velocity link, V$_0$\ = 600
km s$^{-1}$\ (dotted histogram). This value of the
velocity-link produces an excess of high
velocity dispersion systems. Frederic 1995a,
Ramella \etal 1989, and RPG argue that these high velocity
dispersion systems are likely to
include a significant number of interlopers
(galaxies with high barycentric velocity that are
not physically related to the group in real space).
On the basis of the results of our tests, we choose the catalog
obtained with $\delta\rho/\rho$\ = 80 (D$_0$\ = 0.26 $h^{-1}$ Mpc) and V$_0$\ = 350 km s$^{-1}$\ as our
final ESP group catalog. This choice offers the advantage of a
straightforward comparison between the properties of ESP catalog and
those of the CfA2N (RPG), and SSRS2 (Ramella \etal 1998)
catalogs.
\section{The Group Catalog}
We identify 231 groups within the redshift limits $5000 \le cz \le
60000$ km s$^{-1}$\ . These groups contain 1250 members, 40.5 \% of the 3085 ESP
$b_J \leq 19.4$ galaxies within the same $cz$\ range.
In Table 1 we present our group catalog. For each group we list the ID
number (column 1), the number of members (column 2), the coordinates
$\alpha_{1950}$ and $\delta_{1950}$ (columns 3 and 4 respectively), the
mean radial velocity $cz$\ in km s$^{-1}$\ corrected for Virgo infall and
galactic rotation (column 5), and the velocity dispersion $\sigma_{cz}$\ (column 6).
We compute the velocity dispersion following the prescription of Ledermann
(1984) for an unbiased estimator of the dispersion (see previous section).
We also take into account the cosmological expansion of the universe
and the measurement errors according to the prescriptions of
Danese \etal (1980).
The errors we associate to the redshifts are those output by the RVSAO
cross-correlation procedure multiplied by a factor 1.6. This factor
brings the cross-correlation error in rough agreement
with the external error estimated from repeated observations (Vettolani
\etal 1998 -- here we do not distinguish between emission and
absorption line redshifts).
Table 1 is available
only in electronic form via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5)
or via http://cdsweb.u-strasbg.fr/Abstract.html.
In the case of 24 groups, the correction of $\sigma_{cz}$\ for the measurement
errors leads to a negative value. In column 6 of Table 1 we give the error
as an upper limit to $\sigma_{cz}$\ for these groups.
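One plausible rendering of this correction chain (our reading of the Danese \etal prescription; the function names, the rescaling of the RVSAO errors, and the upper-limit convention are our own illustrative assumptions):

```python
import numpy as np

C_KMS = 299792.5  # speed of light in km/s

def corrected_sigma_cz(cz, rvsao_err, factor=1.6):
    """Unbiased dispersion corrected for cosmological expansion and for
    measurement errors; returns (sigma, is_upper_limit).
    rvsao_err are the raw cross-correlation errors, rescaled by `factor`
    to match the external error from repeated observations."""
    cz = np.asarray(cz, dtype=float)
    err = factor * np.asarray(rvsao_err, dtype=float)
    # nearly unbiased raw dispersion (Ledermann 1984)
    raw2 = np.sum((cz - cz.mean()) ** 2) / (cz.size - 1.5)
    stretch = (1.0 + cz.mean() / C_KMS) ** 2  # (1+z)^2 expansion term
    s2 = (raw2 - np.mean(err ** 2)) / stretch
    if s2 > 0.0:
        return np.sqrt(s2), False
    # error correction exceeds the raw dispersion: quote the error
    # itself as an upper limit on sigma_cz
    return np.sqrt(np.mean(err ** 2) / stretch), True
```

A triplet whose raw spread is smaller than the rescaled measurement errors comes out flagged as an upper limit, as happens for 24 of our groups.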
Not all galaxies in the region of the sky covered by the ESP survey
have a measured redshift. Of the original target list, 444 objects are
not observed, and 207 objects have a noisy spectrum, insufficient for
a reliable determination of the redshift. In Table 1 we give, for each
group, the ratio of these objects to the number of members (column 7).
In computing these rates,
we assign to each group the objects without redshift whose ($\alpha$,
$\delta$) coordinates fall within the angular region defined by the
union of the $N_{mem}$\ circular regions obtained by projecting on the sky
circles of linear radius $D_L(cz_{group})$ centered on all group
members. There are groups that are separated along the line-of-sight
but that overlap once projected on the sky. If an object without
redshift lies within the overlap region, we assign the object to both
groups.
There are 67 groups that do not contain any object of the photometric
catalog without measured
redshift. On the other hand, in the case of 51 groups the number of objects without
redshift equals, or exceeds, the number of members. These groups
are mostly triplets and quadruplets. Only 14 out of the 51 (possibly)
highly incomplete groups have $N_{mem}$\ $\geq$ 5. Most of these groups are located
in the relatively small region B of the redshift survey (Vettolani \etal 1998),
which is the least complete (completeness level = 71\%).
Finally, we estimate that only 8 out of 231 groups are entirely contained
within one OPTOPUS field. By ``entirely contained'' we mean that none of
the circles of projected linear radius $D_L$ centered on the member
galaxies crosses the edges of the OPTOPUS fields.
\begin{figure*}
\epsfysize=16cm
\centerline{\rotate[l]{\epsfbox[28 110 423 785]{figure4a.ps}}}
\epsfysize=16cm
\centerline{\rotate[l]{\epsfbox[28 110 500 785]{figure4b.ps}}}
\caption[]{Cone diagrams ($\alpha$ -- $cz$) of ESP galaxies (top panel)
and of ESP groups (bottom panel). The larger circles in the cone
diagram of groups mark the ESP counterparts of known ACO and/or
EDCC clusters.
}
\end{figure*}
\section{Properties of Groups}
\medskip
In this section we discuss properties of ESP groups
that can be used to characterize the LSS and that set useful
constraints on the predictions of cosmological N-body models.
\medskip \medskip
\subsection{Abundances of Groups and Members}
\medskip
The first ``global''
property of groups we consider is the ratio, $f_{groups}$, of their
number to the number of non-member galaxies within the survey. For ESP
we have $f_{ESP,groups}$ = 0.13 $\pm$ 0.01, for CfA2N RPG find
$f_{CfA2N,groups}$ = 0.13 $\pm$ 0.01, for SSRS2 Ramella \etal 1998 find
$f_{SSRS2,groups}$ = 0.12 $\pm$ 0.01. Clearly the proportion of groups
among galaxies is the same in all three independent volumes of the
universe surveyed with ESP, CfA2N and SSRS2. Because CfA2N and SSRS2
mostly sample only one large structure while ESP intercepts
several large structures,
our result means that the clustering of
galaxies in groups within the large scale structure is homogeneous
on scales smaller than those of the structures themselves.
We point out that, on the
basis of our simple simulation, we do not expect the OPTOPUS mask to
affect the determination of $f_{groups}$.
We now consider the ratio of member to non-member galaxies, $f_{mem}$.
Within ESP we have $f_{ESP,mem}$ = 0.68 $\pm$ 0.02; within CfA2N and
SSRS2, the values of the ratio are $f_{CfA2,mem}$ = 0.81 $\pm$ 0.02
and $f_{SSRS2,mem}$ = 0.67 $\pm$ 0.02 respectively. Quoted uncertainties
are one poissonian standard deviation. According to the poissonian
uncertainties, $f_{ESP,mem}$ and $f_{SSRS2,mem}$ are
indistinguishable. The value of $f_{CfA2,mem}$ is significantly
different from the other two ratios. However, the real
uncertainty in the ratio of members to non-members is higher than the
poissonian estimate because the fluctuations in the number of members
are dominated by the fluctuations in the
smaller number of groups. Moreover, the total number of members is
strongly influenced by a few very rich systems. In fact, it is sufficient
to eliminate two clusters, Virgo and Coma, from CfA2N in order to
reduce the value of $f_{CfA2,mem}$ to $f_{CfA2,mem}$ = 0.70 $\pm$
0.02, in close agreement with the ratio observed within ESP and SSRS2.
In conclusion, groups are a remarkably stable property of the
large-scale distribution of galaxies. Once the richest clusters
are excluded, the abundances of groups and of
members relative to that
of non-member or ``field'' galaxies are constant over several large and
independent regions of the universe.
\medskip \medskip
\subsection{ Distribution of Groups in Redshift-Space}
\medskip
We plot in the top panel of Figure 4 the cone diagram
($\alpha$ vs $cz$) for the 3085 ESP
galaxies within 5000 $< cz <$ 60000 km s$^{-1}$\ . In the bottom panel of
Figure 4 we plot the
cone diagram of the 231 ESP groups.
Figure 4 shows that groups trace very well the galaxy distribution, as
they do in shallower surveys ($cz \mathrel{\copy\simlessbox}$ 12000 km s$^{-1}$\ ). Note that
in Figure 4 we project adjacent beams, not a strip
of constant thickness.
The topology of the galaxy distribution in redshift space has already
been described by Vettolani \etal (1997) and will be the subject of a
forthcoming paper. The most striking features are the voids of sizes
$\simeq$ 50 $h^{-1}$ Mpc \ and the two density peaks at $cz \simeq$ 18000 km s$^{-1}$\
and $cz \simeq $ 30000 km s$^{-1}$\ . These features are also the main
features of the group distribution.
\begin{figure}
\epsfysize=9cm
\epsfbox{figure5.ps}
\caption[]{The redshift distributions of groups (thick
histogram) and galaxies (thin histogram), divided by
the total number of groups and
by the total number of galaxies respectively.
}
\end{figure}
\begin{figure}
\epsfysize=13cm
\epsfxsize=9cm
\epsfbox{figure6.ps}
\caption[]{Number densities of galaxies in comoving distance bins.
The top panel
is for members, the middle panel is for non-members, and the bottom
panel is for all galaxies. The dashed lines represent the $\pm$ one sigma corridor around the mean
galaxy density.
}
\end{figure}
In Figure 5 we plot the redshift distributions of groups (thick
histogram) and galaxies (thin histogram), divided by the total number of
groups and by the total number of galaxies, respectively. In Figure 6
we plot number densities of galaxies in redshift bins. Number
densities are computed using the $n_3$ estimator of Davis \& Huchra (1982):
all the details about the density estimates are given in Zucca \etal 1997.
We vary the size of the redshift bins in order
to keep constant the number of galaxies expected in
each bin based on the selection function. The top panel is for
member galaxies, the middle panel is
for isolated and binary galaxies, and the bottom panel is for all ESP galaxies.
The dashed lines represent the $\pm$ one sigma corridor around the mean
galaxy density.
It is clear from Figure 5 that the redshift distributions of groups and
galaxies are indistinguishable (98\% confidence level). Not
surprisingly, the number density in redshift bins of members and all
galaxies are highly correlated (Figure 6a and 6c). More interestingly,
the number density distribution of non-member galaxies is also correlated
with the number density distribution of all galaxies (Figure 6b and
6c). In particular, the two density peaks at $cz \simeq $ 18000
km s$^{-1}$\ and $cz \simeq $ 30000 km s$^{-1}$\ of the number density distribution
of all galaxies are also identifiable in the number density
distribution of non-member galaxies, albeit with a lower contrast.
We know from the previous section that groups are a very stable global
property of the galaxy distribution within the volume of the ESP and
within other shallower surveys. Here we show that a tight relation
between non-member galaxies and groups exists even on smaller scales.
Our result is particularly interesting in
view of the depth of the ESP survey and of the number of large
structures intercepted along the line-of-sight.
\medskip \medskip
\subsection{The Distribution of Velocity
Dispersions}
\medskip
We now discuss the velocity dispersions of ESP groups. According to our
simulation in section 3, the effect of the OPTOPUS mask on the
velocity dispersions is statistically negligible.
The median velocity dispersion of all groups is $\sigma_{ESP,median}$ =
194 (106,339) km s$^{-1}$\ . The numbers in parenthesis are the 1st and 3rd
quartiles of the distribution.
Poor groups with $N_{mem}$\ $<$ 5 have a median velocity dispersion
$\sigma_{median,poor}$=145 (65,254) km s$^{-1}$\ , richer groups have
$\sigma_{median,rich}$=272 (178,399) km s$^{-1}$\ . For comparison, the median
velocity dispersions of CfA2N and SSRS2 are $\sigma_{CfA2N,median}$
=198 (88,368) km s$^{-1}$\ and $\sigma_{SSRS2,median}$ = 171 (90,289) km s$^{-1}$\ . We take the
values of the velocity dispersions for the CfA2 and SSRS2 groups from Ramella
\etal (1997,1998). In order to compare these velocity dispersions with
ours, we correct them for a fixed
error of 50 km s$^{-1}$\ (corresponding to an RVSAO error of $\simeq$ 35 km s$^{-1}$\ )
and multiply them by $\sqrt{(N_{mem}-1)/(N_{mem}-3/2)}$. We note that,
because of the OPTOPUS mask, the comparison of the velocity dispersions of
"rich" and "poor" groups within ESP with those of similar systems
within CfA2N and SSRS2 is not meaningful. A fraction of ESP
"poor" groups may actually be part of "rich" groups.
In Figure 7 we plot (thick histogram) the distribution of the velocity
dispersions, $n_{ESP}(\sigma)$, normalized to the total number of
groups. Errorbars are one sigma poissonian errors. We also plot the
normalized $\sigma_{cz}$\ distributions of CfA2N and SSRS2.
According to the KS test, differences between
$n_{ESP}(\sigma)$ and the other two distributions are not significant
(P$_{KS}$ = 0.3 and 0.2 for the comparison between ESP and CfA2N and
SSRS2 respectively).
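The comparisons quoted here rest on the two-sample KS test; a minimal sketch with synthetic stand-ins for the two $\sigma_{cz}$ samples (the lognormal shapes and parameters are purely illustrative, not fits to the catalogs):

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)

# synthetic velocity-dispersion samples (km/s) standing in for the
# ESP (median ~194 km/s) and CfA2N (median ~198 km/s) group catalogs
sigma_esp = rng.lognormal(mean=np.log(194.0), sigma=0.6, size=231)
sigma_cfa2n = rng.lognormal(mean=np.log(198.0), sigma=0.6, size=200)

stat, p_value = ks_2samp(sigma_esp, sigma_cfa2n)
# a large p_value means the two distributions cannot be distinguished,
# as we find for ESP vs. CfA2N and ESP vs. SSRS2
```

The same call, applied to the binned-free samples of group dispersions, yields the significance levels quoted in the text.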
\begin{figure}
\epsfysize=9cm
\epsfbox{figure7.ps}
\caption[]{Comparison between the distribution of velocity dispersions
of ESP groups (thick line) and those of CfA2N (thin line) and SSRS2 groups
(dotted line).
Each distribution is normalized to the total number of
groups. Errorbars are one sigma poissonian errors.
}
\end{figure}
It is interesting to point out that $n_{CfA2N}(\sigma)$ and
$n_{SSRS2}(\sigma)$ do differ significantly (97\% level),
$n_{CfA2N}(\sigma)$ being richer in high velocity dispersion systems
(Marzke \etal, 1995). Groups with velocity dispersions $\sigma_{cz}$\ $>$ 700
km s$^{-1}$\ are rare and the fluctuations from
survey to survey are correspondingly high. The abundance of these high $\sigma_{cz}$\
systems is the same within both ESP and SSRS2 (2\%) but it is higher
within CfA2N (5\%). If we disregard these few high velocity dispersion
systems, the difference between $n_{CfA2N}(\sigma)$ and
$n_{SSRS2}(\sigma)$ ceases to be significant.
From this result we conclude that each survey contains
a fair representation of groups.
The distribution of velocity dispersions is an important characteristic
of groups because it is linked to the group mass. Therefore $n(\sigma)$
constitutes an important constraint for cosmological models.
Furthermore, $\sigma_{cz}$\ is a
much better parameter for the classification of systems than the number
of members (even more so in the case of the present catalog where the
OPTOPUS mask affects the number of members much more than the velocity
dispersions).
The ESP survey provides a new determination of the shape of $n(\sigma)$
in a much deeper volume than those of existing shallower surveys. We
find that, within the errors, $n_{ESP}(\sigma)$ is very similar to
both $n_{CfA2N}(\sigma)$ and $n_{SSRS2}(\sigma)$.
\medskip \medskip
\section{Properties of Galaxies in Groups}
\medskip
In this section we examine the luminosities and the spectral features
of galaxies in different environments: the ``field'', groups, and clusters.
The dependence of these properties on the environment offers insights
into the processes of galaxy formation and evolution and on the
dynamical status of groups.
\subsection{The Luminosity Function of Members}
\medskip
Here we investigate the possible difference between the luminosity functions
of member and non-member galaxies.
We compute the luminosity function with the STY
method (Sandage, Tamman \& Yahil 1979). We assume a Schechter (1976) form
for the luminosity function and follow the procedure described
in detail in Zucca \etal (1997).
We find that galaxies in groups have a brighter $M^*$ with respect to
non--member galaxies; the slope $\alpha$ does not change significantly
in the two cases. In particular the parameters we obtain
are $\alpha= -1.25^{+0.11}_{-0.11}$ and $M^* = -19.80^{+0.14}_{-0.13}$
for the 1250 members, and $\alpha= -1.21^{+0.10}_{-0.09}$
and $M^* = -19.52^{+0.10}_{-0.10}$ for the 1835 non--members.
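The Schechter form assumed above, evaluated with the best-fit parameters quoted in the text, can be sketched as follows. This is an illustration only: the STY fit itself maximizes an unbinned likelihood over the galaxy magnitudes, and the normalization $\phi^*$ is omitted here.

```python
import numpy as np

def schechter_mag(M, alpha, M_star):
    """Schechter (1976) luminosity function in absolute magnitudes,
    up to the normalization phi*:
    phi(M) dM  propto  x**(alpha + 1) * exp(-x) dM,
    with x = 10**(0.4 * (M_star - M))."""
    x = 10.0 ** (0.4 * (M_star - M))
    return 0.4 * np.log(10.0) * x ** (alpha + 1) * np.exp(-x)

M_grid = np.arange(-22.0, -16.0, 0.5)
# Best-fit STY parameters quoted in the text for members and non-members.
members = schechter_mag(M_grid, alpha=-1.25, M_star=-19.80)
field   = schechter_mag(M_grid, alpha=-1.21, M_star=-19.52)
```

With the brighter $M^*$ of the members, the member-to-field ratio grows toward bright magnitudes, which is the luminosity segregation discussed below.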
In Figure 8 we draw (dotted lines) the confidence ellipses of the $\alpha$
and $M^*$ parameters obtained in the two cases of member and non-member
galaxies. The two luminosity functions differ at the
$2\sigma$ level. In Figure 8 we also plot the confidence
ellipses for the parameters of the total sample (solid lines) derived
in the same volume of ESP where we identify groups.
\begin{figure}
\epsfysize=9cm
\epsfbox{figure8.ps}
\caption[]{One- and two-sigma
confidence ellipses (dotted lines) of the $\alpha$
and $M^*$ parameters of the Schechter luminosity function
of members (brightest $M^*$) and non-members (faintest $M^*$).
The solid lines show the confidence ellipses derived for the total ESP sample
considered in this paper
(5000 km s$^{-1}$\ $\le$ $cz$\ $\le$ 60000 km s$^{-1}$\ ).
}
\end{figure}
The fact that galaxies in groups are brighter than non-member galaxies is
a clear demonstration of the existence of luminosity segregation
in the ESP survey, a much deeper survey than those where the
luminosity segregation has been previously investigated
(Park \etal 1994, Willmer \etal 1998). Our finding is consistent
with the results of Lin \etal (1996), who find evidence
of a luminosity bias in their analysis of the LCRS power spectrum.
In further support of the existence of a luminosity segregation,
we also find that $M^*$ becomes brighter for members
of groups of increasing richness. As before, the parameter
$\alpha$ remains almost constant. Only in the case of the richest groups,
$N_{mem}\ge 10$, do we find a marginally significant steepening
of the slope $\alpha$.
\medskip
\subsection{Emission/Absorption Lines
Statistics} \medskip
One interesting question is whether
the environment of a galaxy has a statistical
influence on the presence of detectable emission lines in the galaxy spectrum.
Because emission lines galaxies are mostly spirals (Kennicutt 1992),
the answer to this question is relevant to the investigation of
the morphology-density relation in systems of intermediate density.
The fraction of ESP galaxies with detectable emission lines within the
redshift range $5000 \le cz \le 60000$ km s$^{-1}$\ is 44\% (1360/3085). Of
these $e.l.$-galaxies \ , (34 $\pm$ 2)\% (467/1360) are members of groups. The fraction of
galaxies without detectable emission lines, $a.l.$-galaxies \ , that are members of
groups is significantly higher: 783/1725 or (45 $\pm$ 2)\%.
We note that our detection limit for emission lines corresponds to
an equivalent width of about 5~\AA.
\begin{figure}
\epsfysize=9cm
\epsfbox{figure9.ps}
\caption[]{Fraction of $e.l.$-galaxies \ in the ``field'', in poor groups, and rich groups. The
two arrows indicate the fraction of $e.l.$-galaxies \ in the two richest ACO
clusters in our catalog, A2840 ($f_e$ = 21\%) and A2860 ($f_e$ = 19\%).
}
\end{figure}
We consider three types of environments: a) the ``field'',
i.e. all galaxies that have not been assigned to groups, b) poor
groups, i.e. groups with 3 $\le$ $N_{mem}$\ $\le$ 4, and c) rich groups
with 5 $\le$ $N_{mem}$\ . We find that the fraction of $e.l.$-galaxies \ decreases as
the environment becomes denser. In the ``field'' the fraction of $e.l.$-galaxies \ is
$f_e$ = 49\%, it decreases to $f_e$ = 46\% for poor groups and to
$f_e$ = 33\% for richer groups. In Figure 9 we plot $f_e$ as a function of $N_{mem}$\ . We also
indicate the values of $f_e$ of the two richest Abell clusters in our
catalog, A2840 ($f_e$ = 21\%) and A2860 ($f_e$ = 19\%).
The significance of the correlation between environment and $f_e$
can be investigated with a 2-way contingency table (Table 2). For simplicity,
we do not consider triplets and quadruplets.
\setcounter{table}{1}
\begin{table}
\caption[]{ Frequency of Emission Line Galaxies in Different Environments}
\begin{flushleft}
\begin{tabular}{lrrr}
\hline\noalign{\smallskip}
& $N_{e.l.}$ & $N_{a.l.}$ & $N_{tot}$ \cr
\noalign{\smallskip}
\hline\noalign{\smallskip}
``field'' & 893 & 942 & 1835 \\
rich groups & 266 & 549 & 815 \\
total & 1159 &1491 & 2650 \\
\noalign{\smallskip}
\hline
\end{tabular}
\end{flushleft}
\end{table}
The contingency coefficient is C=0.15 and the significance of
the correlation between environment and frequency of emission line
galaxies exceeds the 99.9\% level.
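The contingency analysis can be reproduced directly from the counts in Table 2. The sketch below, using a standard SciPy routine, recovers the quoted coefficient C $\simeq$ 0.15 and the $>$99.9\% significance.

```python
import math
from scipy.stats import chi2_contingency

# Table 2 of the text: emission-line vs. absorption-line counts
# in the "field" and in rich groups.
table = [[893, 942],   # "field": N_e.l., N_a.l.
         [266, 549]]   # rich groups
chi2, p, dof, expected = chi2_contingency(table, correction=False)

# Pearson contingency coefficient C = sqrt(chi2 / (chi2 + N)).
n = sum(sum(row) for row in table)
C = math.sqrt(chi2 / (chi2 + n))
print(f"chi2 = {chi2:.1f} (dof = {dof}), p = {p:.1e}, C = {C:.3f}")
```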
Our result indicates that the morphology-density relation extends over the
whole range of densities from
groups to clusters. Previous indications of the existence of the
morphology-density relation for groups are based either on very
local samples (Postman \& Geller 1984) or on samples
that are not suitable for statistical analysis (Allington-Smith \etal 1993).
Very recently, Hashimoto \etal (1998) also confirm the existence of a
morphology-density relation over a wide range of environment densities
within LCRS.
Examining our result in more detail,
we note that the fraction of $e.l.$-galaxies \ , $f_e$, in triplets and
quadruplets is very similar to the value of $f_e$ for isolated galaxies.
Triplets and quadruplets are likely to correspond, on average,
to the low-density tail of groups. Moreover,
Ramella \etal (1997) and Frederic (1995a) estimate that the FOFA
could produce a large fraction of unbound triplets and quadruplets.
These ``pseudo-groups'' dilute the properties of real bound triplets
and quadruplets with ``field'' galaxies,
artificially increasing the value of $f_e$. This effect, in our survey,
is partially counter-balanced by the triplets and quadruplets that are
actually part of richer systems cut by the OPTOPUS mask. Considering that
rich systems are significantly rarer than triplets and quadruplets, we
estimate that the value of $f_e$ we measure for triplets and quadruplets
should be considered an upper limit.
Our catalog also includes ESP counterparts of 17 clusters listed in at
least one of the ACO, ACOS (Abell \etal 1989)
or EDCC (Lumsden \etal 1992) catalogs (section 8, Table 3).
For these clusters $f_{e,clusters}$ = 0.25 (63 $e.l.$-galaxies \ out of 256
galaxies). The number of members of these systems
is not a direct measure of their richness because of the apparent
magnitude limit of the catalog and because of the OPTOPUS mask.
However, because they include all the richest systems in our catalog
and because they are counterparts of 2-D clusters, it is reasonable to
assume that they are intrinsically rich. We recall here that Biviano
\etal (1997) find $f_e$ = 0.21 for their sample of ENACS clusters. The fact
that for ESP counterparts of clusters we find a lower value of $f_e$
than for the other rich groups ($f_{e,groups}$ = 0.36 without clusters),
further supports
the existence of a morphology-density relation over the whole range
of densities from clusters to the ``field''.
Many systems of
our catalog are not completely surveyed, therefore
the relationship between $f_e$ and the density of the environment we
find can only be considered qualitative. However, while incompleteness
certainly increases the variance around the mean result, we do not
expect severe systematic biases. In
order to verify that incompleteness does not affect our
results, we consider the subsample of 67 groups that contain no ESP
objects without measured redshift. We obtain for this subsample the
same relationship between group richness and fraction of emission line
galaxies we find for the whole catalog.
\medskip
\subsection{Seyfert Galaxies} \medskip
Within ESP we identify 12 Seyfert 1 galaxies and 9 Seyfert 2 galaxies.
We identify type 1 Seyferts visually on the basis of the
presence of broad (FWHM of a few $10^3$~km~s$^{-1}$) components in the
permitted lines. Our list is complete with the possible exception of
objects with weak broad lines which are hidden in the noise.
The identification of type 2 Seyferts is not straightforward, because
it is based on line ratios and usually requires measurements of
emission lines which fall outside our spectral range: only the
F([O~III]$\lambda$5007)/F(H$\beta$) ratio is available from our
spectra, and it is therefore impossible to draw a complete diagnostic
diagram (Baldwin \etal 1981, Veilleux \& Osterbrock 1987). We
classify tentatively as type 2 Seyferts all emission line galaxies with
$\log(\hbox{F([O~III]$\lambda$5007)}/\hbox{F(H$\beta$)})\ge 0.7$: this
threshold cuts out almost all non-active emission line galaxies, but
also many narrow-line AGN with a medium to low degree of ionization.
Thus the list of possible Seyfert 2 galaxies is almost free of
contamination, but should by no means be considered complete.
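The tentative Seyfert 2 criterion above amounts to a one-line cut on the only available line ratio; as noted, it is a selective but incomplete condition. A minimal sketch:

```python
import math

def tentative_seyfert2(f_oiii, f_hbeta):
    """Tentative Seyfert 2 flag used in the text:
    log10(F([O III]5007) / F(Hbeta)) >= 0.7.
    Selective (rejects most non-active emission-line galaxies) but
    incomplete (also rejects narrow-line AGN of lower ionization)."""
    return math.log10(f_oiii / f_hbeta) >= 0.7

# Example: a 6:1 ratio passes the cut, a 4:1 ratio does not.
```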
The origin of the Seyfert phenomenon could be related to the
interaction with close companions (Balick \& Heckman 1982, Petrosian
1982, Dahari 1984, MacKenty 1989), or to a dense environment
(Kollatschny \& Fricke 1989, De Robertis \etal 1998). Observational
evidence is, however, far from conclusive. For example, Seyfert 1 and
Seyfert 2 galaxies have been found to have an excess of (possibly)
physical companions compared to other spiral galaxies by Rafanelli
\etal (1995). Laurikainen \& Salo (1995) agree with Rafanelli
\etal (1995) about Seyfert 2 galaxies, but reach the opposite
conclusion about Seyfert 1 galaxies.
In our case, 7 (33\%) out of 21 Seyferts are group members. For
comparison, 460 (34\%) emission line galaxies (not including Seyfert
galaxies) are group members and 879 are either isolated or binaries.
Clearly, within the limits of our relatively poor statistics, we find
that Seyfert galaxies do not prefer a different environment than that
of the other emission line galaxies.
In order to test the dependence of the Seyfert phenomenon on the
interaction of galaxies with close companions rather than with the
general environment, we compute for all Seyferts and emission line
galaxies the projected linear distance to their nearest neighbor, the
$nn$-distance. We limit the search of companions to galaxies that are
closer than 3000 km s$^{-1}$\ along the line of sight.
We find that the distribution of $nn$-distances
of the sample of Seyfert galaxies is not significantly different from
that of all emission line galaxies.
We also consider the frequency of companions at projected linear
distances $d < 0.15 h^{-1}$ Mpc. We have 7 Seyfert galaxies with
such a close companion (33\%) and 315 (23\%) emission lines galaxies.
One of the 7 Seyferts is a member of a binary system; the remaining
six Seyferts are members of groups. Even if, taken at face value, the
higher frequency of close companions observed among Seyfert galaxies
supports a causal connection between gravitational interaction and the
Seyfert phenomenon, these frequencies are not significantly different.
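The statement that these close-companion frequencies are not significantly different can be checked with an exact test on the two proportions. In the sketch below the emission-line denominator of 1360 is an assumption inferred from the quoted 23\%; it is not stated explicitly in the text.

```python
from scipy.stats import fisher_exact

# Close companions at d < 0.15/h Mpc:
# 7 of 21 Seyferts (33%) vs. 315 emission-line galaxies (23%);
# the e.l. denominator of 1360 is assumed from the quoted percentage.
table = [[7, 21 - 7],
         [315, 1360 - 315]]
odds_ratio, p = fisher_exact(table, alternative="two-sided")

# An odds ratio above 1 reflects the nominal excess among Seyferts,
# but with only 21 Seyferts the excess is not statistically significant.
print(f"odds ratio = {odds_ratio:.2f}, p = {p:.2f}")
```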
We note that members of close angular pairs ($\theta < 24.6$
arcsec) in the original target list for ESP are more frequently missing
from the redshift survey than other objects
(Vettolani \etal 1998). This bias, due to OPTOPUS mechanical constraints,
could hide a possible excess of
physical companions of Seyfert galaxies.
In order to estimate how strongly our result could be affected by this
observational bias, we identify the nearest neighbors of
Seyfert and emission line galaxies from a list including both
galaxies with redshift and objects
that have not been observed. When we compute projected linear distances
to objects that have not been observed, we assume that they are at the
same redshift as their candidate companion galaxy. As before, we do not
find significant differences between the new $nn$-distributions of
Seyferts and $e.l.$-galaxies \ .
This result demonstrates that the higher average incompleteness of
close angular pairs does not affect our main conclusions: a) Seyfert
galaxies within ESP are found as frequently within groups as other
emission line galaxies, b) Seyfert galaxies show a small but not significant
excess of close physical companions relative to the other emission line
galaxies. We point out again that the sample
of Seyferts is rather small and the statistical uncertainties
correspondingly large.
\medskip \medskip
\begin{table*}
\caption[]{ Clusters within ESP}
\begin{flushleft}
\begin{tabular}{crrcccccc}
\hline\noalign{\smallskip}
ID & ESP & $N_{mem}$\ & $\alpha_{1950}$ & $\delta_{1950}$ & R & $z_{est}$ & $z_{ESP}$& $\sigma$ \\
& & & ($^h~^m~^s$) &($^o$~'~'') & & & & km s$^{-1}$\ \\
\noalign{\smallskip}
\hline\noalign{\smallskip}
E0163 & 6 & 3 & 22~32~40 & -40~38~41 & & &0.13535& 121 \\
E0169 & 7 & 11 & 22~34~12 & -39~50~51 & & &0.06294& 282 \\
S1055 & 21 & 9 & 22~39~43 & -40~16~07 &0 & &0.02901& 102 \\
A4068 & 88 & 9 & 23~57~08 & -39~46~59 &0 & 0.07151 &0.10261& 700 \\
E0435 & 121 & 8 & 00~17~31 & -40~40~37 & & &0.15073& 334 \\
A2769 & 126 & 6 & 00~21~45 & -39~53~49 &0 & 0.15708 &0.14020& 419 \\
A2771 & 128 & 18 & 00~21~50 & -40~26~49 &0 & 0.06260 &0.06876& 268 \\
A2828 & 176 & 5 & 00~49~10 & -39~50~54 &0 & 0.13133 &0.19676& 468 \\
A2840 & 183 & 34 & 00~52~01 & -40~04~19 &1 & 0.10460 &0.10618& 339 \\
A2852 & 192 & 12 & 00~57~00 & -39~54~19 &0 & 0.17581 &0.19845& 235 \\
E0113 & 196 & 16 & 00~58~21 & -40~31~05 & & &0.05449& 372 \\
A2857 & 200 & 8 & 01~00~06 & -40~12~42 &1 & 0.19092 &0.19755& 504 \\
E0519 & 205 & 43 & 01~02~36 & -40~03~02 & & &0.10637& 319 \\
S0127 & 213 & 25 & 01~05~27 & -40~21~08 &0 & &0.10498& 505 \\
A2874 & 216 & 25 & 01~06~08 & -40~36~01 &1 & 0.15812 &0.14191& 817 \\
E0529 & 218 & 17 & 01~07~40 & -40~40~34 & & &0.10483& 282 \\
E0546 & 231 & 7 & 01~19~27 & -39~53~07 & & &0.11909& 424 \\
\noalign{\smallskip}
\hline
\end{tabular}
\end{flushleft}
\end{table*}
\section{Clusters and Rich Systems} \medskip
Within our survey lie the centers of 9 ACO clusters,
5 ACOS clusters and 12 EDCC clusters.
Several entries of the three lists correspond to the same cluster.
Taking into account multiple identifications, there are 20
clusters listed within one or more of the three catalogs that
lie within ESP.
In our catalog we find at least one counterpart for 17 out of the 20
clusters. The three clusters that do not correspond to any of our
systems are ACO 2860, ACOS 11, and ACOS 32, all of Abell richness R =
0. ACO2860 is a very nearby object with a redshift, $z$ = 0.0268,
close to our minimum redshift. ACOS 11 and ACOS 32 are both distance
class D = 6 objects that may be either projection effects or real
clusters located beyond our redshift limit. We select the ESP
counterparts among the groups that are close to the clusters on the sky
and that have a redshift compatible with the distance class and/or
magnitude of the cluster. If more ESP groups are counterparts of a
cluster, we identify the cluster with the richest counterpart.
In Table 3 we list the name (column 1), and the coordinates (columns 4,
5) of the 17 clusters with ESP counterpart together with their richness
(column 6) and, if available, their redshift as estimated by Zucca
\etal (1993) (column 7). In the case of clusters listed in both EDCC and ACO
or ACOS, we give the ACO or ACOS identification number. In the same
Table 3 we also list the ID number of the cluster counterparts within
our catalog (column 2), their number of members (column 3), redshift
(column 8) and velocity dispersion (column 9).
There are 8 clusters with redshift estimated by Zucca \etal (1993). The
measured redshifts of 6 of these clusters are in good agreement with
the estimated redshift: the difference between the two redshifts is of
the order of 10\%, less than the 20\% uncertainty on the estimated
redshifts. For the remaining 2 clusters, ACO 2828 and ACO 4068, the
estimated redshift is significantly smaller than our measured redshift.
The projection of the foreground systems ESP 175 and ESP 178 within
the Abell radius of ACO 2828 could explain the inconsistency between
estimated and measured redshift for this cluster. In the case of ACO
4068 we do not find any foreground/background system within ESP. ACO
4068 is very close to the northern declination boundary of the ESP
strip. An inspection of the COSMOS catalog just outside the boundary of
the OPTOPUS field containing ACO 4068 shows that a significant part of
this cluster lies outside our redshift survey and therefore
background/foreground projection could still be the cause of the
inconsistency between its estimated and measured redshifts.
We also note that EDCC163 and ACOS1055 are among the most incomplete
systems in our catalog. In the fields of EDCC163 ($N_{mem}$\ = 3) and ACOS1055
($N_{mem}$\ = 9) the number of objects without redshift is
16 and 63 respectively.
We will not consider these two clusters in what follows.
In panel a) and panel b) of Figure 10 we plot, respectively, $N_{mem}$\ and
$\sigma_{cz}$\ as a function of $cz$\ . As expected, clusters (represented by large
dots) populate the highest part of both diagrams at all redshifts. In
both diagrams, mixed with clusters, there are also ESP groups
that have not been identified as clusters.
\begin{figure}
\epsfysize=9cm
\epsfbox{figure10.ps}
\caption[]{$N_{mem}$\ (top panel) and $\sigma_{cz}$\ (bottom panel) as a function of $cz$\ .
Large dots are ESP counterparts of 2-D ACO and/or EDCC catalogs.
}
\end{figure}
\begin{figure}
\epsfysize=9cm
\epsfbox{figure11.ps}
\caption[]{ ESP groups (crosses) with 25000 km s$^{-1}$\
$<$ $cz$\ $<$ 45000 km s$^{-1}$\
in the $\sigma_{cz}$\ -- $N_{mem}$\ plane.
Large dots are ESP counterparts of 2-D ACO and/or EDCC catalogs,
smaller dots are ``cluster-like'' groups. The ESP counterparts
of ACO and/or EDCC clusters are labeled with their ESP ID number (Table 1).
}
\end{figure}
The completeness of bidimensional cluster catalogs is an important
issue for cosmology (van Haarlem \etal 1997) since the density of these
clusters and their properties are used as constraints on cosmological
models (e.g. Frenk \etal 1990, Bahcall \etal 1997, Coles \etal 1998).
It is therefore interesting to determine whether there are other
systems selected in redshift space that have properties similar to
those of the cluster counterparts but that have escaped 2-D
identification.
We limit our search for ``cluster-like'' groups to the velocity range
25000 $<$ $cz$\ $<$ 45000 km s$^{-1}$\ . Within this range the selection function
is rather stable and relatively close to its maximum. In our catalog
we identify the counterparts of 8 2-D clusters within this redshift range.
Two of the eight clusters are of richness class R=1
(ACO2840=ESP183 and ACO2874=ESP216).
The minimum number of members of these clusters is 6
and the lowest velocity dispersion of the 8 clusters
is 280 km s$^{-1}$\ .
Apart from the counterparts of the 8 clusters, we find 11 additional ESP groups
that satisfy all three conditions 25000 $<$ $cz$\ $<$ 45000 km s$^{-1}$\ ,
$N_{mem,min} \geq 6$, and $\sigma_{cz,min} \geq 280$ km s$^{-1}$\ . These groups are $\simeq$ 10\% of all groups in this redshift interval and
we list them in Table 4.
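The selection described above amounts to a simple three-way cut. The sketch below applies it to a few illustrative records: ESP 48 is the first entry of Table 4, ESP 183 is the ACO 2840 counterpart (its $cz$ converted here from the redshift in Table 3), and the poor group with ID 12 is invented for contrast. In the actual analysis, counterparts of 2-D clusters such as ESP 183 are of course excluded from the ``cluster-like'' list.

```python
# Group records: (ID, N_mem, cz [km/s], sigma_cz [km/s]).
# ESP 48 and ESP 183 are taken from the paper's tables; ID 12 is a
# hypothetical poor group added only to show a rejected case.
groups = [
    (48, 7, 30231, 333),
    (12, 4, 28000, 150),
    (183, 34, 31832, 339),
]

def is_cluster_like(n_mem, cz, sigma):
    """Selection used in the text: 25000 < cz < 45000 km/s,
    N_mem >= 6 and sigma_cz >= 280 km/s."""
    return 25000 < cz < 45000 and n_mem >= 6 and sigma >= 280

selected = [gid for gid, n, cz, s in groups if is_cluster_like(n, cz, s)]
print(selected)
```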
\begin{table}
\caption[]{ Cluster-like Groups }
\begin{flushleft}
\begin{tabular}{rrcccc}
\hline\noalign{\smallskip}
ESP & $N_{mem}$\ & $ \alpha_{1950}$ & $ \delta_{1950}$ & $cz$ & $ \sigma$ \\
 & & ($^h~^m~^s$) & ($^o$~'~'') & km s$^{-1}$\ & km s$^{-1}$\ \\
\noalign{\smallskip}
\hline\noalign{\smallskip}
48 & 7& 23~26~03 & -39~52~45 & 30231& 333 \\
96 & 8& 00~02~04 & -40~29~17 & 29329& 386 \\
124 & 17& 00~20~33 & -40~33~37 & 39102& 283 \\
130 & 8& 00~23~02 & -40~15~47 & 41898& 284 \\
155 & 9& 00~40~39 & -40~00~46 & 39280& 349 \\
186 & 7& 00~54~16 & -40~25~00 & 30163& 319 \\
190 & 13& 00~55~29 & -40~34~41 & 31033& 399 \\
195 & 8& 00~58~10 & -40~06~00 & 31728& 305 \\
201 & 11& 01~00~34 & -40~03~30 & 27225& 429 \\
203 & 10& 01~02~28 & -40~18~29 & 27298& 361 \\
226 & 12& 01~14~57 & -39~52~45 & 36298& 432 \\
\noalign{\smallskip}
\hline
\end{tabular}
\end{flushleft}
\end{table}
In a $\sigma_{cz}$\ -- $N_{mem}$\ plane, Figure 11, the eleven ``cluster-like'' groups
occupy a ``transition region'' between clusters and groups.
First we note that, in this plane, the two counterparts of the R=1 ACO clusters
(ESP183 and ESP216) are very distant from the ``cluster-like'' groups.
The same holds true for the only rich EDCC cluster that is not an ACO cluster,
EDCC519. We conclude that no rich cluster is missing from 2-D catalogs in
the region of the sky covered by the ESP survey.
This conclusion is reassuring, even if it does not allow us to
discuss the problem of the completeness of rich 2-D clusters in general
because it is based on a small number of objects.
In the case of the more numerous poorer clusters, Figure 11 shows that
several systems could be missing from the 2-D list.
The boundaries of the cluster and group regions in the $\sigma_{cz}$\ -- $N_{mem}$\ plane
are blurred by the OPTOPUS mask and by the
narrow width of the ESP survey. It is
therefore difficult to give a precise estimate of
how many ``cluster-like'' groups should be considered ``missing''
from bidimensional catalogs.
That poor 2-D clusters and ``cluster-like'' 3-D groups are probably
the same kind of systems is confirmed by the fact that they have
the same fraction of $e.l.$-galaxies \ , a higher value
than is typical of richer clusters.
The 11 ``cluster-like'' groups have a total of 110 members,
43 of which are $e.l.$-galaxies \ : $f_{e,cluster-like}$ = 0.39. The 4 poor
clusters that have $N_{mem}$\ $\le $ 17 include
39 members and have $f_{e,poor~clusters}$ = 0.41.
We recall here that for all ESP counterparts of clusters we find
$f_{e,clusters}$ = 0.25.
In conclusion, the comparison of ESP systems with ACO, ACOS and EDCC
clusters indicates that the ``low mass'' end of the distribution
of clusters is poorly represented in 2-D catalogs; on the other hand,
the 2-D catalogs appear reasonably complete for high mass clusters.
\section{Summary}
In this paper we search objectively and analyze groups of galaxies in
the recently completed ESP survey ($23^{h} 23^m \le \alpha_{1950} \le 01^{h} 20^m $ and $22^{h} 30^m \le \alpha_{1950} \le
22^{h} 52^m $; $ -40^o 45' \le \delta_{1950} \le -39^o 45'$).
We identify 231 groups above the number overdensity threshold $\delta\rho/\rho$\ =80
in the redshift range 5000 km s$^{-1}$\ $\le cz \le $ 60000 km s$^{-1}$\ .
These groups contain 1250 members, 40.5\% of the 3085 ESP
galaxies within the same $cz$\ range. The median velocity dispersion of
ESP groups is $\sigma_{ESP,median}$ = 194 km s$^{-1}$\ (at the redshift
of the system and taking into account measurement errors). We verify that
our estimates of the average velocity dispersions are not biased by the
geometry of the ESP survey which causes most systems to
be only partially surveyed.
The groups we find trace very well the geometry of the large scale
distribution of galaxies, as they do in
shallower surveys. Because groups are also numerous,
they constitute an interesting characterization of the large scale structure.
The following are our main results on the properties of groups
that set interesting ``constraints'' on cosmological models:
\begin{itemize}
\item the ratio of members to non-members is $f_{ESP,mem}$
= 0.68 $\pm$ 0.02. This value is in very close agreement with the value
found in shallower surveys, once the few richest clusters
(e.g. Coma and Virgo) are neglected.
\item the ratio of groups to the number of non-member galaxies is
$f_{ESP,groups}$ = 0.13 $\pm$ 0.01, also in very close agreement with the value
found in shallower surveys.
\item the distribution of velocity dispersions of ESP groups
is not distinguishable from those of CfA2N and SSRS2
groups.
\end{itemize}
These results are of particular interest because
the ESP group catalog is five times deeper than any other wide-angle
shallow survey group catalog and the number
of large scale features explored is correspondingly larger. As a consequence,
the properties of ESP groups are more stable with respect to
possible structure-to-structure variations. The fact that
the properties of ESP groups agree very well with those
of CfA2N and SSRS2 groups indicates that structure-to-structure
variations are not large and that the properties of groups we find
can be considered representative of the local universe.
As far as the richest systems (clusters) are concerned,
we identify ESP counterparts for 17 out of 20 2-D
selected ACO and/or EDCC clusters. Because the volume
of ESP is comparable to the volume of individual shallower surveys, it is
not big enough to include a fair sample of
clusters. The variations from survey to survey in the number and
properties of clusters are large.
Turning our attention to properties of galaxies
as a function of their environment, we find
that:
\begin{itemize}
\item the Schechter luminosity function of galaxies in groups
has a brighter $M^*$ ($M^* = -19.80$) with respect to
non--member galaxies ($M^* = -19.52$); the slope $\alpha$ ($\simeq -1.2$)
does not change significantly between the two cases.
\item $M^*$ becomes brighter for members
of groups of increasing richness. The parameter
$\alpha$ remains almost constant; only in the case of the richest groups
do we find a marginally significant steepening of the slope $\alpha$.
\item 34\% (467/1360) of ESP galaxies with
detectable emission lines are members of groups. The fraction of
galaxies without detectable emission lines in groups is significantly
higher: 45\% (783/1725).
\item the fraction of $e.l.$-galaxies \ in the field is
$f_e$ = 49\%; it decreases to $f_e$ = 46\% for poor groups and to
$f_e$ = 33\% for richer groups. For the ESP counterparts of ACO and/or EDCC
clusters $f_e$ = 25\%.
\end{itemize}
We conclude that luminosity segregation is at work
in the ESP survey: galaxies in the dense environment of
groups are, on average, brighter than ``field'' galaxies.
Galaxies in groups are also less likely to have detectable emission lines
in their spectra. In fact, we find a gradual decrease of the
fraction of emission line galaxies among members of systems of
increasing richness: the morphology-density relation clearly extends over the
whole range of densities from groups to clusters.
As a final note, we identify 12 Seyfert 1 galaxies and 9 Seyfert 2 galaxies.
We find that: a) Seyfert
galaxies within ESP are members of groups as frequently as other
emission line galaxies, and b) Seyfert galaxies show a small but not significant
excess of close physical companions relative to the other emission line
galaxies. We point out again that the sample
of Seyferts is rather small and the statistical uncertainties
correspondingly large.
\begin{acknowledgements}
We thank the referee for his careful reading of the manuscript and his
helpful suggestions. This work has been partially supported through
NATO Grant CRG 920150, EEC Contract ERB--CHRX--CT92--0033, CNR Contract
95.01099.CT02 and by Institut National des Sciences de l'Univers and
Cosmology GDR.
\end{acknowledgements}
\section{Introduction}
The remarkable achievement of Bose--Einstein condensation (BEC)
of alkali
atoms in magnetic
traps~[1--3]
has created an enormous interest in the properties of dilute Bose gases.
A very recent review on the trapped Bose gases can be found in
Ref.~\cite{dalfovo} and an overview in Ref.~\cite{chris}.
However, it is useful to gain insight into the simpler problem
of an interacting homogeneous Bose gas by
applying modern methods from thermal field theory
before attacking the full problem of atoms trapped in an external potential.
The homogeneous Bose gas at zero temperature
was intensively studied in the 1950s~\cite{Lee-Yang,Wu}.
The properties of this system
can be calculated as an expansion in powers of $\sqrt{\rho a^3}$, where
$\rho$ is the density of the gas and $a$ is the S--wave scattering length.
At zero temperature this expansion is equivalent to the loop expansion.
The leading quantum correction to the ground state energy was calculated
in 1957 by Lee and Yang~\cite{Lee-Yang}. The complete
two--loop result was recently
obtained by Braaten and Nieto~\cite{eric1}.
The system has also been investigated at finite temperature. Since this
model is in the same universality class as the three--dimensional $xy$--model,
one expects the phase transition to be second order~\cite{zinn}.
However, both Bogoliubov theory~\cite{Lee-Yang}
and two--body $t$--matrix theory~\cite{griffin}
predict a first order phase transition.
They fail because they do not take into account many--body effects
of the medium~\cite{t-ma}.
In order to resolve this problem,
Bijlsma and Stoof used a many--body $t$--matrix
approximation~\cite{t-ma}. In this approximation the propagators are
self--consistently determined in the self-energy diagrams
(in contrast with the Bogoliubov theory, where free propagators are used
in the self-energy graphs). This approach
yields a second order
phase transition, but predicts the same critical temperature
as that of a non--interacting gas.
Haugset, Haugerud and Ravndal~\cite{haug} have recently studied the
phase transition of this system. By self--consistently solving a gap equation
for the effective chemical potential, they are effectively
summing up daisy and superdaisy diagrams. The inclusion of these diagrams
is essential in order to satisfy the Goldstone theorem at finite temperature.
Within this approximation the phase transition is second order, but
there is no correction to $T_c$ compared with the non--interacting gas.
One very powerful method of quantum field theory is the Wilson
renormalization group (RG)~\cite{wilson,joe}.
Renormalization group techniques have been applied
to a homogeneous Bose gas by several authors [15--18].
Bijlsma and Stoof have made an extensive quantitative study of this system
and in particular calculated non--universal quantities such as the
critical temperature and the superfluid fraction below $T_c$~\cite{henk}.
The non--universal quantities depend on the details of the
interactions between the
atoms.
Using renormalization group methods, they demonstrate
that the phase transition is indeed second order, and that the critical
temperature increases by approximately 10\%
for typical alkali atoms compared with that of a
free Bose gas.
A review summarizing the current understanding of homogeneous
Bose gases can be found in~\cite{Shi-Griffin}.
The critical exponents for the phase transition from a superfluid to
a normal fluid observed in liquid $^4$He have been measured very
accurately~\cite{zinn}.
On the theoretical side, the most precise calculations to date involve the
$\epsilon$--expansion.
The agreement between the five loop calculations up to $\epsilon^5$
and experiment is excellent, but one should bear in mind that the series
is actually asymptotic.
The $\epsilon$--expansion works extremely well for scalar theories,
but not for gauge theories~\cite{arnold},
and so it is important
to have alternative methods to compute the critical exponents.
The momentum shell
renormalization group provides one such alternative, and the literature
on the calculational techniques for obtaining the critical exponents
is now vast~[22--26].
In the present work we reconsider the nonrelativistic homogeneous
spin--zero Bose gas at finite
temperature using RG techniques, including higher order operators than
were included in Ref.~\cite{henk}.
We focus in particular on the critical behavior and
the calculation of critical exponents.
The paper is organized as follows. In section II we briefly
discuss the symmetries and interactions underlying the effective
Lagrangian of the Bose gas.
In section III
the renormalization group is discussed, and we derive the flow equation for
the one--loop effective potential.
In section IV
the flow equation for the one-loop RG--improved effective potential
is found.
In section V we calculate fixed points and critical exponents, using
different cutoff functions and
we address the question of scheme dependence of the
results for the critical exponents~\cite{scheme}.
In section VI we summarize and conclude.
In the Appendix we prove that the one--loop renormalization group flow
equation is exact to
leading order in the derivative expansion.
Throughout the paper we use the imaginary time formalism.
\section{Effective Lagrangian}
In this section we briefly summarize the symmetries and interactions
underlying the effective Lagrangian of a dilute Bose gas. A more detailed
description can be found in~\cite{chris,eric2}.
The starting point of the description of
a homogeneous
Bose gas is an effective quantum field theory valid for low
momenta~\cite{e+g}.
As long as the momenta $p$ of the atoms are small compared to their
inverse size,
the interactions are effectively local and we can describe them
by a local quantum field theory. Since the momenta are assumed to be so small
that the atoms are nonrelativistic, the Lagrangian is
invariant under Galilean transformations.
There is also an $O(2)$--symmetry, and for simplicity we take the atoms
to either have zero spin or to be maximally polarized.
We can then describe them by a single complex
field:
\begin{eqnarray}
\psi={1\over \sqrt{2}}\left[\psi_1+i\psi_2\right].
\end{eqnarray}
The interactions of two atoms can be described in terms of a two-body
potential $V({\bf r}_1-{\bf r}_2)$. The potential is repulsive at very
short distances, has an attractive well at intermediate distances, and a long--range Van der Waals tail that behaves like $1/R^6$.
For the example of $^{87}$Rb the minimum of the well is around
$R_0=5a_0$, where $a_0$ is the Bohr radius. The
depth is approximately $0.5$ eV and there are around
100 molecular bound states in the well.
The S-wave scattering lengths $a$ of typical alkali atoms are much larger than
$R_0$ (e.g. $a\approx 110a_0$ for $^{87}$Rb) because the natural scale for
the scattering length is set by
the Van der Waals interaction~\cite{chris}.
The Euclidean effective Lagrangian then reads
\begin{eqnarray}
\label{l}
{\cal L}_E=\psi^{\dagger}\partial_{\tau}\psi+\nabla\psi^{\dagger}
\cdot\nabla\psi-\mu\psi^{\dagger}\psi+
g(\psi^{\dagger}\psi)^2+
\ldots \,
\end{eqnarray}
Here, $\mu$ is the chemical potential. We have set $\hbar=1$ and $2m=1$.
The interaction $g(\psi^{\dagger}\psi)^2$
represents $2\rightarrow 2$ scattering and the coupling
constant $g$ is proportional to the $S$--wave scattering length $a$:
\begin{equation}
g=4\pi a.
\end{equation}
The ellipses indicate all symmetry preserving operators that are higher
order in the number of
fields
and derivatives.
In the following we consider the dilute limit $\rho a^3\ll 1$, which implies
that we only need to retain the quartic interaction in the bare Lagrangian,
Eq.~(\ref{l})~\cite{henk}.
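As a rough numerical illustration of the diluteness condition (the density used below is a typical trapped--gas value chosen purely for illustration; it is not a number taken from this paper):

```python
# Diluteness parameter rho*a^3 for 87Rb with a ~ 110*a0.
# The density rho is an assumed typical value, for illustration only.
a0 = 5.29e-9          # Bohr radius in cm
a = 110 * a0          # s-wave scattering length, ~5.8e-7 cm
rho = 1e14            # number density in cm^-3 (assumption)
diluteness = rho * a ** 3
print(f"rho*a^3 = {diluteness:.1e}")   # ~2e-5, so rho*a^3 << 1
```

The parameter comes out of order $10^{-5}$, deep in the dilute regime where keeping only the quartic interaction is justified.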
The action can be written as
\begin{eqnarray}
\label{s1}
S\left[\psi_1,\psi_2\right]=\int_0^{\beta}d\tau\int d^dx\Bigg\{
{i\over2}\epsilon_{ij}\psi_i\partial_{\tau}\psi_j
+{1\over2}\nabla\psi_i\cdot\nabla\psi_i
-{\mu\over2}\psi_i\psi_i
+{g\over4}(\psi_i\psi_i)^2
\Bigg\},
\end{eqnarray}
where $d$ is the number of spatial dimensions and repeated indices are summed
over.
In a field theoretic language, BEC is described
as spontaneous symmetry breaking of the $O(2)$--symmetry and the complex
field
$\psi$ acquires a nonzero vacuum expectation value $v$.
Due to the $O(2)$--symmetry, the expectation value $v$ can be chosen to be real, and
so we shift the fields according to
\begin{eqnarray}
\label{para}
\psi_1\rightarrow v+\psi_1,\hspace{0.7cm}\psi_2\rightarrow \psi_2.
\end{eqnarray}
Inserting~(\ref{para}) into~(\ref{s1}) and dividing the action
into a free piece $S_{\mbox{\scriptsize free}}[\psi_1,\psi_2]$
and an interacting part
$S_{\mbox{\scriptsize int}}[\psi_1,\psi_2]$
we obtain
\begin{eqnarray}\nonumber
S_{\mbox{\scriptsize free}}[v,\psi_1,\psi_2]
&=&\int_0^{\beta}d\tau\int d^dx\left\{-{1\over2}\mu v^2+
{i\over2}\psi_i\epsilon_{ij}\partial_{\tau}\psi_j+
\frac{1}{2}\psi_1\left[-\nabla^2+V^{\prime}+V^{\prime\prime}v^2
\right]\psi_1\right.\\
\label{s3}
&&+\left.
\frac{1}{2}\psi_2\left[-
\nabla^2+V^{\prime}
\right]\psi_2
\right\}\\
\label{s4}
S_{\mbox{\scriptsize int}}[v,\psi_1,\psi_2]
&=&\int_0^{\beta}d\tau\int d^dx
\Bigg\{{g\over 4}\left[v^4+4v\psi_1^3+4v\psi_1\psi_2^2+\psi_1^4+2\psi_1^2\psi_2^2+\psi^4_2\right]
\Bigg\}.
\end{eqnarray}
Here $V(v)$ is the classical potential
\begin{eqnarray}
V(v)=-{1\over2}\mu v^2+{g\over4}v^4.
\end{eqnarray}
We will use primes to denote differentiation with respect to $v^2/2$, so
that $V^{\prime}=-\mu+gv^2$ and $V^{\prime\prime}=2g$.
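As a quick sanity check on this convention (an illustrative snippet, not code from the paper): with $x=v^2/2$ the classical potential is $V(x)=-\mu x+gx^2$, so the primes give $V^{\prime}=-\mu+gv^2$ and $V^{\prime\prime}=2g$, which finite differences confirm.

```python
# With x = v^2/2 and V(x) = -mu*x + g*x^2, primes (d/dx) give
# V' = -mu + g*v^2 and V'' = 2*g.  Checked by finite differences.
mu, g, v = 0.3, 2.0, 1.7
x, h = v * v / 2, 1e-4
V = lambda y: -mu * y + g * y * y
Vp = (V(x + h) - V(x - h)) / (2 * h)          # central difference for V'
Vpp = (V(x + h) - 2 * V(x) + V(x - h)) / h ** 2   # second difference for V''
assert abs(Vp - (-mu + g * v * v)) < 1e-9
assert abs(Vpp - 2 * g) < 1e-4
```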
The free propagator corresponding to Eq.~(\ref{s3})
is a $2\times2$ matrix and in momentum space it reads
\begin{eqnarray}
\Delta (\omega_{n},p)=\frac{1}{\omega_{{p}}^2+
\omega_{n}^2}\left(\begin{array}{cc}
\epsilon_{p}+V^{\prime}&\omega_{n}\\
-\omega_{n}&\epsilon_{p}+V^{\prime}+V^{\prime\prime}v^2
\end{array}\right),
\end{eqnarray}
where
\begin{eqnarray}\nonumber
\epsilon_p&=&p^2\\ \nonumber
\omega_n&=&2\pi n T\\
\label{cla}
\omega_p&=&
\sqrt{\left[\epsilon_p+V^{\prime}+V^{\prime\prime}v^2
\right]\left[\epsilon_p+V^{\prime}\right]}.
\end{eqnarray}
In the broken phase the quadratic part of the action, Eq.~(\ref{s3}),
describes the propagation of Bogoliubov modes
with the dispersion relation
\begin{eqnarray}
\omega_p=p\sqrt{\epsilon_p+2\mu}.
\end{eqnarray}
The dispersion is linear in the long wavelength limit, corresponding to
the massless Goldstone mode (phonons). This reflects the spontaneous
symmetry breakdown of the $O(2)$ symmetry. In the short wavelength limit the dispersion relation is
quadratic and that of a free nonrelativistic particle.
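The two limits of the Bogoliubov dispersion can be checked in a few lines (illustrative only; units $\hbar=2m=1$ as above):

```python
# Bogoliubov dispersion omega(p) = p*sqrt(p^2 + 2*mu), hbar = 2m = 1.
import math

mu = 1.0
omega = lambda p: p * math.sqrt(p * p + 2 * mu)

# long wavelengths: linear phonon branch, omega ~ c*p with c = sqrt(2*mu)
p = 1e-4
assert abs(omega(p) / p - math.sqrt(2 * mu)) < 1e-6
# short wavelengths: free-particle branch, omega ~ p^2 = epsilon_p
p = 1e4
assert abs(omega(p) / p ** 2 - 1.0) < 1e-6
```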
\section{The One--Loop Effective Potential at Finite Temperature}
We are now ready to calculate quantum
corrections to the classical potential $V(v)$.
In this section we compute the one--loop effective potential which we will
``RG improve'' in the next section. This method of deriving RG flow equations
is conceptually and technically simpler than the direct application of
exact or momentum-shell RG techniques which is demonstrated in the Appendix.
The one-loop effective potential reads
\begin{eqnarray}\nonumber
U_{\beta}(v)&=&V(v)+{1\over2}\mbox{Tr}\ln\Delta^{-1}(\omega_n,p)\\
&=&-{1\over2}\mu v^2+{g\over4}v^4
+{1\over 2}T\sum_n\int {d^dp\over(2\pi)^d}\ln[\omega_n^2+\omega_p^2].
\end{eqnarray}
The sum is over the Matsubara frequencies, which take on the values
$\omega_n=2\pi n T$, and the integration is over $d$--dimensional
momentum space.
We proceed by dividing the modes in the path integral into slow
and fast modes
separated by an infrared cutoff $k$. This is done by introducing a cutoff
function $R_k(p)$ which we keep general for the moment.
By adding a term to the action Eq.~(\ref{s1}):
\begin{eqnarray}
\label{eterm}
S_{\beta,k}[\psi_1,\psi_2]=S[\psi_1,\psi_2]+
\int_{0}^{\beta}d\tau\int d^dx\,
\mbox{$1\over2$}R_k(\sqrt{-\nabla^2})\nabla\psi_i\cdot\nabla\psi_i,
\end{eqnarray}
the modified propagator reads
\begin{eqnarray}
\Delta_k(\omega_{n},p)=\frac{1}{\omega_{{p}}^2+
\omega_{n}^2}\left(\begin{array}{cc}
\epsilon_{p}(R_k(p)+1)+V^{\prime}&\omega_{n}\\
-\omega_{n}&\epsilon_{p}(R_k(p)+1)+V^{\prime}+V^{\prime\prime}v^2
\end{array}\right),
\end{eqnarray}
and the modified dispersion relation is
\begin{eqnarray}
\label{cla2}
\omega_{p,k}&=&
\sqrt{\left[\epsilon_p(R_k(p)+1)+V^{\prime}+V^{\prime\prime}v^2
\right]\left[\epsilon_p(R_k(p)+1)+V^{\prime}\right]}.
\end{eqnarray}
By a judicious choice of $R_k(p)$,
we can suppress the low momentum modes in the path integral
and leave the high momentum modes essentially
unchanged. In section~\ref{res} we return to the actual choice of
cutoff functions.
It is useful to introduce a blocking function
$f_k(p)$ which is defined through
\begin{eqnarray}
R_k(p)={1-f_k(p)\over f_k(p)}.
\end{eqnarray}
The blocking function satisfies
\begin{eqnarray}
\lim_{p\rightarrow 0}f_k(p)=0,\hspace{1cm}
\lim_{p\rightarrow \infty}f_k(p)=1.
\end{eqnarray}
This implies that the function $R_k(p)$ satisfies
\begin{eqnarray}
\lim_{p\rightarrow 0}R_k(p)=\infty,\hspace{1cm}
\lim_{p\rightarrow \infty}R_k(p)=0.
\end{eqnarray}
These properties ensure that the low--momentum modes are suppressed by making
them very heavy and the high--momentum modes are left essentially unchanged.
We return to the one--loop effective potential, which now becomes
\begin{eqnarray}
\label{modi}
U_{\beta,k}(v)=V(v)+{1\over2}T\sum_n\int{d^dp\over(2\pi)^d}
\ln \left[\omega_n^2+\omega_{p,k}^2\right].
\end{eqnarray}
Here, the subscript $k$ indicates that the effective potential depends on the
infrared cutoff.
Upon summation over the Matsubara frequencies, we obtain
\begin{eqnarray}
\label{oneloop}
U_{\beta,k}(v)=
V(v)+\frac{1}{2}\int{d^dp\over(2\pi)^d}\bigg[\omega_{{p,k}}+
2T\ln\left[1-e^{-\beta\omega_{{p,k}}}\right]\bigg].
\end{eqnarray}
The first term in the brackets is the $T=0$ piece and represents the zero--point fluctuations. The second term includes thermal effects.
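The frequency sum that turns Eq.~(\ref{modi}) into Eq.~(\ref{oneloop}), and that produces the thermal factor $[1/(2\omega)+1/(\omega(e^{\beta\omega}-1))]$ appearing below in Eq.~(\ref{rg1}), is the standard Matsubara identity; the following snippet (an illustrative numerical check, not from the paper) verifies it by brute-force truncation:

```python
# Check T * sum_n 1/(omega_n^2 + omega^2)
#   = 1/(2*omega) + 1/(omega*(exp(beta*omega) - 1))
#   = coth(omega/(2T))/(2*omega),   with omega_n = 2*pi*n*T.
import math

T, omega = 1.0, 0.7
beta = 1.0 / T
N = 200000   # truncation of the frequency sum; the tail falls off as 1/N
s = sum(T / ((2 * math.pi * n * T) ** 2 + omega ** 2)
        for n in range(-N, N + 1))
closed = 1 / (2 * omega) + 1 / (omega * (math.exp(beta * omega) - 1))
assert abs(s - closed) < 1e-5
```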
Differentiation with respect to the infrared cutoff $k$ yields:
\begin{eqnarray}
\label{rg1}
k{\partial\over \partial k}U_{\beta,k}=-{k\over2}\int {d^dp\over (2\pi)^d}
\left({\partial R_k\over\partial k}\right)\left[{1\over2\omega_{p,k}}+
{1\over\omega_{p,k}(e^{\beta\omega_{p,k}}-1)}\right]\left[2\epsilon_{p,k}
+2V^{\prime}+V^{\prime\prime}v^2
\right].
\end{eqnarray}
Eq.~(\ref{rg1}), with $\epsilon_{p,k}\equiv\epsilon_p\left[R_k(p)+1\right]$, is the differential equation for the one--loop effective potential.
It is obtained by integrating out each mode independently,
where the feedback from the fast modes to the slow modes is completely
ignored. Since all modes are integrated out independently,
this is sometimes called the independent
mode approximation~\cite{mike1}.
Equation~(\ref{oneloop}) provides an inadequate description
of the system at finite
temperature in several ways. Since the minimum of the one-loop
effective potential
at finite temperature is shifted away from the
classical minimum, the Goldstone theorem
is not satisfied. This theorem is known to be satisfied for temperatures
below $T_c$ to all orders in perturbation theory~\cite{gold},
and any reasonable
approximation must incorporate that fact.
Secondly, it is clear from Eqs.~(\ref{cla2}) and (\ref{oneloop})
that the one-loop effective potential
has an imaginary part for all temperatures and for sufficiently
small values of the
field $v$, when the bare chemical potential is positive.
However, we know that
a thermodynamically stable state for $T\geq T_c$
corresponds to $v=0$ and so the effective potential is purely
real for sufficiently high temperatures.
More generally, ordinary perturbation theory breaks down at high temperature
due to infrared divergences and this has been known since the work
on the summation of ring diagrams in nonrelativistic QED in 1957 by
Gell-Mann and Brueckner~\cite{gell}.
In the next section we derive an RG equation, whose solution
has none of the above shortcomings.
\section{Renormalization Group Improvement}
In the previous section we derived the one--loop effective potential
at finite temperature and discussed the fact that it
is not capable of reliably describing
the system at finite temperature.
The lack of feedback from the fast modes to
the slow modes as we lower the infrared cutoff $k$ leads to a poor tracking
of the effective degrees of freedom causing the problems mentioned above.
The situation is
remedied by applying the renormalization group, which effectively
sums up important classes of Feynman diagrams~\cite{mike2}.
In order to obtain the differential equation for the RG--improved effective
potential, we do not integrate out all the modes between
$p=\infty$ and $p=k$ in one step. Instead, we
divide the integration volume into small
shells of thickness $\Delta k$, then lower the cutoff from $k$
to $k-\Delta k$ and repeat the one-loop calculation.
This is equivalent to replacing $V$ by $U_{\beta,k}$
on the right hand side
of Eq.~(\ref{rg1}),
making it self--consistent~\cite{mike1}:
\begin{eqnarray}
\label{rg2}
k{\partial \over \partial k}U_{\beta,k}
=-{k\over2}\int {d^dp\over (2\pi)^d}
\left({\partial R_k\over\partial k}\right)\left[{1\over2\omega_{p,k}}+
{1\over \omega_{p,k}(e^{\beta\omega_{p,k}}-1)}\right]\left[2\epsilon_{p,k}
+2U_{\beta,k}^{\prime}+U_{\beta,k}^{\prime\prime}v^2
\right],
\end{eqnarray}
where
\begin{eqnarray}
\label{ndisp}
\omega_{p,k}=\sqrt{\left[\epsilon_p(R_k(p)+1)+U^{\prime}_{\beta,k}
+U^{\prime\prime}_{\beta,k}v^2\right]\left[\epsilon_p(R_k(p)+1)+U^{\prime}_{\beta,k}\right]},
\end{eqnarray}
and the primes in Eqs.~(\ref{rg2}) and~(\ref{ndisp})
denote differentiation with respect to $v^2/2$.
The self--consistent Eq.~(\ref{rg2}) is not a perturbative approximation,
but is exact to leading order in the derivative expansion.
This equation is derived in the Appendix without performing a loop expansion.
Note that since
\begin{eqnarray}
2v{\partial U_{\beta,k}\over\partial v^2}={\partial U_{\beta,k}\over\partial v},
\end{eqnarray}
the dispersion relation at the minimum of the effective potential in the
broken phase reduces to
\begin{eqnarray}
\omega_{p,k=0}=p\sqrt{\epsilon_p+U^{\prime\prime}_{\beta,k=0}v^2}.
\end{eqnarray}
Hence,
the Goldstone theorem is automatically
satisfied for temperatures below $T_c$.
This equation interpolates between the bare theory for $k=\infty$
and $T=0$
and the physical theory at temperature $T$, for $k=0$, since we integrate out
both quantum and thermal modes as we lower the cutoff.
This implies that the boundary condition for the RG--equation is the
{\it bare} potential, $U_{\beta,k=\infty}(v)=V(v)$.
In Refs.~\cite{pepperoni1,berger} renormalization group ideas have been applied
to $\lambda\phi^4$ theory using the real time formalism.
In the real time formalism one can separate the free propagator into
a quantum and a thermal part~\cite{niemi},
and in~\cite{pepperoni1,berger}
the infrared cutoff is imposed only on the
thermal part of the propagator.
This implies that the theory interpolates between the physical theory
at $T=0$ and the physical theory at $T\neq 0$. Hence, the boundary condition
for the RG--equation in this approach is the physical effective potential at $T=0$.
However, if one imposes the infrared cutoff
on both the quantum and thermal parts of the propagator, one can derive
Eq.~(\ref{rg2}), showing that the two formalisms yield identical results.
We close this section by commenting on the choice of cutoff function.
It is clear from Eq.~(\ref{rg2}) that the non-perturbative flow equation
depends explicitly on the choice of $R_k(p)$.
We know that the nonzero
Matsubara modes are strongly suppressed at high temperature and can
be integrated out perturbatively; the important point is to treat the
zero mode correctly.
For a thorough discussion of various finite temperature cutoff
functions applied to relativistic $\lambda\phi^4$ theory see
Ref.~\cite{mike3}.
\section{Results}\label{res}
In this section we present our results for the numerical solution of the
renormalization group flow equation and the calculations of the fixed point
and critical exponents. We consider the cases of a sharp cutoff and a
smooth cutoff separately.
\subsection{Sharp Cutoff}
The sharp cutoff function is defined by the blocking
function $f_k(p)=\theta (p-k)$, which
is displayed in Fig.~\ref{cut} (solid line).
It provides a sharp separation between fast and slow modes.
With the sharp cutoff, the slow modes are
completely suppressed in the path integral, while the fast
modes are completely unaltered.
The advantage of using the sharp cutoff function compared to the smooth
cutoff functions considered in section~\ref{smooth} is that the integral
over $p$ can be done analytically, resulting
in a differential RG--equation. In this case Eq.~(\ref{rg2}) reduces to
\begin{eqnarray}
\label{rg}
k\frac{\partial}{\partial k}U_{\beta,k}=
-{1\over2}S_dk^d\Bigg[\omega_{k}+
2T\ln\left[1-e^{-\beta\omega_{{k}}}\right]\Bigg].
\end{eqnarray}
Here,
\begin{eqnarray}\nonumber
\omega_{{k}}&=&\sqrt{\left[\epsilon_{k}+U^{\prime}_{\beta,k}
+U^{\prime\prime}_{\beta,k}v^2\right]\left[\epsilon_{{k}}+U^{\prime}_{\beta,k}\right]}\\
S_d&=&{2\over(4\pi)^{d/2}\Gamma(d/2)}.
\end{eqnarray}
Eq.~(\ref{rg}) is derived in the Appendix.
We have solved Eq.~(\ref{rg}) numerically for $d=3$,
and the results for different values
of $T$ are shown in Fig.~\ref{rgep}. The curves clearly show that the phase
transition is second order. For $T<T_c$, the effective potential has a small
imaginary part, and we have shown only the real part in Fig.~\ref{rgep}.
The imaginary part of the effective
potential does, however, vanish for $T\geq T_c$ in contrast to the
independent mode approximation in which it does not.
The effective chemical potential $\mu_{\beta, k}$
as well as the quartic coupling constant
$g_{\beta, k}$
(defined as the discrete first and second derivatives of the
effective potential with respect to $v^2/2$) are displayed in Fig.~\ref{g2}
and both quantities vanish at the
critical point. The corresponding operators are relevant and must therefore
vanish at $T_c$, and we see that the renormalization group approach correctly
describes the behavior near criticality.
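To make the phrase ``discrete derivatives'' concrete, here is an illustrative sketch (not code from the paper) that recovers $\mu$ and $g$ from a sampled potential of the form of Eq.~(\ref{pol}) truncated at $N=2$; the numerical values of $\mu$ and $g$ are arbitrary test inputs:

```python
# mu_k = -U'(0) and g_k = U''(0), where primes denote d/dx with x = v^2/2.
mu, g = 0.4, 1.3
U = lambda x: -mu * x + 0.5 * g * x * x   # Eq. (pol) truncated at N = 2
h = 1e-3
Up = (-3 * U(0) + 4 * U(h) - U(2 * h)) / (2 * h)   # one-sided U'(0)
Upp = (U(0) - 2 * U(h) + U(2 * h)) / h ** 2        # one-sided U''(0)
assert abs(-Up - mu) < 1e-9    # recovers the effective chemical potential
assert abs(Upp - g) < 1e-6     # recovers the quartic coupling
```

Both one-sided formulas are exact for a quadratic function of $x$, so the couplings are recovered to machine precision here.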
Moreover, the sextic coupling $g_{\beta, k}^{(6)}$
goes to a non-zero constant at $T_c$.
The inclusion of wavefunction renormalization effects
turns the marginal operator
$g_{\beta, k}^{(6)}$ into an irrelevant operator that diverges
at the critical temperature~\cite{mike2}.
The success of describing the phase transition using the renormalization group
is due to its ability to properly track the relevant degrees of freedom.
The dressing of the coupling constants as we integrate out the fast modes
is taken care of by the renormalization group and this is exactly where the
independent mode approximation fails.
In order to investigate the critical behavior near fixed points,
we write the flow equation in dimensionless form using
\begin{eqnarray}\nonumber
\bar{\beta}&=&\beta k^2\\\nonumber
\bar{v}&=&\beta^{1/2}k^{(2-d)/2}v\\\nonumber
\bar{U}_{\bar{\beta},k}&=&\beta k^{-d}U_{\beta,k}\\%\nonumber
\label{coll}
\bar{\omega}_k&=&k^{-2}\omega_k.
\end{eqnarray}
This yields
\begin{eqnarray}
0&=&
\label{dr1}
\left[k{\partial\over \partial k}-{1\over 2}(d-2)\bar{v}{\partial\over
\partial\bar{v}}
+d\right]\bar{U}_{\bar{\beta},k}+{S_d\over 2}\bar{\beta}\bar{\omega}_k
+S_d\ln\left[
1-e^{-\bar{\beta}\bar{\omega}_k}\right].
\end{eqnarray}
The critical potential is obtained by neglecting the derivative with respect
to $k$ on
the left hand side of Eq.~(\ref{dr1}).
Expanding in powers of $\bar{\beta}\bar{\omega}_k$ we get
\begin{eqnarray}
\left[
-{1\over 2}(d-2)\bar{v}{\partial \over\partial\bar{v}}
+d\right]\bar{U}_{\bar{\beta},k}=
-{S_d\over 2}\bar{\beta}\bar{\omega}_k
-S_d\ln
\left[\bar{\beta}\bar{\omega}_k\right].
\end{eqnarray}
Taking the limit $\bar{\beta}\rightarrow 0$ and
ignoring the term which is independent of $v$ leads to
\begin{eqnarray}
\left[
-{1\over 2}(d-2)\bar{v}{\partial \over\partial \bar{v}}
+d\right]\bar{U}_{\bar{\beta},k}&=&-{S_d\over2}
\Bigg[\ln\left[1+\bar{U}^{\prime}\right]
+\ln\left[1+\bar{U}^{\prime}+\bar{U}^{\prime\prime}\bar{v}^2\right]\Bigg].
\end{eqnarray}
This is exactly the same equation as obtained by Morris
for a relativistic $O(2)$--symmetric scalar theory in $d$ dimensions
to leading order in the derivative
expansion~\cite{morris}.
Therefore, the results for the
critical behavior at leading order in the derivative
expansion will be the same as those obtained in the $d$--dimensional
$O(2)$--model
at zero temperature.
The above also demonstrates that the system behaves as
a $d$--dimensional one as the temperature becomes much higher
than any other scale in the problem
(dimensional crossover).
This is the usual dimensional reduction of field theories
at high temperatures, in which the nonzero Matsubara modes decouple
and the system can be described in terms of an effective field theory
for the $n=0$ mode in $d$ dimensions~\cite{lands}.
The effects of the nonzero Matsubara modes are encoded in the
coefficients of the three--dimensional effective theory.
The RG--equation~(\ref{rg1}) satisfied by $U_{\beta,k}[v]$ is highly nonlinear
and a direct measurement of the critical exponents from the numerical
solutions is very time-consuming. This becomes even worse as
one goes to higher orders in the derivative expansion and so it is important
to have an additional reliable approximation scheme for calculating critical
exponents. In the following we perform a polynomial expansion~\cite{poly}
of the effective potential, expand around $v=0$, and truncate
at $N$th order:
\begin{equation}
\label{pol}
U_{\beta ,k}(v)=-\mu_{\beta, k}{v^2\over2}+
{1\over2}g_{\beta,k}\left({v^2\over2}\right)^2+
\sum_{n=3}^{N}
\frac{g^{(2n)}_{\beta, k}}{n!}\left({v^2\over2}\right)^n
\end{equation}
The polynomial expansion turns the partial differential equation~(\ref{rg})
into a set of coupled ordinary differential equations.
In order to demonstrate the procedure we will show how the fixed points
and critical exponents are calculated at the lowest nontrivial order of
truncation ($N=2$). We write the equations in dimensionless form using
Eq.~(\ref{coll}) and
\begin{eqnarray}\nonumber
\bar{\mu}_{\bar{\beta},k}&=&k^{-2}\mu_{\beta,k}\\
\bar{g}^{}_{\bar{\beta},k}&=&\beta^{-1}k^{d-4}g^{}_{\beta,k}.
\end{eqnarray}
We then obtain the following set of
equations:
\begin{eqnarray}\nonumber
k\frac{\partial }{\partial k}\bar{\mu}_{\bar{\beta} ,k}&=&
-2\bar{\mu}_{\bar{\beta},k}+S_d\bar{\beta}
\bar{g}_{\bar{\beta} ,k}
\left[2n(\bar{\omega}_k)+1\right]\\
\label{d}k\frac{\partial }{\partial k}
\bar{g}_{\bar{\beta} ,k}
&=&-\epsilon\bar{g}_{\bar{\beta},k}+
S_d\bar{\beta}\bar{g}_{\bar{\beta} ,k}^2
\left[\frac{1}{2(1-\bar{\mu}_{\bar{\beta},{k}})}
\left[2n(\bar{\omega}_k)+1\right]
+\bar{\beta}n(\bar{\omega}_k)\left[n(\bar{\omega}_k)+1\right]\right].
\end{eqnarray}
Here, $\epsilon=4-d$ and $n(\bar{\omega}_k)$ is
the Bose-Einstein distribution function written in terms of dimensionless
variables
\begin{eqnarray}
n(\bar{\omega}_k)={1\over e^{\bar{\beta}\bar{\omega}_k}-1}\,.
\end{eqnarray}
A similar set of equations has been obtained
in Ref.~\cite{henk} by considering the one--loop diagrams that contribute
to the running of the different vertices. They use the operator formalism and
normal ordering, so that the zero--temperature part of the tadpole vanishes.
The equations for the fixed points are
\begin{eqnarray}
k\frac{\partial}{\partial k}\bar{\mu}_{\bar{\beta},k}=0,\hspace{1cm}
k\frac{\partial}{\partial k}\bar{g}_{\bar{\beta},k}
=0.
\end{eqnarray}
Expanding in powers of $\bar{\beta}(1-\bar{\mu}_{\bar{\beta},k})$ one obtains
\begin{eqnarray}
2\bar{\mu}_{\bar{\beta},k}-\frac{\bar{g}_{\bar{\beta},k}}{\pi^2}\frac{1}{1-\bar{\mu}_{\bar{\beta},k}}=0,\hspace{1cm}
\bar{g}_{\bar{\beta},k}-\frac{\bar{g}_{\bar{\beta},k}^2}{2\pi^2}
\frac{5}{(1-\bar{\mu}_{\bar{\beta},k})^2}=0.
\end{eqnarray}
If we introduce the variables $r$ and $s$ through the relations
\begin{equation}
r=\frac{\bar{\mu}_{\bar{\beta},k}}{1-\bar{\mu}_{\bar{\beta},k}},\hspace{1cm}
s=\frac{\bar{g}_{\bar{\beta},k}}{(1-\bar{\mu}_{\bar{\beta},k})^2},
\end{equation}
the RG--equations can be written as
\begin{eqnarray}
\label{lin}
k\frac{\partial r}{\partial k}=-2\left[1+r\right]
\left[r-S_ds\right],\hspace{1cm}
k\frac{\partial s}{\partial k}=-s\left[\epsilon+4r-9S_ds\right].
\end{eqnarray}
We have the trivial Gaussian fixed point $(r,s)=(0,0)$ as well as the
infinite temperature Gaussian fixed point $(-1,0)$. Finally, for
$\epsilon>0$ there is the
infrared Wilson--Fisher fixed point
$\left(\epsilon/5,\epsilon/\left(5S_d\right)\right)$~\cite{wilson}.
Setting $\epsilon=1$ and
linearizing Eq.~(\ref{lin}) around the fixed point, we find the eigenvalues
$(\lambda_1,\lambda_2)=(-1.278,1.878)$.
The critical exponent $\nu$ is given by
the inverse of the largest eigenvalue; $\nu=1/\lambda_2=0.532$.
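The numbers quoted above are easy to reproduce. The sketch below (an illustrative check, not code from the paper) writes the flow of Eq.~(\ref{lin}) in the $k\,\partial/\partial k$ convention, verifies the Wilson--Fisher fixed point, and extracts $\nu$ from the linearized flow. Note that the overall signs of the eigenvalues depend on whether one flows in $\ln k$ or $-\ln k$; the magnitudes (1.878 and 1.278) and the resulting $\nu$ agree with the values quoted above.

```python
import math

d = 3
eps = 4 - d
Sd = 2 / ((4 * math.pi) ** (d / 2) * math.gamma(d / 2))   # S_3 = 1/(2*pi^2)

# Wilson--Fisher fixed point (r*, s*) = (eps/5, eps/(5*Sd))
r, s = eps / 5, eps / (5 * Sd)
beta_r = -2 * (1 + r) * (r - Sd * s)        # k dr/dk
beta_s = -s * (eps + 4 * r - 9 * Sd * s)    # k ds/dk
assert abs(beta_r) < 1e-12 and abs(beta_s) < 1e-12

# Jacobian of (beta_r, beta_s) at the fixed point
J11 = -2 * ((r - Sd * s) + (1 + r))
J12 = 2 * Sd * (1 + r)
J21 = -4 * s
J22 = -(eps + 4 * r) + 18 * Sd * s
tr, det = J11 + J22, J11 * J22 - J12 * J21
lam_minus = (tr - math.sqrt(tr * tr - 4 * det)) / 2   # relevant direction
lam_plus = (tr + math.sqrt(tr * tr - 4 * det)) / 2
nu = -1.0 / lam_minus
print(lam_minus, lam_plus, nu)   # magnitudes 1.878 and 1.278; nu ~ 0.532
```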
This procedure can now be repeated including a larger number, $N$, of
terms in the
expansion Eq.~(\ref{pol}).
The result for $\nu$ is plotted in Fig.~\ref{terms} as a function of the
number of terms, $N$, in the expansion.
Our result agrees with
that of Morris, who considered the relativistic $O(2)$--model
in $d=3$ at zero temperature~\cite{morris}.
The critical exponent $\nu$ oscillates around the average value $0.73$.
The value of $\nu$ never
actually converges as $N\rightarrow\infty$, but continues
to fluctuate.
As Morris has pointed out in the $Z_2$--symmetric case,
these oscillations are due to the presence of a pole in the
complex $v$ plane in the corresponding fixed point RG
equation~\cite{vast1,japs}.
Our results should be compared to experiment ($^{4}$He) and the $\epsilon$--expansion which
both give a value of $0.67$~\cite{zinn}.
One expects that the critical exponent $\nu$ converges towards $0.67$
as one includes more terms in the derivative expansion.
\subsection{Smooth Cutoff}\label{smooth}
In the previous section we considered the sharp cutoff function that
divided the modes in the path integral sharply
into slow and fast modes separated by the infrared cutoff $k$. However,
there are alternative ways of doing this.
In this section we consider
a class of {\it smooth } cutoff functions $R_k^m(p)$ defined through
\begin{eqnarray}
f_k^m(p)={p^m\over p^m+k^m}.
\end{eqnarray}
In the limit $m\rightarrow \infty$ we recover the sharp cutoff function.
A typical smooth blocking function is shown in Fig.~\ref{cut} (dashed line).
We see that
the suppression of the slow modes is complete for $p=0$ and gradually
decreases as we approach
the infrared cutoff. Similarly, the high momentum modes are left
unchanged for $p=\infty$ and there is an increasing suppression, albeit
small, as one approaches $k$.
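For concreteness, the family $f_k^m$ and the corresponding cutoff $R_k^m=(1-f_k^m)/f_k^m=(k/p)^m$ can be probed numerically (an illustrative check, not from the paper):

```python
# Smooth blocking function f_k^m(p) = p^m/(p^m + k^m); the corresponding
# cutoff function is R_k^m(p) = (1 - f)/f = (k/p)^m.
k = 1.0
f = lambda p, m: p ** m / (p ** m + k ** m)
R = lambda p, m: (1 - f(p, m)) / f(p, m)

for m in (2, 5, 50):
    assert f(0.01 * k, m) < 1e-4           # slow modes: f -> 0, R -> infinity
    assert f(100.0 * k, m) > 1 - 1e-4      # fast modes: f -> 1, R -> 0
    assert abs(R(2.0, m) - 0.5 ** m) < 1e-12

# m -> infinity recovers the sharp cutoff theta(p - k):
assert f(0.9 * k, 200) < 1e-9 and f(1.1 * k, 200) > 1 - 1e-8
```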
Since we cannot carry out the integration over $p$ analytically
in Eq.~(\ref{rg2}),
the RG flow equation is now more complicated.
Taking the limit $\bar{\beta}\rightarrow 0$ and
making a polynomial expansion as in the preceding subsection,
we obtain the following set of
dimensionless equations for $N=2$:
\begin{eqnarray}\nonumber
k{\partial\over\partial k}\bar{\mu}_{\bar{\beta},k}&=&-2\bar{\mu}_{\bar{\beta},k}+
{\bar{g}_{\bar{\beta},k}\over\pi^2}
\left[I_0+I_1\bar{\mu}_{\bar{\beta},k}\right]\\
k{\partial\over\partial k}\bar{g}_{\bar{\beta},k}&=&
-\epsilon\bar{g}_{\bar{\beta},k}+
{5\bar{g}_{\bar{\beta},k}^2\over\pi^2}
\left[I_1+I_2\bar{\mu}_{\bar{\beta},k}\right].
\end{eqnarray}
where
\begin{equation}
I_n(\bar{\mu}_{\bar{\beta},k})=\int_0^1{g^3(s,m)s^{n}ds\over[s\,\bar{\mu}_{\bar{\beta},k}^2+g^2(s,m)]^{n+1}}
,\hspace{1cm}g(s,m)=\left( {s \over 1-s} \right)^{1/m}.
\end{equation}
In the case of a smooth cutoff function, we cannot calculate the fixed points
and critical exponents analytically, but have to resort to numerical
techniques.
In Fig.~\ref{dm} we have plotted the $m$--dependence of $\nu$ for different
truncations. Note in particular the strong dependence on $m$ for $N=10$.
In Fig.~\ref{shsm} we have displayed the critical exponent $\nu$ as a function
of the number of terms $N$ in the polynomial expansion using
a smooth cutoff with $m=5$ (solid line). For comparison we have also plotted
the result in the case of a sharp cutoff (dashed line).
The value of $\nu$ continues
to fluctuate, but
the oscillations
are significantly
smaller for the smooth cutoff, and the convergence to its asymptotic
range is much faster.
Again, one expects the value of $\nu$ to converge to the value $0.67$ as
more terms in the derivative expansion are included.
\section{Summary and Discussion}
In the present paper we have applied renormalization group methods
to the nonrelativistic homogeneous Bose gas at finite temperature.
We have explicitly shown that the renormalization group improved effective
potential does not suffer from the two major flaws of the one--loop
effective potential: the Goldstone theorem is automatically satisfied
and the effective potential is purely real for temperatures above $T_c$.
The second order nature of the
phase transition and the vanishing of the
relevant couplings
at the critical temperature have also been verified numerically.
Truncating the RG equations at leading order in the derivative expansion,
we have investigated the critical exponent $\nu$ as a function of
the number of terms $N$ in the polynomial expansion of the effective potential
and the
smoothness of the cutoff function.
In particular, we
have demonstrated
that the oscillations around the value $\nu =0.73$ depend on the
smoothness of the cutoff function, and that the
oscillatory behavior can be improved by appropriately choosing the smoothness.
The value $m=5$ seems to be the optimal choice among the smooth cutoff
functions investigated in the present paper.
Whether the dependence on $m$ is reduced as one goes to higher
orders in
the derivative expansion is not clear at this point.
It is important to point out that, contrary to conventional
wisdom, it is not sufficient to include only the relevant operators and perhaps
the marginal ones when calculating the $d=3$ exponents. Instead, one has to make a
careful study of the convergence of the exponents in question, as we
have demonstrated.
The present work can be extended in several ways.
Expanding around the minimum of the RG--improved effective potential
instead of the origin is one possibility.
This has been carried out in Ref.~\cite{japs} in the $Z_2$--symmetric
case, and the rate of convergence as a function of $N$ is faster.
However, in the $O(N)$--symmetric case this expansion is complicated
by the presence of infrared divergences due to the
Goldstone modes~\cite{berger}, and at
present we do not know how to address that problem (see also~\cite{henk}).
The inclusion of wave function renormalization effects by going to second
order in the derivative expansion will close the gap between the
critical exponents of experiment and the $\epsilon$--expansion on one
hand and the momentum shell renormalization group
approach on the other.
It is also of interest to investigate the influence of
these effects on nonuniversal quantities such as the critical temperature
and the superfluid fraction in the broken phase.
One can also study finite size effects by not integrating down to $k=0$,
but to some $k>0$ where $1/k$ characterizes the length scale of the
system under consideration.
Of course, the real challenge is to describe the trapped Bose gas using
renormalization group techniques.
\section*{Acknowledgments}
The authors would like to thank E. Braaten and
S.--B. Liao for useful discussions.
This work was supported in part by the U.~S. Department of Energy,
Division of High Energy Physics, under Grant DE-FG02-91-ER40690, by
the National Science Foundation under Grants No. PHY--9511923 and PHY--9258270,
and by a Faculty Development Grant from the Physics Department of The
Ohio State University.
J. O. A. was also supported in part
by a NATO Science Fellowship
from the Norwegian Research Council (project 124282/410).
\section{Introduction}
In this contribution we describe results based on many observations of HVC
complex~C toward and near the Seyfert galaxy Markarian\,290 (V=14.96, z=0.030).
Figure~1 shows the large-scale H{\small I}\ structure of the HVC in this region. Among all
known extra-galactic probes of complex~C, Mark\,290 is the most favorable in
terms of the expected strength of absorption lines.
\par In Sect.~2 we discuss recent optical absorption-line data that allow us to
set a lower limit to the distance of the HVC. Sect.~3 describes the S{\small II}\
absorption-line data, and in Sect.~4 we present data on optical and radio
emission lines. Sect.~5 describes a method for deriving an ionization-corrected
metallicity, ionization fraction, density and pressure for the cloud. This
method is applied in Sect.~6, and the implications are discussed in Sect.~7.
\par
\begin{figure}[t]
\plotfiddle{Mark290_fig1.ps}{5cm}{0}{80}{80}{-230}{-30}
\caption{H{\small I}\ column density maps of part of complex~C (data from Hulsbosch \&
Wakker 1988). Brightness temperature contour levels are 0.05\,K and 25\%, 50\%,
75\% and 100\% of the peak value of the concentration nearest Mark 290. Filled
symbols indicate probe stars whose spectrum is included in Fig.~4. Larger
symbols are for more distant stars. Mark\,290 is shown by the filled circle.
}\end{figure}
\section{Distance limit}
Using the William Herschel Telescope (WHT) at La Palma, and the Utrecht Echelle
Spectrograph (UES), we observed Ca{\small II}\ H+K spectra of 12 stars in the region
around Mark\,290, at 6\,km/s resolution. The stars were selected from a list of
blue stars by Beers et al.\ (1996, and references therein). We chose stars that
were classified as Blue Horizontal Branch in low-resolution spectra; follow-up
spectroscopy confirms this for 80\% of such stars (Beers et al.\ 1992). For
these BHB candidates, a rough distance estimate can be made by assuming
$B$$-$$V$=0.05, and inserting this into the $B$$-$$V$ vs $M_V$ relation of
Preston et al.\ (1991). This gives $M_V$=0.86. Averaged over all possible
colors, the full range is $\pm$0.25 mag, so we calculated a probable distance
range using $M_V$=0.61 and 1.11. An extinction correction of $\sim$0.1 mag was
applied, using $A_V$ based on the map of Lucke (1978).
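The distance range quoted here is plain distance-modulus arithmetic; a minimal sketch (the example apparent magnitude V = 14.0 is hypothetical, while $M_V$ and the $\sim$0.1 mag extinction correction are the values given above):

```python
def bhb_distance_kpc(v_mag, m_v, a_v=0.1):
    """Distance in kpc from the distance modulus m - M = 5 log10(d / 10 pc),
    with an extinction correction a_v applied to the apparent magnitude."""
    return 10.0 ** ((v_mag - a_v - m_v + 5.0) / 5.0) / 1000.0

# Probable distance range for a hypothetical BHB candidate with V = 14.0,
# using the M_V = 0.61 and 1.11 limits from the text:
d_near = bhb_distance_kpc(14.0, 1.11)   # brighter star -> nearer limit
d_far = bhb_distance_kpc(14.0, 0.61)    # fainter star -> farther limit
```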
\par For the most distant stars the spectra are of relatively low quality, while
for some others stellar lines interfere with the detection of interstellar
features. Figure~2 shows five of the best Ca{\small II}\ K spectra, from which we can
set tentative limits on the strength of the Ca K absorption. The expected
equivalent widths can be derived from the Ca{\small II}/H{\small I}\ ratio of 29$\pm$2\tdex{-9}
found toward Mark\,290 by Wakker et al.\ (1996), combined with the N(H{\small I}) in the
direction of the star.
\par The estimated detection limit on the Ca{\small II}\ K absorption associated with
the HVC is always lower than the expected value. Since for three of the stars we
estimate D$\sim$5\,kpc, we conclude that the distance of complex~C is probably
$>$5\,kpc. However, there are three caveats.
\par a) The H{\small I}\ column densities are based on a 9\arcmin\ beam, but
considerable variations at arcminute scales are possible (Wakker \& Schwarz
1991). Only a ratio (expected/detection-limit)$>$5 allows a safe conclusion.
\par b) The equivalent width limits are still preliminary.
\par c) The stellar distances need to be improved using spectroscopy.
\par
\begin{figure}[ht]
\plotfiddle{Mark290_fig2.ps}{9.5cm}{0}{80}{80}{-140}{-40}
\caption{
Ca{\small II}\ K spectra of five stars projected on complex~C. The range of possible
distances is shown, as are the detection limit and expected equivalent width.
The velocities of the HVC (from an Effelsberg spectrum) and the star (from
stellar features elsewhere in the spectrum) are indicated by the lines labeled
"HVC" and "star".
}\end{figure}
\section{Observations - S{\small II}\ absorption}
\par We observed S{\small II}\ absorption with the G140M grating of the Goddard High
Resolution Spectrograph (GHRS) on the Hubble Space Telescope (HST). Sulphur is
one of a few elements not depleted onto dust in the ISM, and S$^+$ is the
dominant ionization stage in neutral gas (Savage \& Sembach 1996). Thus,
N(S$^+$) allows a good measure of the intrinsic metallicity of an H{\small I}\ cloud.
Among all similar ions, the S{\small II}\ $\lambda\lambda$1250, 1253, 1259 lines are the
easiest to observe.
\par The integration time was 90 minutes, the resolution 15\,km/s. The S{\small II}\
lines occur on top of strong Ly$\alpha$ emission associated with Mark\,290,
which increases the S/N ratio in the continuum at 1253\,\AA\ to 25, whereas it
is only 12 at 1259\,\AA. The $\lambda$1250 absorption is hidden by absorption
associated with Mark\,290.
\par The left top two panels of Fig.~3 show the spectra after continuum
normalization. The vertical lines correspond to zero velocity relative to the
LSR and to the two components observed at $-$138 and $-$115\,km/s in H{\small I}\
(Wakker et al.\ 1996 and Fig.~3). The HVC component at $-$138\,km/s is clearly
seen in both S{\small II}\ lines, but the $-$115\,km/s component is missing, although a
component at $-$110\,km/s may be present in the Ca{\small II}\ spectrum. This component
may be missing due to a combination of factors. First, it is wider (FWHM
31\,km/s vs 21\,km/s). Second, in the 9\arcmin\ beam it has a factor 1.6 lower
H{\small I}\ column density. Third, fine structure in the emission may decrease the H{\small I}\
column density even further; this is especially so for the $-$115\,km/s
component as Mark\,290 is near the edge of the $-$115\,km/s core (see Fig.~1).
In combination these may cause the S{\small II}\ peak optical depth to become too low to
detect the line with the current S/N ratio.
\par Fitting the absorption lines between $-$165 and $-$125\,km/s gives
equivalent widths for the $\lambda$1253 and $\lambda$1259 lines of 20.3$\pm$3.4
and 32.7$\pm$5.2\,m\AA. Assuming no saturation and using $f$-values from Verner
et al.\ (1994), these correspond to column densities of 1.50$\pm$0.25\tdex{14}
and 1.63$\pm$0.30\tdex{14}\,cm$^{-2}$. The average, weighted by the S/N in the
continuum, is 1.54$\pm$0.27\tdex{14}. Half of this error is associated with the
placement of the continuum.
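The weighted average quoted above follows from weighting each line's column density by the continuum S/N; a small sketch of the arithmetic (column densities in units of $10^{14}$\,cm$^{-2}$):

```python
def sn_weighted_mean(values, sn_ratios):
    """Mean of the per-line column densities, weighted by the continuum S/N."""
    return sum(v * s for v, s in zip(values, sn_ratios)) / sum(sn_ratios)

# lambda1253 and lambda1259 column densities, with S/N of 25 and 12:
n_sii = sn_weighted_mean([1.50, 1.63], [25.0, 12.0])   # ~1.54e14 cm^-2
```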
\par These absorption lines are unresolved, but likely to be unsaturated. We
base this conclusion on three lines of evidence. First, the Ca{\small II}\ lines toward
Mark\,290 were resolved by Wakker et al.\ (1996) (FWHM 14\,km/s at 6\,km/s
resolution), so the expected linewidth for S{\small II}\ is 16\,km/s; an equivalent
width of 20.3$\pm$3.4\,m\AA\ would then give a column density of
1.6$\pm$0.3\tdex{14}\,cm$^{-2}$, in agreement with the value derived above.
Second, the observed ratio of equivalent widths is 1.6$\pm$0.4, which is
compatible with the expected ratio of 1.51. Third, for gas at temperature
$\sim$7000\,K (see Sect.~6), the thermal $b$-value for S is 1.9\,km/s; the
observed equivalent width then corresponds to an optical depth of 3.5 for the
$\lambda$1253 line. However, this predicts an equivalent width of 23.2\,m\AA\
for the $\lambda$1259 line, which is outside the error limit. If W(1259) were
1.51$\times$20.3$-$1$\sigma$=25.5\,m\AA, then for W(1253) to be 20.3\,m\AA\ one
requires $b$=2.9\,km/s with $\tau$=1.5 and 2.25 for the $\lambda$1253 and 1259
lines, respectively. This corresponds to a column density of
2.25\tdex{14}\,cm$^{-2}$, just 2.5$\sigma$ higher than the preferred value
above.
\par The S{\small II}\ column density likely represents the total S column density. In
the ISM sulphur will exist as S$^+$ and either S$^0$ or S$^{+2}$. S$^0$ has an
ionization potential of 10.4\,eV, and thus is easily ionized by the ambient
radiation field. We do not detect S{\small I}$\lambda$1262.86, setting a limit of
N(S$^0$)$<$2.6\tdex{14}\,cm$^{-2}$, or N(S$^+$)/N(S$^+$+S$^0$) $>$0.37. S$^+$
has an ionization potential of 23.3\,eV. If we assume that inside the part of
the cloud where H is fully ionized all S is S$^{+2}$ (which would imply that
there is no [S{\small II}] emission), then, since we always find N(H$^+$)$<$N(H{\small I})
(Sect.~6), we conclude that N(S$^+$)/N(S$^+$+S$^{+2}$)$>$0.5. In low-velocity
neutral gas this ratio is always observed to be $>$0.9.
\par
\begin{figure}[ht]
\plotfiddle{Mark290_fig3.ps}{9cm}{0}{75}{75}{-215}{-245}
\caption{
Spectra using Mark\,290 as a light bulb or centered on Mark\,290, aligned in
velocity. The labels give the spectral line and the resulting measured column
density or intensity for the HVC. For the [S{\small II}] emission spectrum (WHAM data)
two curves are drawn, corresponding to 1/10th and 1/30th the H$\alpha$\ intensity.
}\end{figure}
\section{Observations - emission lines}
\par To determine a metallicity from the S{\small II}\ column density, H{\small I}\ data with
the highest-possible resolution are required (see discussion in Wakker \& van
Woerden 1997). We have data from Westerbork at 1 arcmin resolution, but these
have not yet been analyzed. Until then we will use a 9 arcmin resolution
Effelsberg spectrum from Wakker et al.\ (1996). This spectrum is shown in
Fig.~3. There are two components, at $-$138 and $-$115\,km/s, with {\rm N(HI)}=68$\pm$3
and 43$\pm$7\tdex{18}\,cm$^{-2}$.
\par Using the Wisconsin H$\alpha$\ Mapper (WHAM, Reynolds et al.\ 1998), we observed
the H$\alpha$\ and [S{\small II}] $\lambda$6716 emission in a one-degree diameter field
centered on Mark\,290. H$\alpha$\ emission is clearly detected ($I_R$=0.187$\pm$0.010
R; where 1 Rayleigh is {$10^6/4\pi$} ph\,cm$^{-2}$\,s$^{-1}$\,sr$^{-1}$). The
spectrum in Fig.~3 has 12\,km/s resolution and is a combination of integrations
of 20 min for v$<$$-$50\,km/s and 30 sec for v$>$$-$100\,km/s. Most of the
``noise'' at lower velocities is fixed-pattern noise.
\par [S{\small II}] emission is not detected, although the 1-$\sigma$ noise level is
0.006\,R. Thus, we can set a 3-$\sigma$ limit of $<$0.1 for the ratio of H$\alpha$\
and [S{\small II}] intensities. WHAM usually sees a ratio of $\sim$0.3.
\par To compare the H$\alpha$\ and H{\small I}, an H{\small I}\ spectrum of the full 1-degree WHAM
field is required, which was created from the Leiden-Dwingeloo survey (Hartmann
\& Burton 1997) by averaging the 4 spectra inside the WHAM beam. This spectrum
is also shown in Fig.~3 and gives column densities of 40.0$\pm$2.7 and
30.6$\pm$2.9\tdex{18}\,cm$^{-2}$ for the two H{\small I}\ components. The apparent
velocity shift between H$\alpha$\ and H{\small I}\ is probably due to a different intensity
ratio between the two high-velocity components and the lower velocity resolution
of the H$\alpha$\ data.
\section{Physical conditions - theory}
We now show how the H{\small I}, H$\alpha$, S{\small II}\ absorption and [S{\small II}] emission data can be
combined to derive an ionization-corrected S abundance ($A_{\rm S}$),
temperature ($T$), central density ($n_o$) and total ionization fraction ($X$).
We only need to assume a distance and a particular geometry, i.e.\ a density and
ionization structure.
\par We define the density structure in the sightline as $n(z)$ and the
ionization structure as $x(z)$, the ratio of ionized to total H (assuming
$n$(H$_2$)=0). The ``standard'' model consists of a cloud with diameter $L$ with
a fully neutral core with diameter $l$ and a fully ionized envelope with
constant density [$n(z)=n_o$ for $z$$<$$L/2$]. Figure~1 shows that the FWHM
angular diameter of the cloud, $\alpha$, is 2$\pm$0.2 degrees; the linear
diameter $L$ is the product of $\alpha$ and the distance ($D$).
\par To investigate the effects of different geometries we go one step beyond
this ``standard model'' and allow for a gaussian density distribution [$n(z)=
n_o\ \exp(-4\ln2\ z^2/L^2)$] and for the ionization fraction in the core and
envelope to be different from 0 or 1 [$x(z)$=$x_n$ for $z$$<$$l$; $x(z)$=$x_i$
for $z$$>$$l$]. We then need the following integrals: $$
\int_{-\infty}^\infty x(z) n(z) dz
= a n_o \ L\ [(1-r) x_n + r x_i ] = {\cal F}_1\ n_o \ L
$$$$
\int_{-\infty}^\infty x(z) n^2(z)dz
= {a\over b} n_o^2\ L\ [(1-r) x_n + r x_i ] = {\cal F}_2\ n_o^2\ L
$$$$
\int_{-\infty}^\infty x^2(z) n^2(z)dz
= {a\over b} n_o^2\ L\ [(1-r) x_n^2 + r x_i^2] = {\cal F}_3\ n_o^2\ L,
$$ where for a gaussian $r$=$1-{\rm erf}(\sqrt{4\ln2}\ l/L)$,
$a$=$\sqrt{\pi/4\ln2}$, $b$=$\sqrt{2}$, and for a uniform density distribution
$r$=$1-(2l/L)$, $a$=1, $b$=1. For the ``standard model'', all three ${\cal F}$ values
reduce to the ``filling factor''.
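The ${\cal F}$ bookkeeping above is straightforward to evaluate for both density profiles; a sketch (uniform and gaussian cases exactly as parametrized in the text):

```python
import math

def geometry_factors(l_over_L, x_n, x_i, gaussian=False):
    """F1, F2, F3 for a cloud of diameter L with a core of diameter l and
    core/envelope ionization fractions x_n, x_i (notation of Sect. 5)."""
    if gaussian:
        r = 1.0 - math.erf(math.sqrt(4.0 * math.log(2.0)) * l_over_L)
        a = math.sqrt(math.pi / (4.0 * math.log(2.0)))
        b = math.sqrt(2.0)
    else:                         # uniform density distribution
        r = 1.0 - 2.0 * l_over_L
        a, b = 1.0, 1.0
    mix = (1.0 - r) * x_n + r * x_i
    f1 = a * mix
    f2 = (a / b) * mix
    f3 = (a / b) * ((1.0 - r) * x_n ** 2 + r * x_i ** 2)
    return f1, f2, f3
```

For the ``standard model'' ($x_n$=0, $x_i$=1, uniform density) all three factors indeed reduce to the filling factor $r$, as stated above.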
\par The definition of H$\alpha$\ emission measure and its relation to observables
are: $$
{\rm EM} = \int n_e(z)\ n({\rm H}^+)(z)\ dz
= 2.75\ T_4^{0.924}\ I_R\ \ {\rm cm}^{-6}\,{\rm pc},
$$ with $T_4$ the temperature in units of 10000\,K, and $I_R$ the H$\alpha$\ intensity
in Rayleigh.
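In code this conversion is a one-liner; the example uses the full observed H$\alpha$\ intensity from Sect.~4 at a temperature near the value derived in Sect.~6:

```python
def emission_measure(i_rayleigh, t4):
    """EM in cm^-6 pc from an Halpha intensity in Rayleigh at T = t4 * 1e4 K."""
    return 2.75 * t4 ** 0.924 * i_rayleigh

em = emission_measure(0.187, 0.68)   # ~0.36 cm^-6 pc at T ~ 6800 K
```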
\par For a cloud with substantial ionization, but still containing neutral gas,
most electrons will come from H, so that $n_e$=$\epsilon\,n({\rm H}^+)$, with
$\epsilon$$>$1. The first ionization potential of He is 24.6\,eV, so He will not
give a substantial contribution in mostly neutral gas. Because of their much
lower abundances all other elements combined contribute at most a few percent,
even if fully ionized. Thus, we have the following expressions for the H$^+$ and
H{\small I}\ column density and the H$\alpha$\ intensity in terms of the structure parameters:
$$
{\rm N(H}\ifmmode^+\else$^+$\fi) = \int_{-\infty}^\infty x(z) n(z) dz = {\cal F}_1\ n_o\ L
$$$$
{\rm N(HI)} = \int_{-\infty}^\infty (1-x(z))n(z) dz = (a-{\cal F}_1)\ n_o\ L
$$$$
2.75\ T_4^{0.924}\ I_R = {\rm EM} =
\int_{-\infty}^\infty \epsilon\,x^2(z) n^2(z) dz = \epsilon {\cal F}_3\ n_o^2\ L.
$$
\par To convert the observable (intensity) into the emission measure, we need to
know the gas temperature. This can be found by combining the S$^+$ emission and
absorption data. The ratio of S$^+$ and H$^+$ emissivity is:
$$
{{\rm \epsilon({\rm SII})}\over{\rm \epsilon({\rm HII})}}
= 7.73\times10^5\ T_4^{0.424}\ \exp\left({-2.14\over T_4}\right)\
\ \left({n_{S^+} \over n_{H^+}} \right)
= F(T)\ \left({n_{S^+} \over n_{H^+}} \right).
$$ The density of S$^+$ is $n({\rm S}^+)(z) = A_{\rm S} n(z)$, with
$A_{\rm S}$ the S$^+$ abundance. Thus, locally, the emissivity ratio is some
constant times the density ratio. If we assume that S$^+$ emission occurs only
in the part of the cloud where electrons are present, and if we assume a
constant temperature, then the intensity ratio is:
$$
E = {{\rm I({\rm SII})}\over{\rm I({\rm HII})}}
= F(T)\ {\int n_e n_S dz \over \int n_e n({\rm H}^+) dz }
= A_{\rm S} F(T)\ {\int x(z) n^2(z) dz \over \int x^2(z) n^2(z) dz }
= {{\cal F}_2\over{\cal F}_3}\ A_{\rm S} F(T).
$$ Our GHRS absorption measurement gives N(S$^+$) in the pencil beam to Mark
290. However, the [S{\small II}] emission measure is determined by the column density
within the WHAM beam, which we estimate by scaling with the ratio of average
{\rm N(HI)}\ in the WHAM beam to {\rm N(HI)}\ in the pencil beam to Mark 290. So: $$
A_{\rm S} = {y\ {\rm N(SII)} \over {\rm N(H}\ifmmode^+\else$^+$\fi) + {\rm N(HI)}_{\rm WHAM}},\ {\rm with\ }
y = {{\rm N(HI)}_{\rm WHAM} \over {\rm N(HI)}_{\rm Mark290}}.
$$
\par We now have five equations for the seven unknowns $T$, ${\rm N(H}\ifmmode^+\else$^+$\fi)$, $A_{\rm S}$,
$n_o$, $x_n$, $x_i$ and $r$, in terms of the observables {\rm N(HI)}$_{\rm WHAM}$,
{\rm N(HI)}$_{\rm Mark290}$, $\alpha$, $I$(H{\small II}), $I$([S{\small II}]), {\rm N(SII)}, and the distance.
We can solve this system using the following procedure. First, assume a
distance. Pick a value for $T$ to calculate EM(H{\small II}). Assume $x_n$ and $x_i$,
and solve for $r$, $n_o$ and {\rm N(H}\ifmmode^+\else$^+$\fi). From this calculate $A_{\rm S}$ and $E$.
Iterate until the derived and observed values of $E$ agree. Using the derived
values of $n_o$ and $T$, we can also calculate the pressure in the cloud as the
product $n_o T$.
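A minimal numerical sketch of this procedure (not the authors' actual code) for the ``standard model'' — uniform density, fully neutral core, fully ionized envelope, $\epsilon$=1 — in which all ${\cal F}$ factors reduce to the filling factor $r$, so the geometry step becomes a quadratic in $r$ and the outer iteration a bisection on $T$. The inputs are the reference values assumed in Sect.~6: the $-$138\,km/s WHAM-beam H{\small I}\ column, half the H$\alpha$\ intensity, $E$=0.05 and $L\approx349$\,pc for $D$=10\,kpc:

```python
import math

CM_PER_PC = 3.086e18

def f_of_t(t4):
    """[SII]/[HII] emissivity-ratio prefactor F(T) from Sect. 5."""
    return 7.73e5 * t4 ** 0.424 * math.exp(-2.14 / t4)

def solve_standard_model(n_hi, n_sii, y, i_r, e_obs, l_pc, a_sun=1.86e-5):
    """Iterate on T until the predicted [SII]/Halpha ratio matches e_obs.
    Returns (T in K, sulphur abundance in solar units, n_o in cm^-3, N(H+))."""
    def model(t4):
        em = 2.75 * t4 ** 0.924 * i_r            # emission measure, cm^-6 pc
        n_col = n_hi / CM_PER_PC                 # N(HI) in cm^-3 pc
        # N(HI) = (1-r) n_o L and EM = r n_o^2 L  =>  r/(1-r)^2 = EM L / N(HI)^2
        c = em * l_pc / n_col ** 2
        r = (2.0 * c + 1.0 - math.sqrt(4.0 * c + 1.0)) / (2.0 * c)
        n_o = n_col / ((1.0 - r) * l_pc)
        n_hplus = r * n_o * l_pc * CM_PER_PC
        a_s = y * n_sii / (n_hplus + n_hi)
        return a_s * f_of_t(t4), a_s, n_o, n_hplus
    lo, hi = 0.3, 1.2                            # bracket in T4 = T / 1e4 K
    for _ in range(60):                          # predicted E rises with T
        mid = 0.5 * (lo + hi)
        if model(mid)[0] > e_obs:
            hi = mid
        else:
            lo = mid
    t4 = 0.5 * (lo + hi)
    _, a_s, n_o, n_hplus = model(t4)
    return 1.0e4 * t4, a_s / a_sun, n_o, n_hplus

T, a_s_solar, n_o, n_hplus = solve_standard_model(
    40.0e18, 1.54e14, 40.0 / 68.0, 0.5 * 0.187, 0.05, 349.0)
```

Under these assumptions the sketch reproduces the reference values of Sect.~6 to within rounding: $T\approx6800$\,K, $A_{\rm S}\approx0.094$ solar and $n_o\approx0.048$\,cm$^{-3}$.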
\par
\begin{figure}[ht]
\plotfiddle{Mark290_fig4.ps}{11.5cm}{0}{64}{64}{-195}{-140}
\caption{
Derived values for gas temperature ($T$), S$^+$ abundance (A(S)), H$^+$ column
density (N(H$^+$)), ionization fraction ($X$), central density ($n_o$) and
central pressure ($P$), as a function of the unknown distance for four different
geometries (as indicated in the upper left panel).
}\end{figure}
\section{Physical conditions and metallicity - results}
We now derive the metallicity and physical conditions inside complex~C. We
calculate a reference value using the following assumptions: a) the geometry is
described by the ``standard model'' (constant density and a core-envelope
ionization structure); b) all electrons come from H ($\epsilon$=1); c) 50\% of
the H$\alpha$\ emission is associated with the $-$138\,km/s component; d)
I([S{\small II}])/I(H$\alpha$)=1/20; e) there is no saturation in the S{\small II}\ absorption; f) the
S{\small II}\ absorption is only associated with the $-$138\,km/s H{\small I}\ component; g) no
fine structure correction is needed; h) a distance of 10\,kpc.
\par With these assumptions, we can insert the observed values discussed in
Sects.~3 and 4 into the system of equations discussed in Sect.~5 to find a
sulphur abundance of 0.094$\pm$0.020 times solar. Here we used a solar abundance
of 1.86\tdex{-5} (Anders \& Grevesse 1989). With only the Effelsberg H{\small I}\ data
and the S{\small II}\ absorption column density we would have inferred an abundance of
0.121$\pm$0.022 times solar.
\par The error we give here is just the statistical error associated with the
measurements. It is dominated by the error in the S{\small II}\ column density. We now
discuss the systematic errors introduced by the required assumptions.
\par A) Figure 4 shows the influence of four different assumptions for the
geometry: a uniform or a gaussian density distribution in combination with
either of two ionization structures: a neutral core and fully ionized envelope
($x_n$=0, $x_i$=1), or the same partial ionization throughout ($x_n$=$x_i$=$x$).
Changing the density structure to a gaussian results in an abundance of 0.086,
changing the ionization structure to constant partial ionization gives 0.078,
changing both gives 0.072. So, the possible variation in the metallicity
associated with geometry is $^{+0.000}_{-0.022}$. Of all uncertainties we
discuss this is the only one that cannot easily be improved upon with better
observations.
\par B) The assumption that all electrons come from hydrogen has little effect
on the results: if in the fully-ionized region He were also fully ionized
($\epsilon$$\sim$1.14), the derived abundance would be 0.097 ($+0.003$).
\par C) The H$\alpha$\ emission is unresolved, so we cannot determine how much of the
emission is associated with each H{\small I}\ component. If instead of 50\%, 25\% or
75\% of the H$\alpha$\ emission is associated with the $-$138\,km/s component, the
abundance changes by $^{+0.007}_{-0.005}$. A higher angular resolution H$\alpha$\
spectrum can reduce this uncertainty.
\par D) We did not actually detect [S{\small II}] emission associated with complex~C,
but only have a (3$\sigma$) upper limit of $E$=I([S{\small II}])/I(H$\alpha$)$<$0.1. A deeper
observation of [S{\small II}] is being planned. For the sake of the calculation we
assumed a value of 0.05. This uncertainty mostly influences the derived
temperature. For E in the range 0.01--0.10, the derived sulphur abundance varies
between 0.100 and 0.091, a range of $^{+0.006}_{-0.003}$.
\par E) If saturation is a problem for the S{\small II}\ absorption lines, the column
density could be as high as 2.25\tdex{14}\,cm$^{-2}$, increasing the abundance
by 0.046.
\par F) A major problem is posed by the fact that the H{\small I}\ spectrum shows two
components, while only one is seen in the S{\small II}\ absorption spectrum. This
problem could be solved by a higher S/N and higher resolution observation of the
S{\small II}\ absorption. If we use the total H{\small I}\ column density of
111\tdex{18}\,cm$^{-2}$ (as well as the total H$\alpha$\ emission), the derived
abundance is 0.061 times solar, a change of $-$0.033.
\par G) We do not yet know the precise value of H{\small I}\ toward Mark\,290, as we
used an Effelsberg spectrum with 9\arcmin\ beam. We will correct this to the
value for a 1\arcmin\ beam using Westerbork observations. By comparing to
similar cases (Wakker \& Schwarz 1991, Lu et al.\ 1998) we expect a correction
in the range 0.7--1.5. This changes the derived sulphur abundance to
0.137--0.062, a range of $^{+0.043}_{-0.032}$.
\par H) For an assumed range of distances from 5 to 25\,kpc, the change in the
derived metallicity is $^{+0.009}_{-0.019}$.
\par The ranges given above represent the largest deviations that can reasonably
be expected, equivalent to a 3-$\sigma$ errorbar. To calculate a combined
systematic error, we therefore add the ranges in quadrature and divide by 3. The
sources of uncertainty can be split into three groups: those associated with
physics (A-E) ($^{+0.016}_{-0.008}$), those associated with the H{\small I}\ column
density (F,G) ($^{+0.014}_{-0.015}$), and the uncertainty associated with the
unknown distance (H) ($^{+0.003}_{-0.006}$). Combining these, the final value we
derive for the sulphur abundance in complex~C is
0.094$\pm$0.020$^{+0.022}_{-0.019}$ times solar.
\par Figure 4 shows the dependence on the assumed geometry and distance for the
derived parameters. Similar figures could be made showing the dependence on the
other assumptions. As was the case for the abundance, we can derive a fiducial
value and estimate the systematic error for the other parameters. We then find
that:
$X$=0.23$\pm$0.06$^{+0.07}_{-0.04}$,
$n_o$=0.048$\pm$0.010$^{+0.011}_{-0.002}~\left(D/10\right)^{-0.5}$~cm$^{-3}$,
$T$=6800$\pm$500$^{+750}_{-900}$\,K, and
$P$=330$\pm$70$^{+105}_{-45}~\left(D/10\right)^{-0.5}$~K\,cm$^{-3}$.
\section{Discussion}
\subsection{Thermal pressure vs hot halo gas}
We can compare the derived thermal pressure with the thermal pressure of hot
halo gas. Wolfire et al.\ (1995) give a semi-empirical formula, using a base
density for the hot gas n(z=0)=0.0023\,cm$^{-3}$ and a temperature of order
\dex6\,K. From the ROSAT X-ray data, Snowden et al.\ (1998) find that the
probable value of the halo temperature is \dex{6.02\pm0.08}\,K. The emission
measure is about 0.02\,cm$^{-6}$\,pc (to within a factor of order 2), which for
the density given above corresponds to a scaleheight of order 5\,kpc. Such a
scaleheight would be similar to that observed for the highly-ionized atoms of
C$^{+3}$, N$^{+4}$ and O$^{+5}$ (Savage et al.\ 1997, Widmann et al.\ 1998).
\par The middle of the three bold-faced curves in the lower-right panel of
Fig.~4 shows the semi-empirical relation for n(z=0)=0.0023\,cm$^{-3}$, and
T=\dex{6.02}\,K. Both the pressure relation derived by Wolfire et al.\ (1995)
and the pressure derived by us represent the actual thermal pressure. Thus, if
there is pressure equilibrium, the most likely values of $n(z=0)$ and $T$ imply
a distance to complex~C of 10\,kpc. If the hot halo temperature were
\dex{5.94}\,K and the density were half as large, the implied distance is
$<$3\,kpc, incompatible with the observed lower limit. On the other hand, a
higher temperature (\dex{6.10}\,K) and density (double the value) would result
in equilibrium at a distance of 30\,kpc. We conclude that for reasonable values
for the density and temperature of hot halo gas, complex~C cannot be more
distant than $\sim$30\,kpc.
\subsection{Mass, energy and mass flow}
The mass of complex~C can be calculated by summing the observed column
densities, in the manner described by Wakker \& van Woerden (1991). We make the
assumptions that all the gas is at the same distance (unlikely, but we have
insufficient information to justify a different assumption), that N(H$_2$) can
be ignored (see Wakker et al.\ 1997), that N(H$^+$)/N(Htot)=0.23 everywhere, and
that there is a 28\% mass fraction of He. This yields a mass of
2.0\tdex6\,(D/5\,kpc)$^2$\,M$_\odot$.
\par To calculate the kinetic energy and mass flow associated with complex~C
requires some knowledge of its spatial velocity. Observed is the velocity
relative to the LSR, which contains the motion of the Sun and a contribution
from galactic rotation at the position of the object. To correct for these
contributions we use the deviation velocity (Wakker 1991), the difference
between the observed LSR velocity and the maximum LSR velocity that can be
easily understood in terms of differential galactic rotation. It is the minimum
velocity that the cloud has relative to its local environment. Integrating the
product of the mass and the square of the deviation velocity at each point in
the cloud and correcting for H$^+$ and He, leads to a kinetic energy of
$>$5.6\tdex{46}\,(D/5\,kpc)$^2$\,J, equivalent to the total energy of $>$500
supernovae.
\par We estimate the mass flow by making two different
assumptions: A) the space velocity is completely radial ($v_z=v_{\rm dev}\
\sin\,b$), or B) it is completely vertical ($v_z=v_{\rm dev}/\sin\,b$). This
gives 0.036 and 0.083\,(D/5\,kpc)\,M$_\odot$\,yr$^{-1}$, respectively. The area
covered by complex~C is 1623 square degrees (12.4\,(D/5\,kpc)$^2$\,kpc$^2$), so
the corresponding infall rate is
2.9--6.7\tdex{-3}\,(D/5\,kpc)$^{-1}$\,M$_\odot$\,yr$^{-1}$\,kpc$^{-2}$. This
value is similar to the rate of $\sim$4\tdex{-3}\,
M$_\odot$\,kpc$^{-2}$\,yr$^{-1}$ required by models of galactic chemical
evolution (Giovagnoli \& Tosi 1995). However, the theoretical rate should be
present over the whole Galactic Disk, whereas the HVCs cover only $\sim$18\% of
the sky. More HVC distances and metallicities are needed to solve this possible
discrepancy.
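The two bracketing mass-flow estimates translate directly into the quoted infall rate per unit area; a quick check using the numbers from this subsection (at $D$=5\,kpc):

```python
def infall_rate(mass_flow_msun_yr, area_kpc2=12.4):
    """Infall rate in Msun / yr / kpc^2 over the area covered by complex C."""
    return mass_flow_msun_yr / area_kpc2

rate_radial = infall_rate(0.036)     # fully radial space velocity
rate_vertical = infall_rate(0.083)   # fully vertical space velocity
```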
\subsection{Origins}
\par Our metallicity excludes that complex~C is part of a Galactic Fountain, as
its metallicity then should have been $>$0.3 solar, the lowest value found in
the outer galaxy (Afflerbach et al.\ 1997). The HVC thus must be
extra-galactic.
\par Oort (1970) presented a model of continuing infall, in which gas originally
near the (inhomogeneous) transition region separating Milky Way gas from
intergalactic gas is still falling onto the Milky Way. This gas starts out hot
and ionized, and becomes visible after interacting, cooling and mixing with
high-z galactic gas associated with activity in the disk. Oort's model predicts
z-heights of $\sim$1\,kpc and metallicities of $\sim$0.7 times solar. The
distance limits for HVC complexes~A (z=2.5--7\,kpc) and C (z$>$3\,kpc) (van
Woerden et al.\ 1998 and in these proceedings) and our metallicity result for
complex~C imply that the clouds would have to become neutral at much higher z
than Oort's model suggests.
\par A Local Group origin for HVCs was first suggested by Verschuur (1969), who
noted that according to the virial theorem some of the then-known clouds would
be gravitationally stable at distances of 400\,kpc. However, for most of the
clouds found in later surveys the stability distance is several to tens of Mpc,
implying M$\sim$\dex{10}\,M$_\odot$. Thus, this idea was no longer taken
seriously. Blitz et al.\ (1998) point out that dark (and/or ionized) matter may
be present, so that H{\small I}\ represents only 10--15\% of the total mass. This
reduces the average stability distance to 1\,Mpc. Based on this and many other
considerations, they suggest that the HVCs are remnants of the formation of the
Local Group. The large HVCs, including complex~C, would be nearby examples.
\par Extra-galactic HVCs could also represent gas orbiting the Milky Way, rather
than gas in the Local Group at large. This was originally suggested by Kerr \&
Sullivan (1969), who considered HVCs to be loosely bound intergalactic material,
too diffuse to contract into protogalaxies, orbiting the Galaxy at distances of
order 50\,kpc. They quote Toomre as suggesting that the source of this gas could
be tidal streams pulled out of the Magellanic Clouds during previous passages.
The metallicity would then be similar to that in the outer regions of the
Magellanic Clouds $>$5\,Gyr ago.
\par Mallouris et al.\ (1998) suggest that the HVCs are similar to Ly$\alpha$
absorbers. Vladilo (1998) suggests that damped Ly$\alpha$ absorbers are
associated with dwarf galaxies. Therefore, low-metallicity HVCs such as
complex~C could be failed dwarf galaxies, in which in the early universe some of
the gas formed stars, producing the observed metals, but where star formation
has currently stopped.
\section{Introduction}
The paradigm of correlated electron physics is based on the idea that
for a certain category of systems one better starts out with the electronic
structure of the atoms, treating the delocalization of the electrons
in the solid as a perturbation. Any student of physics has to struggle
through the theory of atomic multiplets, which is rather complicated
because of the intricacies associated with orbital angular momentum.
At first sight it is therefore remarkable that these orbital degrees
of freedom are completely neglected in the main stream of correlated
electron physics. Recently the interest in `orbitals' has been reviving,
especially since they appear to be relevant in one way or another in
the colossal magnetoresistance (CMR) manganites. In the wake of this
development, questions are asked on the relevancy of these orbitals
in the context of seemingly settled problems like the metal-insulator
transition in V$_2$O$_3$ \cite{Bao98}. In this contribution we will review
yet another recent development. Even in the Mott-insulating limit, where
the physics simplifies considerably, the interplay of orbital and spin
degrees of freedom poses a problem of principle.
There are two limits where the role of orbital degeneracy is well
understood: (i) The `band structure limit', which is
based on the assertion that electron correlations can be neglected.
In any modern local density approximation (LDA) band
structure calculation, orbitals are fully taken into account on the one
particle level, in so far as the atomic limit is of any relevance.
These translate into various bands, giving rise to multi-sheeted
Fermi surfaces, etcetera. (ii) The localized, orbital and spin ordered case
which we will refer to as the `classical limit'. In Mott-insulators, orbital
degrees of freedom acquire a separate existence in much the
same way as the spins of the electrons do. The orbitals can be
parametrized by pseudospins and these form together with the physical spins a
low energy sector which is described by generalizations of the Heisenberg
spin-Hamiltonian \cite{Jan93}. These are the spin-orbital models, like the
Kugel-Khomskii (KK) model for $e_g$ degenerate cubic cuprates \cite{Kug82}.
The `classically' ordered states, becoming exact in the limit of infinite
dimensions ($d\rightarrow\infty$) and/or large (pseudo) spin ($S\rightarrow
\infty$), define what is usually meant by orbital and spin order.
The question arises if there are yet other possibilities. We started to study
this problem quite some time ago \cite{Crete}, well before the subject
revived due to the manganites. Our motivation was actually related to a
theoretical development flourishing in the 1980's: large $N$ theories
\cite{Aue94}. By enlarging the symmetry, say from $SU(2)$ to $SU(N)$ with
$N$ large, new saddle points (ordered states) appear which correspond to the
fluctuation dominated (non-perturbative) limit of the large $S$/large $d$
theories. For a single correlated impurity, orbital degeneracy leads in a
natural way to these large $N$ notions. We asked the question if these large
$N$ notions could become of relevance in lattice problems. We focussed on
the simple problem of the $e_g$ Jahn-Teller degenerate Mott-insulator,
rediscovering the KK Hamiltonian \cite{Kug82}. We tried to tackle this
problem using the techniques invented by Arovas and Auerbach for the $SU(N)$
symmetric Heisenberg model \cite{Aro88}. We found that the $SU(4)$ symmetry
is so badly broken that the large $N$ techniques were of little help, which
is another way of saying that the physics of the KK model is not
controlled by large global symmetry. However, we did find a special
approximate solution which revealed that the quantum fluctuations are
actually enhanced, and this motivated us to study these fluctuations in more
detail starting from the large $S$ limit. In this process we discovered that
the enhancement of the fluctuations is due to the control exerted by
a point in parameter space which can be either called an infinite order
quantum-critical point, or a point of perfect {\em dynamical frustration}
in the classical limit \cite{Fei97}.
This phenomenon will be discussed in the next section. It poses a
rather interesting theoretical problem. This much is clear: the
ground-state degeneracy of the classical limit is lifted by
quantum fluctuations, and the question concerns the character of the
true ground state. As will be discussed, either the classical spin-orbital
order might survive, stabilized by an order-out-of-disorder mechanism,
or quantum-incompressible valence-bond like states might emerge. In
Section III the role of electron-phonon coupling will be addressed,
emphasizing the rather counter-intuitive result of LDA+U electronic
structure calculations that phonons play a rather secondary role
despite the fact that the lattice deformations are large.
Finally, the situation in the manganites will be shortly
discussed in Section IV.
\section{ The Kugel-Khomskii model and dynamical frustration }
Consider a Mott-insulator which is characterized by orbital degeneracy,
besides the usual spin degeneracy. Different from pure spin problems, these
spin-orbital problems are rather ungeneric and depend on the precise system
under consideration. A simple problem is a cubic lattice of $3d$-ions in a
$d^9$ configuration: the Kugel-Khomskii problem, which directly
applies to Cu perovskites like KCuF$_3$ or K$_2$CuF$_4$ \cite{Kug82}. The
large Mott gap in the charge excitation spectrum simplifies matters
considerably and one derives an effective Hamiltonian by insisting
on one hole per unit cell, deriving superexchange-like couplings
between the spin and orbital degrees of freedom by integrating
out virtual charge fluctuations.
The spins are described as usual in terms of an $su(2)$ algebra
($\vec{S}_i$). The orbital degrees of freedom are the $e_g$ cubic harmonics
$x^2-y^2 \sim |x\rangle$ and $3z^2-r^2 \sim |z\rangle$, which can be
parametrized in terms of pseudospins as
$|x\rangle ={\scriptsize\left( \begin{array}{c} 1\\ 0\end{array}\right)},\;
|z\rangle ={\scriptsize\left( \begin{array}{c} 0\\ 1\end{array}\right)}$.
Pauli matrices $\sigma^u$ ($u = x, y, z$) are introduced acting on these
states. Different from the spins, the $SU(2)$ symmetry associated with the
pseudospins is badly broken because the orbitals communicate with the
underlying lattice. Although the $e_g$ states are degenerate on a single
site, this degeneracy is broken by the virtual charge fluctuations, which
take place along the interatomic bonds, i.e., in a definite direction
with respect to the orientation of the orbitals. It is therefore convenient
to introduce operators which correspond to orbitals directed either along
or perpendicular to the three cubic axes $\alpha=a,b,c$, given by
$(\tau^{\alpha}_j-\frac{1}{2})$ and $(\tau^{\alpha}_j+\frac{1}{2})$, where
\begin{equation}
\tau^{a(b)}_i =\frac{1}{4}( -\sigma^z_i\pm\sqrt{3}\sigma^x_i ),
\hskip 1cm
\tau^c_i = \frac{1}{2} \sigma^z_i \;.
\label{orbop}
\end{equation}
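As a quick consistency check (added here, not part of the original text), the bond-directed operators of Eq. (\ref{orbop}) can be verified numerically; the matrix representation and conventions below are our own:

```python
import numpy as np

# Pauli matrices acting on the orbital pseudospin, in the (|x>, |z>) basis
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])

# Bond-directed orbital operators of Eq. (1)
tau_a = 0.25 * (-sz + np.sqrt(3.0) * sx)
tau_b = 0.25 * (-sz - np.sqrt(3.0) * sx)
tau_c = 0.5 * sz

# The three operators sum to zero -- the identity sum_alpha tau^alpha = 0
# invoked below when discussing the classical orbital degeneracy.
assert np.allclose(tau_a + tau_b + tau_c, 0.0)

# Each tau^alpha has eigenvalues +-1/2: a pseudospin-1/2 projection
# along one of three directions 120 degrees apart in pseudospin space.
for tau in (tau_a, tau_b, tau_c):
    assert np.allclose(np.linalg.eigvalsh(tau), [-0.5, 0.5])
```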
In terms of these operators, the Kugel-Khomskii Hamiltonian can be written
as ($J=t^2/U$ and $t$ is the hopping along the $c$-axis) \cite{Fei97},
\begin{eqnarray}
\label{kk1}
H_1 = &J& \sum_{\langle ij\rangle,\alpha} \left[ 4(\vec{S}_i\cdot\vec{S}_j )
(\tau^{\alpha}_i - \frac{1}{2}) (\tau^{\alpha}_j - \frac{1}{2})\right.
\nonumber \\
& & \hskip 1.0cm + \left. (\tau^{\alpha}_i+\frac{1}{2})(\tau^{\alpha}_j
+ \frac{1}{2}) - 1 \right] ,
\end{eqnarray}
neglecting the Hund's rule splittings $\propto J_H$ of the intermediate
$d^8$ states ($J_H$ is the singlet-triplet splitting).
Including those up to order $\eta=J_H/U$ yields in addition,
\begin{eqnarray}
\label{kk2}
H_2 = & J\eta & \sum_{\langle ij\rangle,\alpha}
\left[ (\vec{S}_i\cdot\vec{S}_j)
(\tau^{\alpha}_i + \tau^{\alpha}_j - 1 ) \right. \nonumber \\
&+& \left. \frac{1}{2}(\tau^{\alpha}_i-\frac{1}{2})
(\tau^{\alpha}_j-\frac{1}{2})
+ \frac{3}{2} (\tau^{\alpha}_i \tau^{\alpha}_j - \frac{1}{4})\right] .
\end{eqnarray}
Eqs. (\ref{kk1},\ref{kk2}) are rather unfamiliar: they describe a regular
Heisenberg spin problem coupled to a Potts-like orbital problem (choose
two out of three possibilities $\sim x^2-y^2, \sim y^2-z^2, \sim z^2-x^2$).
The oddity of Eqs. (\ref{kk1},\ref{kk2}) becomes clear when one studies
the classical limit. As usual, the $\vec{S}$'s and the $\vec{\tau}$'s
are treated as classical vectors. In order to draw a phase diagram
we introduced another control parameter,
\begin{equation}
\label{kk3}
H_3 = - E_z \sum_i \tau^z_i,
\end{equation}
a ``magnetic field'' for the orbital pseudo-spins, loosely associated with
a uniaxial pressure along the $c$-axis. The classical limit phase diagram
as function of $\eta$ and $E_z$ is shown in Fig. 1.
\begin{figure} \unitlength1cm
\begin{picture}(8,8)
\put(0.,0.){\psfig{figure=orbfig1.ps,height=7.5cm,width=7.5cm,angle=0}}
\end{picture}
\caption{ Phase diagram of the Kugel-Khomskii model in the classical limit,
as function of the Hund's rule coupling $J_H$ and tetragonal crystal field
$E_z$ (reproduced from Ref. \protect\cite{Fei97}).}
\label{f1}
\end{figure}
For a detailed discussion of the various phases we refer to Ref. \cite{Fei97}.
To give some feeling, for large positive $E_z$ the $x^2-y^2$ orbitals are
occupied, forming $(a,b)$ planes of antiferromagnetically coupled spins
(AFxx). This is nothing other than the situation realized in, e.g.
La$_2$CuO$_4$. For large negative $E_z$ the $3z^2-r^2$ orbitals condense,
forming a 3D spatially anisotropic Heisenberg antiferromagnet [AFzz with
stronger exchange coupling along the $c$-axis than in the $(a,b)$ planes].
Finally, the MOFFA, MOAFF and MOAAF phases are variations of the basic
Kugel-Khomskii spin-orbital order \cite{Kug82} obtained by rotating the
magnetic and orbital structure by $\pi/2$. For the MOFFA phase at $E_z =0$,
the orbitals have a two-sublattice structure in the $(a,b)$-planes ($x^2-z^2$
and $y^2-z^2$ on the A- and B-sublattice, respectively). Along the $c$-axis
strong antiferromagnetic spin-spin couplings are found, while the spin
couplings in the $(a,b)$ planes are ferromagnetic with a strength $\sim\eta$.
The anomaly occurs at the origin $(E_z,\eta)=(0,0)$ of the phase diagram:
a 3D antiferromagnet (AFzz), a 2D antiferromagnet (AFxx) and a quasi-1D
A-type antiferromagnet (MOFFA/MOAFF/MOAAF) become degenerate! The emphasis on
the `uniaxial pressure' $E_z$ is misleading in the sense that the full
scope of the problem is not visible directly from this phase diagram: at
the origin of Fig. 1 an {\em infinity\/} of classical phases become
degenerate. This is trivial to understand. In the absence of Hund's rule
exchange, the Hamiltonian Eq. (\ref{kk1}) becomes the full story.
Assuming a 3D classical antiferromagnet, $\vec{S}_i \cdot \vec{S}_j = -1/4$,
and inserting this in Eq. (\ref{kk1}) yields,
\begin{equation}
H_{eff} = J\sum_{\langle ij\rangle,\alpha}
\left( \tau_i^{\alpha} + \tau_j^{\alpha} - 1 \right) .
\label{3ddeg}
\end{equation}
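The cancellation behind Eq. (\ref{3ddeg}) can be made explicit (a check added here, not in the original): with $\vec{S}_i\cdot\vec{S}_j=-1/4$, the two terms of Eq. (\ref{kk1}) expand as

```latex
\begin{eqnarray*}
4(\vec{S}_i\cdot\vec{S}_j)\,
(\tau^{\alpha}_i-\tfrac{1}{2})(\tau^{\alpha}_j-\tfrac{1}{2})
&=& -\tau^{\alpha}_i\tau^{\alpha}_j
 + \tfrac{1}{2}(\tau^{\alpha}_i+\tau^{\alpha}_j) - \tfrac{1}{4}, \\
(\tau^{\alpha}_i+\tfrac{1}{2})(\tau^{\alpha}_j+\tfrac{1}{2}) - 1
&=& +\tau^{\alpha}_i\tau^{\alpha}_j
 + \tfrac{1}{2}(\tau^{\alpha}_i+\tau^{\alpha}_j) - \tfrac{3}{4},
\end{eqnarray*}
```

so the quadratic $\tau^{\alpha}_i\tau^{\alpha}_j$ terms cancel and only the linear terms of Eq. (\ref{3ddeg}) survive, independently of the orbital configuration.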
The orbital degrees of freedom are completely decoupled and all $2^N$ orbital
configurations have the same energy ($\sum_{\alpha}\tau_i^{\alpha}=0$)!
In addition, this infinity of different 3D spin systems has the same energy
as the MOFFA/MOAFF/MOAAF phases. In fact, at any finite
temperature the 3D antiferromagnet becomes stable because of the entropy
associated with the decoupled orbital sector \cite{janup}.
This `gauge' degeneracy is clearly a pathology of the classical limit. We
continued by studying the stability of the classical phase diagram with
respect to Gaussian quantum fluctuations. As discussed in more detail in Ref.
\cite{Foz98} this is a somewhat subtle affair. Intuitively, one could be
tempted to think that the orbitals and spins can be excited independently.
This is however not the case. The dynamical algebra of relevance to the
problem is an $so(4)$ algebra, and this implies that modes will occur which
excite at the same time the spins and the orbitals: the spin-and-orbital
waves (SOW)'s.
Next to a (longitudinal) sector of pure orbital excitations, a `transverse'
sector is found, corresponding to spin excitations which mix with
spin-and-orbital excitations, except for the acoustic modes at long
wavelength which become pure spin-waves as imposed by the Goldstone theorem.
We found that upon approaching the infinite-order critical point, the mass gap
associated with the discrete symmetry in the orbital sector collapses.
The (mixed) transverse modes give the dominating contribution to the
renormalization of energy and magnetic order parameter. In the AFxx (AFzz)
phase the lowest transverse mode softens along $\vec{k}=(\pi,0,k_z)$
[$\vec{k}=(k_x,0,0)$], and equivalent lines in the Brillouin zone (BZ),
regardless of how one approaches the critical lines. Thus, these modes become
dispersionless along particular (soft-mode) lines in the BZ, where we find
{\em finite\/} masses in the perpendicular directions,
\begin{eqnarray}
\omega_{\rm AFxx}(\vec{k}) \rightarrow & \Delta_x &
+ B_x \left( k_x^4 + 14k_x^2k_y^2 + k_y^4 \right)^{1/2}, \nonumber \\
\omega_{\rm AFzz}(\vec{k}) \rightarrow & \Delta_z &
+ B_z \left( k_y^2 + 4k_z^2 \right),
\label{mass0}
\end{eqnarray}
with $\Delta_i=0$ and $B_i\neq 0$ at the $M$ point, and the quantum
fluctuations diverge logarithmically, $\langle\delta S^z\rangle\sim
\int d^3k/\omega(\vec{k})\sim\int d^2k/(\Delta_i+B_ik^2)\sim\ln\Delta_i$,
if $\Delta_i\rightarrow 0$ at the transition. We found that the quantum
correction to the order parameter $\langle S^z\rangle$ becomes large,
well before the critical point is reached. In Fig. 1 the lines are
indicated where $|\langle \delta S^z\rangle|=\langle S^z\rangle$:
in the area enclosed by the dashed and dotted lines classical order
cannot exist, at least not in Gaussian order.
If the classical limit is as sick as explained in the previous
paragraphs, what is happening instead? {\it A priori\/} it is not
easy to give an answer to this question. There are no `off the shelf'
methods to treat quantum spin problems characterized by classical
frustration, and the situation is similar to what is found in, e.g.
$J_1-J_2-J_3$ problems \cite{Pre88}. A first possibility is quantum
order-out-of-disorder \cite{Chu91}: quantum fluctuations can stabilize
a particular classical state over other classically degenerate states, if
this particular state is characterized by softer excitations than any of the
other candidates. Khaliullin and Oudovenko \cite{Kha97} have suggested that
this mechanism is operative in the present context, where the AFzz
3D anisotropic antiferromagnet is the one becoming stable. Their original
argument was flawed because of the decoupling procedure they used, which
violates the $so(4)$ dynamical algebra constraints \cite{Foz98}. However,
Khaliullin claims to have found an `$so(4)$ preserving' self-consistent
decoupling procedure which does yield order-out-of-disorder \cite{Kha98}.
Nevertheless, there is yet another possibility: valence-bond (VB) singlet
(or spin-Peierls) order, which at the least appears in a more natural way
in the present context than is the case in higher dimensional spin-only
problems, because it is favored by the directional nature of the orbitals.
The essence of a (resonating) valence bond [(R)VB] state is that one
combines pairs of spins into singlets. In the short-range (R)VB states these
singlets involve nearest-neighbor spin pairs. Subsequently, one particular
covering of the lattice with these `spin-dimers' might be favored
(VB or spin-Peierls state), or the ground state might become a coherent
superposition of many of these coverings (RVB state). On a cubic lattice the
difficulty is that although much energy is gained in the formation of the
singlet pairs, the bonds between the singlets are treated poorly.
Nevertheless, both in 1D spin systems (Majumdar-Ghosh \cite{Maj69},
AKLT-systems \cite{Aff87}) and in the large $N$ limit of $SU(N)$ magnets in
2D, ground states are found characterized by spin-Peierls/VB order \cite{Read}.
It is straightforward to understand that the interplay of orbital- and spin
degrees of freedom tends to stabilize VB order. Since the orbital sector is
governed by a discrete symmetry, the orbitals
tend to condense in some classical orbital order. Different from
the fully classical phases, one now looks for orbital configurations
optimizing the energy of the spin VB configurations. The spin energy
is optimized by having orbitals $3\zeta^2-r^2$ on the nearest-neighbor
sites where the VB spin-pair lives, with $\zeta$ directed along the bond.
This choice maximizes the overlap between the wave functions, and thereby the
binding energy of the singlet. At the same time, this choice of orbitals
minimizes the unfavorable overlaps with spin pairs located in directions
orthogonal to $\zeta$. The net result is that VB states are much better
variational solutions for the KK model, as compared to the standard
Heisenberg spin systems.
\begin{figure} \unitlength1cm
\begin{picture}(8,8)
\put(0.,0.){\psfig{figure=orbfig2.ps,height=7.5cm,width=7.5cm,angle=0}}
\end{picture}
\caption{ A variety of valence bond solids (see text). }
\label{f2}
\end{figure}
Addressing this systematically, we found that two families of VB states
are most stable: (i) The `staggered' VB states like the PVBA and
PVBIc states of Fig. 2. These states have in common that the overlap
between neighboring VB pairs is minimized: the large lobes of the
$3\zeta^2-r^2$ wave functions of different pairs are never pointing
to each other. (ii) The `columnar' VB states like the VBc (or VBa)
state of Fig. 2. In the orbital sector, this is nothing other than
the AFzz state of Fig. 1 ($3z^2-r^2$ orbitals on every site). Different
from the AFzz state, the spin system living on this orbital backbone is
condensed in a 1D spin-Peierls state along the $z$-direction which is
characterized by strong exchange couplings. The spins in the
$a(b)$-directions stay uncorrelated, due to the weakness of the respective
exchange couplings as compared to the VB mass gap.
The energies of these VB states and the classical states dressed up with
quantum fluctuations are quite close together. A key issue is whether the true
ground state is compressible (dressed classical state), or characterized
by a dynamical mass-gap (VB states). This will most likely depend on
subtleties beyond the reach of the relatively crude variational Ans\"atze
presented here \cite{notekhalu}. So the nature of the ground state of the
Kugel-Khomskii problem for small Hund's-rule coupling is still an open
problem.
\section{ Electron-phonon coupling in KC\lowercase{u}F$_3$ }
In the previous Section we discussed the orbital order as driven by
the electron-electron interactions. However, one can think quite
differently about the real systems: the deformations found in
KCuF$_3$ (or LaMnO$_3$) could in principle be entirely caused by
phonon-driven collective Jahn-Teller effects. This subject has
been intensely studied in the past and is well understood.
It starts out neglecting electron-electron interactions,
and the focus is instead on the electron-phonon coupling. In case
that the ions are characterized by a Jahn-Teller (orbital) degeneracy,
one can integrate out the (optical) phonons, and one finds effective
Hamiltonians with phonon mediated interactions between the orbitals.
In the specific case of $e_g$ degenerate ions in a cubic crystal, these
look quite similar to the KK Hamiltonian, except that the spin dependent
term is absent\cite{KKphon}. Any orbital order resulting from this
Hamiltonian is now accompanied by a lattice distortion of the same symmetry.
The size of the quadrupolar deformation in the $(a,b)$
plane of KCuF$_3$ is actually as large as 4 \% of the lattice constant ($a$).
It is therefore often argued that the orbital order is clearly phonon-driven,
and that the physics of the previous section is an irrelevancy. Although
appealing at first sight, this argument is flawed: large displacements
do not necessarily imply that phonons do all the work.
The deformations of the lattice and the orbital degrees of freedom cannot
be disentangled using general principles: they constitute an irreducible
subsector of the problem. The issue is therefore a quantitative one, and
in the absence of experimental guidance one would therefore like to address
the issue with a quantitative electronic structure method. The LDA+U method
is the method of choice. It is constructed to handle the physics of
electronic orbital ordering, keeping the accurate treatment of the
electron-lattice interaction of LDA intact. According to LDA+U calculations
the total energy gained by the deformation of the lattice is minute as
compared to the energies involved in the electronic orbital ordering
\cite{Lie95}. At the same time, the phonons are important on the macroscopic
scale and they contribute to driving KCuF$_3$ away from the infinite-order
critical point of the phase diagram of Fig. 1.
We start out with the observation
that according to LDA KCuF$_3$ would be an undistorted, cubic
system: the energy increases if the distortion is switched on (see
Fig. 3). The reason is that KCuF$_3$ would be a band metal according to
LDA (the usual Mott-gap problem) with a Fermi-surface which is not
susceptible to a band Jahn-Teller instability. LDA+U yields a drastically
different picture \cite{Lie95}. LDA can be looked at as unpolarized LDA+U,
and by letting both the orbitals and the spins polarize an energy is gained
of order of the band gap, i.e., of the order of 1 eV. The orbital- and
spin polarization is nearly complete and the situation is close to the
strong coupling limit underlying the spin-orbital models of Section II.
Also when the cubic lattice is kept fixed, the correct orbital and spin
ordering (MOFFA of Fig. 1) is found, with spin-exchange constants which
compare favorably with experiment \cite{Lie95}. Because the orbital
order has caused the electron density to become highly asymmetric,
the cubic lattice is unstable. Further energy can be gained by letting the
lattice relax. The lattice distortion calculated in LDA+U ($\sim$ 3\% of $a$)
comes close to the actual distortion of KCuF$_3$ ($\sim$ 4 \%).
However, despite the fact that the distortion is large, the energy gained by
the lattice relaxation is rather minute: $\sim 50$ meV (see Fig. 3)!
Obviously, in the presence of the electronic orbital order the cubic lattice
becomes very soft with regard to the quadrupolar distortions and even a small
electron-phonon coupling can cause large distortions.
\begin{figure} \unitlength1cm
\begin{picture}(8,8)
\put(0.,0.){\psfig{figure=orbfig3.ps,height=7.5cm,width=7.5cm,angle=0}}
\end{picture}
\caption{The dependence of the total energy of KCuF$_3$ on the
quadrupolar lattice distortion according to LSDA and LDA+U band
structure calculations (after Ref. \protect\cite{Lie95}).}
\label{f3}
\end{figure}
Although the energy gained in the deformation of the lattice is rather
small, the electron-phonon coupling is quite effective in keeping KCuF$_3$
away from the physics associated with the origin of the phase diagram (Fig.
1). Since the ferromagnetic interactions in the $(a,b)$ plane of KCuF$_3$
are quite small ($J_{ab}=-0.2$ meV, as compared to the `1D' exchange
$J_c=17.5$ meV \cite{Ten95}), one might argue that the effective Hund's rule
coupling $J\eta$ as of relevance to the low energy theory is quite small.
Although this still needs further study, it might well be that in the absence
of the electron-phonon coupling KCuF$_3$ would be close to the origin of Fig.
1. However, the electron-phonon coupling can be looked at as yet another axis
emerging from the origin. In principle, the electron-phonon coupling
introduces two scales: (i) a retardation scale, which is governed by the
ratio of the phonon frequency and the electronic scale set by $J\sim 20$ meV.
Since $J$ is relatively small, KCuF$_3$ is close to the anti-adiabatic limit
where the lattice follows the electronic fluctuations, (ii) in
the anti-adiabatic limit the phonons are high energy modes which can be
integrated out, causing the effective orbital-orbital couplings we earlier
referred to. These couplings destroy the cancellations leading to Eq.
(\ref{3ddeg}), thereby driving the system away from the point of classical
degeneracy. The typical scale for the phonon induced effective orbital
interactions is at most of the order of the LDA+U lattice relaxation energy.
However, as the latter ($\sim 50$ meV) is quite a bit larger than $J$, the
effective interaction will likely be able to put KCuF$_3$ well outside the
`dangerous' region near the origin of the phase diagram.
In summary, although further work is needed it might be that phonons are
to a large extent responsible for the stability of KCuF$_3$'s classical
ground state. In any case, one cannot rely on the sheer size
of the lattice deformations to resolve this issue!
\section{How about the manganites ?}
Given the discussion so far, the search for interesting quantum effects
in orbital degenerate Mott-insulators should not be regarded as hopeless.
Unfortunately, the insulating parent compounds of the CMR manganites, such
as LaMnO$_3$, are {\it not\/} candidates for this kind of physics. The
reason is not necessarily phonons: also in the manganites the `Jahn-Teller'
lattice distortions are sizable, but, as argued above, this does not imply
that the phonons are dominating. Two of us derived a Kugel-Khomskii-type model
of relevance to this regime, and we did find a dynamical frustration of
$e_g$-superexchange at $J_H\simeq 0$ \cite{Fei98}. However, the system is
driven away from this point by two effects:
(i) the manganites are in the Hund's rule dominated regime, with a large
splitting between the lowest energy high-spin state at $U-5J_H$
(with $J_H=0.69$ eV \cite{Miz95}), and
the low-spin states at energies $\sim U$;
(ii) the additional $t_{2g}$-superexchange between the $S=3/2$ cores favours
an antiferromagnetic order in all three spatial directions.
The net outcome is that the ferromagnetic
interaction between the {\em total $S=2$ spins\/} in the $(a,b)$ planes
is of order of the $c$-axis exchange, signalling that the manganites are in
the Hund's rule stabilized regime of the phase diagram.
The mysteries of the manganites relate to what happens when
quantum-mechanical holes are added to the orbital/spin ordered insulator.
This is undoubtedly a problem with its own characteristics, which cannot
be reduced to a variation on the far simpler problems encountered in the
insulators. Nevertheless, we do believe that the study of the insulating
limit might be of some help in better appreciating what is going on in the
doped systems.
It is tempting to think about orbital degrees of freedom as
being spins in disguise. This is not quite the case. Orbitals are far less
quantum-mechanical -- they are more like Ising spins than Heisenberg spins.
Secondly, orbitals carry this unfamiliar property that depending on their
specific orientation in internal space, overlaps increase in particular real
space directions, while they diminish in orthogonal directions.
Our valence-bond constructions illustrate this peculiar phenomenon in the
case of spins, but the same logic is at work when the hole is delocalizing.
This intimate connection between internal symmetry and the directionality of
delocalization causes the dynamical frustration which has been highlighted
in this communication. This motive seems also at work in the doped system,
witness the many near degenerate states found both in mean-field
calculations \cite{Miz95,Nag98} and in experiment \cite{Tok94}.
Further work is needed on this fascinating problem.
{\it Acknowledgements}. We thank A. I. Lichtenstein for helpful discussions.
AMO acknowledges support by the Committee of Scientific Research (KBN) of
Poland, Project No. 2 P03B 175 14.
\section{Introduction}
This is the second paper of a series aimed at studying the radio properties
of galaxies in the local universe. Paper I (Gavazzi \& Boselli 1998)
is devoted to a study of the dependence of the radio luminosity function
(RLF) on the
Hubble type and on the optical luminosity, using a deep radio/optical survey
of the Virgo cluster.\\
In the present paper we wish to discuss another issue:
is the local RLF of late-type galaxies universal or is it influenced
by the environmental properties of galaxies?\\
Jaffe \& Perola (1976) found that late-type galaxies in some clusters
(e.g. Coma) have radio counterparts that are unexpectedly overluminous
compared with "field" galaxies. Gavazzi \& Jaffe (1986) confirmed
these early claims by
comparing the RLF of late-type galaxies within and outside rich clusters.\\
To re-address this question we derive in this paper the RLFs of galaxies
in five nearby clusters and in less dense regions of the universe at
similar distances. For this purpose we take advantage of the unprecedented
homogeneous sky coverage of two recent
all sky radio surveys carried out with the VLA (NVSS and FIRST)
(Condon et al. 1998; White et al. 1997a).\\
Moreover, precise photometric measurements have become available in
the regions under study.
For example a Near-Infrared (NIR) survey was
recently completed (Gavazzi \&
Boselli 1996; Gavazzi et al. in preparation; Boselli et al. in preparation),
which is crucial for
meaningful determinations of the radio properties of galaxies.
In fact the radio emission from spiral and irregular galaxies is to
first order
proportional to their optical luminosity, as shown by Hummel (1981),
or to their mass, which is well traced by their NIR luminosity
(Gavazzi et al. 1996).
Hence the necessity of properly normalizing the radio to the optical or NIR
luminosities.\\
In Section 2 and 3 we discuss the optical sample used, the radio identification
procedure and the method for deriving the RLF.
Differences in the
RLFs of the individual clusters (Section 4) are discussed in the framework of their
X-ray properties in Section 5.
\section{The Sample}
\subsection{The Optical Data}
The present investigation is based on the nearby clusters of galaxies
A262, Cancer, Coma, A1367 and Virgo, and on relatively isolated
objects in the Coma supercluster.\\
The optical sample is taken from the
CGCG Catalogue (Zwicky et al. 1961-68) in the regions:\\
$01^h43^m<\alpha<02^h01^m; 34.5^{\circ}<\delta<38.5^{\circ}$ (A262);\\
$08^h11^m<\alpha<08^h25^m; 20.5^{\circ}<\delta<23^{\circ}$ (Cancer) and\\
$11^h30^m<\alpha<13^h30^m; 18^{\circ}<\delta<32^{\circ}$ (Coma--A1367).\\
The latter region, beside the two rich clusters, contains
about 50\% of galaxies in relatively low density environments,
belonging to the bridge between Coma and A1367 (see Gavazzi et al. 1998).
Within the limiting magnitude of the CGCG ($m_p\leq $15.7) these regions
contain 448 late-type (Sa-Irr) galaxies, all (but 3) with a redshift
measurement in the literature (see Gavazzi et al. 1998).
Coordinates are measured with 1-2 arcsec error.
Photographic photometry
with an uncertainty of $\sim$ 0.35 mag is given in the CGCG.
Near Infrared (H) photometry is also available for all galaxies
except 5 (see Gavazzi \& Boselli 1996 and Gavazzi et al. in preparation).\\
The Virgo sample, extracted from the VCC (Binggeli et al. 1985)
is fully described in Paper I. Here we use a subsample of 174 late-type objects
limited to $m_B\leq $14.0. The H band magnitudes are from
Gavazzi \& Boselli (1996) and Gavazzi et al. (in preparation).
\subsection{1.4 GHz continuum data}
Radio continuum 1.4 GHz data in the regions covered by the present
investigation are available from a variety of sources:\\
1) Full-synthesis and snap-shot observations of specific regions
were undertaken with the VLA and with the WSRT ("pointed" observations).
Jaffe \& Gavazzi (1986), del Castillo et al. (1988) and
Gavazzi \& Contursi (1994) observed with the VLA several regions of the
Coma supercluster. Venturi et al. (1990) took similar data of the Coma
cluster with the WSRT. Bravo Alfaro (1997) derived some continuum measurements
of galaxies in the Coma cluster from his VLA 21 cm line survey.
Gioia \& Fabbiano (1986), Condon (1987) and Condon et al. (1990)
observed with the VLA relatively nearby galaxies projected onto the
Coma regions. Salpeter \& Dickey (1987) carried out a survey of the
Cancer cluster with the VLA. These surveys do not generally constitute a
complete set of observations.\\
2) Recently, two all-sky surveys carried out with the VLA at 1.4 GHz became
available:\\
a) the B array (FWHM = 5.4 arcsec) FIRST survey (1997 release) covers the sky north of
$\delta >22^{\circ}$, with an average rms=0.15 mJy (White et al. 1997a).\\
b) the D array (FWHM = 45 arcsec) NVSS survey covers the sky north of
$\delta >-40^{\circ}$, with an average rms=0.45 mJy (Condon et al. 1998).
Except in specific regions
of the sky near bright sources, where the local rms is higher than average,
these surveys
offer an unprecedented homogeneous sky coverage. They not only provide us
with extensive
catalogues of faint radio sources, but also with homogeneous upper limits
at any celestial position.\\
Since radio data from more than one source exist for several target galaxies,
we choose between them adopting the following list of priority:\\
1) in general we prefer NVSS data to any other source because of its
homogeneous character,
relatively low flux density limit and because its FWHM beam better
matches the
apparent sizes of galaxies under study, thus providing us with flux
estimates little affected by missing extended flux.\\
2) For individual bright radio galaxies (e.g. M87, N3862, N4874) we prefer
data from specific "pointed" observations since they should provide us
with more reliable estimates of their total flux.\\
3) in all cases where the flux densities from NVSS are lower than those
given in other references
we privilege the reference carrying the highest flux density.\\
4) in the region of the Coma superluster north of 22$^{\circ}$ we made
a comparison between the flux measurements derived from all available
surveys (including FIRST).
As expected, the NVSS flux densities are systematically 1.9 times larger
than the FIRST ones.
Furthermore several NVSS sources are undetected in the FIRST data-base.
These correspond
to slightly extended sources resolved by the FIRST beam, thus with peak
flux density lower than the survey limit.
Conversely it seldom happens that FIRST sources are undetected in the
NVSS data-base.
These are faint compact sources below the NVSS limiting flux density.
In both cases the detections are often confirmed by
independent "pointed" measurements. Thus we adopt the reference carrying the
largest flux density.
\subsection{The radio-optical identifications}
At the position of all optically selected galaxies
we search for
a radio-optical coincidence. For the remaining undetected galaxies
we compute an upper limit to the flux density of $4\times$ the local rms.
For this purpose we proceed as in Paper I with the following modifications:\\
\noindent
1) In the Coma supercluster region ($\delta >22^{\circ}$)
we pre-select sources from the FIRST data-base with a maximum radio-optical
positional discrepancy of 30 arcsec from the target galaxies.\\
\noindent
2) In all regions
we pre-select sources from the NVSS data-base, allowing for a maximum
radio-optical positional discrepancy of 30 arcsec.\\
3) at the position of all pre-selected optical-radio matches we compute an
"identification class" (ID) according to Paper I.
\noindent
\begin{figure*}
\vbox{\null\vskip 16.0cm
\special{psfile=gg8155f1.ps voffset=-130 hoffset=0 hscale=90 vscale=90 angle=0}
}
\caption{The differential (a) and cumulative (b) RLFs as a function of the radio/optical ratio $R_B$ for late-type galaxies in 5 clusters and for isolated galaxies, multiplets and groups
in the Coma supercluster. The RLF of isolated galaxies (open dots) is repeated in all other panels
for comparison.
}
\label{Fig.1}
\end{figure*}
\begin{figure*}
\vbox{\null\vskip 16.0cm
\special{psfile=gg8155f2.ps voffset=-130 hoffset=0 hscale=90 vscale=90 angle=0}
}
\caption{The differential (a) and cumulative (b) RLFs as a function of the radio/near-infrared ratio $R_H$ for late-type galaxies in 5 clusters and for isolated galaxies, multiplets and groups
in the Coma supercluster. The RLF of isolated galaxies (open dots) is repeated in all other panels
for comparison.
}
\label{Fig.2}
\end{figure*}
The 408 positive radio-optical matches are listed in Table 1 as follows: \newline
Column 1: the CGCG (Zwicky et al. 1961-68) designation.\newline
Column 2: the photographic magnitude corrected for extinction in
our Galaxy according to Burstein \& Heiles (1982) and for internal extinction
following the prescriptions of Gavazzi \& Boselli (1996). \\
Column 3: the H band (1.65 $\mu m$) magnitude corrected for internal
extinction following the prescriptions of Gavazzi \& Boselli (1996). \\
Column 4: the morphological classification.\\
Column 5: the membership to the individual clusters and clouds as
defined in Gavazzi et al. (1998).\\
Columns 6, 7: the (B1950) optical celestial coordinates of the target galaxy.\\
Columns 8, 9: the (B1950) celestial coordinates of the radio source.\\
Column 10: the radio-optical offset (arcsec).\\
Column 11: the identification class (see Paper I for details).
ID=1 and 2 are good identifications. ID=4 corresponds to radio sources projected
within the galaxy optical extent. ID=3 are dubious identifications, not used in
the following analysis.\\
Column 12: the 1.4 GHz total flux density (mJy).\\
Columns 13, 14: the extension parameters of the radio source (major and minor axes
in arcsec).\\
Column 15: reference to the 1.4 GHz data.\\
All sources listed in Table 1 are found within 30 arcsec from the central
optical coordinates of the parent galaxies.
An estimate of the number of possible chance identifications ($N_{c.i.}$)
among the 408 sources/galaxies listed in Table 1 is carried out using
Fig. 6 of Condon et al. (1998). The probability
of finding an unrelated source within 30 arcsec of an arbitrary position
is 1~\%. Thus about 4 sources in Table 1 should be spurious associations.
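The expected number of spurious associations follows directly from the quoted probability; as a minimal sketch (the 1\% figure is from the text, the code is ours):

```python
# Expected number of chance identifications among the radio-optical matches:
# each of the 408 positions has a ~1% chance of an unrelated source
# falling within 30 arcsec (Condon et al. 1998, Fig. 6).
n_matches = 408
p_chance = 0.01
expected_spurious = n_matches * p_chance
```

which gives about 4 spurious associations, as stated in the text.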
\section{The RLF}
Spiral galaxies are well known to develop radio sources with an average
radio luminosity proportional to their optical luminosity (see Paper I).
For these objects it is convenient
to define the (distance independent) radio/optical ratio:
$R_\lambda = S_{1.4} / [k(\lambda)\, 10^{-0.4\,m(\lambda)}]$,
where $m(\lambda)$ is the magnitude at some wavelength $\lambda$,
and $k(\lambda)=4.44\times 10^6$ and $1.03 \times 10^6$ are the factors
that transform the broad-band B and H magnitudes, respectively,
into flux densities in mJy.
$R_B$ gives the ratio of the radio emission per unit light emitted
by the relatively
young stellar population, while the Near Infrared $R_H$ gives the ratio
of the radio emission per unit
light emitted by the old stellar population, thus per unit dynamical
mass of the system (see Gavazzi 1993; Gavazzi et al. 1996).
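The ratios defined above are straightforward to evaluate. The sketch below uses hypothetical galaxy values (the 10 mJy flux density and the magnitudes are illustrative, not taken from Table 1); only the $k(\lambda)$ factors are those quoted in the text:

```python
# k(lambda) factors from the text: they convert broad-band B and H
# magnitudes into flux densities in mJy.
K_MJY = {"B": 4.44e6, "H": 1.03e6}

def radio_ratio(s14_mjy, mag, band):
    """Distance-independent ratio R_lambda = S_1.4 / (k(lambda) * 10**(-0.4*m))."""
    return s14_mjy / (K_MJY[band] * 10 ** (-0.4 * mag))

# Hypothetical galaxy: S_1.4 = 10 mJy, m_B = 15.0, m_H = 11.5.
r_b = radio_ratio(10.0, 15.0, "B")  # radio per unit young-star light
r_h = radio_ratio(10.0, 11.5, "H")  # radio per unit old-star light (~dynamical mass)
```

Because both quantities scale with the same distance factor, the ratios are distance independent, which is what makes them suitable for comparing clusters at different distances.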
The fractional radio luminosity function (RLF), i.e. the
probability distribution
$f(R_\lambda)$ that a galaxy develops a radio source of a given radio/optical
ratio $R_\lambda$,
is derived from a complete, optically selected sample of galaxies
using equation (5) of Paper I.
\section{Results}
\subsection{The environmental dependence of the RLF}
The analysis in this section is aimed at determining whether the radio
properties of
late-type (S-Irr) galaxies depend on their environmental conditions.
For this purpose we compare the RLFs of galaxies in 5 rich clusters with
those of galaxies belonging to the relatively isolated regions of the Coma
supercluster.
According to the definition of Gavazzi et al. (1998), who studied the
3-D distribution of galaxies in this supercluster, galaxies with no
companion within 0.3 Mpc projected radius can be considered "isolated";
"multiplets"
have at least one companion within 0.3 Mpc projected radius
and within 600 km~s$^{-1}~$, and "groups" have at least 8 galaxies within 0.9
Mpc projected radius and within 600 km~s$^{-1}~$.\\
We derive for these objects the differential frequency distributions
$f(R_H)$ and $f(R_B)$ and the
corresponding integral distributions $F(\geq R_H)$ and $F(\geq R_B)$,
binned in intervals
of $\Delta R_\lambda = 0.4$. These are shown in panels (a) and (b) of
Figs. 1 and 2, respectively.\\
a) The shape of the differential $f(R_B)$s is typical of a
normal distribution peaked at $\log(R_B)$ between $-0.5$ and 0. This confirms
that there is a
direct proportionality between the mean radio and optical luminosities.
About 15 \% of all galaxies are detected at the peak of the distribution.
About 50\% of all galaxies
have a radio/optical ratio greater than 0.01.\\
b) The shape of the differential $f(R_H)$s is similar to the $f(R_B)$s,
but with a somewhat
larger dispersion. This confirms that the radio luminosity better
correlates with the young than with the old stellar population.\\
c) It appears that both $f(R_H)$ and $f(R_B)$ of isolated galaxies,
members of groups and of the Cancer,
A262 and Virgo clusters are statistically consistent with one another.
Moreover we searched for possible differences among the various
subclusters within the Virgo cluster (cluster A, B, southern extension)
and found none. These results confirm that
the RLFs of the Virgo, Cancer and A262 clusters are similar to the
field one, as claimed by
Kotanyi (1980), Perola et al. (1980) and by Fanti et al. (1982),
respectively.\\
d) The RLFs of the Coma and A1367 clusters and of the Coma supercluster
multiplets, on the contrary, show significantly enhanced radio emission:
at any given $\log R_\lambda$ above $-0.5$, the probability of finding a
radio source
associated with a late-type galaxy in these clusters is a factor of
$\sim$ 10 higher than for other
galaxies. Conversely, at fixed $f(R_\lambda)$ (e.g. 10\%), these galaxies
have a ratio $R_\lambda$ a factor of 5 higher than the remaining galaxies.
The overluminosity of Coma with respect to the "field" was claimed by
Jaffe \& Perola (1976) and later confirmed by Gavazzi \& Jaffe (1986). Similar
evidence was found for A1367 by Gavazzi (1979) and confirmed by
Gavazzi \& Contursi (1994).\\
Results c) and d) are even more clear-cut in the cumulative distributions.
However, the reader should remember that cumulative distributions tend to
emphasize differences present in the highest $R_\lambda$ bins.
Fig. 3 shows that the cumulative fraction of galaxies above this ratio, $F(> R_B=0.2)$,
is consistently below 10\% for isolated galaxies and for members of A262,
Cancer and Virgo, and consistently above 30\% for multiplets and for members of A1367
and Coma.
\begin{figure*}
\vbox{\null\vskip 7.5cm
\special{psfile=gg8155f3.ps voffset=-300 hoffset=-30 hscale=100 vscale=100 angle=0}
}
\caption{The cumulative RLF for $\log R_B > 0.2$ is given for the various clusters and
substructures.
}
\label{Fig.3}
\end{figure*}
\section{Discussion and Conclusions}
\subsection{Cluster Galaxies}
We have shown (Section 4.1) that late-type galaxies in the
clusters A1367 and Coma develop radio sources more frequently
than galaxies in the remaining clusters or than more isolated galaxies.
Here we wish to discuss whether this evidence is connected with
the properties of the hot gas permeating the clusters (IGM),
which would emphasize the role of the environment.\\
Enhanced radio emission and morphological disturbances,
both in the radio and in the optical, have been observed in three
Irr galaxies in A1367 (CGCG 97073, 97079 and 98087), which show radio trails exceeding
50 kpc in length (see Gavazzi \& Jaffe 1985;
Gavazzi et al. 1995). A highly asymmetrical HI structure has been reported in
NGC 4654 in the Virgo cluster (Phookun \& Mundy 1995), and several other
examples are discussed in Gavazzi (1989).
These peculiarities have been interpreted
in the ram-pressure scenario: galaxies in fast motion
through the intergalactic medium experience enough dynamical pressure
to compress their magnetosphere on the upstream side, form
a tail-like radio structure on the downstream side and produce
a net enhancement of the radio luminosity.
These galaxies should have experienced such a pressure for a relatively
short time, otherwise their HI content would have been strongly reduced
by stripping, contrary to the observations.
A similar interpretation has been proposed to explain the asymmetries in
NGC 1961 (Lisenfeld et al. 1998) and NGC 2276 (Hummel \& Beck 1995).
In these cases however the gravitational interaction with companions provides an
alternative interpretation of the observed asymmetry (see Davis et al. 1997).
Although
these phenomena have not been observed in the Coma cluster, perhaps due
to the lack of appropriate sensitivity/resolution, it cannot be excluded
that galaxies in this cluster have radio luminosities enhanced
by the same mechanism.\\
It is in fact remarkable that the radio/H ratio $R_H$ of the detected
galaxies indicates that the radio emissivity increases with
the transit velocity through the IGM.\\
Fig. 4 shows $log R_H$ as a function of the deviation of the
individual velocities (projected along the line of sight)
from the mean velocity of the cluster to which they belong.
Highly HI-deficient objects ($Def_{HI}>0.8$) are excluded from the plot because
their star formation rate, and thus their radio emissivity, might have been
totally quenched by complete gas removal under the dynamical pressure (see Cayatte et al. 1990).
Galaxies populate a "wedge" region in the $R_H$ vs.
$|\Delta V|=|V_g - <V_{cl}>|$ plane.
This is because some galaxies in fast transverse motion through the cluster
might appear at low $|\Delta V|$ if their motion is parallel to the plane
of the sky. For example, the 50 kpc long radio trails associated with the
three A1367 galaxies mentioned above testify that a significant component of
their velocity lies in a plane perpendicular to the line of sight.
The wedge pattern is observed in all clusters, but to a lesser degree in Virgo.
Fig. 4 also reports $<log R_H>$ averaged below and above $|\Delta V|=720~km~s^{-1}$,
showing that the average contrast between the radio emissivity at low vs. high $|\Delta V|$
ranges between a factor of 2 and 7, with a mean of 3. This evidence by itself is sufficient
to rule out that the enhanced radio activity is associated with galaxy-galaxy
interactions, which are expected to be more effective at small velocity differences.\\
An estimate of the average dynamical pressure,
$P_{ram} \sim n_e \times \Delta V^2$,
can be derived from the global cluster X-ray luminosity and temperature via
$n_e^2 = k L_x/T_x^{1/2}$.
Adopting $L_x$ and $T_x$ for Virgo, A262, A1367 and Coma from the recent compilation by White et al. (1997b) (their Table 1) and for Cancer from Trinchieri (1997),
and adopting the velocity dispersions of the
individual clusters (taken from White et al. 1997b and from our own data
for Cancer), we compute that the effective dynamical pressure experienced by
galaxies is nearly absent in the Cancer cluster, 10 times
higher in A262 and Virgo, 30 times higher in A1367 and 300 times higher in the Coma cluster.
This provides a hint for explaining the excess radio emission in A1367 and Coma with respect
to all other clusters.
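The scaling argument can be sketched numerically. The cluster values below are deliberately arbitrary placeholders (not the White et al. 1997b or Trinchieri 1997 numbers); the point is only how $P_{ram} \sim n_e \Delta V^2$ with $n_e \propto (L_x/T_x^{1/2})^{1/2}$ amplifies the contrast between a hot, X-ray bright cluster and a cool, faint one:

```python
def relative_ram_pressure(l_x, t_x, sigma_v):
    """P_ram ~ n_e * dV^2 up to a constant, with n_e ~ (L_x / T_x**0.5)**0.5;
    the velocity dispersion sigma_v stands in for the typical transit speed."""
    n_e = (l_x / t_x ** 0.5) ** 0.5
    return n_e * sigma_v ** 2

# Hypothetical clusters in arbitrary units (placeholders, not measured values):
p_hot = relative_ram_pressure(l_x=100.0, t_x=8.0, sigma_v=1000.0)   # hot, bright
p_cool = relative_ram_pressure(l_x=1.0, t_x=1.0, sigma_v=300.0)     # cool, faint
contrast = p_hot / p_cool
```

Even these modest differences in $L_x$, $T_x$ and $\sigma_v$ translate into a pressure contrast of nearly two orders of magnitude, in the spirit of the factor $\sim 300$ quoted for Coma versus Cancer.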
\begin{figure*}
\vbox{\null\vskip 16.0cm
\special{psfile=gg8155f4.ps voffset=-150 hoffset=0 hscale=90 vscale=90 angle=0}
}
\caption{The distribution of the NIR ratio $R_H$ as a function of the deviation (along the
line of sight) of the individual velocities from the cluster average velocity. The plot
includes the detected galaxies, separately for the 5 clusters and grouped all together
(bottom-right panel). The values of $<log R_H>$ averaged below and above $|\Delta V|=720~km~ s^{-1}$ are given in each panel.}
\label{Fig.4}
\end{figure*}
\subsection{Multiplets}
Multiple systems in the Coma supercluster bridge also have radio/optical
ratios significantly larger than isolated galaxies, suggestive of an enhanced
star formation rate in galaxies showing some degree of interaction (our multiplets
have projected separations smaller than 300 kpc).
Hummel et al. (1990) compared the central (within 10 arcsec of the nuclei) radio luminosities
of isolated galaxies with those of double systems with average separations of 4-5
effective radii. They found that
the central radio sources in interacting spiral galaxies are on average
a factor of 5 stronger than the ones in the more isolated galaxies.
This difference is almost completely due to the activity in the HII
region nuclei.
Menon (1995) analyzed the radio luminosity of spiral/Irr galaxies in Hickson Compact Groups (HCG).
He found that the radio
radiation from the nuclear regions is more than 10 times that from comparable
regions in the comparison sample, but that the extended radiation from HCG spirals is
lower than from a comparison sample of isolated galaxies.
This evidence is interpreted in
a scenario in which galaxy interactions cause massive inflows
of gas towards the centers. The resulting
star formation in the center leads to the formation of supernovae and
subsequent radio radiation.
Our radio observations have a mix of resolutions (from about 5 to 45 arcsec)
which unfortunately does not allow us to resolve homogeneously
the contribution of nuclear sources to the total flux.
We can therefore only argue that, to first order, the radio emissivity
in our sample of multiplets is dominated by enhanced nuclear activity.
\section{Summary}
In summary, the present investigation has led us to the following empirical results:\\
1) The RLFs of Cancer, A262 and Virgo are consistent with that of isolated galaxies. \\
2) Galaxies in A1367 and Coma have their radio emissivity enhanced by a factor
$\sim 5$ with respect to isolated objects.
We find that the radio excess is statistically larger for cluster galaxies with
large velocity deviations with respect to the average cluster velocity.
This is consistent with the idea that the enhanced radio continuum activity is produced by
magnetic field compression in galaxies in fast transit motion through the intra-cluster gas,
and is inconsistent with the hypothesis that the phenomenon is due to galaxy-galaxy interactions.
The higher X-ray luminosities and temperatures of Coma and A1367 compared with the
remaining three studied clusters provide a clue for explaining why the radio enhancement
is observed primarily in these two clusters. \\
3) Multiple systems in the Coma supercluster bridge (with projected separations smaller
than 300 kpc) have radio/optical
ratios significantly larger than isolated galaxies, suggestive of an enhanced
star formation rate probably taking place in the nuclear regions.
\acknowledgements {We wish to thank P. Pedotti for her contribution
to this work and T. Maccacaro for useful discussions.
We wish also to acknowledge the NVSS and FIRST teams for their
magnificent work.}
\section{Introduction}
$\; \; \; \;$ Tunneling occurs in almost all branches of physics, including cosmology.
Some of the most significant examples are the decay of the QFT false vacuum
in the inflationary model of the universe \cite{B}, fluctuations as changes in the
topology in semiclassical gravity via a tunneling process \cite{K},
tunneling rates for the production of pairs of black holes \cite{G},
and minisuperspace quantum cosmology \cite{H}.
In this article we are concerned with the calculation of the tunneling
rate of the universe from nothing to the FRW universe, based on a minisuperspace
model with only one degree of freedom, by applying the dilute-instanton approximation
to the Duru-Kleinert path integral.
The Duru-Kleinert path integral formula for the fixed-energy amplitude is an alternative
approach to handling systems with singular potentials \cite{DK}. The heart of
this viewpoint is the Duru-Kleinert equivalence of actions leading to
the same fixed-energy amplitude, by means of arbitrary regulating functions originating
from local reparametrization invariance.
On the other hand, reparametrization invariance is a basic property of general
relativity \cite{H}, so one expects that
a relation can be established between the Duru-Kleinert path integral for the fixed
zero-energy amplitude and the standard path integral in quantum cosmology;
by using the corresponding Duru-Kleinert equivalence of actions, it is then possible
to work with an action that contains the standard quadratic kinetic term
instead of a non-standard one.
In this paper we have studied these two subjects in the context of a ``mini-superspace''
model with only one degree of freedom.
In section ${\bf 2}$, the Duru-Kleinert path integral formula and Duru-Kleinert
equivalence of corresponding actions is briefly reviewed. In section ${\bf 3}$,
the standard path integral in quantum cosmology and its relation to Duru-Kleinert
path integral for closed FRW cosmology with only one degree of freedom (the scale factor) is investigated.
This section ends by introducing an equivalent standard quadratic action for
this cosmology.
Finally in section ${\bf 4}$, the rate of tunneling from nothing to the FRW
universe is calculated through the dilute instanton approximation to first order
in $\hbar$ \cite{B}, where its prefactor is calculated by the heat kernel method \cite{C},
using the shape invariance symmetry \cite{D}.
\section{Duru-Kleinert equivalence}
$\; \; \; \;$ In this section we briefly review Ref. 1.
The fundamental object of path integration is the time displacement
amplitude or propagator of a system, $ (X_b \: t_b\: | \: X_a \: t_a) $.
For a system with a time independent Hamiltonian, the object
$ (X_b \: t_b \: | \: X_a \: t_a) $ supplied by a path integral is the causal
propagator
\begin{equation}
(X_b \: t_b \: | \: X_a \: t_a)=\theta(t_b-t_a)<X_b|\exp(-i\hat{H}(t_b-t_a)/\hbar)|X_a>.
\end{equation}
Fourier transforming the causal propagator in the time variable, we
obtain the fixed energy amplitude
\begin{equation}
(X_b \: | \: X_a \: )_E = \int_{t_a}^\infty dt_b e^{iE(t_b-t_a)/\hbar}
(X_b \: t_b\: | \: X_a \: t_a)
\end{equation}
This amplitude contains as much information on the system as the propagator
$(X_b \: t_b\: | \: X_a \: t_a)$, and its path integral form is as follows:
\begin{equation}
(X_b \: | \: X_a)_E = \int_{t_a}^{\infty} dt_b \int {\cal D}x(t) e^{i{\cal A}_E/\hbar}
\end{equation}
with the action
\begin{equation}
{\cal A}_E = \int_{t_a}^{t_b} dt [\frac{M}{2}\dot{x}^2(t)-V(x(t))+E]
\end{equation}
where $ \dot{x} $ denotes the derivative with respect to $t$.
In Ref. 1, it has been shown that the fixed-energy amplitude (3) is equivalent
to the following fixed-energy amplitude,
\begin{equation}
(X_b \: | \: X_a)_E = \int_{0}^{\infty} dS [f_r(x_b)f_l(x_a)\int {\cal D}x(s)
e^{i{\cal A}_{E}^{f}/\hbar}]
\end{equation}
with the action
\begin{equation}
{\cal A}_{E}^{f} = \int_{0}^{S} ds \{ \frac{M}{2f(x(s))}x'^2(s)-f(x(s))
[V(x(s))-E] \}
\end{equation}
where $ f_r $ and $ f_l $ are arbitrary regulating functions and $ x' $ denotes
the derivative with respect to $s$.
The actions $ {\cal A}_E $ and $ {\cal A}_{E}^{f} $,
both of which lead to the same fixed-energy amplitude $ (X_b \: | \: X_a)_E $ are called
Duru-Kleinert equivalent \footnote{Of course a third action
$ {\cal A}_{E,\varepsilon}^{DK} $ is also Duru-Kleinert equivalent of
$ {\cal A}_E $ and $ {\cal A}_E^f $ which we do not consider here.}.
The motivation of Duru and Kleinert, using this equivalence, was to investigate
the path integrals for singular potentials.
In the following section we show that one can use this equivalence to
investigate quantum cosmological models with one degree of freedom. To
see this, we rewrite the action $ {\cal A}_{E}^{f} $ in a form suitable for
describing a system with zero energy, since only in this sense can we describe
a quantum cosmological model with zero energy.\\
Imposing $ E = 0 $ in (6), with a simple manipulation, gives
\begin{equation}
{\cal A}_{E}^{f} = \int_{0}^{1} ds' S f(X(s')) \{ \frac{M}{2[Sf(X(s'))]^2}
\dot{X}^2(s')-V(X(s')) \}
\end{equation}
where $ \dot{X} $ denotes the derivative with respect to new parameter $ s' $ defined by
\begin{equation}
s' = S^{-1} s
\end{equation}
with $S$ as a dimensionless scale parameter.
After a Wick rotation $ s'=-i\tau $, we get the required Euclidean action and
the path integral
\begin{equation}
I_{0}^{f} = \int_{0}^{1} d\tau S f(X(\tau)) \{ \frac{M}{2[Sf(X(\tau))]^2}
\dot{X}^2(\tau)+V(X(\tau)) \}
\end{equation}
\begin{equation}
(X_b \: | \: X_a) = \int_{0}^{\infty} dS [f_r(X_b)f_l(X_a) \int{\cal D}X(\tau)
e^{{-I_{0}^{f}}/\hbar}].
\end{equation}
where $\tau$ is the Euclidean time. We will use eqs. (9) and (10) in the following section.
\section{Path integral in Quantum Cosmology }
$\; \; \; \;$ The general formalism of quantum cosmology is based on the Hamiltonian formulation
of general relativity, especially the Dirac quantization procedure, in which the wave
function of the universe $ \Psi $ is obtained by solving the Wheeler-DeWitt
equation
\begin{equation}
\hat{H} \Psi = 0.
\end{equation}
A more general and more powerful tool for calculating the wave function is the path
integral. In Ref. 2 it is shown that the path integral for the propagation
amplitude between fixed initial and final configurations can be written as
\begin{equation}
(X_b \: | \: X_a)=\int_{0}^{\infty} dN <X_b,N \: | \: X_a,0> \: =\int_{0}^{\infty}
dN \int {\cal D}X e^{-I[X(\tau),N]/\hbar}
\end{equation}
where $ <X_b,N \: | \: X_a,0> $ is a Green function for the Wheeler-DeWitt
equation and $N$ is the lapse function. The Euclidean action $I$ is defined on
minisuperspace in the gauge $ \dot{N} = 0 $ as
\begin{equation}
I[X(\tau),N] = \int_{0}^{1} d\tau N [\frac{1}{2N^2}f_{ab}(X)\dot{X}^{a}\dot{X}^{b}
+V(X)]
\end{equation}
where $ f_{ab}(X) $ is the metric defined on minisuperspace and has indefinite signature.
Here we consider a model in which the metric $ f_{ab}(X) $ has only
one component, giving the following Euclidean action \cite{H},
\begin{equation}
I = \int_{0}^{1} d\tau N [\frac{R\dot{R}^2}{2N^2}+\frac{1}{2}(R-\frac{R^3}
{R_{0}^2})]
\end{equation}
where $R$ is the scale factor and $ R_{0}^2 = \frac{3}{\Lambda} $ is interpreted
as the minimum radius of the universe after tunneling from nothing \cite{H} ($ \Lambda $
is cosmological constant).\, This model describes the closed FRW universe with
one degree of freedom $R$.\\
Now we rewrite the action (14) as
\begin{equation}
I = \int_{0}^{1} d\tau N R^{-1} [\frac{\dot{R}^2}{2N^2R^{-2}}+\frac{1}{2}
(R^2-\frac{R^4}{R_{0}^2})].
\end{equation}
Comparing this action with (9) (with $ M = 1 $) we find that by choosing
\begin{equation}
N R^{-1} = S f(R)
\end{equation}
we obtain (9) in the form
\begin{equation}
I = I_{0}^f = \int_{0}^{1} d\tau S f(R) [\frac{\dot{R}^2}{2[Sf(R)]^2}+V(R)]
\end{equation}
such that
\begin{equation}
V(R) = \frac{1}{2}(R^2-\frac{R^4}{R_0^2}).
\end{equation}
The gauge $ \dot{N} = 0 $ gives
\begin{equation}
f(R) = C R^{-1}
\end{equation}
where $C$ is a constant which we set to $C=R_a^{-1}$ so that
$$
S = N\,R_a.
$$
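To make the consistency with the gauge choice explicit (a short check of ours, not part of the original derivation), note that inserting $f(R)=C R^{-1}$ into (16) gives
$$
N R^{-1} = S f(R) = S C R^{-1} \;\;\; \Longrightarrow \;\;\; N = S C,
$$
so $N$ is indeed constant along the path, consistent with the gauge $\dot{N}=0$; the choice $C=R_a^{-1}$ then reproduces $S = N R_a$.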
Now, one can show that the path integral (10) corresponds
to the path integral (12).\\
To see this, assume
\begin{equation}
f_r(R) = 1 \;\;\; , \;\;\; f_l(R) = f(R)
\end{equation}
so that the path integral (10) can be written as
\begin{equation}
(R_b \: | \: R_a) = \int_{0}^{\infty} dN \int {\cal D}R \:
e^{{-I_0^f}/\hbar}
\end{equation}
where $ I_0^f $ is given by (17). This shows that the Duru-Kleinert
path integral (21) is exactly in the form of (12) as a path integral
for this cosmological model. Now, using the Duru-Kleinert equivalence,
we can work with the standard quadratic action
\begin{equation}
I_0 = \int_{\tau_a}^{\tau_b} d\tau [\frac{1}{2}\dot{R}^2(\tau)+\frac{1}{2}
(R^2-\frac{R^4}{R_{0}^2})]
\end{equation}
instead of the action (17) or (14), where a Wick rotation
with $ E = 0 $ has also been used in the equation (4).
\section{Tunneling rate}
$\; \; \; \;$ The Euclidean type Lagrangian corresponding to the action (22) has
the following quadratic form
\begin{equation}
L_E = \frac{1}{2}\dot{R}^2 + \frac{1}{2}(R^2 - \frac{R^4}{R_0^2}).
\end{equation}
The corresponding Hamiltonian is obtained by a Legendre transformation
\begin{equation}
H_E = \frac{\dot{R}^2}{2} - \frac{1}{2}(R^2 - \frac{R^4}{R_0^2}).
\end{equation}
Imposing $ H_E = 0 $ gives a nontrivial ``instanton solution'' as
\begin{equation}
R(\tau) = \frac{R_0}{\cosh(\tau)},
\end{equation}
which describes a particle rolling down from the top of a potential $ -V(R) $
at $ \tau \rightarrow -\infty $ and $ R = 0 $, bouncing back at $ \tau = 0 $ and
$ R = R_0 $ and finally reaching the top of the potential at $ \tau \rightarrow
+\infty $ and $ R = 0 $.\\
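That (25) indeed solves the zero-energy condition can be checked by hand ($\dot{R} = -R_0 \sinh\tau/\cosh^2\tau$, so $\dot{R}^2 = R_0^2\sinh^2\tau/\cosh^4\tau = R^2 - R^4/R_0^2$) or numerically; the short stand-alone check below (ours) uses an arbitrary value of $R_0$ and a central finite difference:

```python
import math

R0 = 3.0  # arbitrary illustrative value of the minimum radius R_0

def R(tau):
    """Instanton solution (25): R(tau) = R_0 / cosh(tau)."""
    return R0 / math.cosh(tau)

def energy_residual(tau, h=1e-6):
    """|Rdot^2 - (R^2 - R^4/R0^2)|, which vanishes when H_E = 0 holds."""
    rdot = (R(tau + h) - R(tau - h)) / (2 * h)
    return abs(rdot ** 2 - (R(tau) ** 2 - R(tau) ** 4 / R0 ** 2))

# The residual stays at finite-difference noise level over the whole bounce.
max_residual = max(energy_residual(t / 10.0) for t in range(-30, 31))
```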
The region of the barrier $ 0 < R < R_0 $ is classically forbidden for the zero-energy
particle, but quantum mechanically the particle can tunnel through it, with a tunneling
probability calculated by making use of the instanton solution (25).\\
The quantized FRW universe is mathematically equivalent to this particle, such
that the particle at $ R = 0 $ and $ R = R_0 $ represents ``nothing'' and ``FRW''
universes respectively. Therefore one can find the probability
$$
|<FRW(R_0) \: | \: nothing>|^2 .
$$
The rate of tunneling $ \Gamma $ is calculated through the dilute instanton
approximation to first order in $\hbar$ as \cite{B}
\begin{equation}
\Gamma = [\frac{det'(-\partial_{\tau}^2 + V''(R))}{det(-\partial_{\tau}^2 + \omega^2)}]^{-1/2}
e^{\frac{-I_0(R)}{\hbar}} [\frac{I_0(R)}{2\pi\hbar}]^{1/2}
\end{equation}
where det' is the determinant without the zero eigenvalue, $ V''(R) $ is the
second derivative of the potential at the instanton solution (25), $ \omega $
corresponds to the real part of the energy of the false vacuum $ (|nothing>) $
and $ I_0(R) $ is the corresponding Euclidean action.
The determinant in the numerator is defined as
\begin{equation}
det'[-\partial_{\tau}^2 + V''(R)] \equiv \prod_{n=1}^{\infty}|\lambda_n|
\end{equation}
where $ \lambda_n $ are the non-zero eigenvalues of the operator
$ -\partial_{\tau}^2 + V''(R) $.\\
The explicit form of this operator is obtained as
\begin{equation}
O \equiv [-\frac{d^2}{dx^2} + 1 - \frac{6}{\cosh^2(x)}]
\end{equation}
where we have used Eqs. (18) and (25) and made the change of variable
$ x = \tau $.\\
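A useful cross-check on (28) (our addition, not in the original text): the derivative of the instanton, $\dot{R} \propto \sinh\tau/\cosh^2\tau$, should be the zero mode that det$'$ excludes, i.e. the operator should annihilate it. A finite-difference verification:

```python
import math

def psi0(x):
    """Candidate zero mode, proportional to dR/dtau: sinh(x)/cosh(x)**2."""
    return math.sinh(x) / math.cosh(x) ** 2

def apply_operator(f, x, h=1e-4):
    """Finite-difference action of O = -d^2/dx^2 + 1 - 6/cosh(x)^2 on f."""
    second = (f(x + h) - 2.0 * f(x) + f(x - h)) / h ** 2
    return -second + f(x) - 6.0 * f(x) / math.cosh(x) ** 2

# O annihilates psi0 everywhere on a grid spanning the instanton core.
residual = max(abs(apply_operator(psi0, x / 10.0)) for x in range(-40, 41))
```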
Now, we can calculate the ratio of the determinants as follows:
First we explain very briefly how one can calculate the determinant of an
operator through the heat kernel method \cite{C}. We introduce the generalized
Riemann zeta function of the operator $A$ by
\begin{equation}
\zeta_A(s) = \sum_{m} \frac{1}{|\lambda_m|^s}
\end{equation}
where $ \lambda_m $ are eigenvalues of the operator $A$,\, and the determinant
of the operator $A$ is given by
\begin{equation}
det \, A = e^{-\zeta'_{A}(0)}.
\end{equation}
On the other hand $ \zeta_A(s) $ is the Mellin transformation of the heat kernel
$ G(x,\, y,\, t)$
\footnote{Here $t$ is a typical time parameter.}
which satisfies the following heat diffusion equation,
\begin{equation}
A \, G(x,\,y,\, t) = -\frac{\partial \, G(x,\,y,\, t)}{\partial t}
\end{equation}
with an initial condition $ G(x,\,y,\,0) = \delta(x - y) $.\, Note that
$ G(x,\,y,\, t) $ can be written in terms of its spectrum
\begin{equation}
G(x,\,y,\, t) = \sum_{m} e^{-\lambda_{m}t} \psi_{m}^{*}(x) \psi_{m}(y).
\end{equation}
The sum is replaced by an integral if the spectrum is continuous.
From relations (29) and (32) it is clear that
\begin{equation}
\zeta_{A}(s) = \frac{1}{\Gamma(s)} \int_{0}^{\infty} dt \, t^{s-1}
\int_{-\infty}^{+\infty} dx \, G(x,\,x,\, t).
\end{equation}
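As a concrete illustration of the spectral representation (32) (our example), take the free reference operator $-d^2/dx^2+1$ that appears below: its continuum eigenfunctions $e^{ikx}/\sqrt{2\pi}$ with eigenvalues $1+k^2$ give, on the diagonal, $G(x,x,t) = \frac{1}{2\pi}\int dk\, e^{-(1+k^2)t} = \frac{e^{-t}}{2\sqrt{\pi t}}$, which can be verified numerically:

```python
import math

def diagonal_from_spectrum(t, kmax=40.0, n=100000):
    """Midpoint evaluation of (1/2pi) * Integral exp(-(1+k^2) t) dk."""
    dk = 2.0 * kmax / n
    total = 0.0
    for i in range(n):
        k = -kmax + (i + 0.5) * dk
        total += math.exp(-(1.0 + k * k) * t) * dk
    return total / (2.0 * math.pi)

t = 0.5
spectral = diagonal_from_spectrum(t)
closed_form = math.exp(-t) / (2.0 * math.sqrt(math.pi * t))
```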
Now, in order to calculate the ratio of the determinants in (26), called the
prefactor, we need the difference of the heat kernels
$ G(x,\, y,\, t) $ of the two operators.\\
We rewrite the operator (28) as:
\begin{equation}
[ (-\frac{d^2}{dx^2}-\frac{2(2+1)}{\cosh^{2}(x)}+4)-3 ].
\end{equation}
This is the same as the operator which appears in Ref.5, for values of
$ l = 2 $, $ h = -3 $; so the heat kernel $ G(x, y, t)$ corresponding to
the operator (34) is given by
\begin{equation}
G_{\Delta_{2}(0)-3}(x, y, t) = \frac{e^{-(4-3)t}}{2\sqrt{\pi t}}
e^{-(x-y)^2/{4 t}}
\end{equation}
and
\begin{equation}
\begin{array}{lll}
G_{\Delta_{2}-3}(x, y, t) & = & \psi_{2,0}^{\ast}(x)\psi_{2,0}(y)e^{-|-3|t}\\
& & \\
& & +\int_{-\infty}^{+\infty} \frac{dk}{2\pi} \frac{e^{-(1+k^2)t}}{(k^2+1)(k^2+4)}
\left( B_{2}^{\dag}(x) B_{1}^{\dag}(x) e^{-ikx} \right)
\left( B_{2}^{\dag}(y) B_{1}^{\dag}(y) e^{iky} \right).
\end{array}
\end{equation}
The functions $ \psi_{l,m} $ and $ \psi_{l,k} $ are the eigenfunctions corresponding
to discrete spectrum $ E_{l,m} = m(2l-m) $ and continuous spectrum $ E_{l,k} =
l^2 + k^2 $ of the following operator
$$
-\frac{d^2}{dx^2}-\frac{l(l+1)}{\cosh^2(x)}+l^2
$$
respectively, and are given by \cite{D}
$$
\psi_{l,m}(x) = \sqrt{\frac{2(2m-1)!}{\prod_{j=1}^{m} j(2l-j)}}\frac{1}{2^m(m-1)!}
B_{l}^{\dag}(x)B_{l-1}^{\dag}(x) \cdots B_{m+1}^{\dag}(x)\frac{1}{\cosh^{m}(x)}
$$
and
$$
\psi_{l,k}(x) = \frac{B_{l}^{\dag}(x)}{\sqrt{k^2+l^2}}
\frac{B_{l-1}^{\dag}(x)}{\sqrt{k^2+(l-1)^2}} \cdots
\frac{B_{1}^{\dag}(x)}{\sqrt{k^2+1^2}} \frac{e^{ikx}}{\sqrt{2 \pi}}
$$
where
$$
B_{l}(x) := \frac{d}{dx}+l \; \tanh(x), \hspace{10mm}
B_{l}^{\dag}(x) := - \frac{d}{dx}+l \; \tanh(x).
$$
Now, we can write
$$
\int_{-\infty}^{+\infty} dx [G_{\Delta_{2}-3}(x, x, t) - G_{\Delta_{2}(0)-3}(x, x, t)]
= e^{-3t} -\frac{3}{\pi} e^{-t} \int_{-\infty}^{+\infty} dk \: \frac{(k^2+2)e^{-k^2 t}}
{(k^2+1)(k^2+4)}
$$
and
$$
\begin{array}{lll}
\zeta_{\Delta_{2}-3}(s) - \zeta_{\Delta_{2}(0)-3}(s) & = &
\frac{1}{3^s} -\frac{3}{\pi} \int_{-\infty}^{+\infty} \frac{dk}{(k^2+1)^s(k^2+4)}
-\frac{3}{\pi} \int_{-\infty}^{+\infty} \frac{dk}{(k^2+1)^{s+1}(k^2+4)} \\
& & \\
& = & \frac{1}{3^s} -\frac{3}{\sqrt{\pi}} \frac{1}{2^{2s+1}} \frac{\Gamma(s+\frac{1}{2})}{\Gamma(s+1)}
\{ F(s, s+\frac{1}{2}, s+1; \frac{3}{4}) \\
& & \\
& & \hspace{36mm} + \frac{2s+1}{8(s+1)} F(s+1, s+\frac{3}{2}, s+2; \frac{3}{4}) \}.
\end{array}
$$
Thus, we have
$$
\hspace{-40mm} \zeta_{\Delta_{2}-3}(0) - \zeta_{\Delta_{2}(0)-3}(0) = -1 \: , \hspace{10mm}
\zeta'_{\Delta_{2}-3}(0) - \zeta'_{\Delta_{2}(0)-3}(0) = \ln 12
$$
So we get the following value for the ratio of the determinants appearing in (26)
\begin{equation}
\frac{det'(-\partial_{\tau}^2 + V''(R))}{det(-\partial_{\tau}^2 + \omega^2)}
\: = \: e^{-\ln 12} \: = \: \frac{1}{12}.
\end{equation}
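The value $\ln 12$ can be checked independently of the hypergeometric representation (the following route is ours, not in the original text): differentiating the Mellin form of the zeta-function difference under the integral at $s=0$ gives
$$
\zeta'(0) = -\ln 3 + \frac{3}{\pi}\int_{-\infty}^{+\infty} dk\,
\frac{(k^2+2)\ln(k^2+1)}{(k^2+1)(k^2+4)},
$$
which can be evaluated numerically:

```python
import math

def zeta_prime_diff_at_zero(n=400000):
    """zeta'(0) = -ln 3 + (3/pi) * Int (k^2+2) ln(k^2+1)/((k^2+1)(k^2+4)) dk.
    Substituting k = tan(theta) maps the real line to (-pi/2, pi/2);
    dk/(k^2+1) = d(theta), and ln(k^2+1) = -2 ln cos(theta)."""
    total = 0.0
    dth = math.pi / n
    for i in range(n):
        th = -math.pi / 2.0 + (i + 0.5) * dth
        k2 = math.tan(th) ** 2
        total += (k2 + 2.0) * (-2.0 * math.log(math.cos(th))) / (k2 + 4.0) * dth
    return -math.log(3.0) + (3.0 / math.pi) * total

zeta_prime = zeta_prime_diff_at_zero()       # should be ln 12 = 2.4849...
det_ratio = math.exp(-zeta_prime)            # e^{-zeta'(0)}, cf. eq. (30): 1/12
```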
Substituting the result (37) in Eq. (26) gives the tunneling rate
of the universe from nothing to the FRW universe
$$
\Gamma = \frac{2}{\sqrt{\pi \hbar}} R_0 e^{- \frac{2 R_0^2}{3\hbar}} \: + \: O(\hbar).
$$
in good agreement with the result obtained via the WKB approximation (Atkatz \cite{H}).
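The exponent $2R_0^2/3\hbar$ can also be verified directly (our check): on the instanton $\frac{1}{2}\dot{R}^2 = V(R)$, so $I_0 = \int \dot{R}^2\, d\tau = R_0^2 \int_{-\infty}^{+\infty} \sinh^2\tau/\cosh^4\tau\, d\tau = \frac{2}{3}R_0^2$:

```python
import math

def instanton_action(r0, tau_max=20.0, n=200000):
    """I_0 = Int Rdot^2 dtau on the instanton, with
    Rdot^2 = r0^2 sinh(tau)^2 / cosh(tau)^4 (midpoint rule, truncated tails)."""
    dt = 2.0 * tau_max / n
    total = 0.0
    for i in range(n):
        tau = -tau_max + (i + 0.5) * dt
        total += r0 ** 2 * math.sinh(tau) ** 2 / math.cosh(tau) ** 4 * dt
    return total

I0 = instanton_action(r0=1.0)  # expected: 2/3 for r0 = 1
```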
\section{Conclusions}
$\; \; \; \;$ We have shown in this paper that one can obtain the path integral formula of
quantum cosmology from the Duru-Kleinert path integral formula, at least for a model
with one degree of freedom. The key point is that, as far as path integral
quantum cosmology is concerned, one can work with the standard action instead
of the non-standard one by using the Duru-Kleinert equivalence of actions.
This is valuable in avoiding the technical problems which may appear when working with
non-standard actions. We have concentrated on a model with only one degree of
freedom; whether this procedure works for higher degrees of freedom is a question
which requires further investigation.
\newpage
\section{Introduction}
An important feature of classical self-gravitating systems
is that in general they are not
in equilibrium states. This instability gives rise to
spontaneous creation of structure which is assumed
to lead to
{\it increasing} lumpiness as time increases.
This rise of structure with time
in self-gravitating systems seems to run counter
to the usual intuition in classical statistical
physics where increasing time is associated with the
increase in microscopic disorder and hence to
macroscopic uniformity
in matter. The main reason for this difference
is thought to be due to the long range and
unshielded nature of the gravitational force \cite{Padmanabhan90}.
This dichotomy
has led to the question of possible existence
of gravitational entropy
and its connection with the usual thermodynamic
entropy (see e.g. \cite{Penrose79,Book-Arrow-Time} and references
therein).
Whether a satisfactory notion of gravitational
entropy exists and whatever its nature may be,
it is of interest to find out the extent
to which the assumption regarding
the increase in
structuration with increasing time
holds in relativistic cosmological models. This would be of value
for a variety of reasons, including its potential relevance
for debates concerning structure formation in the Universe.
Furthermore, the presence or absence of indicators
that evolve monotonically in time
could inform the debates regarding gravitational
entropy and in particular whether, if it in fact exists,
it should be
monotonic in models which possess recollapsing phases
(see e.g. \cite{Hawking85,Page,Hawkingetal93}).
Since indicators that best codify such structuration
are not known a priori, rather than
focusing on a single indicator, we shall here consider families
of spatially covariant indicators which measure the density contrast
and which include as special cases
indicators
put forward by
Bonnor \cite{Bonnor86} and Tavakol \& Ellis \cite{Tavakol-Ellis}.
We shall also consider for completeness some
indicators previously used in the literature,
including that given by Szafron \& Wainwright \cite{Szafron-Wainwright}
and the non-covariant indicators of
Silk \cite{Silk} and
Bonnor \cite{Bonnor74},
even though the latter two may, in view of their lack of covariance, be regarded as
suspect from a physical point of view.
\\
We shall employ inhomogeneous cosmological models
of Lemaitre--Tolman (LT) \cite{Lemaitre,Tolman}
and Szekeres \cite{Szekeres}
in order to make a
comparative study of these indicators in the
general relativistic inhomogeneous settings.
Since these models involve arbitrary\footnote{These functions
are not in fact totally arbitrary as they need
to satisfy certain constraints which we shall
discuss below.}
functions,
the integrals involved in the definitions of
our indicators cannot be performed in general.
As a result, we look at the asymptotic behaviour
of both ever-expanding and
recollapsing families of models as well as
their behaviour near the origin,
and in particular look for
conditions under which the
asymptotic evolution of these indicators
is monotonic with time.
Clearly these can only give
necessary conditions for the all-time
monotonic evolution
of these indicators. To partially extend these results to
all times we also calculate these indicators
for a number of concrete models given in the literature.
The crucial point is that our asymptotic results
bring out some general points that seem
to be supported by our all-time study of
the concrete models.
\\
The organisation of the paper is as follows. In section 2
we introduce and motivate families of density contrast indicators.
Sections 3--6 contain the calculations of these
measures for LT and Szekeres models respectively.
In section 7 we consider the behaviour of the
dimensionless analogues of these indicators. In section 8 we
study the behaviour of these indicators
near the initial singularity. Section
9 gives
a discussion of some of the consequences of
our results and
finally section 10 contains our conclusions.
\\
Throughout we use units in which $c=G=1$ and lower case
latin indices take values 0 to 3.
\section{Density contrast indicators}
\label{DC-Indicators}
Intuitively one would expect the rise of structuration
in cosmology to be related to the
coarse grained spatial variance of various
physical scalars.
An important scalar in cosmology
is the energy density $\rho$ (or in a multi-component
version of this, densities $\rho^{(i)}$
corresponding to the different components of the content
of the Universe). Here for simplicity
we confine ourselves to the one component case and
consider the rotation-free dust setting. We
can then
introduce a global spatial variability index defined
as
\begin{equation}
{\Large\int} \frac{\left |\rho -
{\rho}_0 \right |}{
\rho_0} dV
\end{equation}
where $\rho_0$ is the mean density defined appropriately
and $dV$ is the comoving volume element.
One can make this notion spatially covariant by, for example, expressing it
in terms of the fractional density gradient
introduced by Ellis $\&$ Bruni \cite{Ellis-Bruni},
\begin{equation}
\chi_{a}= \frac{h_a^b}{\rho}\frac{\partial\rho}{\partial x^b}
\label{def}
\end{equation}
where $h_{ab} = g_{ab} + u_a u_b $ projects orthogonal to the unit
4-velocity $u^a$, which we shall throughout assume to be
uniquely defined.
A
related covariant spatial variability index can then
be defined thus
\begin{equation}
\int_\Sigma \left |\chi_{a} \right | dV
\label{index}
\end{equation}
where the integration is over a 3-surface $\Sigma$ or part thereof.
\\
Now it is not a priori clear what are the indicators
that best codify such structuration and in particular
what their monotonic properties may be. As a result,
instead of
concentrating on a single indicator, we shall
introduce a two parameter
family of possible covariant indicators, which we refer to
as {\it density contrast indicators}, $S_{IK}$,
in the form
\begin{equation}
\label{index-rewrite}
S_{IK}=
\int_\Sigma \left | \frac{h^{ab}}{\rho^I}
\frac{\partial \rho}{\partial x^a}
\frac{\partial \rho}{\partial x^b}\right |^{K} dV, ~~~~I\in{\BBB R},~K \in {\BBB R}
\setminus \{0\}.
\end{equation}
An important feature of this family is that it may be treated as
local or global depending upon the
size of $\Sigma$. It also includes as special
cases the indicator given by Tavakol \& Ellis \cite{Tavakol-Ellis},
for which $I=2$ and $K=1/2$, and the pointwise
indicator previously given
by Bonnor \cite{Bonnor86}
\begin{equation}
B1 =\frac{h^{ab}}{\rho^2}
\frac{\partial \rho}{\partial x^a}
\frac{\partial \rho}{\partial x^b}.
\end{equation}
In the cosmological context we might expect it
to be more appropriate to normalise these indicators
with the comoving volume. We therefore define
the corresponding density contrast indicators per unit volume,
$S_{IKV}$, by
\begin{equation}
\label{comoving-ind}
S_{IKV}=\frac{S_{IK}}{V}
\end{equation}
where $ V = \int_\Sigma dV$ is the comoving volume.
We shall also consider dimensionless analogues
of these indicators in section 7.
Indicators (\ref{index-rewrite}) and (\ref{comoving-ind})
are of potential interest for a number of reasons,
including their operational definability and hence
their potential relevance to
observations regarding the evolution of structure
in the Universe
and their possible connection to the question of gravitational entropy.
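To fix ideas, the family (\ref{index-rewrite}) can be evaluated numerically once a density profile and a spatial metric are specified. The following sketch uses a hypothetical spherically symmetric profile on flat 3-space (purely illustrative; it is not an exact solution of the field equations) and shows that a profile with larger density contrast yields a larger indicator.

```python
import math

def S_IK(rho, drho, I, K, rmax=10.0, n=20000):
    """Discretised S_IK for a radial profile on flat 3-space:
    S_IK = int |(rho')^2 / rho^I|^K  4 pi r^2 dr   (toy illustration only)."""
    h = rmax / n
    total = 0.0
    for i in range(1, n):
        r = i * h
        total += abs(drho(r) ** 2 / rho(r) ** I) ** K * 4.0 * math.pi * r * r * h
    return total

def profile(eps):
    """Hypothetical density 1 + eps * exp(-r^2) and its radial derivative."""
    return (lambda r: 1.0 + eps * math.exp(-r * r),
            lambda r: -2.0 * r * eps * math.exp(-r * r))

# a larger density contrast (larger eps) gives a larger indicator
small = S_IK(*profile(0.1), I=2, K=0.5)
big   = S_IK(*profile(0.5), I=2, K=0.5)
assert 0.0 < small < big
```

The choice $I=2$, $K=1/2$ corresponds to the Tavakol--Ellis indicator mentioned above.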
\\
For completeness we shall also consider
the spatially covariant indicator introduced by
Szafron \& Wainwright \cite{Szafron-Wainwright}
\begin{equation}
SW = -\frac{1}{\dot\rho}\sqrt{h^{ab}
\frac{\partial \rho}{\partial x^a}
\frac{\partial \rho}{\partial x^b}}
\end{equation}
as well as the non-covariant indicators given for
LT models by
Bonnor \cite{Bonnor74}
\begin{equation}
B2 = \frac{1}{\rho}\frac{\partial \rho}{\partial r}
\end{equation}
and
Silk \cite{Silk}
\begin{equation}
SL= \frac{r}{\rho}\frac{\partial \rho}{\partial r}
\end{equation}
where $r$ in these expressions is the $r$
coordinate of the LT models introduced in the next
section. We note that some of these latter indicators
have been used as measures of homogeneity in
the past \cite{Bonnor74,Szafron-Wainwright}, a question we shall return to in
section 9.
\\
In the following, in analogy with the notion
of {\it cosmological arrow} which points in the
direction of the dynamical evolution of the Universe,
we shall employ the notion of
{\it density contrast arrow} which is in the
direction of the evolution of the
density contrast indicator employed.
The aim here is to test these families of
indicators in the context
of
LT and Szekeres models
in order to determine the subset of these indicators (and models)
for which the asymptotic evolution and the evolution near the
origin is
monotonically increasing with time, i.e.\ there is
a unique density contrast arrow which points in the direction
of increasing time.
\section{Lemaitre--Tolman models in brief}
The Lemaitre--Tolman models \cite{Lemaitre, Tolman, Bondi} (see also
\cite{Krasinski})
are given by
\begin{equation}
ds^2 = -dt^2 + \frac{{R^{'}}^2}{1+f} dr^2 +R^2 (d\theta^2 + \sin^2
\theta
d\phi^2)
\label{tolman}
\end{equation}
where $r, \theta, \phi$ are the comoving coordinates, $R=R (r,t)$ and $f=f(r)$ are arbitrary $C^2$ real functions
such
that $f > -1$ and $R (r,t)\ge 0$. In this section
a dot and a prime
denote $\partial / \partial t$ and $\partial / \partial r$ respectively.
The evolution of these models is then given by
\begin{equation}
\label{tol-eq}
{\dot R}^2 = \frac{F}{R} +f
\end{equation}
where $F=F(r)$ is another $C^2$ arbitrary real function, assumed
to be positive in order to ensure the positivity
of the gravitational mass.
Equation (\ref{tol-eq})
can be solved
for different values of $f$
in the following
parametric forms:
\\
\noindent {For $f<0$}:
\begin{eqnarray}
\label{elliptic}
&R = \frac{F}{2(-f)}(1-\cos\eta) \nonumber \\
&(\eta-\sin\eta) = \frac{2(-f)^\frac{3}{2}}{F}(t-a)
\end{eqnarray}
where $0< \eta < 2\pi$ and $a$ is a third arbitrary real function of $r$.
\\
\noindent {For $f>0$}:
\begin{eqnarray}
\label{hyperbolic}
&R = \frac{F}{2f}(\cosh\eta-1), \nonumber \\
&(\sinh\eta-\eta) = \frac{2f^\frac{3}{2}}{F}(t-a)
\end{eqnarray}
where $\eta > 0$.
\\
\noindent {For $f=0$}:
\begin{equation}
\label{parabolic}
R=\left(\frac{9F}{4}\right)^\frac{1}{3}(t-a)^\frac{2}{3}.
\end{equation}
The solutions corresponding to $f>0$, $f=0$ and $f<0$ are referred
to as hyperbolic,
parabolic and elliptic, respectively. In the elliptic case,
there is a recollapse to a second singularity, while the
other two classes of models are ever-expanding.
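At fixed $r$ the parametric forms above can be inverted numerically for $R(r,t)$. The sketch below (with arbitrary sample values of $F$ and $f$ at one comoving radius) does this for the hyperbolic case by bisection and checks the evolution equation (\ref{tol-eq}) by finite differences.

```python
import math

def R_hyperbolic(t, F, f, a=0.0):
    """Invert (sinh eta - eta) = 2 f^{3/2} (t - a) / F for eta by bisection,
    then return R = F (cosh eta - 1) / (2 f).  Here F > 0 and f > 0 are the
    values of the LT free functions at one fixed comoving radius r
    (arbitrary sample values below; illustrative sketch only)."""
    rhs = 2.0 * f ** 1.5 * (t - a) / F
    lo, hi = 1e-12, 1.0
    while math.sinh(hi) - hi < rhs:          # bracket the root
        hi *= 2.0
    for _ in range(200):                     # bisection to machine precision
        mid = 0.5 * (lo + hi)
        if math.sinh(mid) - mid < rhs:
            lo = mid
        else:
            hi = mid
    eta = 0.5 * (lo + hi)
    return F * (math.cosh(eta) - 1.0) / (2.0 * f)

# finite-difference check of the evolution equation (dR/dt)^2 = F/R + f
F, f, t, dt = 1.0, 0.5, 3.0, 1e-5
Rdot = (R_hyperbolic(t + dt, F, f) - R_hyperbolic(t - dt, F, f)) / (2.0 * dt)
assert abs(Rdot ** 2 - (F / R_hyperbolic(t, F, f) + f)) < 1e-6
```

The elliptic case can be treated identically, restricting the bisection to $0<\eta<2\pi$.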
In all three cases the matter density can be written as
\begin{equation}
\label{density}
\rho(r,t) = \frac{F^{'}}{8\pi R^{'} R^2}.
\end{equation}
Now the fact that $\rho$ must be non-divergent (except on
initial and final singularities) and positive everywhere imposes
restrictions
on the arbitrary functions \cite {Hellaby85}, with the positivity
of the density
implying that $R'$ and $F'$ have the same sign.
\section{Evolution of the density contrast in Lemaitre--Tolman models}
For these models the indicators (\ref{index-rewrite}) can be written
as
\begin{equation}
\label{entropy}
S_{IK} = 4\pi \int \left |\frac{1+f}{R'^2}\frac{1}{\rho^I} \left (\frac{\partial
\rho}{\partial r} \right)^2 \right |^K \frac{R^2 |R'|}{\sqrt{1+f}} dr
\end{equation}
with the
time derivative
\begin{eqnarray}
\label{rate}
\dot{S}_{IK} = 4\pi &\int & (1+f)^{K-\frac{1}{2}}
\left [ \frac{\partial}{\partial t} \left(R^2|R'|^{1-2K}\right) \frac{1}{\rho^I}
\left(\frac{\partial\rho}{\partial r}\right) \right. \nonumber \\
& + & \left. R^2|R'|^{1-2K} \frac{\partial}{\partial t}
\left(\frac{1}{\rho^I} \left(\frac{\partial \rho}{\partial r}
\right)^2\right) \right] dr.
\end{eqnarray}
In the following we consider
different classes of LT models given by different
types of $f$.
Clearly in general
$\rho$ in such models depends on the
functions $f$ and $F$ and as a result the integrals
arising in $S_{IK}$ and $S_{IKV}$ cannot be performed in general.
We shall therefore look at the
asymptotic behaviour of these indicators
and in some special cases we study the behaviour of the indicators
for all times.
We note that in some studies concerning the
question of gravitational entropy the function $a$ has been taken
to be a constant or zero (see for example Bonnor \cite{Bonnor85}),
in order to avoid the presence of white holes.
Here, in line with these studies, we shall also
take $a$ to be a constant in all sections below,
apart from section (9.2) where the shortcomings
associated with taking
$a = a(r)$ will not affect our
results.
\subsection{Parabolic LT models}
Models of this type
with $a=$ constant, reduce to the Einstein--de Sitter model \cite {Bonnor74}
for which the density depends only on $t$
giving
$S_{IK}=0=S_{IKV}$ for all time.
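This reduction can be verified directly from (\ref{parabolic}) and (\ref{density}): with $a=0$ the density collapses to the Einstein--de Sitter value $\rho = 1/(6\pi t^2)$, independently of $r$ and of the choice of $F$. A numerical sketch, with the arbitrary sample choice $F=r^3$:

```python
import math

def rho_parabolic(r, t, F=lambda r: r ** 3, h=1e-6):
    """Density rho = F' / (8 pi R' R^2) for the parabolic LT model with a = 0,
    where R = (9F/4)^{1/3} t^{2/3}; F' and R' taken by central differences.
    F = r^3 is an arbitrary sample choice (illustrative sketch only)."""
    R = lambda x: (9.0 * F(x) / 4.0) ** (1.0 / 3.0) * t ** (2.0 / 3.0)
    Fp = (F(r + h) - F(r - h)) / (2.0 * h)
    Rp = (R(r + h) - R(r - h)) / (2.0 * h)
    return Fp / (8.0 * math.pi * Rp * R(r) ** 2)

# Einstein--de Sitter: rho = 1/(6 pi t^2), independent of r
t = 2.0
for r in (0.5, 1.0, 2.0):
    assert abs(rho_parabolic(r, t) - 1.0 / (6.0 * math.pi * t ** 2)) < 1e-6
```

Since $\rho$ has no $r$ dependence, all spatial gradients vanish and the indicators are identically zero.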
\subsection{Hyperbolic LT models}
\label{LT-Hyperbolic}
For large $\eta$ we have
\begin{equation}
\label{r6}
R\approx f^{\frac{1}{2}}(t-a)
\end{equation}
which gives
\begin{eqnarray}
\rho &\approx & \frac{F'}{4\pi f'f^{\frac{1}{2}}(t-a)^{3}}\\
\dot{\rho} &\approx & \frac{-3 F'}{4\pi
f'f^{\frac{1}{2}}(t-a)^4}
\end{eqnarray}
where for positivity of $\rho$, $F'$ must have the same
sign
as $f'$.
This then gives
\begin{eqnarray}
S_{IK} & \approx & 4\pi
\int \alpha_1 (t-a)^{3IK-8K+3}dr \\
S_{IKV} & \approx & 2\frac{\int \alpha_1 (t-a)^{3IK-8K} dr}{\int |f'|f^{\frac{1}{2}}(t-a)^3 dr}
\end{eqnarray}
where
$\alpha_1>0$ is purely a function of $r$.
The dominant asymptotic temporal behaviour
of the density contrast
indicators described in section 2 are calculated and
summarised in the column 2 of
Table (\ref{Indicators-Tolman}) and the conditions for $S_{IK}$ and $S_{IKV}$
to be monotonically increasing
can then be readily calculated and are summarised in
Table (\ref{DC-Tolman}).
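The conditions in Table (\ref{DC-Tolman}) for $f>0$ follow from requiring the exponents $3IK-8K+3$ (for $S_{IK}$) and $3IK-8K$ (for $S_{IKV}$) to be positive; a trivial numerical check for $K>0$:

```python
def exp_SIK(I, K):  return 3 * I * K - 8 * K + 3   # S_IK  ~ (t - a)^this
def exp_SIKV(I, K): return 3 * I * K - 8 * K       # S_IKV ~ (t - a)^this

for K in (0.5, 1.0, 2.0):
    # S_IK grows iff I > 8/3 - 1/K ;  S_IKV grows iff I > 8/3
    assert exp_SIK(8 / 3 - 1 / K + 0.1, K) > 0 > exp_SIK(8 / 3 - 1 / K - 0.1, K)
    assert exp_SIKV(8 / 3 + 0.1, K) > 0 > exp_SIKV(8 / 3 - 0.1, K)
```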
\subsection{Elliptic LT models}
In the limit $\eta\to\ 2\pi$, $R$, $\rho$ and
$\dot\rho$ can be written as
\begin{eqnarray}
R & \approx & \phi_{1}\phi_{2}^{\frac{2}{3}} \\
\rho & \approx & \frac{3F'}{16\pi\phi_1 \phi_2'\phi_2}\\
\dot{\rho} &\approx & \frac{27F'(-f)^{\frac{3}{2}}}{\pi\phi_1 F}
\left(\frac{F'}{F}-\frac{3}{2}\frac{f'}{f}\right)
\frac{\phi_2+\frac{(-f)^{\frac{3}{2}}}{F}t}{\phi_2'^2\phi_2^2}
\end{eqnarray}
where
$\phi_{2}(r,t)=12\left [\frac{(-f)^{\frac{3}{2}}}{F}(t-a)-\pi\right]$,
$\phi_{2}'(r,t)=-12\frac{(-f)^{\frac{3}{2}}}{F}\left[
\left(\frac{F'}{F}-\frac{3}{2}\frac{f'}{f}\right)t\right]$ and \\
$\phi_{1}(r)=\frac{F}{4(-f)}$.
Now $\phi_{2}$ satisfies
$\phi_{2}<0$
and $\dot{\phi_{2}}>0$ and $\phi_2'$ and $F'$ must have opposite
signs to ensure the positivity of $\rho$. The indicators are
then given by
\begin{eqnarray}
S_{IK} &\approx &4\pi \int \alpha_2 |\phi_2|^{IK-\frac{10}{3}K+1} dr \\
\dot{S}_{IK} & \approx & 4\pi \int
\left(IK-\frac{10}{3}K+1\right)\alpha_2 \phi_2'
|\phi_2|^{IK-\frac{10}{3}K} \nonumber \\
&+&\dot{\alpha}_2 |\phi_2|^{IK-\frac{10}{3}K+1} dr
\end{eqnarray}
where $\alpha_2(r,t)=\frac{2}{3}|\phi_2'| \phi_1^3 \left|\left(\frac{3}{2}\right
)^4\frac{(1+f)F'^2}{\phi_1^8 \phi_2'^2}\left(\frac{2\phi_1^3 \phi_2'}{3F'}\right)^I\right|^K >0$.
Similarly the density contrast per unit volume can be calculated to be
\begin{equation}
S_{IKV}=\frac{3}{2}\frac{\int \alpha_2 |\phi_2|^{IK-\frac{10}{3}K+1}dr}{\int
\phi_1^3 |\phi_2| dr}.
\end{equation}
Now with the choice of $\left(\frac{F'}{F}-\frac{3}{2}\frac{f'}{f}\right) =0$,
the model becomes homogeneous with $S_{IK} =0 = S_{IKV}$.
When $\left(\frac{F'}{F}-\frac{3}{2}\frac{f'}{f}\right)\neq0$,
we have
$\frac{\partial |\phi'_2|}{\partial t}>0$
and $\frac{\partial |\phi_2|}{\partial t}<0$ and
our results are summarised
in Tables (\ref{Indicators-Tolman}) and (\ref{DC-Tolman}).
We note that the asymptotic behaviour of
$S_{IKV}$ cannot be deduced in general
and therefore the corresponding entries in these tables
were derived using pointwise
versions of the indicators.
\begin{table}[!htb]
\begin{center}
\begin{tabular}{cll}
\hline
Indicators & $f>0$ & $f<0$ \\
\hline
\\
$S_{IK}$ & $(t-a)^{3IK-8K+3}$ &
$\phi_2^{IK-\frac{10}{3}K+1}$\\
\\
$S_{IKV}$ & $(t-a)^{3IK-8K}$
& $\phi_2^{IK- \frac{10}{3} K}$\\
\\
$B1$& $(t-a)^{-2}$
& $\phi_2^{-1/3}$\\
\\
$B2$&
$const.$ & $\phi_2^{-1}$\\
\\
SL &
$const.$ & $\phi_2^{-1}$\\
\\
SW & $const.$
& $\phi_2^{1/3}$\\
\\
\hline
\end{tabular}
\caption[Indicators-Tolman]{\label{Indicators-Tolman}Asymptotic evolution of
density contrast indicators given in section (\ref{DC-Indicators}), for
hyperbolic and elliptic LT models. The constants in the second
column are different for each $r$.}
\end{center}
\end{table}
\begin{table}[!htb]
\begin{center}
\begin{tabular}{ccc}
\hline
~Models~~~~~ & $~~~~~S_{IK}~~~~~$ & $~~~~~S_{IKV}~~~~~$ \\
\hline
\\
$f >0 $ & $I>\frac{8}{3}-\frac{1}{K}$ & $I>\frac{8}{3}$\\
\\
$f <0 $ & $I<\frac{10}{3}-\frac{1}{K}$ & $I<\frac{10}{3}$\\
\\
\hline
\end{tabular}
\caption[DC-Tolman]{\label{DC-Tolman} Constraints on $I$ and $K$
in order to ensure
$\dot{S}_{IK}>0; \dot{S}_{IKV}>0$ asymptotically in
hyperbolic and elliptic LT models.}
\end{center}
\end{table}
\vskip .2in
To summarise, the results of this section indicate
that for both ever-expanding ($f>0$) and
recollapsing ($f<0$) LT models,
$I$ and $K$ can always be chosen such that $S_{IK}$ and $S_{IKV}$
both grow asymptotically.
However, there are special cases of interest,
such as $I=2$, for which no such
intervals can be found.
\subsection{Special LT models}
\label{Special}
So far we have studied the behaviour of the $S_{IK}$ and $S_{IKV}$
asymptotically. To partly extend these results
to all times,
we shall in this section consider some concrete examples
of LT models which have been considered in the literature.
\\
\noindent{\em Parabolic examples}:
\\\\
Models
with $a=0$ (see e.g. those
in \cite{Bonnor74} and \cite{Maartens})
are
homogeneous with trivial behaviour for the indicators.
\\
\noindent{\em Hyperbolic examples}:
\\\\
We considered a number of examples of this type
given by Gibbs \cite{Gibbs}, Humphreys et al. \cite{Maartens},
and Ribeiro \cite{Ribeiro93},
the details of which are summarised
in Table (\ref{tablehyperbolic}).
We found that for all these models
there exist
ranges of $I$ and $K$ such that
indicators $S_{IK}$ increase monotonically for all time.
In particular, we found that the condition
for all time monotonicity with $K=1/2$ and $K=1$ are
given by $I \in \left[1, +\infty\right[$ and
$I \in \left[2,+\infty \right[$ respectively.
\\
\begin{table}[!htb]
\begin{center}
\begin{tabular}{clll}
\hline
References & $F(r)$ & $f(r)$ \\
\hline
\\
Humphreys et al. & $F=\frac{1}{2}r^4$ & $f=r^2$ \\
\\
Humphreys et al. & $F=\frac{1}{2}r^3$ & $f=r^3$ \\
\\
Gibbs & $F=F_0 \tanh r^3$ & $f=f_0 \sinh^2 r$ &\\
\\
Ribeiro & $F=F_0 r^p$ & $f=\sinh^2 r $ \\
\\
\hline
\end{tabular}
\caption[tablehyperbolic]{\label{tablehyperbolic}Examples of
hyperbolic LT models, where
$F_0, p$ and $f_0$ are
positive constants.}
\end{center}
\end{table}
\noindent{\em Elliptic examples:}
\\\\
We considered examples of this type,
given by Bonnor \cite{Bonnor85c}
and Hellaby \& Lake \cite{Hellaby85},
details of which are summarised in Table (\ref{tableelliptic}).
Again we found that for all these models
there exist
ranges of $I$ and $K$ such that
indicators $S_{IK}$ are monotonic for all time.
In particular, we found that the condition
for all time monotonicity with $K=1/2$ and $K=1$ are
given by $I \in \left ]0, 1 \right ]$ and
$I \in \left]0, 2 \right]$ respectively.
For $K=1/2$ and values of $I$ outside the range $I< 4/3$, e.g. $I=2$,
we find that $S_{I\frac{1}{2}}$ increases in the
expanding phase but decreases after a certain time (which depends on
$r$) in the contracting phase, tending to zero
as the second singularity
is approached.
\begin{table}[!htb]
\begin{center}
\begin{tabular}{clll}
\hline
References & $F(r)$ & $f(r)$ \\
\hline
\\
Bonnor & $F=F_0 r^3$ & $f=-f_0\frac{r^2}{1+r^2}$ \\
\\
Hellaby \& Lake & $F=F_0 \frac{r^m}{1+r^n}$ & $f=-f_0 \frac
{r^n}{1+r^n}$ \\
\\
\hline
\end{tabular}
\caption[tableelliptic]{\label{tableelliptic}Examples of elliptic
LT models, where
$F_0$ and $f_0 \ne 1$ are positive real constants and $m$, $n$ are integers such
that $m>n$.}
\end{center}
\end{table}
\\
To summarise, we have found that for all these concrete examples
there exist values of $I$ and $K$ such that
indicators $S_{IK}$ increase monotonically for all time.
However, the allowed intervals of $I$ and $K$ are in general narrowed
relative to those obtained by
the asymptotic considerations.
Finally the indicators $S_{IKV}$ in all these models
lead to non-elementary
integrals.
\section{Szekeres models in brief}
\label{Szekeres}
The Szekeres metric is given by \cite{Szekeres,Goode-Wainwright}
\begin{equation}
\label{Metric-Szekeres}
ds^2=-dt^2+R^2e^{2\nu}(dx^2+dy^2)+R^2H^2W^2dz^2
\end{equation}
where $W=W(z)$ and $\nu=\nu (x,y,z)$ are functions to
be
specified within each Szekeres class and $R=R(z,t)$
obeys the evolution equation
\begin{equation}
\label{evolution}
\dot{R}^2=-k+2\frac{M}{R},
\end{equation}
where $M=M(z)$ is a positive arbitrary function for the
class I models and a positive constant for class II models,
defined below.
The function $H$ is given by
\begin{equation}
H=A-\beta _{+}f_{+}-\beta _{-}f_{-}
\end{equation}
where functions $A =A(x,y,z)$,
$H$, $R$ and $W$ are assumed to be positive;
$\beta _{+}$ and $\beta _{-}$ are functions of $z$
and $f_{+}$ and $f_{-}$ are functions of $z$ and $t$, corresponding to the
growing
and decaying modes of the solutions $X$ of the equation
\begin{equation}
\label{perturbation}
\ddot{X}+\frac{2\dot{R}\dot{X}}{R}-\frac{3MX}{R^3}=0.
\end{equation}
The density for these models is given by
\begin{equation}
\label{Density-Szekeres}
\rho(x,y,z,t)=\frac{3MA}{4\pi R^3H}.
\end{equation}
The solutions to (\ref{evolution}) are given by
\begin{eqnarray}
\label{solevolution}
&R=M\frac{dh(\eta)}{d\eta} \nonumber \\
&t-T(z)=Mh(\eta)
\end{eqnarray}
where
\begin{equation}
h(\eta)=\left\{\begin{array}{lll}
\eta-\sin\eta & (k=+1) &0<\eta<2\pi \\
\sinh\eta-\eta & (k=-1) & 0<\eta \\
\frac{1}{6}\eta^3 & (k=0) & 0<\eta,
\end{array}
\right.
\end{equation}
which in turn allows the solutions to (\ref{perturbation}) to be written as
\begin{eqnarray}
\label{fplus}
f_{+}(\eta)&=&\left\{\begin{array}{lll}
6MR^{-1}(1-\frac{1}{2}\eta\cot\frac{1}{2}\eta)-1 &
k=+1 \\
6MR^{-1}(1-\frac{1}{2}\eta\coth\frac{1}{2}\eta)+1 &
k=-1\\
\frac{1}{10}\eta^2 & k=0
\end{array}
\right. \\
f_{-}(\eta)&=&\left\{\begin{array}{lll}
6MR^{-1}\cot\frac{1}{2}\eta & k=1\\
6MR^{-1}\coth\frac{1}{2}\eta & k=-1 \\
24\eta^{-3} & k=0.
\end{array}
\right.
\end{eqnarray}
The Szekeres models are divided into two classes, depending upon
whether $\frac{\partial (Re^\nu)}{\partial z}\ne 0$ or
$\frac{\partial (Re^\nu)}{\partial z}=0$.
The functions in the metric take different forms for each class
and for completeness are
summarised in the Appendix.
\section{Evolution of the density contrast in Szekeres models}
In these models the density contrast indicators are given by
\begin{eqnarray}
\label{Entropy-Szekeres}
S_{IK}=
&\int_\Sigma& \left |
\frac{1}{e^{2\nu}R^2\rho^I}\left(\frac{\partial \rho}{\partial
x}\right)^2+\frac{1}{e^{2\nu}R^2\rho^I}
\left(\frac{\partial \rho}{\partial
y}\right)^2 \right. \nonumber \\
&+& \left. \frac{1}{H^2W^2R^2\rho^I}
\left(\frac{\partial \rho}{\partial z}\right)^2
\right |^KR^3e^{2\nu}HWdxdydz
\end{eqnarray}
We note that the singularities
in these models are given not
only by $R=0$ but also by $H=0$, which define the so called
shell crossing singularities.
For example, choosing $\beta_+>0$
eventually results in
a shell crossing singularity at a finite time given by
$A=\beta_+f_+$ (see \cite{Goode-Wainwright} for
a detailed discussion). Here in order to avoid shell
crossing singularities, we either assume $\beta_+<0$ in all cases, or
alternatively $\beta_+ >0$ and
$\beta_+(z)<A(x,y,z)$, in the $k=-1$ case.
\\
We recall that for class II models, $T=$ constant.
In the following we shall, in line with the work of
Bonnor \cite{Bonnor86}, also make this assumption in
the case of
class I models (implying $\beta_-=0$) in order to make the
initial singularity simultaneous and hence
avoid
white holes.
\\
We consider the
two
Szekeres classes in turn and in each case we study the
three subclasses referred to as hyperbolic, parabolic and elliptic,
corresponding to the Gaussian curvatures $k=-1,0,+1$
respectively.
\subsection{Evolution of the density contrast in class I models}
Assuming $\beta_+ =0$ in this class makes
$\rho=\rho(t)$\footnote{In fact, the Szekeres models, with $T=$ constant, reduce
to FLRW iff $\beta_+=\beta_-=0$ \cite{Goode-Wainwright}.},
which implies $S_{IK}=0=S_{IKV}$ for all time.
We shall therefore assume $\beta_+$ to be non-zero
and possibly $z$ dependent. The contribution of $\beta_-$ would be important
near the initial and final singularities but is irrelevant for the asymptotic
behaviour of parabolic and hyperbolic models.
\\
\noindent {\em Parabolic class I Szekeres models}:
\\
For the models of this type
with $T=$ constant, the density depends only on $t$
which trivially gives
$S_{IK}=0=S_{IKV}$ for all time.
\\
\noindent {\em Hyperbolic class I Szekeres models}:
\label{Subsection-SzekeresI-Hyperbolic}
\\
For large $\eta$ we have
\begin{eqnarray}
R & \approx & t-T \\
f_+ & \approx & \frac{-6\eta}{e^{\eta}}+1
\end{eqnarray}
resulting
in
\begin{eqnarray}
\rho &\approx &\frac{3MA}{4\pi (A-\beta_+)(t-T)^3}\\
\dot{\rho} &\approx &\frac{-9MA}
{4\pi (A-\beta_+)(t-T)^4}
\end{eqnarray}
which have the same $t$ dependence as the
corresponding LT models.
Using (\ref{Entropy-Szekeres}) we obtain
\begin{eqnarray}
S_{IK} & \approx & \int \alpha_4 (t-T)^{3KI-8K+3} dxdydz\\
S_{IKV} & \approx & \frac{\int \alpha_4 (t-T)^{3KI-8K+3} dxdydz}
{\int e^{2\nu} W (A-\beta_+) (t-T)^{3} dxdydz}
\end{eqnarray}
where $\alpha_4$ is a positive function of $x,y,z$.
\\
The dominant asymptotic temporal behaviour
of the density contrast
indicators
for these models and the conditions for their
monotonic behaviour
with time are summarised in Tables (\ref{Indicators-Szekeres}) and
(\ref{DC-Szekeres}).
\\
\noindent {\em Elliptic class I Szekeres models:}
\\\\
Using (\ref{Density-Szekeres}), together
with the following approximations
\begin{eqnarray}
R & \approx & \frac{6^{\frac{2}{3}}M\psi_2^{\frac{2}{3}} }{2}\\
f_+ & \approx & \frac{-6}{\psi_2}
\end{eqnarray}
where $\psi_2=\left(\frac{t-T}{M}-2\pi\right)$,
gives
\begin{eqnarray}
\label{rho-zek+1}
\rho &\approx &\frac{6A}{\pi M^2\beta_+\psi_2}\\
\dot{\rho} &\approx &\frac{-6A}
{\pi M^3 \beta_+\psi_2^2}
\end{eqnarray}
which has again the same temporal dependence as for the corresponding
closed LT models.
Using (\ref{Entropy-Szekeres}) we obtain
\begin{eqnarray}
S_{IK} & \approx & \int \alpha_5 \psi_2^{IK-\frac{10}{3}K+1} dxdydz\\
S_{IKV} & \approx & \frac{2}{9}\frac{\int \alpha_5 \psi_2^{IK-\frac{10}{3}K+1} dxdydz}
{\int M e^{2\nu} W \beta_+ \psi_2 dxdydz},
\end{eqnarray}
where\\ $\alpha_5
(x,y,z,t)=\frac{9}{2}MWe^{2\nu}\pi\beta_+
\left|\left(\frac{96^2 48^{-I}6^{-\frac{4}{3}}A^{-I}e^{-2\nu}}
{M^{\frac{14}{3}-2I}
(\pi\beta_+)^{2-I}}\right)
\left(
\left(\frac{\partial A}{\partial x}\right)^2+
(\frac{\partial A}{\partial
y})^2+\frac{e^{2\nu}\psi_2^{'2}}{W^2(\pi\beta_+)^2}\right)
\right|^K$, and as $\eta\to 2\pi$, $t\to 2\pi M+T$ and $\psi_2' \to
-2\pi M'$.
\\
Our results are
summarised in Tables (\ref{Indicators-Szekeres}) and
(\ref{DC-Szekeres}).
We note that the asymptotic behaviour of
$S_{IKV}$ cannot be deduced in general
for these models and therefore the corresponding entries in these tables
were derived using pointwise
versions of the indicators.
\subsection{Evolution of the density contrast in class II models}
Recall that in this class $T$
and $M$ are constants in all cases. Again if $\beta_+=\beta_-=0$ we recover the
homogeneous models giving $S_{IK}=0=S_{IKV}$. In what follows we shall therefore consider
the general cases with $\beta_+=\beta_+(z)$ and $\beta_-=\beta_-(z)$.
\\
\noindent {\em Parabolic class II Szekeres models:}
\\\\
The asymptotic evolution of $\rho$ in this case is
given by
\begin{eqnarray}
\rho & \approx & \frac{5A}{3\pi(-\beta_+)(t-T)^{\frac{8}{3}}} \\
\dot{\rho} & \approx & \frac{40A}
{9\pi \beta_+(t-T)^{\frac{11}{3}}}.
\end{eqnarray}
Using (\ref{Entropy-Szekeres}) we obtain
\begin{eqnarray}
S_{IK} & \approx & \int \alpha_6(t-T)^{\frac{8}{3}KI-
\frac{20}{3}K+\frac{8}{3}} dxdydz\\
S_{IKV} & \approx & \frac{2}{9}\frac{\int \alpha_6 (t-T)^{\frac{8}{3}KI-
\frac{20}{3}K+\frac{8}{3}} dxdydz}
{\int e^{2\nu} |\beta_+| M W (t-T)^{\frac{8}{3}}dxdydz}
\end{eqnarray}
where $\alpha_6$ is a positive function of $x,y$ and $z$ only.
Our results are shown in
Tables (\ref{Indicators-Szekeres}) and
(\ref{DC-Szekeres}).
\\
\noindent {\em Hyperbolic class II Szekeres models:}
\\\\
Here we use the approximations for $R, H, f_+$ and $f_-$ already shown for class I.
The asymptotic evolution of $\rho$ and $\partial\rho / \partial t$ are
in this case given by
\begin{eqnarray}
\rho & \approx & \frac{6MA}{\pi(A-\beta_+)(t-T)^3} \\
\dot{\rho} & \approx & \frac{-18MA}
{\pi (A-\beta_+)(t-T)^4}
\end{eqnarray}
Using (\ref{Entropy-Szekeres}) we obtain
\begin{eqnarray}
S_{IK} & \approx & \int \alpha_7 (t-T)^{3IK-8K+3} dxdydz \\
S_{IKV} & \approx & \frac{\int \alpha_7 (t-T)^{3IK-8K+3} dxdydz}
{\int e^{2\nu} W (A-\beta_+) (t-T)^3 dxdydz}
\end{eqnarray}
where $\alpha_7$ is a positive function of $x,y$ and $z$.
Our results are depicted in
Tables (\ref{Indicators-Szekeres}) and
(\ref{DC-Szekeres}).
\\
\noindent {\em Elliptic class II Szekeres models:}
\\\\
The asymptotic evolution of $\rho$ and its time derivative are
in this case given by
\begin{eqnarray}
\rho & \approx & \frac{6A}{\pi M^2 (\pi\beta_+-\beta_-)\psi_2} \\
\dot{\rho} & \approx & \frac{-6A}{\pi M^3
(\pi\beta_+-\beta_-)\psi_2^2}
\end{eqnarray}
where $\psi_2(z,t)=\left(\frac{t-T}{M}-2\pi\right)$.
Here we impose the restriction $\pi\beta_+-\beta_-<0$ to ensure
the positivity of $\rho$ and
by (\ref{Entropy-Szekeres}) we have
\begin{eqnarray}
S_{IK} & \approx & \int \alpha_8 |\psi_2|^{KI-\frac{10}{3}K+1} dxdydz\\
S_{IKV} & \approx & \frac{1}{27}\frac{\int \alpha_8
|\psi_2|^{KI-\frac{10}{3}K+1} dxdydz}
{\int e^{2\nu} (\pi\beta_+-\beta_-) M \psi_2 dxdydz}
\end{eqnarray}
where $\alpha_8$ is a positive function independent of $t$.
The asymptotic behaviours of the indicators
for these models are identical to the corresponding models in class I
(except in the parabolic case)
and the conditions for their
monotonic evolution
with time are
summarised in Tables (\ref{Indicators-Szekeres}) and (\ref{DC-Szekeres}).
Again the asymptotic behaviour of
$S_{IKV}$ cannot be deduced in general
for these models and therefore the corresponding entries in these Tables
were derived using pointwise
versions of the indicators.
\begin{table}[!htb]
\begin{center}
\begin{tabular}{c|l|ll}
\hline
\rule{0cm}{0.7cm}
&~~~~~~$Class$ II & \multicolumn{2}{c}{$Classes$ I \& II} \\
&&&\\
\hline
Indicators& $~~~~~~~~k=0$ & $~~~~~k=-1$ & $~~~k=+1$\\
\hline
$S_{IK}$ & \rule{0cm}{0.7cm} $(t-T)^{\frac{8}{3}IK-\frac{20}{3}K+\frac{8}{3}}$
& $(t-T)^{3IK-8K+3}$ & $\psi_2^{IK-\frac{10}{3}K+1}$\\
$S_{IKV}$ & \rule{0cm}{0.7cm} $(t-T)^{\frac{8}{3}IK-\frac{20}{3}K}$ & $(t-T)^{3IK-8K}$ & $\psi_2^{IK- \frac{10}{3} K}$\\
$B1$& \rule{0cm}{0.7cm} $(t-T)^{-\frac{4}{3}}$ & $(t-T)^{-2}$
& $\psi_2^{-\frac{1}{3}}$\\
SW & \rule{0cm}{0.7cm} $(t-T)^{-\frac{2}{3}}$ & $const.$
& $\psi_2^{\frac{1}{3}}$\\
&&&\\
\hline
\end{tabular}
\caption[Indicators-Szekeres]{\label{Indicators-Szekeres}Asymptotic evolution of a number of
density contrast indicators for the class I and class II
Szekeres models. Columns 3 and 4 represent the
behaviours
for the hyperbolic and elliptic models
which are identical for both classes,
while the second column represents the behaviour for
class II models.}
\end{center}
\end{table}
\begin{table}[!htb]
\begin{center}
\begin{tabular}{l|cc|cc}
\hline
&&&&\\
&~~~~~~~~~~~$Class$ I & & \multicolumn{2}{c}{$Class$ II} \\
&&&&\\
\hline
Models & $S_{IK}$ & $S_{IKV}$ & $S_{IK}$ & $S_{IKV}$\\
\hline
$k=0$ & \rule{0cm}{0.7cm}$-$ & $-$ &
$I> \frac{5}{2}-\frac{1}{K}$ & $I> \frac{5}{2} $\\
$k=-1$ & \rule{0cm}{0.7cm} $I> \frac{8}{3}-\frac{1}{K}$ & $I>\frac{8}{3}$ &
$I> \frac{8}{3}-\frac{1}{K}$ & $I> \frac{8}{3} $ \\
$k=+1$ & \rule{0cm}{0.7cm} $I< \frac{10}{3}-\frac{1}{K}$ & $I<\frac{10}{3}$ &
$I< \frac{10}{3}-\frac{1}{K}$ & $I< \frac{10}{3}$\\
&&&&\\
\hline
\end{tabular}
\caption[DC-Szekeres]{\label{DC-Szekeres} Constraints on $I$ and $K$
in order to ensure $\dot{S}_{IK}>0;
\dot{S}_{IKV}>0$ asymptotically in
class I and class II Szekeres models.}
\end{center}
\end{table}
\vskip .2in
To summarise, the results of this section indicate
that for both ever-expanding ($k=0$ and $k=-1$) and
recollapsing ($k=+1$) Szekeres models,
we can always choose $I$ and $K$ such that $S_{IK}$ and $S_{IKV}$
are asymptotically increasing, both separately and
simultaneously. However, for some cases of interest,
such as $I=2$, no such choice can be found for which both sets of
indicators simultaneously have
this property.
Also as can be seen from Table (\ref{Indicators-Szekeres})
different indicators can, for different values of $I$ and
$K$, give different predictions
concerning the asymptotic homogenisation of these
models.
\\
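That no such choice exists for $I=2$ can be read off directly from
Table (\ref{DC-Szekeres}): the asymptotic growth of $S_{IKV}$ in the
ever-expanding models requires the $K$-independent conditions
\begin{equation*}
I>\frac{5}{2}\;\;(k=0),\qquad I>\frac{8}{3}\;\;(k=-1),
\end{equation*}
both of which fail for $I=2$, whereas $S_{IK}$ for $k=-1$ only requires
$I>\frac{8}{3}-\frac{1}{K}$, which $I=2$ satisfies for any
$0<K<\frac{3}{2}$.
\\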
We also note that the similarity between the results
of this section (for $k=+1$ and $k=-1$) and those for LT models
(for $f>0$ and $f<0$) is partially due to the
fact that the density function $\rho$
has the same time dependence in both
of these sub-families of LT and Szekeres models.
A nice way of seeing this, as was pointed out to us
by van Elst, is that for the
above dust models
the evolution equations for the density,
expansion, shear and
electric Weyl curvature
constitute a closed dynamical system which
is identical for both models (see \cite{henk}).
This, however,
does not necessarily imply that
our indicators should also have
identical time evolutions for all times
for both models, since they also include
$h^{ab}$ and $dV$ in their definitions.
It turns out that asymptotically
they are the same in the cases considered
here.
\section{Evolution of dimensionless indicators}
As they stand, indicators (\ref{comoving-ind}) are not
dimensionless in general. To make them so, we shall also
briefly consider their dimensionless analogues
given by
\begin{equation}
\label{dimeless-ind}
S_{IKL}=\frac{S_{IK}}{V^L}
\end{equation}
where $L$ is a real number which depends on $I$ and
$K$ through
\begin{equation}
L=\frac{2}{3}IK-2K+1.
\end{equation}
The asymptotic behaviour
of $S_{IKL}$
for the LT and Szekeres models is summarised in Table (\ref{SIKL-Evolution}).
This demonstrates that there
are still intervals for $I$ and $K$ ($I \in \left]2,4\right[,
K\in{\BBB R}\setminus\{0\}$)
such that these
dimensionless indicators asymptotically increase with time.
In this way the results of the previous sections
remain qualitatively unchanged.
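As a consistency check, the entries of Table (\ref{SIKL-Evolution}) follow
from $S_{IKL}=S_{IK}/V^L$ together with Table (\ref{Indicators-Szekeres}),
if one assumes the standard late-time growth of the comoving volume,
$V\propto (t-T)^2$ in the parabolic and $V\propto (t-T)^3$ in the
hyperbolic case. For instance, for the parabolic class II and for the
hyperbolic models the exponents combine as
\begin{eqnarray*}
\left(\frac{8}{3}IK-\frac{20}{3}K+2\right)-2\left(\frac{2}{3}IK-2K+1\right)
&=&\frac{4}{3}IK-\frac{8}{3}K,\\
\left(3IK-8K+3\right)-3\left(\frac{2}{3}IK-2K+1\right)&=&IK-2K,
\end{eqnarray*}
in agreement with the corresponding entries of the Table.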
\begin{table}[!htb]
\begin{center}
\begin{tabular}{lccc}
\\
\hline
\\
Models & Tolman & Szekeres I & Szekeres II \\
\\
\hline
\\
Parabolic & -- & -- & $(t-T)^{\frac{4}{3}IK-\frac{8}{3}K}$ \\
\\
Hyperbolic & $(t-a)^{IK-2K}$ & $(t-T)^{IK-2K}$
& $(t-T)^{IK-2K}$ \\
\\
Elliptic & $\phi_2^{\frac{1}{3}IK-\frac{4}{3}K}$
& $\psi_2^{\frac{1}{3}IK-\frac{4}{3}K}$
& $\psi_2^{\frac{1}{3}IK-\frac{4}{3}K}$ \\
\\
\hline
\end{tabular}
\caption[SIKL-Evolution]{\label{SIKL-Evolution} Asymptotic behaviour
of $S_{IKL}$
for LT and Szekeres models. In the elliptic cases
pointwise versions of the indicators were used.}
\end{center}
\end{table}
\section{Behaviour near the initial singularity}
So far we have studied the asymptotic behaviour of these models
at late times.
To further understand the possible monotonic
behaviour of these indicators,
it is of interest to study
their behaviour near the initial singularities.
Our results are
summarised in
Table
(\ref{Initial-Sing}) and the constraints on $I$ and $K$
in order to ensure the non-divergence of $S_{IK}$, $S_{IKV}$ and
$S_{IKL}$ near the singularities
are given in Table
(\ref{Initial-Constraints}).
As can be seen from Table (\ref{Initial-Sing}),
apart from the case of Szekeres II with decaying modes ($\beta_- \neq 0$),
indicators
have the same behaviour as we approach the origin
for all other
models. On the other hand, in
presence of decaying modes,
the constraints on $I$ and $K$ necessary to
ensure initial non-divergence and final
monotonic increase are disjoint, i.e., there
are no intervals of $I$ and $K$ such that these conditions
are simultaneously satisfied.
This in turn seems to imply that there is an
incompatibility between the presence of
decaying modes (see also \cite{Silk,Bonnor86,Goode-Wainwright})
and a unique density contrast arrow.
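This disjointness is seen most directly in the recollapsing case: for
$S_{IK}$ in the $k=+1$ class II models with $\beta_-\neq 0$, Tables
(\ref{Initial-Constraints}) and (\ref{DC-Szekeres}) require respectively
\begin{equation*}
I>\frac{10}{3}-\frac{1}{K}
\qquad\mbox{and}\qquad
I<\frac{10}{3}-\frac{1}{K},
\end{equation*}
which clearly cannot hold simultaneously for any choice of $I$ and $K$.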
\begin{table}[!htb]
\begin{center}
\begin{tabular}{lllll}
\hline
\\
Indicators & Tolman & Szekeres I & Szekeres II & Szekeres II \\
& & & $(\beta_-\neq 0)$ & $(\beta_-=0)$\\
\\
\hline
\\
$S_{IK}$ & $(t-a)^{2IK-4K+2}$ & $(t-T)^{2IK-4K+2}$
& $(t-T)^{IK-\frac{10}{3}K+1}$ & $(t-T)^{2IK-4 K+2}$ \\
\\
$S_{IKV}$ & $(t-a)^{2IK-4K}$ & $(t-T)^{2IK-4K}$
& $(t-T)^{IK-\frac{10}{3}K}$ & $(t-T)^{2 IK-4K}$ \\
\\
$S_{IKL}$ & $(t-a)^{\frac{2}{3}IK}$ & $(t-T)^{\frac{2}{3}IK}$
& $(t-T)^{\frac{1}{3}IK-\frac{4}{3}K}$ & $(t-T)^{\frac{2}{3}IK}$ \\
\\
\hline
\end{tabular}
\caption[Initial-Sing]{\label{Initial-Sing} The behaviour of
the indicators $S_{IK}$, $S_{IKV}$ and $S_{IKL}$ in LT
and Szekeres models near the initial singularity.}
\end{center}
\end{table}
\begin{table}[!htb]
\begin{center}
\begin{tabular}{lccc}
\hline
\\
Models & $S_{IK}$ & $S_{IKV}$ & $S_{IKL}$\\
\\
\hline
\\
Lemaitre--Tolman & $I> 2-\frac{1}{K}$ & $I> 2 $ & $IK>0$\\
\\
Szekeres I & $I> 2-\frac{1}{K}$ & $I> 2 $ & $IK>0$\\
\\
Szekeres II ($\beta_-=0$) & $I> 2-\frac{1}{K}$ & $I> 2 $ & $IK>0$ \\
\\
Szekeres II ($\beta_-\ne 0$) & $I> \frac{10}{3}-\frac{1}{K}$ & $I>\frac{10}{3}$
& $I>4$\\
\\
\hline
\end{tabular}
\caption[Initial-Constraints]{\label{Initial-Constraints} Constraints on $I$ and $K$
in order to ensure the non-divergence and monotonic increase
of $S_{IK}$, $S_{IKV}$ and $S_{IKL}$ near the origin of time.}
\end{center}
\end{table}
\section{Some consequences}
In the following we shall briefly discuss some of the
conceptual issues that the
above considerations give rise to.
\subsection{Expanding versus recollapsing phases}
An interesting question regarding cosmological evolution is
whether the expanding and recollapsing evolutionary
phases do (or should) possess the same characteristics
in terms of density contrast indicators, and in particular
whether structuration is expected
to increase monotonically throughout the history of the
Universe independently of the
direction of its cosmological evolution; in other words, whether
the density contrast and the cosmological arrows
have the same sign.
The answer to this question is of potential importance
not only with regard to the issue of structure formation
but also in connection with debates concerning
the nature of gravitational entropy and its likely behaviour
in expanding and contracting phases of the Universe
(see e.g.
Hawking \cite{Hawking85}, Page \cite{Page} and also
references in \cite{Book-Arrow-Time}).
We note
that the underlying
intuitions in such debates seem to have come
mainly from considerations of the Friedmann-Lemaitre-Robertson-Walker (FLRW)
models
together with some studies involving
small inhomogeneous perturbations about such models \cite{Hawkingetal93}.
Our studies
here, though classical, can therefore provide potentially
useful complementary intuitions
by considering exact inhomogeneous models.
In particular, our results
in the previous sections raise a number
of points that can potentially
inform this debate, amongst them:
\\
\begin{enumerate}
\item {\it Indicators in recollapsing models}: We have found that
different indicators behave differently in the
expanding and recollapsing phases of cosmological evolution:
some remain monotonically increasing
in both phases, while others change behaviour between the expanding
and recollapsing phases.
In particular, we find that there is a larger class of indicators
that remain monotonic in the ever-expanding models; in this
sense it is more difficult to maintain
monotonic growth in
recollapsing phases.
\item {\it Spatial dependence of turning time in recollapsing models}:
As opposed to the closed FLRW
models, the turning time in inhomogeneous models
can be position dependent. For example,
in the LT models
with $a=0$, the turning time is given by $t=\pi F/
(2f^\frac{3}{2})$, which in general depends on $r$.
In this sense there are epochs over which the Universe
possesses a multiplicity of cosmological arrows.
This raises the interesting question of
whether
there can be observationally viable inhomogeneous cosmological
models which allow ranges of epochs and neighbourhoods,
such that over these epochs different
neighbourhoods (labelled
$\Sigma_N$ in the definition of the indicators (4), (6)
and (\ref{dimeless-ind}))
can give rise to
different
local density contrast arrows
within the same model.
\\
We note that in the
ever-expanding models this problem is less likely
to arise,
which raises the question of whether this can
be taken as an argument in favour of
ever-expanding models which
more easily allow uni-directionality in their
density contrast arrows.
\end{enumerate}
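The position dependence of the turning time in point 2 above can be made
explicit with illustrative (hypothetical) choices of the free functions
in $t=\pi F/(2f^{3/2})$: for
\begin{equation*}
F=r^3,\quad f^{3/2}=r^3
\quad\Longrightarrow\quad t_{\rm turn}=\frac{\pi}{2}\,,
\qquad\qquad
F=r^3,\quad f^{3/2}=\frac{r^3}{(1+r)^{3/2}}
\quad\Longrightarrow\quad t_{\rm turn}=\frac{\pi}{2}\,(1+r)^{3/2},
\end{equation*}
so that, unless $F\propto f^{3/2}$, different shells turn around at
different times and a multiplicity of local arrows can arise.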
\subsection{Connection to homogeneity}
\label{Connection-Homogeneity}
Another question concerns the connection
between homogeneity and the behaviour of density contrast indicators.
We start by recalling a result of Spero \& Szafron \cite{Spero-Szafron}
according to which Szekeres models are spatially homogeneous
(with $S_{IK}=0 = S_{IKV}$) if the density
in these models is a function of $t$ only.
Therefore in these models
$S_{IK}=0 \Longrightarrow $ homogeneity.
In general, however,
$S_{IK}=0$ may not imply homogeneity,
which raises the interesting question
of which set of inhomogeneous models
satisfies this property. A related question has
been considered
by van Elst \cite{van-Elst} who points out some of the
difficulties involved.
We hope to return to this question in future.
\\
Another point in this regard is that
different indicators (even covariant ones)
can make contradictory statements
about whether or not a model homogenises
asymptotically.
Here to illustrate this point
we look at the following examples.
\\
First, we consider the parabolic LT model
studied by
Bonnor \cite{Bonnor74} in the form
\begin{equation}
\label{metric}
ds^2=-dt^2+(t-a)^{4/3}
\left(\left(1+\frac{2\tilde{r}a'}{3(t-a)}\right)d\tilde{r}^2+\tilde{r}^2d\Omega^2
\right)
\end{equation}
where
$a=a(\tilde{r})$.
Employing the indicator $B2$,
Bonnor deduced that for fixed $\tilde{r}$
this model
approaches homogeneity, as $t\to \infty$, irrespective of its
initial conditions.
This indicator is not covariant. But even for
covariant indicators (such as $S_{IK}$) one can
find
ranges of $I$ and $K$ (namely $I >\frac{11}{3}-\frac{1}{K}$
and its complement) which give
opposite conclusions regarding asymptotic
homogenisation.
\\
As another example, we consider the hyperbolic LT models
studied by Bonnor \cite{Bonnor74} and given by $F=bf^{3/2}$, with
$b=$ constant.
According to the $B1, B2, SL$ and $SW$ indicators
these models approach homogeneity, whereas there are
ranges of values of $I$ and $K$ (namely $I>\frac{10}{3}-\frac{1}{K}$)
for which
our measures $S_{IK}$
increase monotonically.
\\
We also consider the hyperbolic class of
LT models with $a=0$, studied in
subsection (\ref{LT-Hyperbolic}) above. As can be seen from
Table (\ref{DC-Tolman}), there are ranges of $I$ and $K$
for which the asymptotic behaviour
of both $S_{IK}$ and
$S_{IKV}$ can be the opposite to
the $B1$ indicator.
\\
Finally, it is also of interest to compare our results with the
studies of asymptotic behaviour for Szekeres models.
An interesting result in this
connection is due to Goode \& Wainwright \cite{Goode-Wainwright},
according to which Szekeres models with $k=0,-1$
and $\beta_+ = 0$ become homogeneous asymptotically,
a result that was also obtained for the class II models
by Bonnor \& Tomimura \cite{Bonnor-Tomimura}.
We recall that for
the models with simultaneous initial singularity considered
here, the assumption of $\beta_+ = 0$ reduces
the class I models to that of
FLRW. For the class II models (with $\beta_-\ne 0$), the
asymptotic behaviour of our indicators depends on the choice of
the indices $I$ and $K$. In particular, to ensure the
asymptotic increase of the indicators $S_{IK}$, $S_{IKV}$, $S_{IKL}$
for $k=0$ and $k=-1$, it is necessary and sufficient to have
$I> \frac{11}{3}-\frac{1}{K}$, $I>\frac{11}{3}$, $I>5$ and
$I> \frac{10}{3}-\frac{1}{K}$, $I>\frac{10}{3}$, $I>4$
respectively.
\section{Conclusions}
An important assumption in gravitational physics concerns the
tendency of gravitational systems to become increasingly
lumpy with time.
Here, we have tried to study this possibility
in the context of inhomogeneous
models of LT and Szekeres, by
using a number of
two parameter families of density contrast indicators.
\begin{figure}
\centerline{\def\epsfsize#1#2{0.5#1}\epsffile{Kversusy.eps}}
\caption{\label{Graph_dc}
The region between the curves gives the ranges of values of
$I$ and $K$ such that $S_{IK}$ is
increasing both near the singularity and asymptotically for all the models considered here.
The squares show the special examples of
indicators with $I=1, K=1/2$ and $I=2, K=1$.}
\end{figure}
Given the arbitrary functions in these models, we have only been
able
to establish conditions for the monotonicity of our indicators
near the origin and asymptotically.
Even though these are necessary but not sufficient conditions
for monotonicity at all times, our studies
illustrate some general points that seem to be
supported by our all-time study of a number of special models.
Our results show:
\begin{enumerate}
\item Different density contrast indicators can behave differently
even for the same model.
We find there is a larger class of indicators
that grow monotonically for ever-expanding models
than for the
recollapsing ones. In particular,
in the absence of decaying modes ($\beta_-=0$),
we find that
indicators exist
which grow monotonically with time
for all the models considered here.
Figure (1) gives a brief summary of our results
by depicting the
ranges of $I$ and $K$
such that $S_{IK}$ grow monotonically near the origin as
well as asymptotically
for all the models considered here.
An example of a special indicator that
lies in this range is
given by $K=1, I=2$ (a non-pointwise
version of $B1$). However, the indicator
given by
$K=1/2, I=2$ (which is
linear in the derivatives of density
\cite{Tavakol-Ellis}) does not
satisfy this property.
\\
\item
If decaying modes exist
(i.e. $\beta_- \neq 0$), we find
no such indicators which grow monotonically with time
for all the models considered here.
Recalling a theorem of Goode and Wainwright \cite{Goode-Wainwright},
namely that $\beta_-=0$ is the necessary and sufficient condition
for the initial data (singularity) to be FLRW--like,
our results seem to imply that the presence
of monotonicity in the evolution of
density contrast indicators considered here
is directly related to the nature of
initial data.
This is of potential relevance to the recent debates
regarding the nature of the arrow of time (to the extent
that such an arrow can be identified with the density contrast arrow)
in ever-expanding and recollapsing models
\cite{Hawking85,Page,Book-Arrow-Time}.
\item Our considerations seem to indicate that the
notion of asymptotic
homogenisation as deduced from
density contrast indicators may not
be unique.
\item The possible spatial dependence of turning points in
inhomogeneous models can lead to multiplicity of
local density contrast arrows at certain epochs, which could
have consequences regarding
the corresponding cosmological arrow(s) of time.
\end{enumerate}
Finally we note that given the potential operational definability of
our indicators, it is interesting that the overall approach to homogeneity
may not necessarily be accompanied by a decrease
in the density contrast, as measured by the different
indicators. In this way different covariant density contrast indicators
may give
different insights as to what may be observed
asymptotically in such inhomogeneous models.
\vspace{.3in}
\centerline{\bf Acknowledgments}
\vspace{.3in}
We would like to thank Bill Bonnor, George Ellis, Henk van Elst
and Malcolm MacCallum for many helpful comments and discussions.
FCM wishes to thank Centro de Matem\'{a}tica da Universidade do Minho
for support and FCT (Portugal) for grant PRAXIS XXI BD/16012/98.
RT benefited from PPARC UK Grant No. L39094.
\section{Appendix}
This appendix lists the different forms of the functions appearing in the
Szekeres class I and II models \cite{Goode-Wainwright}.
$$ Class~I$$
\begin{equation}
\begin{array}{l}
R=R(z,t)\\
f_{\pm}=f_{\pm}(t,z)\\
T=T(z)\\
M=M(z)\\
e^\nu=f[a(x^2+y^2)+2bx+2cy+d]^{-1}\\
\end{array}
\end{equation}
where functions in the metric are subjected to the conditions:
\begin{equation}
\label{c1}
\begin{array}{ll}
ad-b^2-c^2=\frac{1}{4}\varepsilon & \varepsilon=0,\pm 1\\
A(x,y,z)=f{\nu_z}-k\beta_{+} \\
W(z)^2=(\varepsilon-kf^2)^{-1} \\
\beta_{+}(z)=-kfM_z(3M)^{-1} \\
\beta_{-}(z)=fT_z(6M)^{-1}
\end{array}
\end{equation}
The functions $a, b, c, d$ and $f$ are all functions of $z$ that are
only
required to satisfy equations (\ref{c1}).
$$Class~II$$
\begin{equation}
\begin{array}{l}
R=R(t) \\
f_{\pm}=f_{\pm}(t)\\
T=const.\\
M=const.\\
e^\nu=[1+\frac{1}{4}k(x^2+y^2)]^{-1}\\
W=1
\end{array}
\end{equation}
\begin{equation}
\begin{array}{lll}
A=\left\{\begin {array}{lll}
e^\nu[a[1-\frac{1}{4}k(x^2+y^2)]+bx+cy]-k\beta_{+} & if & k=\pm
1\\
a+bx+cy-\frac{1}{2}\beta_{+}(x^2+y^2) & if & k=0
\end{array}
\right.
\end{array}
\end{equation}
In this case $a, b, c, \beta_{+}$ and $\beta_{-}$ are arbitrary
functions of $z$, and the curvature is given by $k$.
\section*{References}
\section{Introduction and Prospects}
With the aim of describing the collision of two nuclei at intermediate or even
high energies one is confronted with the fact that the dynamics has to include
particles like the $\Delta^{33}$ or $\rho$-meson resonances with life-times of
less than 2 fm/c or equivalently with damping rates above 100 MeV. Also the
collision rates deduced from presently used transport codes are comparable in
magnitude, whereas typical mean kinetic energies as given by the temperature
range between 70 to 150 MeV depending on beam energy. Thus, the damping width
of most of the constituents in the system can no longer be treated as a
perturbation.
As a consequence the mass spectra of the particles in dense matter are no
longer sharp delta functions but rather acquire a width due to collisions and
decays. The corresponding quantum propagators $G$ (Green's functions) are no
longer the ones as in standard text books for fixed mass, but have to be
folded over a spectral function $A(\epsilon,{\vec p})$ of finite width. One
thus comes to a picture which unifies {\em resonances} which have already a
decay width in vacuum with the ``states'' of particles in dense
matter, which obtain a width due to collisions (collisional broadening). The
theoretical concepts for a proper many body description in terms of a real
time non equilibrium field theory have already been devised by Schwinger,
Kadanoff, Baym and Keldysh \cite{SKBK} in the early sixties. First
investigations of the quantum effects on the Boltzmann collision term were
given by Danielewicz \cite{D}, the principal conceptual problems on the level of
quantum field theory were investigated by Landsmann \cite{Landsmann}, while
applications which seriously include the finite width of the particles in
transport descriptions were carried out only in recent times,
e.g.\ \citerange[D,DB-KV]. For resonances, e.g. the $\Delta^{33}$-resonance, it was
natural to consider broad mass distributions and ad hoc recipes have been
invented to include this in transport simulation models. However, many of
these recipes are not correct as they violate some basic principles like
detailed balance \cite{DB}, and the description of resonances in dense matter
has to be improved.
In this talk the transport dynamics of short life time particles are reviewed
and discussed. In the first part some known properties of resonances are
presented. These concern the equilibrium and low density (virial) limits.
Some example discussions are given for the di-lepton spectrum resulting from
the decay of $\rho$-mesons in a dense nuclear environment, both in thermal
equilibrium and in a quasi-free scattering process. On the basis of this some
deficiencies of presently used transport codes for the treatment of broad
resonances are disclosed and quantified. They affect the di-lepton spectra
already on a qualitative level and signal that the low mass side is grossly
underestimated in the respective calculations. This motivates the question
discussed in the second part, namely, how to come to a self-consistent,
conserving and thermodynamically consistent transport description of particles
with finite mass width. The conceptual starting point will be a formulation
within the real-time non-equilibrium field theory. The derivation is based on
and generalizes Baym's $\Phi$-functional method \cite{Baym}. The first-order
gradient approximation provides a set of coupled equations of
time-irreversible generalized kinetic equations for the slowly varying
space-time part of the phase-space distributions supplemented by retarded
equations. The latter account for the fast micro-scale dynamics represented by
the four-momentum part of the distributions. Functional methods permit to
derive a conserved energy-momentum tensor which also includes corrections
arising from fluctuations besides the standard quasi-particle terms. Memory
effects
\citerange[CGreiner-IKV2] appearing in collision term diagrams of higher order
as well as the formulation of a non-equilibrium kinetic entropy flow can also
be addressed \cite{IKV2}.
\section{Preliminaries}
The standard text-book transition rate in terms of Fermi's golden
rule, e.g. for the photon radiation from some initial
state $\left|i\right>$ with occupation $n_i$ to final states
$\left|f\right>$
\begin{eqnarray}\label{Wif}
W&=&\sum_{if}n_i(1-n_f)\;
\left|\left<f\right|V\left|i\right>\right|^{\,2}\;
(1+n_\omega)\;\delta(E_i-E_f-\omega_{\vec q})
\end{eqnarray}
with occupation $n_{\omega}$ for the photon, is limited to the concept of
asymptotic states. It is therefore inappropriate for problems which deal with
particles of finite life time. One rather has to go to the ``closed'' diagram
picture, where the same rate emerges as
\begin{eqnarray}\unitlength8mm\label{sigma-+}
W=\begin{picture}(3.75,1.3)\put(.2,.2){\oneloop}\thicklines
\put(1.9,.95){\vector(-1,0){.4}}\put(1.5,-.55){\vector(1,0){.4}}\end{picture}
(1+n_\omega)\delta(\omega-\omega_{\vec q})\\ \nonumber
\end{eqnarray}
with now two types of vertices $-$ and $+$ for the time-ordered and the
anti-time ordered parts of the square of the amplitude. Together with the
orientation of the $\stackrel{+~-}{\longrightarrow}$ and
$\stackrel{-~+}{\longrightarrow}$ propagator lines one obtains unique
diagrammatic rules for the calculation of rates rather than amplitudes. The
just mentioned propagator lines define the densities of occupied states or
those of available states, respectively. Therefore {\em all standard
diagrammatic rules} can be used again. One simply has to extend those rules to
the two types of vertices with marks $-$ and $+$ and the corresponding 4
propagators, the usual time-ordered propagator
$\stackrel{-~-}{\longrightarrow}$ between two $-$ vertices, the
anti-time-ordered one $\stackrel{+~+}{\longrightarrow}$ between two $+$
vertices and the mixed $\stackrel{+~-}{\longrightarrow}$ or
$\stackrel{-~+}{\longrightarrow}$ ones with fixed operator ordering
(Wightman-functions) as densities of occupied and available states. For
details I refer to the textbook of Lifshitz and Pitaevski \cite{LP}.
\unitlength10mm
\begin{center}
\unitlength10mm
\begin{picture}(17,3.)
\put(.0,1.){\contourxy}
\end{picture}\\
\small Fig.~1: Closed real-time contour with two external points $x,y$ on the
contour.
\end{center}
Equivalently the non-equilibrium theory can entirely be formulated on one
special time contour, the so called closed time path \cite{SKBK}, fig.~1, with
the time argument running from some initial time $t_0$ to infinity and back
with external points placed on this contour, e.g., for the four different
components of Green's functions or self energies. The special $-+$ or $+-$
components of the self energies define the gain and loss terms in transport
problems, c.f. eq. (\ref{sigma-+}) and eqs. (\ref{Coll(kin)}-\ref{G-def})
below.
The advantage of the formulation in terms of ``correlation'' diagrams, which
no longer refer to amplitudes but directly to physical observables,
like rates, is that now one is no longer restricted to the concept of
asymptotic states. Rather all internal lines, also the ones which originally
referred to the ``in'' or ``out'' states are now treated on equal footing.
Therefore now one can deal with ``states'' which have a broad mass spectrum.
The corresponding Wigner densities $\stackrel{+~-}{\longrightarrow}$ or
$\stackrel{-~+}{\longrightarrow}$ are then no longer on-shell
$\delta$-functions in energy (on-mass shell) but rather acquire a width, as
we shall discuss in more detail.
For slightly inhomogeneous and slowly evolving systems, the degrees of freedom
can be subdivided into rapid and slow ones. Any kinetic approximation is
essentially based on this assumption. Then for any two-point function
$F(x,y)$, one separates the variable $\xi =(t_1-t_2, \vec{r_1}-\vec{r_2})$,
which relates to the rapid and short-ranged microscopic processes, and the
variable $X= \frac{1}{2}(t_1+t_2,\vec{r_1}+\vec{r_2})$, which refers to slow
and long-ranged collective motions. The Wigner transformation, i.e. the
Fourier transformation in four-space difference $\xi=x-y$ to four-momentum $p$
of the contour decomposed components of any two-point contour function
\begin{equation}
\label{W-transf}
F^{ij}(X;p)=\int {\rm d} \xi e^{\ii p\xi}
F^{ij}\left(X+\xi/2,X-\xi/2\right),\quad\mbox{where $i,j\in\{-+\}$ }
\end{equation}
leads to a (co-variant) four phase-space formulation of two-point functions.
The Wigner transformation of Dyson's equation (\ref{varG/phi}) in $\{-+\}$
notation is straight forward. For details and the extensions to include the
coupling to classical field equations we refer to ref. \cite{IKV1}.
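As a toy illustration of the transformation (\ref{W-transf}) in the
time--energy sector (not from the paper: the Gaussian correlator and all
parameters below are invented for the example), one can Wigner-transform a
homogeneous two-point function
$F(t_1,t_2)=e^{-i\omega_0(t_1-t_2)}\,e^{-(t_1-t_2)^2/2\sigma^2}$
numerically; the result is a real Gaussian in $p_0$ peaked at the
oscillation frequency $\omega_0$, showing how the four-momentum dependence
encodes the fast micro-scale dynamics:

```python
import numpy as np

def wigner_transform(F_rel, xi, p0):
    """Evaluate F(X; p0) = int dxi exp(i p0 xi) F(X + xi/2, X - xi/2)
    for a homogeneous two-point function sampled on the relative grid xi."""
    dxi = xi[1] - xi[0]
    phase = np.exp(1j * np.outer(p0, xi))        # shape (len(p0), len(xi))
    return (phase * F_rel[None, :]).sum(axis=1) * dxi

omega0, sigma = 2.0, 1.5                         # toy frequency and memory time
xi = np.linspace(-20.0, 20.0, 2001)              # relative-time grid
F_rel = np.exp(-1j * omega0 * xi) * np.exp(-xi**2 / (2.0 * sigma**2))

p0 = np.linspace(-2.0, 6.0, 401)
F_wig = wigner_transform(F_rel, xi, p0).real     # Hermitian F -> real transform

peak = p0[np.argmax(F_wig)]                      # located at omega0
```

Analytically $F(X;p_0)=\sqrt{2\pi}\,\sigma\,e^{-\sigma^2(p_0-\omega_0)^2/2}$,
so the numerical peak height should reproduce $\sqrt{2\pi}\,\sigma$.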
Standard transport descriptions usually involve two approximation steps: (i)
the gradient expansion for the slow degrees of freedom, as well as (ii) the
quasi-particle approximation for rapid ones. We intend to avoid the latter
approximation and will solely deal with the gradient approximation for slow
collective motions by performing the gradient expansion of the coupled Dyson
equations. This step indeed preserves all the invariances of the $\Phi$
functional in a $\Phi$-derivable approximation.
It is helpful to avoid all the imaginary factors inherent in the standard
Green's function formulation and change to quantities which are real and
positive either in the homogeneous or in appropriate coarse graining
limits. They then have a straight physical interpretation analogously to the
Boltzmann equation. We define
\begin{eqnarray}
\label{F}
\left.
\begin{array}{rcl}
\Fd (X,p) &=& A (X,p) \fd (X,p)
= \mp \ii G^{-+} (X,p) ,\\
\Fdt (X,p) &=& A (X,p) [1 \mp \fd (X,p)] = \ii G^{+-} (X,p),
\end{array}\right\}\quad{\rm with}\quad
\label{A}
A (X,p) &\equiv& -2\Im G^{\rm R} (X,p) = \Fdt \pm \Fd\hspace*{0.5cm}
\end{eqnarray}
for the generalized Wigner functions $\F$ and $\Ft$ with the corresponding {\em
four} phase space distribution functions $\fd(X,p)$, the Fermi/Bose factors
$[1 \mp \fd (X,p)]$ and spectral function $A (X,p)$. According to the
retarded relations between Green's functions $G^{ij}$, {\em only two of
these real functions are required for a complete dynamical
description}. Here and below upper signs relate to fermion quantities,
whereas lower signs refer to boson quantities. As shown in ref. \cite{IKV1}
mean fields and condensates, i.e. non-vanishing expectation values of
one-point functions can also be included.
\section{Thermodynamic Equilibrium}
The thermodynamic equilibrium leads to a lot of simplifying relations among
the kinetic quantities. All quantities become space-time independent. The
Kubo-Martin-Schwinger condition determines the distribution functions to be of
Fermi-Dirac or Bose-Einstein type, respectively
\begin{eqnarray}
\label{Feq}
f_{\rm eq}(X,p)=1/\left(\exp\left((p_0-\mu)/T\right)\pm 1\right).
\end{eqnarray}
Here $\mu$ is the chemical potential. The spectral function attains a form
\begin{eqnarray}
\label{Aeq}
A_{\rm eq}(X,p)&=&\frac{\Gamma(p)}{M^2(p)+\Gamma^2(p)/4}\quad{\rm with}\quad
\left\{
\begin{array}{rcl}
\Gamma(p)&=&-2\Im \Sigma^{\rm R}(p),\\
M(p)&=&M_0(p)-{\rm Re}\, \Sigma^{\rm R}(p),\quad M_0(p)=p_0^{\kappa}-p_0^{\kappa}({\vec
p}).
\end{array}\right.
\end{eqnarray}
This form is exact through the four-momentum $p=(p_0,{\vec p})$ dependence of
the retarded self-energy $\Sigma^{\rm R}(p)$. Thereby
$M_0(p)=p_0^{\kappa}-p_0^{\kappa}({\vec p})=0$ is the free dispersion relation
with $\kappa=1$ or 2 for the non-relativistic Schr\"odinger or the
relativistic Klein-Gordon case, respectively. In the non-equilibrium case all
quantities become functions of the space-time coordinates $X$ and, of course,
the distribution functions $f(X,p)$ generally also depend on three momentum
$\vec p$.
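To make the structure of (\ref{Aeq}) concrete, the following numerical
sketch (not from the paper; it assumes the non-relativistic case $\kappa=1$
with a constant, momentum-independent width $\Gamma$) checks that the
resulting Lorentzian spectral function carries unit norm,
$\int {\rm d}p_0\, A/(2\pi)=1$, and has full width $\Gamma$ at half maximum:

```python
import numpy as np

def spectral_function(p0, eps_p, gamma):
    """Non-relativistic (kappa = 1) equilibrium spectral function with a
    constant width: A = Gamma / ((p0 - eps_p)^2 + Gamma^2 / 4)."""
    return gamma / ((p0 - eps_p) ** 2 + gamma**2 / 4)

eps_p, gamma = 0.0, 0.15                    # toy on-shell energy and width
p0 = np.linspace(-200.0, 200.0, 2_000_001)  # wide grid for the 1/p0^2 tails
A = spectral_function(p0, eps_p, gamma)
dp0 = p0[1] - p0[0]

norm = A.sum() * dp0 / (2.0 * np.pi)        # sum rule: 1 up to the tail cutoff
above_half = p0[A > A.max() / 2]
fwhm = above_half[-1] - above_half[0]       # full width at half maximum: gamma
```

Collisional broadening enters through $\Gamma=-2\Im\Sigma^{\rm R}$; a
momentum-dependent self-energy would distort the Lorentzian shape but leave
the sum rule intact.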
\section{The Virial Limit}
Another simplifying case is provided by the low density limit, i.e. the virial
limit. Since the work of Beth and Uhlenbeck (1937) \cite{BethU} it is known that the
corrections to the level density are given by the asymptotic properties of
binary scattering processes, i.e. in a partial wave decomposition by means of
phase-shifts, see also \citerange[Huang-Mekjian]. The reasoning can be
summarized as follows. While for any pair the c.m. motion remains unaltered
the relative motion is affected by the mutual two-body interaction.
Considering a large quantization volume of radius $R$ and a partial wave of
angular momentum $j$, the levels follow the
quantization condition
\begin{eqnarray}\label{psi}
\unitlength1cm
\begin{picture}(8,1.5)
\put(0,-0.6){
\put(0,1.7){$\psi_j(r)\longrightarrow \sin(kr+\delta_j(E))$}
\put(7.93,0.8){\makebox(-0.2,0){$|$}}
\put(8.1,0.4){\makebox(-0.2,0){$R$}}
\put(0,0){\epsfig{file=psi.eps,width=8cm,height=1.5cm}}
}
\end{picture}\quad\quad\Rightarrow\quad\quad
kR+\delta_j(E)=n\pi,
\\ \nonumber
\end{eqnarray}
where $\delta_j(E)$ is the phase-shift at relative energy $E$ and $n$ is an
integer counting the levels. The $kR$ term accounts for the free motion part.
The corresponding corrections to both the level density and the
thermodynamic partition sum are
given by\\
\begin{minipage}[b]{7cm}
\begin{picture}(6,10.5)
\put(0.5,3.4){
\put(0,0){\epsfig{file=delta.eps,width=6cm,height=7cm}}
\put(3,5.7){\makebox(0,0){$\delta(\pi^+\pi^-)$}}
\put(0,6.95){\makebox(0,0){$\pi$}}
\put(0,3.678){\makebox(0,0){$\displaystyle\frac{\pi}{\small 2}$}}
\put(3,-.3){\makebox(0,0){\small$E\mbox{ [MeV]}$}}}
\put(1.4,1.2){
\put(1.5,0){\vector(-1,1){0.8}}\put(0.4,1.1){\line(1,-1){0.8}}
\put(1.5,0){\vector(-1,-1){0.8}}\put(0.4,-1.1){\line(1,1){0.8}}
\put(0,1.2){\makebox(0,0){$\pi^+$}}
\put(0,-1.1){\makebox(0,0){$\pi^-$}}
\put(1.5,0){\Grho}\put(2.25,.7){\makebox(0,0){\Green$\rho$}}
\put(3,0){\line(1,1){0.8}}\put(4.1,1.1){\vector(-1,-1){0.6}}
\put(3,0){\line(1,-1){0.8}}\put(4.1,-1.1){\vector(-1,1){0.6}}
\put(4.5,1.2){\makebox(0,0){\small $\pi^+$}}
\put(4.5,-1.1){\makebox(0,0){\small $\pi^-$}}
\put(1.5,0){\circle*{0.2}}
\put(3,0){\circle*{0.2}}
}
\end{picture}
\begin{center}\small Fig.~2: $\pi^+\pi^-$ $p$-wave phase-shifts\\
and scattering diagram.\\[3mm] $ $
\end{center}
\end{minipage}
\begin{minipage}[b]{11.2cm}
\begin{eqnarray}
\displaystyle
\frac{{\rm d} n}{{\rm d} E}&=&\frac{{\rm d} n^{\rm free}}{{\rm d} E}
+\frac{2j+1}{\pi}\;\frac{{\rm d} \delta_j}{{\rm d} E}\\[3mm]
Z&=&\sum_i e^{-E_i/T}=\int {\rm d} E\, \frac{{\rm d} n}{{\rm d} E}\, e^{-E/T}
\end{eqnarray}
Since $Z$ determines the equation of state, its low-density limit is uniquely
given by the scattering phase-shifts. The energy derivatives of the
phase-shifts are also responsible for the time delays discussed in ref.
\cite{DP} and for the virial corrections to the Boltzmann collision term
recently discussed in ref. \cite{Mor98}. The latter are directly connected to
the $B$-term in our generalized kinetic equation (\ref{keqk}). An increase of
a phase-shift by $\pi$ across a certain energy window adds one state to the
level density and points towards an $s$-channel resonance. An example is the
$\rho$-meson in the $p$-wave $\pi^+\pi^-$ scattering channel, fig.~2. In
cases where the resonance couples to one asymptotic channel only, the
corresponding phase-shifts relate to the vacuum spectral function $A_j(p)$
of that resonance via\footnotemark
\begin{eqnarray}\label{Tinout}
4\left|T_{\rm in,out} \right|^2&=&
\frac{\Gamma_{\rm in}(E)\;\Gamma_{\rm out}(E)}
{\left(E^{\kappa}-E_R^{\kappa}(E)\right)^2
+\Gamma_{\rm tot}^2(E)/4}
\\[0.5cm]\label{Tsingle}
&=&4\;\sin^2\delta_j(E)\;
=A_j(E,{\vec p}=0)\;\Gamma_{\rm tot}(E).\\[-2mm]\nonumber
\end{eqnarray}
\end{minipage}\\
\noindent
Here $T_{\rm in,out}$ is the corresponding $T$-matrix element. While relation
(\ref{Tinout}) also holds in the case where many channels couple to the
same resonance, relation (\ref{Tsingle}) only holds for the single-channel
case, where $\Gamma_{\rm in}=\Gamma_{\rm out}=\Gamma_{\rm tot}$. Relation
(\ref{Tsingle}) illustrates that the vacuum spectral functions of resonances
can be deduced almost model-independently from phase-shift information. In
the case of the $\rho$-meson, additional information is provided by the pion
form factor. Also in the case of two channels coupling to a resonance, the
energy dependence of the phase-shifts of the two scattering channels,
together with the inelasticity coefficient, provides stringent constraints
for the spectral function of the resonance \cite{Weinh-PhD}.
\footnotetext{$E$ is the relative c.m. energy and correspondingly the
momentum in $A$ vanishes; $\kappa=1$ for non-relativistic particles;
$\kappa=2$ for relativistic bosons, where $\Gamma(E)/2E$ equals the
energy-dependent decay width and
\mbox{$E_R^2(E)=m_R^2+{\vec p}^2+\mbox{Re}\,\Se(p)$.}}
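Relation (\ref{Tsingle}) can be checked numerically in the simplest setting. The sketch below assumes a constant-width Breit-Wigner in the non-relativistic ($\kappa=1$) case with illustrative parameter values; it verifies that $4\sin^2\delta_j(E)$ coincides with $A_j(E)\,\Gamma_{\rm tot}$.

```python
import math

def delta_bw(E, E_R, Gamma):
    """Breit-Wigner phase shift, rising from 0 to pi across the resonance."""
    return math.atan2(Gamma / 2.0, E_R - E)

def spectral_vac(E, E_R, Gamma):
    """Vacuum spectral function A_j(E, p=0) for kappa=1 and constant Gamma."""
    return Gamma / ((E - E_R)**2 + Gamma**2 / 4.0)

E_R, Gamma = 0.770, 0.150   # illustrative numbers only
for E in (0.5, 0.77, 1.0):
    lhs = 4.0 * math.sin(delta_bw(E, E_R, Gamma))**2
    rhs = spectral_vac(E, E_R, Gamma) * Gamma
    assert abs(lhs - rhs) < 1e-12
print("4 sin^2(delta_j) = A_j * Gamma_tot verified")
```

The phase-shift derivative ${\rm d}\delta/{\rm d}E=(\Gamma/2)/((E-E_R)^2+\Gamma^2/4)$ is a Lorentzian, so the Beth-Uhlenbeck level-density correction indeed adds one state per resonance, and $\delta(E_R)=\pi/2$.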
\section{The $\rho$-meson in dense matter}
As an example I would like to discuss the properties of the $\rho$-meson and
the consequences for its decay into di-leptons. The exact production rate of
di-leptons is given by the following formula \unitlength8mm
\begin{eqnarray}\label{dndtdm}
\frac{{\rm d} n^{\mbox{e}^+\mbox{e}^-}}{{\rm d} t{\rm d} m}&=&
\begin{picture}(7.5,1.5)\put(0.,0.2){
\put(1.5,0){\vector(-1,1){0.8}}\put(0.2,1.3){\line(1,-1){0.55}}
\put(1.5,0){\vector(-1,-1){0.8}}\put(0.2,-1.3){\line(1,1){0.55}}
\put(.8,1.2){\makebox(0,0){\small e$^+$}}
\put(.8,-1.2){\makebox(0,0){\small e$^-$}}
\put(1.5,0){\Boson}\put(2.25,-.7){\makebox(0,0){$\gamma^*$}}
\put(3,0){\Grho}\put(3.75,-.7){\makebox(0,0){\Green$\rho$}}
\put(3,0.5){\makebox(0,0){$-$}}\put(4.5,0.5){\makebox(0,0){$+$}}
\put(4.5,0){\Boson}\put(5.25,-.7){\makebox(0,0){$\gamma^*$}}
\put(6,0){\line(1,1){0.8}}\put(7.3,1.3){\vector(-1,-1){0.6}}
\put(6,0){\line(1,-1){0.8}}\put(7.3,-1.3){\vector(-1,1){0.6}}
\put(6.8,1.2){\makebox(0,0){\small e$^+$}}
\put(6.8,-1.2){\makebox(0,0){\small e$^-$}}
\put(1.5,0){\circle*{0.2}}
\put(6,0){\circle*{0.2}}
}
\end{picture}
={\Green f_{\rho}(m,{\vec p},{\vec x},t)\;
A_{\rho}(m,{\vec p},{\vec x},t)}\;
\Gamma^{\rho\;\mbox{\small e}^+\mbox{\small e}^-}(m).
\vphantom{\left(\begin{picture}(0,1.5)\end{picture}\right)}
\end{eqnarray}
Here $\Gamma^{\rho\;\mbox{\small e}^+\mbox{\small e}^-}(m)\propto1/m^2$ is the
mass-dependent electromagnetic decay rate of the $\rho$-meson into the
di-electron channel. The phase-space distribution $f_{\rho}(m,{\vec p},{\vec
x},t)$ and the spectral function $A_{\rho}(m,{\vec p},{\vec x},t)$ define
the properties of the $\rho$-meson at space-time point $({\vec x},t)$. Both
quantities are in principle to be determined dynamically by an appropriate
transport model. To date, however, the spectral functions are not treated
dynamically in most of the present transport models. Rather, one employs
on-shell $\delta$-functions for all stable particles and spectral functions
fixed to the vacuum shape for resonances.
As an illustration, a model case is discussed where the $\rho$-meson
strongly couples to just two channels: naturally the $\pi^+\pi^-$ channel,
and the $\pi N\leftrightarrow\rho N$ channels relevant at finite nuclear
densities. The latter component is representative for all channels
contributing to the so-called {\em direct $\rho$} in transport codes. For a
first orientation the equilibrium properties are discussed. Admittedly, far
more sophisticated and in parts unitarily consistent equilibrium calculations
have already been presented in the literature, e.g.
\citerange[Mosel-FLW]. The point here is not to compete with them. Rather,
we try to give a detailed analysis in simple terms, with the aim of
discussing the consequences for the implementation of such resonance
processes into dynamical transport simulation codes.
Both considered processes add to the total width of the $\rho$-meson
\begin{eqnarray}\label{Gammatot}
\Gamma_{\rm tot}(m,{\vec p})&=&\Gamma_{\rho\rightarrow{\pi}^+{\pi}^-}(m,{\vec
p})+
\Gamma_{\rho\rightarrow{\pi} NN^{-1}}(m,{\vec p}),
\end{eqnarray}
and the equilibrium spectral function then results from the cuts of the
two diagrams
\unitlength6mm
\begin{eqnarray}\label{A2}
{\Green A_{\rho}(m,{\vec p})}\;&=&\normalsize
\begin{picture}(5.5,1.3)\thicklines\put(0.25,0.2){
\put(0,0){\Grho}
\put(3.5,0){\Grho}
\put(2.5,0){}\def\blue{\oval(2,1.5)}
\put(2.6,0.75){}\def\blue{\vector(-1,0){0.3}}
\put(2.6,-0.75){}\def\blue{\vector(-1,0){0.3}}
\put(2.5,1.25){\makebox(0,0){}\def\blue{$\pi^+$}}
\put(2.5,-1.25){\makebox(0,0){}\def\blue{$\pi^-$}}
\put(1.95,-1.1){\thinlines\line(1,2){0.3}}
\put(2.35,-0.3){\thinlines\line(1,2){0.3}}
\put(2.75,.5){\thinlines\line(1,2){0.3}}
}
\end{picture} +
\begin{picture}(5.4,1)\thicklines\put(0.25,0.2){
\put(0,0){\Grho}
\put(3.5,0){\Grho}
\put(2.5,0){\blue\oval(2,1.5)[b]}
\put(2.5,0){}\def\Green{}\def\Black{\oval(2,1.5)[t]}
\put(2.4,0.75){}\def\Green{}\def\Black{\vector(1,0){0.3}}
\put(2.6,-0.75){}\def\blue{\vector(-1,0){0.3}}
\put(2.5,1.25){\makebox(0,0){}\def\Green{}\def\Black{$ N^{-1}$}}
\put(2.45,0.35){\makebox(0,0){}\def\blue{$\pi$}}
\put(2.5,-1.25){\makebox(0,0){}\def\blue{ N}}
\put(3.5,0){}\def\blue{\vector(-1,0){1.3}}
\put(1.5,0){}\def\blue{\line(1,0){1}}
\put(0.05,0){
\put(1.95,-1.1){\thinlines\line(1,2){0.3}}
\put(2.35,-0.3){\thinlines\line(1,2){0.3}}
\put(2.75,.5){\thinlines\line(1,2){0.3}}}
}
\end{picture}
=\frac{\Gamma_{\rho\;\pi^+\pi^-} + \Gamma_{\rho\;\pi N N^{-1}}}
{\left(m^2-m_\rho^2-\mbox{Re}\,\Sigma\right)^2
+\Gamma_{\rm tot}^2/4}\; .
\end{eqnarray}
In principle both diagrams have to be calculated with fully self-consistent
propagators, i.e. with the corresponding widths for all particles involved.
This formidable task has not yet been accomplished. Using micro-reversibility
and the properties of thermal distributions, the two terms in (\ref{A2})
contributing to the di-lepton yield (\ref{dndtdm}) can indeed approximately
be reformulated as the thermal average of a
$\pi^+\pi^-\rightarrow\rho\rightarrow{\rm e}^+{\rm e}^-$ annihilation process
and a $\pi N\rightarrow\rho N\rightarrow{\rm e}^+{\rm e}^-N$ scattering
process, i.e.
\begin{eqnarray}\label{x-sect}
\frac{{\rm d} n^{{\rm
e}^+{\rm e}^-}}{{\rm d} m{\rm d} t}\propto
\left<f_{\pi^+}f_{\pi^-}\; v_{\pi\pi}\;
\sigma(\pi^+\pi^-\rightarrow\rho\rightarrow{\rm e}^+{\rm
e}^-)\vphantom{A^A}+
f_{\pi}f_N\; v_{\pi N}\;\sigma(\pi N\rightarrow\rho N\rightarrow{\rm
e}^+{\rm e}^-N)\vphantom{A^A}\right>_T
\end{eqnarray}
However, the important fact to be noticed is that, in order to preserve
unitarity, the corresponding cross sections are no longer the free ones, as
given by the vacuum decay width in the denominator, but rather involve the
{\em medium-dependent total width} (\ref{Gammatot}). This illustrates in
simple terms that rates of broad resonances can no longer simply be added in
a perturbative way. Since it concerns a coupled-channel problem, there is a
cross talk between the different channels, to the extent that the common
resonance propagator attains the total width arising from all partial widths
feeding and depopulating the resonance. While a perturbative treatment with
free cross sections in (\ref{x-sect}) would enhance the yield at resonance,
$m=m_{\rho}$, if a channel is added, cf. fig.~3 left part, the correct
treatment (\ref{A2}) even inverts the trend and indeed depletes the yield at
resonance, right part in fig.~3. Furthermore one sees that only the total
yield involves the spectral function, while any partial cross section only
refers to the partial term with the corresponding partial width in the
numerator! Unfortunately, so far these facts have been ignored or overlooked
in most of the present transport treatments of broad resonances.
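The depletion at resonance can be made quantitative with a one-line model. The sketch below evaluates a partial contribution of the form (\ref{A2}) at $m=m_\rho$, once with the unitarity-violating free-width recipe and once with the full total width; the widths are model numbers of the order used in fig.~3.

```python
def partial_rate(m, gamma_partial, gamma_tot, m_rho=0.770):
    """Partial di-lepton contribution: the partial width sits in the
    numerator, while the denominator carries the full total width."""
    return gamma_partial / ((m**2 - m_rho**2)**2 + gamma_tot**2 / 4.0)

g_pipi, g_direct = 0.150, 0.070   # model widths (GeV), illustrative only
free    = partial_rate(0.770, g_pipi, g_pipi)             # free-cross-section recipe
correct = partial_rate(0.770, g_pipi, g_pipi + g_direct)  # unitarity-preserving
print(round(free / correct, 3))   # → 2.151, i.e. (Gamma_tot/Gamma_free)^2
```

Adding a channel thus suppresses the yield at $m=m_\rho$ by $(\Gamma_{\rm tot}/\Gamma_{\rm free})^2$ rather than enhancing it, which is the inversion of trend visible between the left and right panels of fig.~3.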
\noindent
\unitlength1cm
\begin{picture}(19,6.3)
\put(0,0){\epsfig{file=rho-false.eps,width=6cm,height=5.7cm}}
\put(6.,0){\epsfig{file=rho-correct.eps,width=6cm,height=5.7cm}}
\put(11.9,0.08){\epsfig{file=first-c2.eps,width=6.5cm,height=5.2cm}}
\put(6.,6.){\makebox(0,0){\bf Di-lepton rates from thermal $\rho$-mesons
($T=110$ MeV)}}
\put(15.,6.){\makebox(0,0){\bf Quasi-free $\pi N$ collisions}}
\put(15.,5.6){\makebox(0,0){\bf and spectral function}}
\put(1.,0.1){\makebox(0,0){\small$m_{\pi}$}}
\put(1.8,0.1){\makebox(0,0){\small$2m_{\pi}$}}
\put(4.3,0.1){\makebox(0,0){\small$m_{\rho}$}}
\put(6,0){
\put(1.,0.1){\makebox(0,0){\small$m_{\pi}$}}
\put(1.8,0.1){\makebox(0,0){\small$2m_{\pi}$}}
\put(4.3,0.1){\makebox(0,0){\small$m_{\rho}$}}}
\put(3,5.3){\makebox(0,0){$\Gamma_{\rm tot}=\Gamma_{\rm free}$}}
\put(9,5.3){\makebox(0,0){full $\Gamma_{\rm tot}$}}
\put(12,0){
\put(1.6,0.1){\makebox(0,0){\small$2m_{\pi}$}}
\put(3.8,0.1){\makebox(0,0){\small$m_{\rho}$}}}
\end{picture}
\parbox[t]{11.7cm}{\small Fig.~3: $\mbox{e}^+\mbox{e}^-$ rates (arb. units)
as a function of the invariant pair mass $m$ at $T=110$ MeV from
$\pi^+\pi^-$ annihilation (dotted line) and direct $\rho$-meson contribution
(dashed line), the full line gives the sum of both contributions. Left part:
using the free cross section recipe, i.e. with $\Gamma_{\rm
tot}=\Gamma_{\rho\;\pi^+\pi^-}$; right part for the correct partial rates
(\ref{A2}). The calculations are done with
$\Gamma_{\rho\leftrightarrow\pi\pi}(m_{\rho})/2m_{\rho}=150$ MeV and
$\Gamma_{\rho\leftrightarrow\pi N N^{-1}}(m_{\rho})/2m_{\rho}=70$
MeV.\\}\hfill \parbox[t]{6.2cm}{\small Fig.~4: Fermi motion averaged $\pi
N\rightarrow\rho N\rightarrow {\rm e}^+{\rm e}^- N$ cross sections at pion
beam momenta of 1 and 1.3 GeV/c (dashed and full curve) as a function of
invariant pair mass $m$. The dotted line
gives the spectral function used here and in fig.~3.\\}
Compared to the spectral function (dotted line in fig.~4) both thermal
components in fig.~3 show a significant enhancement on the low mass side and a
strong depletion at high masses due to the thermal weight
$f\propto\exp(-p_0/T)$ in the rate (\ref{dndtdm}). A similar effect is seen
in genuine non-equilibrium processes like the di-lepton yield resulting from
Fermi-motion averaged $\pi N\rightarrow \rho N$ scattering, fig.~4. The
latter is representative for the first-chance collision in a $\pi A$ reaction
and shows a behavior significantly different from that obtained in
refs.~\cite{CassingpiA}. For orientation, the sub-threshold conditions for the
two beam momenta are indicated by the vertical lines in fig.~4.
Much of the physics can already be discussed by observing that the partial
widths are essentially given by the type of coupling ($s$ or $p$-wave,
$l=0,1$) and the phase space available for the decay channel. For point-like
couplings and two-body phase space or approximately in the case of one light
particle among only heavy ones in the decay channel (e.g. for $\pi N N^{-1}$)
one finds
\begin{eqnarray}\label{Gamma(m)}
\Gamma_{c}(m)\propto m p_{\rm cm}\left(\frac{p_{\rm cm}}{m}\right)^l
\quad\mbox{with}\quad p_{\rm cm}\propto\sqrt{
m^2-s_{\rm thr}} ;\;
\left\{
\begin{array}{lll}
s_{\rm thr}=4m_{\pi}^2,&\quad l=1\;
&\mbox{for } c=\left\{\rho\leftrightarrow\pi\pi\right\}\\
s_{\rm thr}=m_{\pi}^2,&\quad l=1
&\mbox{for } c=\left\{\rho\leftrightarrow\pi N N^{-1}\right\}.
\end{array}\right.\hspace*{4mm}
\end{eqnarray}
In the $\pi\pi$ case the corresponding strength is approximately given by the
vacuum decay, while it depends on the nuclear density in the $\rho
N\leftrightarrow\pi N$ case. The simple phase-space behavior (\ref{Gamma(m)})
suggests that far away from all thresholds ($m^2\gg s_{\rm thr}$) the ratio
$\left<\pi\pi\mbox{-annihilation}\right>/\left<\mbox{direct }\rho\right>$ of
the two components should be a fairly {\em smooth} and {\em almost
constant} function of $m$, e.g. for $m>500$ MeV. This kinematical constraint
is nicely confirmed by some calculations, e.g. that of ref. \cite{Ko};
however, no such behavior is seen in the {\em direct} $\rho$-mesons so far
computed in
refs. \cite{CassingpiA,Cassing}\footnote{In refs. \cite{CassingpiA,Cassing}
the direct $\rho$ component appears almost like the spectral function itself,
i.e. untouched from any phase-space constraints which come in through the
distributions $f_{\rho}(X,p)$. The latter favour low masses and deplete the
high-mass components! In fact, rather than being almost constant, the ratios
$\left<\pi\pi\right>/\left<\mbox{direct }\rho\right>$ exhibit an exponential
behavior $\exp(-m/T^*)$ for $m>500$ MeV with $T^*$ between 70 and 110 MeV,
depending on beam energy, pointing towards a major deficiency in the account
of phase-space constraints for the {\em direct} $\rho$-meson component in
these calculations.}.
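The phase-space argument behind (\ref{Gamma(m)}) is easy to check numerically. The sketch below uses the stated parametrization with the two thresholds of (\ref{Gamma(m)}) and an illustrative pion mass; the overall coupling strengths are set to one, since only the $m$-dependence of the ratio matters.

```python
import math

def width(m, s_thr, l, scale=1.0):
    """Gamma_c(m) ∝ m p_cm (p_cm/m)^l with p_cm ∝ sqrt(m^2 - s_thr)."""
    if m * m <= s_thr:
        return 0.0
    p_cm = math.sqrt(m * m - s_thr)
    return scale * m * p_cm * (p_cm / m)**l

m_pi = 0.140
for m in (0.5, 0.7, 0.9, 1.1):                 # GeV, all above both thresholds
    ratio = width(m, 4 * m_pi**2, l=1) / width(m, m_pi**2, l=1)
    print(round(m, 1), round(ratio, 3))        # ratio flattens towards 1
```

For $l=1$ the ratio reduces to $(m^2-4m_\pi^2)/(m^2-m_\pi^2)$, which rises smoothly towards 1 for $m^2\gg s_{\rm thr}$, illustrating why the $\left<\pi\pi\right>/\left<\mbox{direct }\rho\right>$ ratio should be nearly constant well above threshold.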
For completeness, and as a stimulus for improvements, the discussed defects
in some of the presently used transport treatments of broad resonances
(vector mesons in particular) are listed below. The last column gives an
estimate of the multiplicative factor needed to correct the defect ($m_R$ is
the resonance mass and $T^*$, between 70 and 120 MeV, is a typical slope
parameter for the corresponding beam energy). Many of the points are well
known, of purely kinematical origin, and can readily be implemented. However,
the associated defects are by no means minor. Rather, they ignore essential
features of the dynamics, with consequences already on the {\em qualitative}
level, and affect the spectra by far more than any of the currently
discussed in-medium effects, e.g. of the $\rho$-meson.
\noindent
\begin{small}
\begin{tabular}{lll}
\multicolumn{2}{l}{\normalsize\bf List of defects in some of the transport
codes} &{\normalsize\bf restoring factor}\\[2mm]
[\makebox[2.5mm]{a}]&\parbox[t]{13.cm}{The differential mass information
contained in the distribution functions $f(X,p)=f(X,m,{\vec p})$ of
resonances is ignored and only the integrated total number is evaluated as a
function of space-time (direct $\rho$ in
refs. \cite{CassingpiA,Cassing}).\\[-2mm]}
&\parbox[t]{3.4cm}{$\exp(-(m-m_R)/T^*)$\\(factor 10 or more at $m=500$ MeV
for $\rho$)}\\[3mm] [\makebox[2.5mm]{b}]&\parbox[t]{13.cm}{Except for the
$\pi^+\pi^-\rightarrow {\rm e}^+{\rm e}^-$ case most resonance production
cross sections are parametrized such that they vanish for $\sqrt{s}$
values below the nominal threshold, e.g. below $m_N+m_{\rho}$ in the case
$\pi N\rightarrow \rho N$. This violates detailed balance, since broad
resonances can decay for $m<m_R$.\\[-2mm]} &\parbox[t]{3.2cm}{misses yield
for\\ $m<m_R$}\\[3mm] [\makebox[2.5mm]{c}]&\parbox[t]{13.cm}{In {\em partial}
cross sections leading to a resonance the randomly chosen mass is normally
selected according to the spectral function. This is not correct since the
corresponding {\em partial} width in the numerator of (\ref{A2}) has to be
considered.\\[-2mm]} &\parbox[t]{3.2cm}{changes shape}\\[3mm]
[\makebox[2.5mm]{d}]&\parbox[t]{13.cm}{Different partial cross sections are
simply added without adjusting the total width in the resonance propagator
accordingly. This violates unitarity.\\[-2mm]}
&\parbox[t]{3.2cm}{$\left(\Gamma_{\rm free}/\Gamma_{\rm tot}\right)^2$\\ at
$m=m_R$}\\[3mm] [\makebox[2.5mm]{e}]&\parbox[t]{13.cm}{The Monte Carlo
implementation of selecting the random mass $m$ of the \mbox{resonance}
(item [c]) is sometimes implemented incorrectly, namely ignoring the kinetic
phase-space of genuine multi-particle final-state configurations, e.g. in
$\pi N\rightarrow\rho N$. This also applies to the $\Delta$-resonance, e.g.
for $NN\rightarrow N\Delta$.\\[-2mm]} &\parbox[t]{3.2cm}{proportional to\\
$\left(s(\sqrt{s}-m-m_N)\right)^{1/2}$\\ for a two-body final state}\\[3mm]
[\makebox[2.5mm]{f}]&\parbox[t]{13.cm}{For the electromagnetic decay of
vector mesons some authors use a mass independent decay rate,
e.g. $\Gamma_{\rho\rightarrow {\rm e}^+{\rm e}^-}/m={\rm const.}$, rather
than that resulting from vector dominance and QED with
$\Gamma_{\rho\rightarrow {\rm e}^+{\rm e}^-}\propto 1/m^2$.\\[-2mm]}
&$(m_R/m)^3$
\end{tabular}
\end{small}
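The quoted restoring factors are simple arithmetic; the sketch below evaluates two of them with illustrative values ($m_R=770$ MeV, $m=500$ MeV, $T^*=100$ MeV), confirming the "factor 10 or more" estimate for defect [a] and the $(m_R/m)^3$ factor for defect [f].

```python
import math

m_R, m, T_star = 0.770, 0.500, 0.100   # GeV; T* a typical slope parameter

# defect [a]: ignoring the differential mass information
print(round(math.exp((m_R - m) / T_star), 1))   # → 14.9 ("factor 10 or more")

# defect [f]: mass-independent e.m. rate instead of Gamma ∝ 1/m^2
print(round((m_R / m)**3, 2))                   # → 3.65
```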
\section{$\Phi$-derivable approximations}
The preceding section has shown that one needs a transport scheme adapted to
broad resonances. Besides the conservation laws, it should comply with the
requirements of unitarity and detailed balance. A practical suggestion has
been given in ref. \cite{DB} in terms of cross-section prescriptions. However,
this picture is tied to the concept of asymptotic states and is therefore not
well suited for the general case, in particular if more than one channel feeds
into a broad resonance. We therefore suggest reviving the so-called
$\Phi$-derivable scheme, originally proposed by Baym \cite{Baym} on the basis
of a formulation of the generating functional or partition sum given by
Luttinger and Ward \cite{Luttinger}, and later reformulated in terms of
path integrals \cite{Cornwall}. This functional can be generalized to
the real time case (for details see \cite{IKV1}) with the diagrammatic
representation\footnote{ $n_\Se$ counts the number of self-energy
$\Se$-insertions in the ring diagrams, while for the closed diagram of
$\Phi$ the value $n_\lambda$ counts the number of vertices building up the
functional $\Phi$.} \unitlength=.7cm
\begin{eqnarray}\label{keediag}
\hspace*{-0.8cm}
\ii\Gamma\left\{G\right\} = \ii
\Gamma^0\left\{G^0\right\}
+
\underbrace{\sum_{n_\Se}\vhight{1.6}\frac{1}{n_\Se}\GlnG0Sa}
_{\displaystyle \pm \ln\left(1-\odot G^{0}\odot\Se\right)}
\;\underbrace{-\vhight{1.6}\GGaSa}
_{\displaystyle \pm \odot G\odot\Se\vphantom{\left(\Ga^{0}\right)}}
\;\;+\;\underbrace{\vhight{1.6}\sum_{n_\lambda}\frac{1}{n_\lambda}
\Dclosed{c2}{\thicklines}}
_{\displaystyle\vphantom{\left(\Ga^{0}\right)}
+\ii\Phi\left\{G\right\}}.
\end{eqnarray}
The key quantity is the auxiliary functional $\Phi$, given by
two-particle irreducible vacuum diagrams. It depends solely on fully
re-summed, i.e. self-consistently generated, propagators $G(x,y)$ (thick
lines). The consistency is provided by the fact that $\Phi$ is the generating
functional for the re-summed self-energy $\Se(x,y)$ via functional variation
of $\Phi$ with respect to any propagator $G(y,x)$, i.e.
\begin{eqnarray}\label{varphi}
-\ii \Se =\mp \delta \ii \Phi / \delta \ii G.
\end{eqnarray}
The Dyson equations of motion directly follow from
the stationarity condition of $\Gamma$ (\ref{keediag}) with respect to
variations of $G$ on the contour\footnote{An extension to include classical
fields or condensates into the scheme is presented in ref. \cite{IKV1}.}
\begin{eqnarray}
\label{varG/phi}
\delta \Gamma \left\{G \right\}/ \delta G = 0,
\quad&&\mbox{(Dyson eq.)}
\end{eqnarray}
In graphical terms, the variation (\ref{varphi}) with respect to $G$ is
realized by opening a propagator line in all diagrams of $\Phi$. The
resulting set of thus-opened diagrams must then be that of proper skeleton
diagrams of $\Se$ in terms of {\em full propagators}, i.e. void of any
self-energy insertions. As a consequence, the $\Phi$-diagrams have to be {\em
two-particle irreducible} (label $c2$), i.e. they cannot be decomposed into
two pieces by cutting two propagator lines.
The key point is that truncating the auxiliary functional $\Phi$ to a limited
subset of diagrams leads to a self-consistent, i.e. closed, approximation
scheme. The approximate forms of $\Phi^{\scr{(appr.)}}$ define {\em
effective} theories, where $\Phi^{\scr{(appr.)}}$ serves as a generating
functional for the approximate self-energies $\Sa^{\scr{(appr.)}}(x,y)$
through relation (\ref{varphi}), which then enter as driving terms for the
Dyson equations of the different species in the system. As Baym \cite{Baym}
has shown, such a $\Phi$-derivable approximation respects all conservation
laws related to the global symmetries of the original theory and is at the
same time thermodynamically consistent. The latter automatically implies
correct detailed-balance relations between the various transport processes.
For multicomponent systems it leads to an {\em actio} = {\em reactio}
principle. This implies that the properties of one species cannot be changed
by the interaction with other species without affecting the properties of the
latter, too. The $\Phi$-derivable scheme offers a natural and consistent way
to account for this principle. Some thermodynamic examples have been
considered recently, e.g. for the interacting $\pi N \Delta$ system
\cite{Weinhold} and for a relativistic QED plasma \cite{Baym98}.
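The structure of the scheme can be illustrated in a zero-dimensional toy model (entirely my own construction, not from the original text): propagators become numbers, the functional derivative an ordinary one, and the Dyson equation a fixed-point problem. The "double-bubble-like" form of $\Phi$ and the coupling value below are arbitrary choices.

```python
def phi(G, lam):
    """Toy zero-dimensional 'Phi' functional: a single two-loop-like term."""
    return lam * G**2 / 8.0

def sigma_from_phi(G, lam, h=1e-6):
    """Self-energy as the (here ordinary) derivative Sigma = dPhi/dG,
    evaluated by a central difference, mimicking eq. (varphi)."""
    return (phi(G + h, lam) - phi(G - h, lam)) / (2.0 * h)

def solve_dyson(G0, lam):
    """Iterate the Dyson equation G = 1/(1/G0 - Sigma{G}) to self-consistency."""
    G = G0
    for _ in range(200):
        G = 1.0 / (1.0 / G0 - sigma_from_phi(G, lam))
    return G

G = solve_dyson(G0=1.0, lam=0.5)
print(round(G, 6))   # → 1.171573, satisfying G = 1/(1/G0 - lam*G/4)
```

Because $\Sigma$ is generated from the same $\Phi$ that enters the Dyson equation, the truncation is closed: changing $\Phi$ changes $\Sigma$ and $G$ consistently, which is the toy analogue of the conserving property.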
\section{Generalized Kinetic Equation}\label{sect-Kin-EqT}
In terms of the kinetic notation (\ref{F}) and in the first
gradient approximation the {\em generalized kinetic} equation for $F$
takes the form
\begin{equation}
\label{keqk}
\Do \F (X,p) =
B_{\rm in}(X,p)
+ C (X,p)
\end{equation}
with the drift term determined from
the ``mass'' function (cf. (\ref{Aeq}))
\begin{eqnarray}\label{meqx}\label{M}
M(X,p)=M_0(p) -\mbox{Re}\,\Se^R (X,p)
\end{eqnarray}
through the Poisson bracket ${\Do F\equiv\Pbr{M,F}}$.
The explicit form of the differential drift operator reads
\begin{eqnarray}\label{Drift-O}
\Do =
\left(
\vu_{\mu} -
\frac{\partial \mbox{Re}\,\Sa^R}{\partial p^{\mu}}
\right)
\partial^{\mu}_X +
\frac{\partial \mbox{Re}\,\Sa^R}{\partial X^{\mu}}
\frac{\partial }{\partial p_{\mu}}
, \quad\quad\mbox{with}\quad \vu^\mu=\frac{\partial M_0(p)}{\partial p_{\mu}}
=\left\{
\begin{array}{ll}
(1,{{\vec p}/m})\quad&\mbox{non-rel.}\\
2p^{\mu}&\mbox{rel. bosons.}
\end{array}\right.
\end{eqnarray}
The two other terms in (\ref{keqk}), $B_{\rm in}(X,p)$ and $C(X,p)$, are a
fluctuation term and the collision term, respectively
\begin{eqnarray}
\label{Coll(kin)}
B_{\rm in}=\Pbr{\Gamma_{\scr{in}} , \mbox{Re}\,G^R}, \quad\quad
C (X,p) =
\Gamma_{\scr{in}} (X,p) \Ft (X,p)
- \Gamma_{\scr{out}} (X,p) \F (X,p).
\end{eqnarray}
Here the reduced gain and loss rates and total width of the collision
integral are
\begin{eqnarray}
\label{gain}
\Gamma_{\scr{in}} (X,p) &=& \mp \ii \Se^{-+} (X,p),\quad\quad
\Gamma_{\scr{out}} (X,p) = \ii \Se^{+-} (X,p),\\
\label{G-def}
\Gamma (X,p)&\equiv& -2\Im \Se^R (X,p) = \Gamma_{\scr{out}} (X,p)\pm\Gamma_{\scr{in}} (X,p).
\end{eqnarray}
The combination opposite to (\ref{G-def}) determines the fluctuations
\begin{eqnarray}
\label{Fluc-def}
I (X,p) = \Gamma_{\scr{in}} (X,p)\mp\Gamma_{\scr{out}} (X,p).
\end{eqnarray}
We still need one more equation, which is provided by the retarded
Dyson equation. In first-order gradient approximation the latter is
completely solved algebraically \cite{BM}
\begin{eqnarray}
\label{Asol}\label{Xsol}
&&G^R=\frac{1}{M(X,p)+\ii\Gamma(X,p)/2}
\quad\Rightarrow\quad
A (X,p) =
\frac{\Gamma (X,p)}{M^2 (X,p) + \Gamma^2 (X,p) /4}.
\end{eqnarray}
Canonical equal-time (anti-)commutation relations
for (fermionic) bosonic field operators provide the standard sum rule
for the spectral function.
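The algebraic relation (\ref{Asol}) between the retarded propagator and the spectral function is easily verified numerically; the sketch below checks $A=-2\,\mbox{Im}\,G^R$ for a few illustrative $(M,\Gamma)$ pairs.

```python
def g_retarded(M, Gamma):
    """G^R = 1 / (M + i Gamma/2), as in the gradient-approximation solution."""
    return 1.0 / complex(M, Gamma / 2.0)

def spectral(M, Gamma):
    """A = Gamma / (M^2 + Gamma^2/4)."""
    return Gamma / (M * M + Gamma * Gamma / 4.0)

for M, Gamma in ((0.3, 0.15), (0.0, 0.2), (-0.5, 0.1)):
    assert abs(-2.0 * g_retarded(M, Gamma).imag - spectral(M, Gamma)) < 1e-12
print("A = -2 Im G^R verified")
```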
We now provide a physical interpretation of the various terms in the
generalized kinetic equation (\ref{keqk}). The drift term $\Do \Fd$ on the
l.h.s. of eq. (\ref{keqk}) is the usual kinetic drift term, including the
corrections from the self-consistent field $\mbox{Re}\,\Se^R$ in the
convective transfer of real and also virtual particles. For the
collision-less case $C=B=0$, i.e. \mbox{$\Do \Fd=0$} (Vlasov equation), the
quasi-linear first-order differential operator $\Do$ defines characteristic
curves. They are the standard classical paths in the Vlasov case, and the
four-phase-space probability $\Fd(X,p)$ is conserved along these paths. The
formulation in terms of a Poisson bracket in four dimensions implies a
generalized Liouville theorem. In the collisional case, both the collision
term $C$ and the fluctuation term $B$ change the phase-space probabilities of
the ``generalized'' particles during their propagation along the
``generalized'' classical paths given by $\Do$. We use the term
``generalized'' in order to emphasize that, due to the collision term, i.e.
due to decay, creation or scattering processes, particles are no longer bound
to their mass-shell $M=0$ during propagation.
The r.h.s. of eq. (\ref{keqk}) specifies the collision term $C$ in terms of
gain and loss terms, which can also account for multi-particle processes.
Since $\Fd$ includes a factor $A$, the $C$ term further deviates from the
standard Boltzmann-type form inasmuch as it is multiplied by the spectral
function $A$, which accounts for the finite width of the particles.
The additional Poisson-bracket term
\begin{eqnarray}
\label{backflow}
B_{\rm in}&=&\Pbr{\Gamma_{\scr{in}},\mbox{Re}\,G^R}=\frac{M^2-\Gamma^2/4}{(M^2+\Gamma^2/4)^2}\;
\Do\;\Gamma_{\scr{in}}
+\frac{M\Gamma}{(M^2+\Gamma^2/4)^2}\Pbr{\Gamma_{\scr{in}},\Gamma_{\scr{out}}}
\end{eqnarray}
is special. It contains genuine contributions from the finite mass-width of
the particles and describes the response of the surrounding matter to
fluctuations. This can be seen from the conservation laws discussed below. In
particular, the first term in (\ref{backflow}) gives rise to a back-flow
component of the surrounding matter. It ensures that the Noether currents,
rather than the intuitively expected sum of the convective currents arising
from the $\Do\F$ terms in (\ref{keqk}), are conserved. The second term
of (\ref{backflow}) gives no contribution in the quasi-particle limit of
small damping width and represents a specific off-mass-shell response due to
fluctuations, cf. \cite{LipS,IKV2}. In the low-density and quasi-particle
limit the $B_{\rm in}$ term provides the virial corrections to the Boltzmann
collision term \cite{Mor98}.
\section{Conservation of Current and Energy--Momentum}
\label{Conservation-L}
The global symmetries of $\Phi$ provide conservation laws such as the
conservation of charge and energy--momentum. The corresponding Noether
charge current and Noether energy--momentum tensor take the following
forms, cf. \cite{IKV1},
\begin{eqnarray}
\label{c-new-currentk}\nonumber
j^{\mu} (X)
&=&\frac{e}{2}\mbox{Tr} \int \dpi{p}
\vu^{\mu}
\left(\Fd (X,p) \mp \Fdt (X,p) \right),\hspace*{-1cm} \\
\label{E-M-new-tensork}
\Theta^{\mu\nu}(X)
&=&\frac{1}{2}\mbox{Tr} \int \dpi{p}
\vu^{\mu} p^{\nu}
\left(\Fd (X,p) \mp \Fdt (X,p) \right)
+ g^{\mu\nu}\left(
{\cal E}^{\scr{int}}(X)-{\cal E}^{\scr{pot}}(X)
\right).
\end{eqnarray}
Here
\begin{eqnarray}
\label{eps-int}
{\cal E}^{\scr{int}}(X)=\left<-\Lgh^{\mbox{\scriptsize int}}(X)\right>
=\left.\frac{\delta\Phi}{\delta\lambda(x)}\right|_{\lambda=1},\quad
\label{eps-potk}
{\cal E}^{\scr{pot}}
= \frac{1}{2}\mbox{Tr}
\int\dpi{p} \left[
\mbox{Re}\,\Sa^R \left(\Fd\mp\Fdt\right)
+ \mbox{Re}\,\Ga^R\left(\Gb\mp\Gbt\right)\right]\nonumber
\end{eqnarray}
are the densities of the interaction energy and the potential energy,
respectively. The first term of ${\cal E}^{\scr{pot}}$ complies with
quasi-particle expectations, namely mean potential times density; the second
term displays the role of fluctuations $I=\Gb\mp\Gbt$ in the potential-energy
density. This fluctuation term arises precisely from the $B$-term in the
kinetic eq. (\ref{keqk}), discussed around eq. (\ref{backflow}). It ensures
that the Noether expressions (\ref{E-M-new-tensork}) are indeed the exactly
conserved quantities. In this compensation we see the essential role of the
fluctuation term in the generalized kinetic equation. Dropping or
approximating this term would spoil the conservation laws. Indeed, both
expressions in (\ref{E-M-new-tensork}) comply exactly with the generalized
kinetic equation (\ref{keqk}), i.e. they are exact integrals of the
generalized kinetic equations of motion within the $\Phi$-derivable scheme.
Memory effects and the formulation of a kinetic entropy can likewise be
addressed \cite{IKV2}.\\
\noindent
{\bf Acknowledgement:} Much of the material presented is due to a very
stimulating collaboration with Y. Ivanov and D. Voskresensky. The author
further acknowledges encouraging discussions with P. Danielewicz, B. Friman,
H. van Hees, E. Kolomeitsev, M. Lutz, K. Redlich and W. Weinhold on various
aspects of broad resonances and J. Aichelin, S. Bass, E. Bratkowskaya,
W. Cassing, C.M. Ko and U. Mosel on some particular features of transport
codes.
\section{Introduction}
\paragraph*{}
Topological space-time defects usually appear in several distinct geometrical forms such as shells, lines and planes \cite{1,2,3}. These can be investigated either in the context of Riemannian distributions in General Relativity or in the context of EC - gravity.
In the latter case torsion loops in Weitzenb\"{o}ck teleparallel spaces or torsion line defects as spinning cosmic strings have been considered recently by P.S.Letelier \cite{4,5}.
In this short note I shall be considering another type of torsion defect, namely a planar thin wall distribution of orthogonal lines of polarized static spinning particles in linearized EC - gravity.
Linearity is considered here to avoid problems with the square of Dirac $ \delta $-functions.
Following Nitsch \cite{6} spinning matter demands that one consider the more general Riemann-Cartan $ U_{4} $ space-time and therefore Weitzenb\"{o}ck teleparallel $ T_{4} $ solutions are not allowed.
\section{Non-Riemannian planar defects and Spin}
\paragraph*{}
Let us now consider the planar space-time given by
\begin{equation}
ds^{2}=({\omega}^{0})^{2}-({\omega}^{1})^{2}-({\omega}^{2})^{2}-({\omega}^{3})^{2}
\label{1}
\end{equation}
where the basis 1-forms $ {\omega}^{r} $ (r=0,1,2,3) are given by
\begin{eqnarray}
{\omega}^{0} & = & e ^{\frac{F}{2}}dt \nonumber \\
{\omega}^{1} & = & e ^{\frac{H}{2}} dx \nonumber \\
{\omega}^{2} & = & e ^{\frac{H}{2}}dy\\
{\omega}^{3} & = & e ^{\frac{G}{2}} dz\nonumber
\label{2}
\end{eqnarray}
where F,H and G are only functions of z.
Torsion 1-forms are chosen such as
\begin{equation}
\begin{array}{llll}
T^{0} & = & J^{0} {\omega}^{0} \wedge {\omega}^{3} \nonumber \\
\\
T^{1} & = & J^{1} {\omega}^{3} \wedge {\omega}^{1} + J^{2} {\omega}^{0} \wedge {\omega}^{1} \nonumber \\
\\
T^{2} & = & J^{1} {\omega}^{3} \wedge {\omega}^{2} + J^{2} {\omega}^{0} \wedge {\omega}^{2} \nonumber \\
\\
T^{3} & = & J^{3} {\omega}^{0} \wedge {\omega}^{3} \nonumber \\
\end{array}
\label{3}
\end{equation}
Substitution of (\ref{3}) into Cartan first structure equation
\begin{equation}
T^{a}=d{\omega}^{a}+{{\omega}^{a}}_{b} \wedge {\omega}^{b}
\label{4}
\end{equation}
yields the following connection 1-forms
\begin{equation}
\begin{array}{lllll}
{{\omega}^{0}}_{3} & = & \lbrack J^{0} + e^{\frac{-G}{2}} \frac{F'}{2} \rbrack {\omega}^{0} \nonumber \\
\\
{{\omega}^{1}}_{0} & = & \lbrack J^{2} + e^{\frac{-F}{2}} \frac{\dot{H}}{2} \rbrack {\omega}^{0} \nonumber \\
\\
{{\omega}^{1}}_{3} & = & \lbrack J^{1} + e^{\frac{-G}{2}} \frac{H'}{2} \rbrack {\omega}^{1} \nonumber \\
\\
{{\omega}^{2}}_{3} & = & - \lbrack J^{1} + e^{\frac{-G}{2}} \frac{H'}{2} \rbrack {\omega}^{2} \nonumber \\
\end{array}
\label{5}
\end{equation}
and $ J^{3} = \frac{\dot{G}}{2} e^{-\frac{F}{2}} $, where dots denote time derivatives and dashes denote z-coordinate derivatives. To simplify matters we shall consider that only the $ J^{0} $ torsion component and the H(z) component of the metric are non-vanishing. This choice of metric is similar to Letelier's choice \cite{7} for multiple cosmic strings in Riemannian space.
This hypothesis reduces the connection 1-forms (\ref{5}) to
\begin{equation}
\begin{array}{lll}
{{\omega}^{0}}_{3} = J^{0}{\omega}^{0}\nonumber \\
\\
{{\omega}^{1}}_{3} = c \frac{H'}{2}{\omega}^{1} \nonumber \\
\\
{{\omega}^{2}}_{3} = -c \frac{H'}{2}{\omega}^{2} \nonumber \\
\end{array}
\label{6}
\end{equation}
since $ d{\omega}^{0} = 0 $ and c is a constant.
\section{Field equations}
\paragraph*{}
In the language of exterior differential forms the EC - field equations \cite{8,9} are
\begin{equation}
R^{ik} \wedge {\omega}^{l} {\epsilon}_{ikml} = -16 {\pi}G {\Sigma}_{m}
\label{7}
\end{equation}
\begin{equation}
T^{k} \wedge {\omega}^{l} {\epsilon}_{ijkl} = -8 {\pi}G S_{ij}
\label{8}
\end{equation}
where $ R^{ik} \equiv \frac{1}{2} {R^{ik}}_{rs} {\omega}^{r} \wedge {\omega}^{s} $ are the Riemann-Cartan curvature 2-forms, $ {{\Sigma}}_{m} = \frac{1}{6} {{{\Sigma}}_{m}}^{k} {\epsilon}_{krsf} {\omega}^{r} \wedge {\omega}^{s} \wedge {\omega}^{f} $ is the energy-momentum 3-form current, $ {\epsilon}_{ijrs} $ is the totally skew-symmetric Levi-Civita symbol and $ S_{ij} $ is the 3-form spin density. To solve eqns. (\ref{7}) and (\ref{8}) it remains to compute the second Cartan structure eqn.
\begin{equation}
{{R}^{i}}_{k} = d{{\omega}^{i}}_{k} + {{\omega}^{i}}_{l} \wedge {{\omega}^{l}}_{k}
\label{9}
\end{equation}
and to compute the KOP matter-spin current
\begin{equation}
{\Sigma}_{i} = {\epsilon} u_{i}u + p({\eta}_{i} + u_{i}u) - 2u^{k} \dot{S}_{ik}u
\label{10}
\end{equation}
(for notation see Ref.[12]) which in the case of a thin cosmic wall can be written as
\begin{equation}
{{\Sigma}}_{i} = {{\Sigma}^{w}}_{i} - 2 u^{k} \dot{S}_{ik}u
\label{11}
\end{equation}
where $ {{\Sigma}^{w}}_{i} $ corresponds to the planar thin wall stress-energy tensor $ {{{\Sigma}^{w}}_{i}}^{k} $ given by
\begin{equation}
{{\Sigma}^{w}_{i}}^{k} = {\sigma} {\delta}(z) diag(1,1,1,0)
\label{12}
\end{equation}
where $ \delta $(z) is the Dirac $ {\delta} $-function, the plane is orthogonal to the z-direction and $ \sigma $ is the constant surface energy-density.
Since we deal here only with static polarized spin, $ \dot{S}_{ik} $ vanishes and (\ref{11}) reduces to the thin planar wall current.
Substitution of (\ref{6}) into (\ref{9}) yields the components
\begin{equation}
\begin{array}{llll}
{{R}^{0}}_{101}({\Gamma}) & = & c J^{0} \frac{H'}{2} = {{R}^{0}}_{202}({\Gamma}) \nonumber \\
\\
{{R}^{0}}_{330}({\Gamma}) & = & J^{0'} \nonumber \\
\\
{{R}^{1}}_{212}({\Gamma}) & = & {c}^{2} \frac{H'^{2}}{4} \nonumber \\
\\
{{R}^{2}}_{332}({\Gamma}) & = & - {{R}^{1}}_{331}({\Gamma}) = - \frac{1}{2} (H'' + \frac{1}{2}H'^{2}) \nonumber \\
\end{array}
\label{13}
\end{equation}
where $ \Gamma $ is the Riemann-Cartan connection.
Notice that the component $ {{R}^{0}}_{330} $ has a pure torsional contribution.
Since we are dealing only with linearized EC theory, terms such as $ H^{'2} $ and $ J^{0}H' $ should be dropped. Substitution of (\ref{11}) and (\ref{13}) into (\ref{7}) yields the following field equations
\begin{equation}
\begin{array}{ll}
H''(z) = 8 {\pi} G {\sigma} {\delta}(z) \nonumber \\
\\
J^{0'} = \frac{8}{3} {\pi} G {\sigma} {\delta}(z) \nonumber \\
\end{array}
\label{14}
\end{equation}
A simple solution of (\ref{14}) reads
\begin{equation}
H'(z) = 8 {\pi} G {\sigma} {\theta}_{0}(z)
\label{15}
\end{equation}
and
\begin{equation}
J^{0} = \frac{8{\pi}G}{3} {\sigma} {\theta}_{0}(z)
\label{16}
\end{equation}
Here $ {\theta}_{0}(z) $ is the Heaviside step function given by
\begin{equation}
{\theta}_{0}(z)=
\left \{
\begin{array}{ll}
1, & z < 0 \\
\frac{1}{2}, & z = 0 \\
0, & z > 0
\end{array}
\right.
\end{equation}
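As a purely illustrative numerical aside (not part of the derivation; the value of $G{\sigma}$ below is an arbitrary placeholder), the step profiles (\ref{15}) and (\ref{16}) can be tabulated directly with the above convention for $ {\theta}_{0} $:

```python
import numpy as np

def theta0(z):
    """Step function with the convention of the text:
    1 for z < 0, 1/2 at z = 0, 0 for z > 0."""
    z = np.asarray(z, dtype=float)
    return np.where(z < 0, 1.0, np.where(z > 0, 0.0, 0.5))

def J0(z, G_sigma=1.0):
    """Torsion profile of eq. (16); G_sigma stands for G*sigma
    and is an arbitrary illustrative value."""
    return (8.0 * np.pi / 3.0) * G_sigma * theta0(z)
```

The profile is constant on one side of the wall and vanishes on the other.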
The second equation in (\ref{14}) tells us that Dirac $ {\delta} $-function torsion is not compatible with the thin cosmic wall as far as our model is concerned.
Thus eqn. (\ref{16}) yields a torsion step function. This is not the first time that torsion step functions appear in the context of EC-gravity. Previously H.Rumpf \cite{13} has made use of torsion steps as a mechanism to create Dirac particles on torsion and electromagnetic backgrounds.
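For orientation, integrating (\ref{15}) once more (choosing $H(0)=0$ so that the metric is continuous across the wall) gives the metric function explicitly:

```latex
\[
H(z) = 8 {\pi} G {\sigma} \int_{0}^{z} {\theta}_{0}(z')\, dz'
= \left\{
\begin{array}{ll}
8 {\pi} G {\sigma}\, z, & z < 0 \\
0, & z \geq 0
\end{array}
\right.
\]
```

so that $ e^{H} $ equals $ e^{8{\pi}G{\sigma}z} $ on one side of the wall and 1 (flat) on the other.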
\section{Matching conditions}
\paragraph*{}
Equation (\ref{15}) yields the space-time region
\begin{equation}
ds^{2} = dt^{2} - dz^{2} - e^{{\beta}z}(dx^{2} + dy^{2}) \hspace{1.0cm} (z<0)
\label{18}
\end{equation}
where $ {\beta} \equiv 8 {\pi}G{\sigma} $. The resulting space-time is obtained by gluing together \cite{14,15} two space-times across a torsion junction given by a cosmic planar thin wall. One space-time is given by expression (\ref{18}) and the other is given by the Minkowski space-time. Note that the boundary conditions \cite{13}
\begin{equation}
g_{ij} {\vert}_{+} = g_{ij} {\vert}_{-}
\label{19}
\end{equation}
\begin{equation}
n_{k}{{{\Sigma}}_{i}}^{k} - n_{i} \overline{K}_{jkl} \overline{K}^{klj} {\vert}_{-} = 0
\label{20}
\end{equation}
\begin{equation}
n_{k} {{\Sigma}_{ij}}^{k} {\vert}_{-} = 0
\label{21}
\end{equation}
(where $ n_{i} $ is the normal vector to the $ z=0 $ plane and the bar over the contortion tensor $ K_{ijk} $ denotes projection onto the wall) are obeyed. Eqn. (\ref{20}) reduces in the linearized case to $ n_{k} {{{\Sigma}}_{i}^{k}}=0 $. Here the plus and minus signs refer to the RHS and LHS of the cosmic thin wall. Let us now search for the spin distribution corresponding to Cartan torsion $ T^{0}$.
Substitution of $ T^{0} $ into (\ref{8}) yields the following spin 3-forms
\begin{equation}
S_{13}= - \frac{1}{8{\pi}G} {\theta}_{0}(z) {\omega}^{0} \wedge {\omega}^{2} \wedge {\omega}^{3}
\label{22}
\end{equation}
\begin{equation}
S_{23}= \frac{1}{8{\pi}G} {\theta}_{0}(z) {\omega}^{0} \wedge {\omega}^{1} \wedge {\omega}^{3}
\label{23}
\end{equation}
Notice that the spin distributions (\ref{22}) and (\ref{23}) correspond physically to orthogonal lines of polarized spins along $ z=\mbox{const} < 0 $ hypersurfaces. Note also that spins exist not only along the cosmic wall (z=0) but also on the LHS of the wall.
\section{Conclusions}
\paragraph*{}
Note that the resulting space-time is not a pure space-time defect since on the LHS of the cosmic wall the space is not Minkowskian. Notice also that in the Riemannian limit $ (J^{0} \equiv 0 ) $ the curvature components (\ref{13}) reduce to $ {{R}^{2}}_{332}(\{ \}) = {{R}^{1}}_{331}(\{ \}) = 8 {\pi}G {\sigma} {\delta}(z) $, which represents the Riemannian planar thin wall curvatures. Whether the solution here may represent a planar thin domain wall is another story that will appear elsewhere \cite{14,15}.
Since the choice of the metric (\ref{18}) is the same as the Letelier choice for multiple cosmic strings, and since there are lines of spinning particles orthogonal to each other along the cosmic wall, it is argued that maybe the lines of spinning particles could be replaced by spinning cosmic strings. This idea is also supported by the proof of Galtsov and Letelier \cite{16} that the chiral conical space-time arising from the spinning particle solution in (2+1)-dimensional gravity by an appropriate boost is the gravitational counterpart of the infinitely thin straight chiral string.
One could also note that in the case of Letelier \cite{7} solution of plane walls crossed by cosmic strings the only interaction between them is via the metric function H(z) in expressions (\ref{1}) and (\ref{2}).
This fact further supports our idea that the lines of polarized spinning particles could be analogous to cosmic strings. As noted by A.Vilenkin \cite{17}, the weak field approximation breaks down at large distance from walls and strings; therefore an exact solution of the problem dealt with here is necessary and will be addressed in future work. Finally one may notice that torsion here is constant on one side of the cosmic wall and vanishes on the other. This means that our solution does not describe a torsion wall where torsion is given by $\delta$-Dirac functions. Another place where constant torsion appears is in the study of torsion kinks in Poincar\'{e} gauge field theory \cite{18}.
\section*{Acknowledgments}
\paragraph*{}
I would like to express my gratitude to Prof. F.W.Hehl, P.S.Letelier and A.Wang for helpful discussions on the subject of this paper. Financial support from UERJ and CNPq. is gratefully acknowledged.
\section{Introduction}
\setcounter{equation}{0}
The relation between symmetries and quantum theory is
an important and fundamental issue. For instance,
symmetry relations among correlation functions (Ward identities)
are often used in order to prove that a quantum field theory
is unitary and renormalizable.
Conversely, the violation of a classical symmetry at the quantum
level (anomalies) often indicates that the theory is inconsistent.
Furthermore, in recent years symmetries (such as supersymmetry)
have been instrumental in uncovering non-perturbative
aspects of quantum theories (see, for example, \cite{SW}).
It is, thus, desirable to understand the interplay
between symmetries and quantization in a manner which is free of the
technicalities inherent in the conventional Lagrangian
approach (regularization/renormalization) and in
a way which is model independent as much as possible.
In a recent paper\cite{paper1} we have presented
a general method, the Quantum Noether Method, for
constructing perturbative quantum field theories
with global symmetries. Gauge theories are
within this class of theories, the global symmetry being
the BRST symmetry\cite{BRST}. The method is
established in the causal approach
to quantum field theory introduced by Bogoliubov and Shirkov \cite{BS}
and developed by Epstein and Glaser\cite{EG0,stora}.
This explicit construction method rests
directly on the axioms of relativistic quantum field theory.
The infinities encountered in the conventional approach
are avoided by a proper handling of the correlation
functions as operator-valued distributions.
In particular, the well-known problem of ultraviolet (UV)
divergences is reduced to the mathematically
well-defined problem of splitting an operator-valued distribution with causal
support into a distribution with retarded and a distribution with
advanced support or, alternatively \cite{stora, fredenhagen1},
to the continuation of time-ordered products to coincident points.
Implicitly, every consistent renormalization scheme
solves this problem. Thus, the explicit Epstein-Glaser (EG)
construction should not be regarded as a special renormalization
scheme but as a general framework in which the conditions posed by the
fundamental axioms of quantum field theory
(QFT) on any renormalization scheme are built in by
construction. In this sense our method is
independent of the causal framework. Any
renormalization scheme can be used to work out the
consequences of the general symmetry conditions proposed in \cite{paper1}.
In the EG approach the $S$-matrix is directly constructed
in the Fock space of free asymptotic fields in a form of formal
power series. The coupling constant is replaced by a tempered
test function $g(x)$ (i.e. a smooth function rapidly decreasing
at infinity) which switches on the interaction. Instead
of evaluating the $S$-matrix by first computing
off-shell Greens functions by means of Feynman rules
and then applying the LSZ formalism, the $S$-matrix is
directly obtained by imposing causality and Poincar\'{e}
invariance. The method can be regarded as an ``inverse''
of the cutting rules. One builds $n$-point functions out
of $m$-point functions ($m<n$) by suitably
``gluing'' them together. The precise manner
in which this is done
is dictated by causality and Poincar\'{e} invariance
(see appendix A for details).
One shows, that this process uniquely fixes the $S$-matrix
up to local terms (which we shall call ``local normalization terms'').
At tree level these local terms are nothing but
the Lagrangian of the conventional approach\cite{paper1}.
The problem we set out to solve in \cite{paper1} was to
determine how to obtain a quantum theory
which, on top of being causal and Poincar\'{e} invariant,
is also invariant under a global symmetry.
For linear symmetries such as global internal symmetries
or discrete $C$, $P$, $T$ symmetries the solution
is well-known: one implements the symmetry in the
asymptotic Fock space by means of an (anti-) unitary
transformation.
The focus of our investigation in \cite{paper1} was
symmetries that are
non-linear in the Lagrangian formulation. The prime
examples are BRST symmetry and supersymmetry (in the
absence of auxiliary fields). The main puzzle is
how a theory formulated in terms of asymptotic
fields only knows about the inherent non-linear structure.
The solution to the problem is rather natural. One imposes
that the Noether current that generates
the asymptotic symmetry is conserved
at the quantum level, i.e. inside correlation functions.
This condition, the Quantum Noether Condition (QNC),
constrains the local normalization terms left
unspecified by causality and Poincar\'{e} invariance.
At tree-level one finds that the asymptotic Noether
current renormalizes such that it generates
the full non-linear transformation rules.
At the quantum level the same condition yields
the corresponding Ward identities.
The way the method works is analogous to the classical
Noether method \cite{deser,sugra}, hence its name.
In addition, we have shown that the QNC
is equivalent to the condition that the
$S$-matrix is invariant under the symmetry under
question (i.e. the $S$-matrix commutes with the
generator of the asymptotic symmetry).
Quantum field theory, however, is usually formulated in terms of interacting
fields. In the Lagrangian formulation, the symmetries of the theory
are the symmetries of the action (or more generally of the
field equations) that survive at the quantum level.
These symmetries are generated by interacting Noether currents.
It will, thus, be desirable to express the QNC in terms
of the latter. As we shall see, this is indeed possible.
The QNC in terms of the interacting current is given
in (\ref{cond3}).
If the symmetry is linear then the condition
is that the interacting current is conserved (as expected).
If the symmetry, however, is non-linear the interacting
current is only conserved in the adiabatic limit ($g \rightarrow \mbox{const.}$).
One important example is Yang-Mills theory. In this
case, the corresponding Noether current is the BRST
current.
Because there are unphysical degrees of freedom present in gauge theories,
one needs a subsidiary condition in order
to project out the unphysical states.
The subsidiary condition should remain invariant under time
evolution. This means that it should be expressed in terms of a conserved
charge. The appropriate charge for gauge theories is the
BRST charge \cite{KO}. The subsidiary condition is that
physical states should be annihilated by the BRST charge $Q_{int}$
(and not be $Q_{int}$-exact).
The considerations in \cite{KO}, however, (implicitly) assumed the
naive adiabatic limit. For pure gauge theories this limit
seems not to exist. Then from the Quantum Noether Condition (\ref{cond3})
it follows that the interacting BRST current
is not conserved before the adiabatic limit.
We stress, however, that the Quantum Noether Condition
allows one to work out all consequences of non-linear symmetries for
time-ordered operator products before the adiabatic limit is taken.
As we shall see, one can even identify the non-linear
transformation rules.
We organize this paper as follows:
In the next section we shortly review the Quantum Noether Method.
In section 3 we express the Quantum Noether Condition
in terms of the interacting Noether current.
Section 4 contains a discussion of future
directions. In the appendix we present
the main formulae of the causal framework
and our conventions.
\section{The Quantum Noether Method}
\setcounter{equation}{0}
In the EG approach one starts with a set of free fields in the
asymptotic Fock space. These fields satisfy their (free) field
equations and certain commutation relations. To define the theory
one still needs to specify $T_1$, the first term in the $S$-matrix.
(Actually, as we shall see, even $T_1$ is not free in our
construction method but is also constrained by the Quantum Noether Condition).
Given $T_1$ one can, in a well defined manner, construct iteratively the
perturbative $S$-matrix. The requirements of causality and Poincar\'{e}
invariance completely fix the $S$-matrix up to local terms.
The additional requirement that the theory is invariant under
a global and/or local symmetry imposes constraints on these local terms.
To construct a theory with global and/or local
symmetry we introduce the coupling $g_\mu j^\mu_0$ in the theory,
where $j^\mu_0$ is the Noether current that generates the
asymptotic (linear) symmetry transformations, and
we impose the condition that ``the Noether current
is conserved at the quantum level''
\begin{equation} \label{cons}
\partial_\mu {\cal J}_n^{\mu} (x_1, \cdots, x_n; \hbar) =0,
\end{equation}
where we introduce the notation (we
use the abbreviation $\partial/ \partial x^\mu_l = \partial^l_\mu$)
\begin{equation} \label{nota}
\partial_\mu {\cal J}_n^{\mu} (x_1, \cdots, x_n; \hbar)=
\sum_{l=1}^n \partial_\mu^l {\cal J}_{n/l}^{\mu},
\end{equation}
and
\begin{equation}
{\cal J}^\mu_{n/l}=T[T_1 (x_1) \cdots j_0^\mu(x_l) \cdots T_1(x_n)].
\end{equation}
(for $n=1$, ${\cal J}^\mu_1(x_1)=j_0^\mu(x_1)$).
In other words we consider an $n$-point function
with one insertion of the current $j_0^\mu$ at the point $x_l$.
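For instance, at second order the sum (\ref{nota}) unpacks to

```latex
\[
\partial_\mu {\cal J}_2^{\mu} (x_1, x_2; \hbar)=
\partial^1_\mu T[j_0^\mu(x_1) T_1(x_2)]
+ \partial^2_\mu T[T_1(x_1) j_0^\mu(x_2)] ,
\]
```

i.e. the divergence acts on the current insertion at each of the two vertices in turn.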
Notice that since the left hand side of (\ref{cons}) is a formal
Laurent series in $\hbar$, this condition is actually a set of conditions.
One may apply the inductive EG construction to work out the consequences of
(\ref{cons}). This may be done by first working out $T[j_0 T_1...T_1]$
and then constructing (\ref{nota}). However, there is an alternative
route \cite{paper1}. One relaxes
the field equations of the fields $\phi^A$. Then the
inductive hypothesis takes the form: for $m<n$,
\begin{equation} \label{tfeq}
\sum_{l=1}^m \partial^l_\mu {\cal J}_{m/l}^{\mu} =
\sum_{A} R^{A;m}(\hbar) {\cal K}_{AB} \f^B \delta(x_1, \ldots, x_m),
\end{equation}
where
\begin{equation} \label{feq}
{\cal K}_{AB} \f^B= \partial^\mu {\partial {\cal L}_0 \over \partial (\partial^\mu \f^A)}
- {\partial {\cal L}_0 \over \partial \f^A}
\end{equation}
are the free field equations (${\cal L}_0$ is the free Lagrangian
that yields (\ref{feq}); the present formulation assumes
that such a Lagrangian exists).
The coefficients $R^{A;m}(\hbar)$ are defined by (\ref{tfeq})
and are formal series in $\hbar$.
Clearly, if we impose the field equation we go back to (\ref{cons}).
The converse is also true. Once one relaxes the field
equations in the inductive step, (\ref{cons}) implies (\ref{tfeq})
as was shown in \cite{paper1}. The advantage of the off-shell
formulation is that it makes manifest
the non-linear structure: the coefficients $R^{A;m}(\hbar)$
are just the order $m$ part
of the non-linear transformation rules. In addition,
the calculation of local on-shell terms arising from tree-level graphs
simplifies.
We now discuss the condition (\ref{cons})
at tree-level. For the analysis at loop level we refer to \cite{paper1}.
At tree-level we only need the $\hbar^0$ part of (\ref{tfeq}).
Let us define
\begin{equation}
\label{delta}
s_{(m-1)}\f^A = {1 \over m!} R^{A;m}(\hbar^0).
\end{equation}
Depending on the theory under consideration the quantities $R^{A;m}(\hbar^0)$
may be zero after some value of $m$. Without loss of generality we
assume that they are zero for $m>k+1$, for some integer $k$ (which
may be infinity; the same applies for $k'$ below).
One shows that
\begin{equation} \label{fulltr}
s \f^A = \sum_{m=0}^k g^m s_m \f^A
\end{equation}
are symmetry transformation rules that leave the Lagrangian,
\begin{equation} \label{lagr}
{\cal L} = \sum_{m=0}^{k'} g^m {\cal L}_m,
\end{equation}
invariant (up to total derivatives),
where $k'$ is also an integer (generically not equal to $k$).
The Lagrangian ${\cal L}$ will be determined from the tree-level normalization
conditions as follows,
\begin{equation} \label{lagdef}
{\cal L}_m = {\hbar \over i} {N_m \over m!}, \quad {\rm for} \quad m>1,
\end{equation}
where $N_m$ denotes the local normalization ambiguity of
$T_m[T_1(x_1)...T_1(x_m)]$ in tree graphs defined with respect
to the naturally split solution (i.e. the Feynman propagator
is used in tree-graphs). For $m=1$, ${\cal L}_1=(\hbar/i)T_1$.
The factor $m!$ reflects the fact that
$T_m[...]$ appears in (\ref{GC}) with a combinatorial
factors $m!$ while the factor $\hbar/i$ is there to cancel the
overall factor $i/\hbar$ that multiplies the action in the
tree-level $S$-matrix. Notice that we regard
(\ref{lagdef}) as definition of ${\cal L}_m$.
Let us further define $j_n^\mu$ as the local normalization
ambiguity of $T_n[j_0T_1...T_1]$,\footnote{
We use the following abbreviations for the delta function distributions
$\delta^{(n)}=\delta(x_1, \ldots, x_n)=$\\ $\delta(x_1-x_2)\cdots\delta(x_{n-1}-x_n)$.}
\begin{equation} \label{jndef}
T_n [j_0^\mu(x_1) T_1(x_2) \cdots T_1 (x_n)]=
T_{c,n} [j_0^\mu(x_1) T_1(x_2) \cdots T_1(x_n)]
+ j_{n-1}^\mu \delta^{(n)}
\end{equation}
where $T_{c,n}$ denotes the naturally split solution.
We shall see that the normalization terms $j_n$
complete the asymptotic current $j_0$ to the
Noether current that generates the non-linear symmetry
transformations (\ref{fulltr}).
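As an illustration, consider the standard Yang-Mills example (quoted here for orientation only; signs and normalizations depend on conventions and are not fixed by the present discussion). The BRST expansion terminates at $k=1$ with, schematically,

```latex
\[
s_0 A^a_\mu = \partial_\mu u^a, \qquad
s_1 A^a_\mu = f^{abc} A^b_\mu u^c, \qquad
s_0 u^a = 0, \qquad
s_1 u^a = -{\textstyle\frac{1}{2}} f^{abc} u^b u^c ,
\]
```

so the free current $j_0^\mu$ generating the abelian transformation $s_0$ is completed by the $j_m^\mu$ into the current of the full non-linear BRST transformation.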
We wish to calculate the tree-level terms at $n$th order.
The causal distribution $\sum_{l=1}^n \partial_\mu^l {\cal D}^\mu_{n/l}$
at the $n$th order consists of a sum of terms, each of these being
a tensor product of
$T_m[T_1 ... T_1 \partial{\cdot}j_0 T_1 ... T_1]$ ($m<n$) with
$T$-products that involve
only $T_1$ vertices according to the general
formulae (\ref{ret},\ref{adva},\ref{D-dist}).
By the off-shell induction hypothesis, we have for all $m<n$
\begin{equation} \label{offshell}
\sum_{l=1}^m \partial^l_\mu {\cal J}_{m/l}^{\mu} =
\sum_{A} (m! s_{m-1} \f^A) {\cal K}_{AB} \f^B \delta^{(m)}.
\end{equation}
As explained in detail in \cite{paper1}, at order $n$ one
obtains all local on-shell terms by performing the so-called
``relevant contractions'', namely the
contractions between the $\f^B$ in the right hand side of (\ref{offshell})
and $\f$ in local terms.
In this manner we get the following general formula for the
local term $A_{c,n}$ arising through tree-level contractions at level
$n$,
\begin{equation} \label{loc}
A_{c,n}(tree) = \sum_{\pi \in \Pi^n} \sum_{m=1}^{n-1}
\partial_\mu {\cal J}_m^\mu(x_{\pi(1)}, \ldots, x_{\pi(m)})
N_{n-m}\delta(x_{\pi(m+1)}, \ldots, x_{\pi(n)})
\end{equation}
where it is understood that in the right hand side only
``relevant contractions'' are
made. The factors $N_{n-m}$ are tree-level normalization terms of
the $T$-products that contain $n-m$ $T_1$ vertices.
In \cite{paper1} we have provided a detailed analysis
of (\ref{loc}) for any $n$ (under the assumption that
the Quantum Noether Method is not obstructed). In the next section,
we will need these results in order to show that condition
(\ref{cond3}) is equivalent to condition (\ref{cons}).
We therefore list them here without proofs.
The $n=1$ case is trivial. One just gets that $R^{A;1}(\hbar^0)=s_0 \f^A$.
For $2 \leq n \leq k+1$, the
condition (\ref{tfeq}) at tree-level yields the following constraint
on the local normalization terms of the $T_m$, $m<n$,
\begin{equation} \label{n<k}
s_0 {\cal L}_{n-1} + s_1 {\cal L}_{n-2} + \cdots + s_{n-2} {\cal L}_1=
\partial_\mu {\cal L}^\mu_{n-1} + s_{n-1}\f^A {\cal K}_{AB}\f^B
\end{equation}
and, furthermore, determines $j_{n-1}^\mu$,
\begin{equation} \label{jn}
j_{n-1}^\mu= -n!{\cal L}_{n-1}^\mu
+(n-1)! \sum_{l=0}^{n-2} (l+1) \frac{\partial {\cal L}_{n-1-l}}
{\partial(\partial_\mu \f^A)} s_l \f^A.
\end{equation}
For $n>k+1$ we obtain,
\begin{equation} \label{n>k}
s_0 {\cal L}_{n-1} + s_1 {\cal L}_{n-2}
+ \cdots + s_k {\cal L}_{n-1-k}=
\partial_\mu {\cal L}_{n-1}^\mu,
\end{equation}
and
\begin{equation} \label{jn1}
j_{n-1}^\mu=-n!{\cal L}^\mu_{n-1} + (n-1)! \sum_{l=1}^{k}
l \frac{\partial {\cal L}_{n-l}}{\partial(\partial_\mu \f^A)} s_{l-1} \f^A.
\end{equation}
Depending on the theory under consideration the ${\cal L}_n$'s will
be zero for $n>k'$, for some integer $k'$. Given the integers $k$ and
$k'$, there is also an integer $k''$ (determined from the other two)
such that ${\cal L}^\mu_n=0$, for $n>k''$.
Summing up the necessary and sufficient conditions (\ref{n<k}),
(\ref{n>k}) for the Quantum Noether method to
hold at tree level we obtain,
\begin{equation}
s \sum_{l=1}^{k'} g^l {\cal L}_l = \sum_{l=1}^{k''} \partial_\mu {\cal L}_l^\mu
+ (\sum_{l=1}^k g^l s_l \f^A) {\cal K}_{AB} \f^B
\end{equation}
Using $s_0 {\cal L}_0 = \partial_\mu k^\mu_0$ and for $l \leq k$
\begin{equation}
s_l \f^A {\cal K}_{AB} \f^B = \partial_\mu ({\partial {\cal L}_0 \over \partial(\partial_\mu \f^A)} s_l \f^A)
-s_l {\cal L}_0
\end{equation}
we obtain,
\begin{equation} \label{treecon}
s {\cal L} = \partial_\mu (\sum_{l=0}^{k''} g^l k_l^\mu)
\end{equation}
where, for $1<l \leq k$,
\begin{equation} \label{kdef}
k_l^\mu = {\cal L}_l^\mu + {\partial {\cal L}_0 \over \partial(\partial_\mu \f^A)} s_l \f^A
\end{equation}
and for $l>k$, $k_l^\mu = {\cal L}_l^\mu$.
We therefore find that ${\cal L}$ is invariant under the symmetry
transformation,
\begin{equation}
s \f^A = \sum_{l=0}^k g^l s_l \f^A.
\end{equation}
According to Noether's theorem there is an associated Noether current.
One may check that the current normalization terms $j_m^\mu$
(\ref{jn}), (\ref{jn1}) are in one-to-one correspondence
with the terms in the Noether current.
Therefore the current $j_0$ indeed renormalizes to the
full non-linear current.
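Schematically, this is the standard Noether construction: since $s {\cal L}$ is a total derivative by (\ref{treecon}), the associated current takes the familiar form

```latex
\[
j^\mu = {\partial {\cal L} \over \partial(\partial_\mu \f^A)}\, s \f^A
- \sum_{l} g^l k_l^\mu ,
\]
```

whose expansion in powers of $g$ contains the same structures as the normalization terms (\ref{jn}) and (\ref{jn1}).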
\section{Conservation of the Interacting Noether Current}
\setcounter{equation}{0}
The Quantum Noether Condition (\ref{cons}) can be reformulated in terms of
interacting fields.
Let $j^\mu_{0,int}$ and $\tilde{j}^\mu_{1,int}$
be the interacting currents corresponding to free field operators
$j_0^\mu$ and $\tilde{j}_1^\mu$,
respectively, perturbatively constructed
according to (\ref{defint}). $\tilde{j}_1^\mu$ is equal to
$- {\cal L}^\mu_1$ (defined in (\ref{n<k})) as we will see below.
Then the general Ward identity
\begin{equation} \label{cond3}
\partial_\mu j^\mu_{0,int} = \partial_\mu g \tilde{j}^\mu_{1, int}
\end{equation}
is equivalent to condition (\ref{cons}).
According to condition (\ref{cond3})
the interacting Noether current $j^\mu_{0,int}$ is conserved
only if it generates a linear symmetry, i.e. $\tilde{j}_1^\mu$ vanishes,
or otherwise in the adiabatic limit
\mbox{$g(x)\rightarrow 1$}, provided this limit exists.
In the following we shall show that the condition (\ref{cond3})
yields the same conditions on the
time-ordered products $T_n [T_1... T_1]$
as the Quantum Noether condition (\ref{cons}).
In this sense the two general symmetry conditions
are considered equivalent.
Because Poincar\'{e} invariance and causality already
fix the time-ordered products $T_n [T_1... T_1]$
up to the local normalization ambiguity $N_n$, we only have to
show that these local normalization terms $N_n$ are constrained
in the same way by both conditions, (\ref{cond3}) and (\ref{cons}).
First, we translate the condition (\ref{cond3}) to a condition on
time-ordered products using the formulae given in the appendix:
The perturbation series for the interacting field operator $j^\mu_{int}$
of a free field operator $j^\mu$ is given by
the advanced distributions of the corresponding expansion of
the $S$-matrix (see (\ref{defint})):
\begin{equation}
\label{advanced}
j_{int}^\mu (g,x) = j^\mu (x) + \sum_{n=1}^\infty \frac{1}{n!}
\int d^4 x_1 \ldots d^4 x_n
Ad_{n+1} \left[T_1 (x_1) \ldots T_1 (x_n);
j^\mu (x) \right] g(x_1) \ldots g(x_n),
\end{equation}
where $Ad_{n+1}$ denotes the advanced operator-valued
distribution with
$n$ vertices $ T_1 $ and one vertex $ j^\mu (x) $ at the
$(n+1)$th position.
This distribution is only symmetric in the first $n$
variables $x_1, \ldots, x_n $.
The support properties are defined with respect to the
unsymmetrized variable $x$.
With the help of (\ref{advanced}), we rewrite the left hand side of
equation (\ref{cond3})
\begin{equation}} \def\eq{\begin{equation}
\partial_\mu^x j_{0,int}^{\mu} (x) = \partial_\mu^x j_0^\mu(x) +
\sum_{n=1}^\infty \frac{1}{n!}
\int d^4 x_1 \ldots d^4 x_n
\partial_\mu^x Ad_{n+1} \left[ T_1 (x_1)
\ldots T_1 (x_n); j_0^\mu (x) \right]
g(x_1) \ldots g(x_n)
\end{equation}} \def\eqe{\end{equation}
and the right hand side of (\ref{cond3})
\begin{eqnarray}} \def\eqa{\begin{eqnarray}
\tilde{j}_{1,int}^{\mu} (x) \partial_\mu g(x) & = &
\sum_{n=0}^\infty \frac{1}{n!}
\int d^4 x_1 \ldots d^4 x_n d^4 x_{n+1} \\
& & Ad_{n+1} \left[ T_1 (x_1) \ldots T_1 (x_n);
\tilde{j}_1^\mu (x) \right]
\delta (x-x_{n+1}) \quad g(x_1) \ldots g(x_n)
\partial_\mu^{x_{n+1}} g(x_{n+1}) \nonumber
\eea
After partial integration, symmetrization of the
integrand in the variables
$(x_1, \ldots, x_{n+1})$, and
shifting the summation index, the right hand side of (\ref{cond3})
can be further rewritten as
\begin{eqnarray}} \def\eqa{\begin{eqnarray}
\tilde{j}_{1,int}^{\mu} (x) \partial_\mu
g(x) & = & - \sum_{n=1}^\infty \frac{1}{n!}
\int d^4 x_1 \ldots d^4 x_n \\
&&\sum_{j=1}^{n} \left\{ Ad_{n}
\left[T_1(x_1) \ldots \widehat{T_1(x_j)} \ldots T_1(x_n);\tilde{j}_1^\mu(x)
\right]
\partial_\mu^{x_j} \delta (x_j - x) \right\} g(x_1) \ldots g(x_n) \nonumber
\eea
where the hat indicates that this coupling has to be omitted.
Equation (\ref{cond3}) reads then
\begin{eqnarray}} \def\eqa{\begin{eqnarray}
\label{cond3adv}
&& \partial_\mu j^\mu_0 =0, \qquad (n=0)\nonumber \\{}
&& \partial_\mu^x Ad_{n+1} \left[ T_1(x_1)
\ldots T_1(x_n); j_0^\mu (x) \right] \nonumber \\{}
&&+ \sum_{j=1}^{n} Ad_n \left[ T_1(x_1) \ldots \widehat{T_1(x_j)}
\ldots T_1(x_n); \tilde{j}_1^\mu (x) \right]\partial_\mu^{x_j}
\delta (x_j - x) = 0, \quad (n>0)
\eea
where the local normalization terms of the $ Ad $-distributions with
respect to a specified splitting solution will be given below.
In the following we discuss the equivalent condition on the
time-ordered distributions
instead of the advanced ones, in order to compare the unsymmetrized
condition (\ref{cond3})
with the symmetrized Quantum Noether Condition (\ref{cons}).
We get
instead of (\ref{cond3adv})
\begin{eqnarray}} \def\eqa{\begin{eqnarray}
&& \partial_\mu^x T_{n+1} \Big[ T_1(x_1) \ldots T_1(x_n); j_0^\mu (x) \Big]
\nonumber \\
&&\hspace{2cm} = -\sum_{j=1}^{n} T_n \left[ T_1(x_1) \ldots
\widehat{T_1(x_j)} \ldots T_1(x_n);
\tilde{j}_1^\mu (x) \right] \partial_\mu^{x_j}
\delta (x_j - x) \label{condt}
\eea
These distributions get smeared out by $ {g(x_1)
\ldots g(x_n) \tilde {g}(x)}$, where
the test-function $\tilde{g}$ differs from $g$.
One easily verifies that the left hand side of (\ref{condt}) is
just the Quantum Noether Condition (\ref{cons}) but without the
symmetrization; the missing symmetrization produces
the extra terms on the right hand side of (\ref{condt}) as we shall see.
We shall use the same off-shell procedure in order to fix the
local on-shell obstruction
terms (which is explained in detail in \cite{paper1}, section 4.2).
The starting point ($n=0$) of both conditions is the same
\begin{equation}} \def\eq{\begin{equation}
\partial_\mu j_0^\mu (x) = s_0 \phi^A {\cal K}_{AB} \phi^B
\end{equation}} \def\eqe{\end{equation}
We have now for $n=1$,
\begin{equation}} \def\eq{\begin{equation}
\partial_\mu^x \left( T_{2,c}
[T_1 (x_1) j_0^\mu (x)] + j_1^\mu \delta (x_1 - x) \right)
= - \tilde{j}_1^\mu (x) \partial_\mu^{x_1} \delta (x_1 - x)
\end{equation}} \def\eqe{\end{equation}
Working out the left hand side (and using
$T_1= \frac{i}{\hbar} {\cal L}_1$) we obtain,
\begin{equation}} \def\eq{\begin{equation} \label{n=2new}
\partial_\mu^x \left( j_1^\mu \delta (x_1 - x) \right) +
s_0 {\cal L}_1 \delta (x_1 - x) -
\partial_\mu^x \left( \frac {\partial {\cal L}_1}
{\partial (\partial_\mu \phi^A)} s_0 \f^A \delta (x_1 - x) \right)
= \tilde{j}_1^\mu (x) \partial_\mu^{x} \delta (x_1 - x)
\end{equation}} \def\eqe{\end{equation}
This condition fixes the local renormalization
of $ j_0^\mu $ at order $g$, denoted by $ j_1^\mu $
(defined with respect to the natural splitting solution $T_{2,c}$)
and also $ \tilde{j}_1^\mu $ in condition (\ref{cond3}).
The latter term, proportional
to the derivative of the $\delta$-distribution, is left
over in our new unsymmetrized
condition. Note that in the symmetrized case,
we reduced these
kinds of terms to ones proportional
to the $\delta$-distribution with the help of
distributional identities.
The condition (\ref{n=2new}) can be fulfilled for some
local operators $ j_1^\mu $ and $ \tilde{j}_1^\mu $ if and only if
$s_0 {\cal L}_1 $ is a divergence up to field equation terms,
\begin{equation}} \def\eq{\begin{equation}
s_0 {\cal L}_1 = \partial_\mu {\cal L}_1^\mu + s_1 \phi^A {\cal K}_{AB} \phi^B.
\end{equation}} \def\eqe{\end{equation}
In the absence of real obstructions
this equation has solutions and we get
\begin{equation}} \def\eq{\begin{equation} \label{j1}
j_1^\mu = - {\cal L}_1^\mu + \frac {\partial {\cal L}_1}
{\partial (\partial_\mu \phi^A)} s_0 \phi^A
\end{equation}} \def\eqe{\end{equation}
as local renormalization of $ j_{0,int}^\mu $ at order $ g^1 $ and
\begin{equation}} \def\eq{\begin{equation}
\tilde{j}_1^\mu = - {\cal L}_1^\mu.
\end{equation}} \def\eqe{\end{equation}
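This can be verified directly: inserting (\ref{j1}) into
(\ref{n=2new}), the contribution of $\frac {\partial {\cal L}_1}
{\partial (\partial_\mu \phi^A)} s_0 \phi^A$ to $j_1^\mu$ cancels the
last term on the left hand side, which then reduces to
\begin{equation}
- \partial_\mu^x \left( {\cal L}_1^\mu \delta (x_1 - x) \right)
+ s_0 {\cal L}_1 \, \delta (x_1 - x)
= - {\cal L}_1^\mu \, \partial_\mu^x \delta (x_1 - x)
+ s_1 \phi^A {\cal K}_{AB} \phi^B \, \delta (x_1 - x) ,
\end{equation}
so that (\ref{n=2new}) is indeed satisfied with $\tilde{j}_1^\mu = -
{\cal L}_1^\mu$ up to the field equation term, which is precisely the
off-shell term on the right hand side of (\ref{n=2newfinal}).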
Equation (\ref{j1}) should be compared with the analogous formulae (\ref{jn})
for $n=2$ \footnote{Notice that $n$ in the present section
should be compared with $n+1$ in section 2.}. We finally have
\begin{eqnarray}} \def\eqa{\begin{eqnarray}
\label{n=2newfinal}
\partial_\mu^x T_{2} \left[ T_1 (x_1) j_0^\mu (x) \right] +
\tilde{j}_1^\mu (x) \partial_\mu^{x_1} \delta (x_1 - x) =
s_1 \phi^A {\cal K}_{AB} \phi^B \delta(x_1-x).
\eea
The off-shell term on the right hand side of (\ref{n=2newfinal})
is responsible for local obstruction terms at the next order, $n=2$.
We get (taking special care of
derivative terms and taking advantage
of our off-shell procedure):
\begin{eqnarray}} \def\eqa{\begin{eqnarray}
& \partial_\mu^x T_{3,c} \left[ T_1(x_1)T_1(x_2) j_0^\mu (x) \right] +
\left( T_{2,c} \left[ T_1(x_1) \tilde {j}_1^\mu (x) \right]
\partial_\mu^{x_2} \delta (x_2 - x) + [x_1 \leftrightarrow x_2] \right) & \\
& = {\hbar \over i} \left[
2s_1T_1 \delta^{(3)} - \left(2 \partial_\mu^x + \partial_\mu^{x_1}
+ \partial_\mu^{x_2}
\right) \left( \frac {\partial T_1}
{\partial (\partial_\mu \phi^A)} s_1 \phi^A \delta^{(3)} \right)
+ s_0 N_2 \delta^{(3)} - \partial_\mu^x \left( \frac {\partial N_2}
{\partial (\partial_\mu \phi^A)} s_0 \phi^A \delta^{(3)} \right) \right]
& \nonumber
\label{n=222}
\eea
where $ N_2 $ denotes the tree-normalization term of
$ T_2 [T_1 T_1] $ which is uniquely
defined with respect to the natural splitting solution
$ T_{2,c} [T_1 T_1] $. Now we include also the
normalization ambiguity of the
other distributions involved:
\begin{eqnarray}} \def\eqa{\begin{eqnarray}
\label{norm22}
T_3 \left[ T_1(x_1) T_1(x_2) j_0^\mu (x) \right] &=& T_{3,c}
\left[ T_1(x_1) T_1(x_2) j_0^\mu (x) \right] +
j_2^\mu (x) \delta(x_1, x_2, x) \\
T_2 \left[ T_1 (x_i) \tilde{j}_1^\mu (x) \right] &=& T_{2,c}
\left[ T_1 (x_i) \tilde{j}_1^\mu(x) \right] +
\tilde{j}_{2}^\mu \delta (x_i - x) \nonumber
\eea
According to (\ref{n<k}) the Quantum Noether Condition (\ref{cons})
at order $n=3$ is fulfilled if and only if
\begin{eqnarray}} \def\eqa{\begin{eqnarray}
\label{n=2222}
s_1 {\cal L}_1 + s_0 {\cal L}_2 = \partial_\mu {\cal L}_2^\mu + s_2 \phi^A {\cal K}_{AB} \phi^B
\eea
where the definition $ {\cal L}_n = (\hbar/i) (N_n/n!) $ is used.
Now the same is true for condition (\ref{n=222}). Only if
(\ref{n=2222}) holds
can one absorb the local terms
on the right hand side of (\ref{n=222}) in the normalization terms
$ j_2^\mu (x) $ and $\tilde{j}_2^\mu (x) $
given in (\ref{norm22}).
The reasoning is again slightly different from the one in the
symmetrized case. The distributions are only symmetric in the variables $x_i$,
but $x$ is a distinguished variable.
This means that the two local operator-valued distributions
\footnote{One could also choose as a basis
$ \hat{A}_0^{'} \delta (x_1, x_2, x); \partial^x \left( \hat{A}_1^{'}
\delta (x_1, x_2, x) \right) $.}
\begin{eqnarray}} \def\eqa{\begin{eqnarray}
\hat{A}_0 \delta (x_1, x_2, x); \quad \sum_{i=1}^2 \partial_{x_i}
\left( \hat{A}_1 \delta (x_1, x_2, x) \right),\eea
where $ \hat{A}_0 (x) $ and $ \hat{A}_1 (x) $ are local operators, are
independent (on the test functions
$ \tilde{g} (x_1, x_2, x) := g (x_1) g (x_2) \tilde{g} (x)$
with $ g \neq \tilde{g}$)\footnote{
In the symmetrized case, where one smears out with totally
symmetric test functions $ g(x_1, x_2, x_3) := g (x_1) g(x_2) g(x_3)$, one has
$\sum_{i = 1}^{2} \partial_{x_i} \left( \hat{A}_1 \delta(x_1, x_2, x)
\right) = (2/3) \partial \hat{A}_1
\delta (x_1, x_2, x).$}.
So if and only if (\ref{n=2222}) is true the condition (\ref{cond3})
can be fulfilled at order
$n=2$ and the local normalization terms of the interacting currents,
$ j_{0,int}^\mu $ and $ \tilde{j}_{1,int}^\mu,$ get fixed to
\begin{eqnarray}} \def\eqa{\begin{eqnarray}
j_2^\mu &=& 2! \left( - {\cal L}_2^\mu + \frac {\partial {\cal L}_2} {\partial
(\partial_\mu \phi^A)} s_0 \phi^A +
\frac {\partial {\cal L}_1} {\partial (\partial_\mu \phi^A)} s_1 \phi^A \right)
\nonumber \\{}
\tilde{j}_2^\mu &=& -2! {\cal L}_2^\mu
+ \frac{\partial {\cal L}_1}{\partial (\partial_\mu \f^A)} s_1 \f^A
\eea
Note the different symmetry factors in $ j_2^\mu $ compared with the
symmetrized case (\ref{jn}). With these normalizations we get
\begin{eqnarray}} \def\eqa{\begin{eqnarray}
&\partial_\mu^x T_3 \left[ T_1 (x_1) T_1 (x_2) j_0^\mu (x) \right] +
\left( T_2 \left[ T_1 (x_1) \tilde{j}_1 (x) \right] \partial_\mu^{x_2}
\delta (x_2 - x) + [x_1 \leftrightarrow x_2] \right) & \\
& = 2! s_2 \phi^A {\cal K}_{AB} \phi^B \delta(x_1, x_2, x) & \nonumber
\eea
This corresponds to (\ref{cond3}) at order $n=2$:
\begin{eqnarray}} \def\eqa{\begin{eqnarray}
\partial_\mu^x j_{0,int}^{\mu} (x) \Big|_{g^2}=
\tilde{j}_{1,int}^{\mu} (x)\Big|_{g^1} \partial_\mu g (x)
+ 2! s_2 \phi^A {\cal K}_{AB} \phi^B (x).
\eea
{}From these first two steps of the inductive construction, one
already realizes that
in general the additional terms proportional to $\partial_\mu g$
in (\ref{cond3}) correspond to terms proportional to
$\partial_\mu \delta^n$ which are now independent. In the former condition
(\ref{cons}) we got rid of these terms by symmetrization and by modding
out the general formula $\sum_{l=1}^{n} \partial^l \delta^n=0$. This
formula is a direct consequence of translation invariance.
Regardless of this slight
technical difference both conditions, (\ref{cons}) and (\ref{cond3}),
pose the same consistency conditions on the physical normalization
ambiguity.
For $0 < n \leq k$ (where
$k$ is the minimal integer such that $s_m = 0$ for all $m > k$),
condition (\ref{condt}) yields
\begin{eqnarray}} \def\eqa{\begin{eqnarray}
&&\partial^x_\mu(j_n \delta^{(n+1)}) +
n! \left( \sum_{l=0}^{n-1} s_{l} {\cal L}_{n-l} \right) \delta^{(n+1)}- \nonumber \\
&&-\sum_{l=0}^{n-1}\left
(n! \, \partial_\mu^x + l \, (n-1)! \, \sum_{i=1}^n (\partial_\mu^{x_i})
\right)
\left({\partial {\cal L}_{n-l} \over \partial (\partial_\mu \f^A)} s_{l} \f^A \delta^{(n+1)} \right)=
\tilde{j}^\mu_n (x) \partial_\mu^x \delta^{(n+1)}
\eea
where $j_n^\mu$ and $\tilde{j}^\mu_n$ are defined by formulae analogous
to (\ref{norm22}). The sufficient and necessary condition
for this equation to have a solution is
\begin{equation}} \def\eq{\begin{equation}
s_0 {\cal L}_n + \cdots + s_{n-1} {\cal L}_1 = \partial_\mu {\cal L}^\mu_n
+ s_n \f^A {\cal K}_{AB} \f^B.
\end{equation}} \def\eqe{\end{equation}
This agrees with (\ref{n<k}) (we remind the reader
that $n$ in the present section
corresponds to $n+1$ in section 2). Then the
current normalization terms are given by
\begin{eqnarray}} \def\eqa{\begin{eqnarray} \label{jn<k}
j^\mu_{n} &=& n! \left(- {\cal L}_{n}^\mu +
\sum_{l = 0}^{n-1}
\frac{\partial {\cal L}_{n-l}} {\partial (\partial_\mu \phi^A)}
s_{l} \phi^A \right)\\
\tilde{j}^\mu_{n} &=& -n! {\cal L}_{n}^\mu
+ (n-1)! \sum_{l=0}^{n-1} l\,
\frac{\partial {\cal L}_{n-l}} {\partial (\partial_\mu \phi^A)} s_{l} \phi^A
\eea
and we have
\begin{eqnarray}} \def\eqa{\begin{eqnarray}
\label{final}
& &\partial_\mu^x j_{0,int}^{\mu} (x) \Big|_{g^{n}}
=\tilde {j}_{1,int}^{\mu} \Big|_{g^{n-1}} \partial_\mu g(x)
+ n! s_n \f^A {\cal K}_{AB} \f^B (x)
\eea
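As a consistency check, setting $n=2$ in (\ref{jn<k}) reproduces the
order $g^2$ normalization terms found above: the $l=0$ and $l=1$ terms
of the first sum give
\begin{equation}
j_2^\mu = 2! \left( - {\cal L}_2^\mu + \frac {\partial {\cal L}_2}
{\partial (\partial_\mu \phi^A)} s_0 \phi^A + \frac {\partial {\cal L}_1}
{\partial (\partial_\mu \phi^A)} s_1 \phi^A \right) , \qquad
\tilde{j}_2^\mu = - 2! \, {\cal L}_2^\mu + \frac {\partial {\cal L}_1}
{\partial (\partial_\mu \phi^A)} s_1 \phi^A ,
\end{equation}
since only the $l=1$ term survives in the sum for $\tilde{j}_2^\mu$.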
For $n > k$, equation (\ref{condt}) yields
\begin{eqnarray}} \def\eqa{\begin{eqnarray}
&&\partial^x_\mu(j_n \delta^{(n+1)}) +
n! \left( \sum_{l=0}^{k} s_{l} {\cal L}_{n-l} \right) \delta^{(n+1)} - \nonumber \\
&&-\sum_{l=0}^{k} \left(n! \, \partial_\mu^x + l\, (n-1)! \sum_{i=1}^n (\partial_\mu^{x_i})
\right)
\left({\partial {\cal L}_{n-l} \over \partial (\partial_\mu \f^A)} s_{l} \f^A \delta^{(n+1)} \right)
= \tilde{j}^\mu_n (x) \partial_\mu^x \delta^{(n+1)}
\eea
This equation now implies
\begin{equation}} \def\eq{\begin{equation}
s_0 {\cal L}_n + \cdots + s_k {\cal L}_{n-k} = \partial_\mu {\cal L}^\mu_n.
\end{equation}} \def\eqe{\end{equation}
We further obtain for the current normalization terms,
\begin{eqnarray}} \def\eqa{\begin{eqnarray} \label{jn>k}
j^\mu_{n} &=& n! \left(- {\cal L}_{n}^\mu +
\sum_{l = 0}^{k}
\frac{\partial {\cal L}_{n-l}} {\partial (\partial_\mu \phi^A)}
s_{l} \phi^A \right)\\
\tilde{j}^\mu_{n} &=& -n! {\cal L}_{n}^\mu
+ (n-1)! \sum_{l=0}^{k} l\,
\frac{\partial {\cal L}_{n-l}} {\partial (\partial_\mu \phi^A)} s_{l} \phi^A
\eea
Therefore,
\begin{eqnarray}} \def\eqa{\begin{eqnarray}
& &\partial_\mu^x j_{0,int}^{\mu} (x) \Big|_{g^{n}}
=\tilde {j}_{1,int}^{\mu} \Big|_{g^{n-1}} \partial_\mu g(x)
\eea
without using the free field equations.
In exactly the same way as in section 2, we deduce that
the sum of all tree-level local normalization terms
constitutes a Lagrangian which is invariant (up to a
total derivative) under the symmetry transformation
$s \f^A = \sum s_i \f^A$. Inserting now the
local normalization terms (\ref{jn<k}) and (\ref{jn>k})
into (\ref{advanced}) we obtain,
\begin{equation}} \def\eq{\begin{equation}
j^\mu_{0, int} = {\partial {\cal L} \over \partial( \partial_\mu \f^A)} s \f^A - k^\mu
\end{equation}} \def\eqe{\end{equation}
where we have used the definitions (\ref{lagr}), (\ref{fulltr}),
and (\ref{kdef}). The combinatorial factor $n!$ in (\ref{jn<k})
and (\ref{jn>k}) exactly cancels the same factor in (\ref{advanced}).
We therefore see that the interacting field corresponding to the free
current becomes exactly the full non-linear current.
We have, thus, found that going from
condition (\ref{cons}) to condition (\ref{cond3}) just corresponds
to a different technical treatment of the $\partial_\mu \delta^{(n)}$
terms which
has no influence on the fact that both conditions pose the same
conditions
on the normalization ambiguity of the physical $T_n$ distributions,
namely
the consistency conditions of the classical Noether method.
Our analysis of the condition (\ref{cons}) at the loop level
is also independent of this
slight technical rearrangement of the derivative terms.
Thus, the issue of stability can be analyzed in exactly
the same way as before (see section 4.3 of \cite{paper1}). One shows
(under the assumption that the Wess-Zumino consistency
condition has only trivial solutions)
that condition (\ref{cond3}) at loop
level also implies that the normalization ambiguity at the
loop level, $N_n(\hbar)$,
is constrained in the same way as the tree-level normalizations,
$N_n(\hbar^0)$.
Once the stability has been established
the equivalence of (\ref{cons}) and (\ref{cond3}) at
loop level follows.
Summing up, we have shown that conditions (\ref{cons}) and (\ref{cond3})
yield all consequences of non-linear symmetries for time-ordered
products before the adiabatic limit.
So at that level currents seem to be
sufficient. As mentioned in the introduction, however,
if one wants to identify the physical Hilbert space,
one may need
to use the Noether charge $ Q_{int} = \int d^3x j^{0}_{int}(x) $.
As our Quantum Noether Condition (\ref{cond3}) shows, only in the
adiabatic limit (provided the latter exists) is the interacting
Noether current conserved. Moreover, there is an additional
technical obstacle. In the construction of the BRST charge a volume
divergence occurs. In \cite{Fredenhagen} a resolution
was proposed for the case of QED. It was also described there
how the analysis of Kugo-Ojima may hold locally.
One may expect more technical
problems in the construction of the BRST charge in the case of non-abelian
gauge theories, where the free Noether current contains
two quantum fields.
However, at least for the implementation of the symmetry
transformations in correlation functions, such an explicit construction of the
BRST charge is not necessary,
as we have shown. Symmetries are implemented
with the help of Noether currents only.
\section{Discussion}
We have presented a general method for
constructing perturbative quantum field theories with
global and/or local symmetries. The analysis was performed
in the Bogoliubov-Shirkov-Epstein-Glaser approach.
In this framework the perturbative $S$-matrix
is directly constructed in the asymptotic Fock
space, with causality and Poincar\'{e} invariance
as the only input. The construction directly yields
a finite perturbative expansion without the
need of intermediate regularization.
The invariance of the theory under a given
symmetry is imposed by requiring that the
asymptotic Noether current is conserved
at the quantum level.
The novel feature of the present discussion
with respect to the usual approach
is that our results are manifestly scheme independent.
In addition, in the conventional
approach one implicitly assumes the naive
adiabatic limit. Our construction is
done before the adiabatic limit is taken.
The difference between the two approaches
is mostly seen when the symmetry condition
is expressed in terms of the interacting
Noether current. If the interacting current
generates non-linear symmetries, it is
not conserved before the adiabatic limit
is taken. An important example is
pure gauge theory. In this case,
the global symmetry is BRST symmetry.
The interacting BRST current is not
conserved before the adiabatic limit.
Nevertheless, one may still construct
correlation functions that satisfy the
expected Ward identities.
In the present contribution and in \cite{paper1}
we analyzed the symmetry conditions assuming that
there are no true tree-level or loop-level
obstructions. The algebra of the symmetry transformation
imposes integrability conditions on the
possible form of these obstructions \cite{WZ}.
Therefore, to analyze the question of anomalies
in the present context one would have to understand
how to implement the algebra of symmetry transformations
in this framework.
This is expected to be encoded in multi-current correlation functions.
We will report on this issue in a future publication
\cite{paper2}.
The Quantum Noether Condition (\ref{cons}) or (\ref{cond3})
leads to specific constraints (equations (\ref{n<k}), (\ref{n>k}))
that the local normalization terms should satisfy.
We have seen that these conditions are equivalent to the
condition that one has an invariant action.
So, one may infer the most general solution of equations
(\ref{n<k}), (\ref{n>k}) from the most general solution
of the problem of finding an action invariant
under certain symmetry transformation rules.
For the particular case of gauge theories the global symmetry
used in the construction is BRST symmetry. In EG one always works
with a gauge-fixed theory, since one needs to have
propagators for all fields. Therefore, the symmetry
transformation rules are the gauge-fixed ones. Physics, however,
should not depend on the particular gauge fixing chosen.
The precise connection between the results of the gauge invariant
cohomology (which may be derived with the help of the
antifield formalism\cite{BV,HT}) and
the present gauge-fixed formulation will
be presented elsewhere \cite{HHS}.
The symmetry condition we proposed involves the (Lorentz invariant)
condition of conservation of the Noether current.
There are cases, however, where one has a charge that generates
the symmetry but not a Noether current (for this to happen
the theory should not possess a Lagrangian).
A more fundamental formulation that will also cover these
cases may be to demand that the charge that generates the symmetry
is conserved at the quantum level (i.e. inside correlation
functions). A precise formulation of this condition
may require a Hamiltonian reformulation
of the EG approach. Such a reformulation may be interesting in its
own right.
\section*{Acknowledgements}
We thank Klaus Fredenhagen and Raymond Stora for discussions.
KS is supported by the Netherlands Organization for Scientific
Research (NWO).
\section{INTRODUCTION}
Eigenphase shifts $\delta_i$ ($i$ = 1,2,$\dots , n$) of the $S$
matrix, defined as
\begin{equation}
S = U e^{2i \delta} \tilde{U}
\label{S_out_bc}
\end{equation}
have been utilized as a tool for analyzing
resonances\cite{Burke69,Truhlar87}. Eigenphase shifts and the
corresponding eigenchannels are also extensively used in various forms
in multichannel quantum-defect theory (MQDT), which is one of the most
powerful theories of resonance\cite{FanoBook}. In MQDT, the first
derivatives of eigenphase shifts as functions of energy are used for
various purposes\cite{Fano70}. The Lu-Fano plot is essentially a
plot of the energy derivative of an eigenphase
shift\cite{FanoBook}. The first derivative of an eigenphase shift as a
function of energy, called ``a partial delay time'', has also studied
as a relevant quantity to time delays in various
fields\cite{Fyodorov97}.
In spite of their wide use, studies of the behaviors of eigenphase
shifts and their energy derivatives in the neighborhood of a resonance
have not been done extensively, compared to the studies of
photo-fragmentation cross sections and the $S$ matrix itself. Eigenphase
shifts in the multichannel system are known to show complicated
behaviors near a resonance due to the avoided crossing between curves
of eigenphase shifts along the energy\cite{Burke69,Macek70}. In the
previous work, detailed studies of the behaviors of eigenphase shifts
and times delayed by collision were done for the system of one
discrete state and two continua\cite{Lee98}. Compared to the system
of one discrete state and one continuum, the additional
continuum brings about, besides the resonance behavior, the new
phenomena of the avoided crossing between curves of eigenphase shifts
and eigentimes delayed and of times delayed due to the change in frame
transformation along the energy. Thus the system of one discrete
state and two continua provides a prototypical system for the study of
two effects on the resonance phenomena: avoided crossings between
eigenphase shifts and eigentimes delayed as one effect, and the change
in frame transformation as the other.
Previous work showed that the eigenphase shifts and eigentimes delayed due
to the avoided crossing interaction, and the eigentimes delayed due to the
change in frame transformation, are functionals of the Beutler-Fano
formula when expressed in appropriate dimensionless energy units and line
profile indices\cite{Lee98}. Parameters representing the avoided
crossing of eigenphase shifts and eigentime delays were
identified. The eigentime delays due to a
change in frame transformation were shown to be described by the same
parameters. With the help of the new parameters, the rotation angle
$\theta$ and rotation axis $\hat{n}$ for the $S$ matrix $\exp [
i(a+\theta \sigma \cdot \hat{n} )]$ were identified. The time delay
matrix $Q$ was shown to be given as $Q = \frac{1}{2} \tau_r ({\bf 1} +
{\bf P}_a \cdot \boldsymbol{\sigma} + {\bf P}_f \cdot
\boldsymbol{\sigma} )$ where the first term is the time delay due to
the resonance, the second term is the one due to the avoided crossing
interaction and the last term is the one due to the change in frame
transformation.
Though previous work found that behaviors of eigenphase shifts and
eigentime delays as functions of energy follow
Beutler-Fano's formula, it could not explain why they follow
Beutler-Fano's formula. Since the system considered in the previous
work is essentially a two-channel system (with one additional discrete
state), an analogy with the spin system was made and utilized but not fully
exploited. One of the main purposes of the present paper is to exploit
the analogy further. In particular, the homomorphism of the spin model
with the three-dimensional rotation group will be fully exploited to
construct geometric structures made up of dynamical parameters for
eigenphase shifts and eigentime delays, and thus to derive
Beutler-Fano's formula geometrically. The geometrical realization
clarifies ambiguities in the relations and hitherto unexplained meanings
of the dynamical parameters of the previous work, since the geometric
constructions appeal to our intuition and are thus easy to understand,
and it provides a means of viewing complicated relations in a simple
way. This clarification of complicated dynamical relations through
the geometrical realization is another main goal of this paper.
Section II summarizes the previous results. Section III modifies the
previous results slightly, to make them suitable for the geometrical
realization. Section IV gives a geometrical realization of the
previous results. Section V connects the geometrical relations with
the dynamical ones. Section VI applies the theory developed in
Ref. \cite{Lee98} and the present paper to the vibrational
predissociation of triatomic van der Waals molecules. Section VII
summarizes and discusses the results.
\section{Summary of the previous result}
Ref. \cite{Lee98} examined eigenphase shifts and eigentime delays in
the neighborhood of an isolated resonance for the system of one discrete
state and two continua as a prototypical system for the study of the
combined effects of the resonance and the indirect continuum-continuum
interaction via a discrete state. The $S$ matrix for an isolated
resonance system is well known and, if the background $S^0$ matrix is
described by its eigenphase shifts $\delta^0$ as $S^0$= $U^0 e^{-2i
\delta^0} \tilde{U}^0$, takes the form\cite{TaylorBook}
\begin{equation}
S_{jk} = \sum_{l,m} U_{jl}^0 e^{-i \delta_l^0} \left( \delta_{lm} + i
\frac{\sqrt{\Gamma_l \Gamma_m}}{E-E_0 -i \Gamma /2} \right) e^{-i
\delta_m^0} \tilde{U}_{mk}^0 ,
\label{S_res}
\end{equation}
where $\Gamma_l$, $\Gamma$, and $E_0$ are the partial decay width of a
resonance state into the $l$th background
eigenchannel\cite{ChannelTerm}, the total decay width $\sum_l
\Gamma_l$, and the resonance energy, respectively. Eq. (\ref{S_res})
is for the incoming wave boundary condition. The formula for the
outgoing wave boundary condition differs from Eq. (\ref{S_res}) in
that $i$ is replaced by $-i$.
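Note in passing that the unitarity of (\ref{S_res}) can be checked in
one line. Writing the expression in parentheses as ${\bf 1} + i v
\tilde{v} / (E - E_0 - i \Gamma /2)$ with $v_l = \sqrt{\Gamma_l}$, so
that $\tilde{v} v = \Gamma$, one finds
\begin{equation}
\left( {\bf 1} + \frac{i v \tilde{v}}{E - E_0 - i \Gamma /2} \right)
\left( {\bf 1} - \frac{i v \tilde{v}}{E - E_0 + i \Gamma /2} \right)
= {\bf 1} ,
\end{equation}
because the cross terms give $- \Gamma \, v \tilde{v} / [ (E - E_0 )^2
+ \Gamma^2 /4 ]$, which is cancelled by the product of the two resonant
terms, $+ \Gamma \, v \tilde{v} / [ (E - E_0 )^2 + \Gamma^2 /4 ]$.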
By diagonalizing Eq. (\ref{S_res}), eigenphase shifts $\delta$ of
the $S$ matrix (=$U e^{-2i \delta }\tilde{U}$) for the system of one
discrete state and two continua were obtained as
\begin{equation}
2 \delta_{\pm} (E) = \sum_i \delta_i^{0} + \delta_r (E) \pm
\delta_a (E) ,
\label{eigenphase_shifts}
\end{equation}
where $\delta_r (E)$ is the well-known resonance phase shift due to
the modification of the scattering wave by the quasi-bound state and
given by $- \arctan (1/\epsilon_r )$ and $\delta_a (E)$ is the one due
to the modification of the scattering wave by the other wave through
the indirect interaction via the quasi-bound state and was found to be
given as a functional of the Beutler-Fano formula\cite{Fano61}
\begin{equation}
\cot \delta_a (E) = - \cot \Delta_{12}^{0}
\frac{\epsilon_a -q_{a} }{(1+\epsilon_a^2 )^{1/2}} ,
\label{delta_a}
\end{equation}
in the dimensionless energy scale defined by
\begin{equation}
\epsilon_a \equiv \frac{2(E-E_{a})}{\Gamma_{a}} ,
\label{e_a}
\end{equation}
where $\Gamma_a = 2 \sqrt{\Gamma_1 \Gamma_2}/ | \sin\Delta_{12}^{0} |$
and $E_a$ =$E_{0}+ \frac{\Delta\Gamma}{2}\cot\Delta_{12}^{0}$
($\Delta \Gamma = \Gamma_1 - \Gamma_2$, $\Delta_{12}^0 = \delta_1^0 -
\delta_2^0$). Its form as a functional of the Beutler-Fano formula
can be shown more explicitly by using the Beutler-Fano function
\begin{equation}
f_{{\rm BF}} (\epsilon ,q) \equiv \frac{(\epsilon -q)^2}{1+\epsilon
^2} ,
\label{fBF}
\end{equation}
as
\begin{equation}
\cot \delta_a (E ) =
\left\{ \begin{array}{ll}
\cot \Delta_{12}^{0} \sqrt{f_{{\rm BF}} (\epsilon_a , q_{a} )} &
{\rm when}~\epsilon_a
< q_a \\
- \cot \Delta_{12}^{0} \sqrt{f_{{\rm BF}} (\epsilon_a , q_{a} )}
& {\rm when}~\epsilon_a \ge q_a .
\end{array} \right.
\label{delta_a_bf}
\end{equation}
The line profile index $q_a$ of the curve of $\delta_a (E)$ is given
by
\begin{equation}
q_a = -\frac{ \Delta\Gamma} { 2\sqrt{\Gamma_1 \Gamma_2 } \cos
\Delta_{12}^{0} }.
\label{q_a}
\end{equation}
When $\epsilon_a$ = $q_a$, $\delta_a$ = $\pi /2$ and the
difference in ordinates of the two eigenphase shift curves is $\pi /2$,
which is the largest separation of the two curves when $\delta_a$ is
defined up to $\pi$. Therefore, the line profile index $q_a$ also
stands for the energy of maximal avoidance of eigenphase shifts.
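As a simple illustration, consider equal partial widths, $\Gamma_1 =
\Gamma_2$, so that $\Delta \Gamma = 0$. Then $q_a = 0$ and $E_a = E_0$,
and Eq. (\ref{delta_a}) reduces to
\begin{equation}
\cot \delta_a (E) = - \cot \Delta_{12}^{0} \,
\frac{\epsilon_a}{\sqrt{1+\epsilon_a^2}} ,
\end{equation}
a profile symmetric about the resonance energy; in this special case
the energy of maximal avoidance ($\epsilon_a = q_a = 0$) coincides with
the avoided crossing point energy.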
Eq. (\ref{S_res}) shows that the eigenphase sum $\delta_{\Sigma}$ is
given by $\delta_{\Sigma}$ = $\sum_i \delta_i^0$ + $\delta_r$ =
$\delta_{\Sigma}^0 $ + $\delta_r$, in conformity with Hazi's
formula\cite{Hazi79}.
Let us define ${\cal S}$ by $S$ = $U^0 {\cal S} \tilde{U}^0$. Let
${\cal S}$ be diagonalized by the $V$ matrix composed of eigenvectors
corresponding to $\delta_+$ and $\delta_{-}$ as $V$ =
($v_{+},v_{-}$). The $V$ matrix was obtained as
\begin{equation}
V = \left( \begin{array}{cc} \cos\frac{\theta_{a}}{2} & - \sin
\frac{\theta_{a}}{2} \\ \sin \frac{\theta_{a}}{2} & \cos
\frac{\theta_{a}}{2} \end{array} \right) ,
\label{V}
\end{equation}
where $\theta_{a}$ is defined by
\begin{equation}
\cos\theta_{a} = -\frac{\epsilon_a}{\sqrt{1+\epsilon_a
^2}} , ~~
\sin\theta_{a} = \frac{1}{\sqrt{1+\epsilon_a
^2}} .
\end{equation}
Eigenvectors are independent of $q_{a}$. They depend only on
$\epsilon_a$. As $\epsilon_a$ varies from $-\infty$ through zero to
$\infty$, $\theta_{a}$ varies from zero through $\pi/2$ to $\pi$ and
$v_{+}$ varies from $\left( \begin{array}{cc}1\\ 0 \end{array}
\right)$ through $\frac{1}{\sqrt{2}} \left( \begin{array}{cc}1\\ 1
\end{array} \right)$ to $\left( \begin{array}{cc}0\\ 1 \end{array}
\right)$ . Thus, at $\epsilon_a$ = 0 or at $E$ = $E_{0}$ +
$\frac{\Delta\Gamma}{2}\cot \Delta_{12}^{0}$, two background
eigenchannels are mixed equally. For this reason $\epsilon_a$ = 0 is
regarded as the avoided crossing point energy. This energy does not
coincide with the energy $\epsilon_a$ = $q_a$ where two eigenphase
shift curves are separated most. Let $U$ = $U^0 V$, then $U$
diagonalizes the $S$ matrix, that is, the transform $\tilde{U} S U$ is
the diagonal matrix $e^{-2i \delta}$. The $U$ matrix is obtained from
the $V$ matrix by replacing $\theta_a$ with $\theta_a' = \theta_a +
\theta^{0}$, where $\theta^0$ parametrizes the $U^{0}$ matrix as
\begin{equation}
U^{0} = \left( \begin{array}{cc} \cos\frac{\theta^{0}}{2} & - \sin
\frac{\theta^{0}}{2} \\ \sin \frac{\theta^{0}}{2} & \cos
\frac{\theta^{0}}{2} \end{array} \right) .
\label{U_0}
\end{equation}
With the new parameters and Pauli's spin matrices, the $S$ matrix was
found to be expressible as
\begin{equation}
S = e^{-i\left( \delta_{\Sigma} {\bf 1} +\delta_a
\boldsymbol{\sigma} \cdot \hat{n}_a' \right) } ,
\label{S_final}
\end{equation}
where
\begin{equation}
\hat{n}_a' = \hat{z} \cos \theta_a' + \hat{x} \sin
\theta_a' .
\label{na}
\end{equation}
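Equation (\ref{S_final}) makes the eigenphase shifts
(\ref{eigenphase_shifts}) transparent: since $( \boldsymbol{\sigma}
\cdot \hat{n}_a' )^2 = {\bf 1}$, the operator $\boldsymbol{\sigma}
\cdot \hat{n}_a'$ has eigenvalues $\pm 1$, and the eigenvalues of the
$S$ matrix are
\begin{equation}
e^{-i ( \delta_{\Sigma} \pm \delta_a )} = e^{-2i \delta_{\pm}} ,
\end{equation}
in accordance with $2 \delta_{\pm} = \delta_{\Sigma} \pm \delta_a$.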
Smith's time delay matrix $Q$ (=$i\hbar
S^{\dag}\frac{dS}{dE}$\cite{Smith,Qmatrix}) can be easily obtained by
substituting Eq. (\ref{S_final}) into its definition and was found to
consist of three terms
\begin{equation}
Q = \frac{1}{2} ({\bf 1} \tau_r +
\boldsymbol{\sigma}\cdot \hat{n}_a' \tau_a
+ \boldsymbol{\sigma}\cdot \hat{n}_f' \tau_f ) ,
\label{Q_af}
\end{equation}
one due to the resonance, one due to the avoided crossing interaction,
and one due to the change in frame transformation as a function of
energy, where
\begin{equation}
\hat{n}_f' = \hat{y} \times \hat{n}_a'
\cos \delta_a - \hat{y} \sin\delta_a ,
\label{nf}
\end{equation}
and is orthogonal to $\hat{n}_a'$.
The time delay due to the resonance takes a symmetric Lorentzian form
\begin{equation}
\tau_r (E ) = 2\hbar \frac{d \delta_r (E)}{dE} = \frac{4\hbar}
{\Gamma}\frac{1}{1+\epsilon_r ^2} ,
\label{tau_r}
\end{equation}
and the time delay due to the avoided crossing was found to take the
form of a functional of the Beutler-Fano formula
\begin{eqnarray}
\tau_a (E ) &=&
- \tau_r (E) \frac{\epsilon_r - q_{\tau}}
{\sqrt{( \epsilon_r - q_{\tau} )^2 + r^2 (1+\epsilon_r ^2 )}}
\label{tau_a}
\\
&=&
\left\{ \begin{array}{ll}
\tau_r (E )
\sqrt{\frac{f_{{\rm BF}} (\epsilon_r ,q_{\tau})}{f_{{\rm BF}} (\epsilon_r
,q_{\tau} ) +r^2 }}
&{\rm when}~\epsilon_r \le q_{\tau}\\
- \tau_r (E )
\sqrt{\frac{f_{{\rm BF}} (\epsilon_r ,q_{\tau})}{f_{{\rm BF}} (\epsilon_r
,q_{\tau} ) +r^2 }}
&{\rm when}~\epsilon_r > q_{\tau} ,
\end{array} \right.
\label{tau_a_bf}
\end{eqnarray}
where parameters $r$ and $q_{\tau}$ are defined by
\begin{equation}
r \equiv \frac{\sqrt{\Gamma ^2 - \Delta\Gamma ^2} }{\Delta \Gamma }
,\label{r_2}
\end{equation}
\begin{equation}
q_{\tau} \equiv \frac{\Gamma}{\Delta\Gamma} \cot\Delta_{12}^{0} .
\label{qtau}
\end{equation}
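Since Eq. (\ref{fBF}) lies outside this excerpt, the numerical sketch below assumes the profile $f_{\rm BF}(\epsilon, q) = (\epsilon - q)^2/(1+\epsilon^2)$ and checks that the closed form (\ref{tau_a}) reproduces the piecewise form (\ref{tau_a_bf}); $\tau_r$ factors out and is set to 1, and $q_{\tau}$ and $r$ are illustrative:

```python
import math

def f_bf(eps, q):                        # assumed Beutler-Fano profile
    return (eps - q) ** 2 / (1 + eps ** 2)

def tau_a_closed(eps, q, r):             # Eq. (tau_a), with tau_r = 1
    return -(eps - q) / math.sqrt((eps - q) ** 2 + r ** 2 * (1 + eps ** 2))

def tau_a_piecewise(eps, q, r):          # Eq. (tau_a_bf), with tau_r = 1
    mag = math.sqrt(f_bf(eps, q) / (f_bf(eps, q) + r ** 2))
    return mag if eps <= q else -mag

q, r = 1.5, 0.8                          # illustrative profile index and width ratio
worst = max(abs(tau_a_closed(e, q, r) - tau_a_piecewise(e, q, r))
            for e in [x / 10 for x in range(-50, 51)])
print(worst < 1e-12)
```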
The asymmetry of $\tau_a$ as a function of energy is brought about by
the nonzero value of $q_{\tau}$, which is proportional to the shift of
the avoided crossing point energy from the resonance energy; that is,
the asymmetry of $\tau_a$ reflects the mismatch between the positions of
the avoided crossing point and the resonance. The time delay due
to a change in frame transformation was found to take the following
form\cite{Convention1}
\begin{equation}
\tau_f (E) = \tau_r (E) \frac{|r|}{\sqrt{f_{\rm BF} (\epsilon_r
,q_{\tau} ) +r^2 }} .
\label{tauf_bf}
\end{equation}
Because of the last term of
Eq. (\ref{Q_af}), eigenfunctions of the $Q$ matrix are different from
those of the $S$ matrix. The eigentime delay sum, which is equal to
$\sum_i Q_{ii}$ = ${\rm Tr} Q$, is obtained as
\begin{equation}
\sum_i Q_{ii} = {\rm Tr} Q = \tau_r ,
\label{eigentime_delay_sum}
\end{equation}
since ${\rm Tr} \boldsymbol{\sigma}$ = 0. The transforms
$\tilde{U}^0 SU^0$ and $\tilde{U}^0 Q U^0$, which will be
denoted as ${\cal S}$ and ${\cal Q}$, will prove more
convenient later for the geometrical considerations. The transforms
${\cal S}$ and ${\cal Q}$ are the scattering and time delay matrices
with the background eigenchannel wavefunctions as a basis instead of
the asymptotic channel wavefunctions\cite{ChannelTerm}. In terms of
the new parameters and Pauli's spin matrices, their forms are the same as
those of $S$ and $Q$ but with the vectors $\hat{n}_{a}$ and $\hat{n}_{f}$,
which are obtained from $\hat{n}_a'$ and $\hat{n}_f'$ by replacing
$\theta_a'$ with $\theta_a$.
The connection of the time delay matrix $Q$ with the time delay
experienced by a wave packet was first considered by
Eisenbud\cite{Eisenbud} and extended by
others\cite{GoldbergerBook}. According to their work, $Q_{ii}$ is the
average time delay experienced by a wave packet injected in the $i$th
channel. Here, the average time delays due to the avoided crossing
interaction are given by $\tau_a \cos \theta_a' /2$ and $- \tau_a \cos
\theta_a' /2$. Similarly, the average time delays due to the change in
frame transformation are $- \tau_f \sin \theta_a' \cos \delta_a /2$
and $\tau_f \sin \theta_a' \cos \delta_a /2$. Time delays due to the
avoided crossing interaction and the change in frame transformation
are out of phase by $\pi /2$. Overall, $Q_{11}$ = $\frac{1}{2}
(\tau_r$ + $\tau_a \cos \theta_a'$ $-$ $\tau_f \sin \theta_a' \cos
\delta_a )$ and $Q_{22}$ = $\frac{1}{2}
(\tau_r$ - $\tau_a \cos \theta_a'$ $+$ $\tau_f \sin \theta_a' \cos
\delta_a )$.
In analogy with the spin $\frac{1}{2}$ system, the time delay matrix
$Q$ was expressed in terms of polarization vectors and the Pauli
spin matrices as
\begin{equation}
Q = \frac{1}{2} \tau_r \left( {\bf 1} + {\bf P}_a \cdot
\boldsymbol{\sigma} + {\bf P}_f \cdot \boldsymbol{\sigma} \right) ,
\label{Qaf}
\end{equation}
where
polarization vectors are defined by
\begin{equation}
{\bf P}_a = \frac{\tau_a}{\tau_r} \hat{n}_a' , ~~~
{\bf P}_f = \frac{\tau_f}{\tau_r} \hat{n}_f' .
\label{PaPf}
\end{equation}
As in the spin $\frac{1}{2}$ system, it was found that the absolute
values of ${\bf P}_a$ and ${\bf P}_f$ are restricted to $0 \le |{\bf
P}_a | \le 1$ and $0 \le | {\bf P}_f | \le 1$. In the present case
complete depolarization means that the eigentime delays are the same
regardless of eigenchannels, while complete polarization means that
the eigentime delays are 0 for one eigenchannel and $\tau_r (E )$ for
the other.
Eigenvectors for eigentime delays due to an avoided crossing
interaction and due to a change in frame transformation are orthogonal
to each other and contribute to the total eigentime delays as
$\sqrt{\tau_a ^2 + \tau_f ^2} = \tau_r \sqrt{|{\bf P}_a |^2 + |{\bf
P}_f |^2}$. It was found that
\begin{equation}
|{\bf P}_a |^2 + |{\bf P}_f |^2 =1 .
\label{Pt_magnitude}
\end{equation}
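Eq. (\ref{Pt_magnitude}) can be confirmed numerically from Eqs. (\ref{tau_a}) and (\ref{tauf_bf}): $|{\bf P}_a|^2 + |{\bf P}_f|^2 = (\tau_a^2 + \tau_f^2)/\tau_r^2$ is identically 1. The sketch assumes $f_{\rm BF}(\epsilon, q) = (\epsilon - q)^2/(1+\epsilon^2)$ and uses arbitrary parameter values:

```python
import math

def f_bf(eps, q):                        # assumed Beutler-Fano profile
    return (eps - q) ** 2 / (1 + eps ** 2)

def pol_sum(eps, q, r):
    pa2 = f_bf(eps, q) / (f_bf(eps, q) + r ** 2)   # (tau_a / tau_r)^2
    pf2 = r ** 2 / (f_bf(eps, q) + r ** 2)         # (tau_f / tau_r)^2
    return pa2 + pf2

ok = all(abs(pol_sum(e, 1.5, 0.8) - 1.0) < 1e-12
         for e in [-4.0, -0.3, 0.0, 1.5, 7.2])
print(ok)
```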
Since ${\bf P}_a$ and ${\bf P}_f$ are mutually orthogonal and $| {\bf
P} _a|^2 + |{\bf P} _f|^2 =1$, we can define a vector ${\bf P} _t =
{\bf P} _a + {\bf P} _f$, whose magnitude is unity. Its formula may
be obtained straightforwardly but is hardly used. Instead, the formula of
its transform
$\pmb{\cal P}_t$ = $\tilde{U}^0 {\bf P}_t U^0$
is exclusively used, which is much simpler and is given by
\begin{equation}
\pmb{\cal P}_t \equiv \hat{n}_t = \left( \cos \Delta_{12}^0
\frac{\sqrt{\Gamma^2
- \Delta \Gamma^2}}{\Gamma}, - \sin \Delta_{12}^0
\frac{\sqrt{\Gamma^2 -
\Delta \Gamma^2}}{\Gamma} , \frac{\Delta \Gamma}{\Gamma}\right) .
\label{nt}
\end{equation}
The transform $\pmb{\cal P}_t$ is the total polarization vector
for the time delay matrix ${\cal Q}$ (=$\tilde{U}^0 Q U^0$)
with background eigenchannels used as the basis. The similar
transforms $\pmb{\cal P}_a$ and $\pmb{\cal P}_f$ of ${\bf P}_a$ and
${\bf P}_f$ will be used later and satisfy the same relations
$\pmb{\cal P}_t$ = $\pmb{\cal P}_a$ + $\pmb{\cal P}_f$ and
\begin{equation}
|\pmb{\cal P}_a |^2 + |\pmb{\cal P}_f |^2 =1 .
\label{Pt_cal_magnitude}
\end{equation}
With the total polarization vector, the time
delay matrix becomes
\begin{equation}
Q = \frac{1}{2} \tau_r ( {\bf 1} + {\bf P} _t \cdot
\boldsymbol{\sigma} ) .
\label{Q_t}
\end{equation}
Since $( {\bf P}_t \cdot \boldsymbol{\sigma})^2$ = 1, the eigenvalues of
$Q$, or total eigentime delays, are obtained as zero and $\tau_r$, the
time delayed by the resonance state. Though the time delays due to an
avoided crossing interaction and a change in frame transformation are
asymmetric with respect to the resonance energy, so that the
energies of their longest lifetimes do not match the resonance
energy, the energy of the longest overall eigentime delay
coincides exactly with the resonance energy.
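This eigenvalue statement can be checked directly: for any unit ${\bf P}_t$, the $2 \times 2$ Hermitian matrix of Eq. (\ref{Q_t}) has eigenvalues $0$ and $\tau_r$. A sketch with an arbitrary direction of ${\bf P}_t$:

```python
import math

tau_r = 3.0
th, ph = 1.1, 0.4                        # arbitrary direction angles of P_t
P = (math.sin(th) * math.cos(ph), math.sin(th) * math.sin(ph), math.cos(th))
# Q = (tau_r / 2)(1 + P.sigma) written out as a 2x2 Hermitian matrix
Q = [[tau_r / 2 * (1 + P[2]),         tau_r / 2 * (P[0] - 1j * P[1])],
     [tau_r / 2 * (P[0] + 1j * P[1]), tau_r / 2 * (1 - P[2])]]
tr = (Q[0][0] + Q[1][1]).real
det = (Q[0][0] * Q[1][1] - Q[0][1] * Q[1][0]).real
disc = math.sqrt(tr * tr - 4 * det)      # eigenvalues of a 2x2 matrix
evals = sorted([(tr - disc) / 2, (tr + disc) / 2])
print(abs(evals[0]) < 1e-12, abs(evals[1] - tau_r) < 1e-12)
```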
\section{Preparation for the Geometrical Realization}
\label{Sec:Prep}
In the previous work, several intriguing features were noticed but
could not be explained. They are summarized below.
\begin{itemize}
\item Why are eigenvectors of the $S$ matrix independent of $q_a$ while
its eigenphase shifts are not?
\item Why do the energy behaviors of $\delta_a (E)$, $\tau_a (E)$,
and $\tau_f (E)$
follow Beutler-Fano formulas?
\item Why does $\tau_a$ take the Beutler-Fano formula in the energy scale
of $\epsilon_r$ instead of $\epsilon_a$, in contrast to the case of
$\delta_a$, though the former is obtained as the derivative of the
latter?
\item Why is $|{\bf P}_a |^2 + |{\bf P}_f |^2$ = 1 satisfied?
\item What is the meaning of the parameter $r^2$?
\end{itemize}
In the previous work, we gained some insight by drawing an analogy
between the system and a spin model, especially in interpreting the time delay
matrix $Q$ with the polarization vectors ${\bf P}_a$ and ${\bf P}_f$, which
are borrowed from the spin model. But the analogy with the spin model
was not fully exploited. Here we show that, by
exploiting the analogy further, we can answer the
above questions. In particular, we succeed in giving a geometrical
realization of the Beutler-Fano formulas.
Before starting the geometrical realization of the previous results,
let us first rewrite some of them in a form suitable for the
geometrical realization.
First, we notice that Eqs. (\ref{delta_a}) and (\ref{tau_a}) are
simpler than the corresponding Eqs. (\ref{delta_a_bf}) and
(\ref{tau_a_bf}). This indicates that the square root of the
Beutler-Fano formula (\ref{fBF}) seems to be more fundamental than the
original one. Next we notice that
Eq. (\ref{delta_a}) resembles $\cot \delta_r$ = $-\epsilon_r$
and $\cot \theta_a$ = $-\epsilon_a$. Thus the square root of the
Beutler-Fano formula may be regarded as an energy parameter
$\epsilon_{\rm BF} (\epsilon , q, \theta^0 )$. Then
Eq. (\ref{delta_a}) takes the suggestive form
\begin{equation}
\cot \delta_a = - \epsilon_{\rm BF} (\epsilon_a , q_a ,
\Delta_{12}^0 ),
\end{equation}
where
\begin{equation}
\epsilon_{\rm BF} (\epsilon_a , q_a ,
\Delta_{12}^0 ) = \cot \Delta_{12}^0 \frac{\epsilon_a - q_a
}{\sqrt{\epsilon_a^2 +1}}
\end{equation}
($\Delta_{12}^0$ is the value of $\delta_a$ at $\epsilon_a$
$\rightarrow$ $-\infty$). But there is one drawback when the square
root of the Beutler-Fano formula is considered as an energy
parameter: it is not a monotonically increasing function of energy. It
has a minimum when $q>0$ and a maximum when $q<0$. Hence $\epsilon_{\rm
BF}$ will be considered here merely as a convenient notation.
Eq. (\ref{Pt_cal_magnitude}) suggests another angle $\theta_f$
satisfying $\pmb{\cal P}_a$ = $\hat{n}_a \cos \theta_f$ and $\pmb{\cal
P}_f$ = $\hat{n}_f \sin \theta_f$. Its cotangent is obtained as
\begin{equation}
\cot \theta_f = - \frac{1}{r} \frac{\epsilon_r -
q_{\tau} } {\sqrt{\epsilon_r ^2 +1}} .
\label{cotf'}
\end{equation}
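Eq. (\ref{cotf'}) is consistent with the defining relations $\pmb{\cal P}_a$ = $\hat{n}_a \cos \theta_f$ and $\pmb{\cal P}_f$ = $\hat{n}_f \sin \theta_f$, which imply $\cot \theta_f = \tau_a / \tau_f$. The sketch below checks this numerically, assuming $f_{\rm BF}(\epsilon, q) = (\epsilon - q)^2/(1+\epsilon^2)$ and $r > 0$, with arbitrary parameter values:

```python
import math

def ratio_tau(eps, q, r):                # tau_a / tau_f from Eqs. (tau_a), (tauf_bf)
    tau_a = -(eps - q) / math.sqrt((eps - q) ** 2 + r ** 2 * (1 + eps ** 2))
    f_bf = (eps - q) ** 2 / (1 + eps ** 2)
    tau_f = r / math.sqrt(f_bf + r ** 2)
    return tau_a / tau_f

def cot_theta_f(eps, q, r):              # Eq. (cotf')
    return -(1.0 / r) * (eps - q) / math.sqrt(eps ** 2 + 1)

q, r = 1.5, 0.8
ok = all(abs(ratio_tau(e, q, r) - cot_theta_f(e, q, r)) < 1e-12
         for e in [-3.0, -0.5, 0.0, 0.7, 4.2])
print(ok)
```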
Eq. (\ref{cotf'}) indicates that $\cot \theta_f$ approaches $1/r$ as
$\epsilon_r$ $\rightarrow$ $- \infty$. The value of $\theta_f$ at
$\epsilon_r$ $\rightarrow$ $- \infty$ is identified with the angle
which the polarization vector $\pmb{\cal P}_t$ or $\hat{n}_t$ makes
with $\hat{n}_a$. That angle will be denoted as
$\theta_t$. Eq. (\ref{nt}) shows that the angle $\theta_t$ is obtained
as
\begin{eqnarray}
\cos \theta_t &=& \frac{\Delta \Gamma}{\Gamma} \nonumber , \\ \sin
\theta_t &=& \frac{\sqrt{\Gamma ^2 - \Delta \Gamma ^2}}{\Gamma}
\label{theta_t}
\end{eqnarray}
and with it the spherical polar coordinate of $\hat{n}_t$ is given by
(1,$\theta_t$,$-\Delta_{12}^0$). Now with $\theta_t$,
Eq. (\ref{cotf'}) becomes
\begin{equation}
\cot \theta_f = - \cot \theta_t \frac{\epsilon_r - q_{\tau} }
{\sqrt{\epsilon_r ^2 +1}} = - \epsilon_{\rm BF} (\epsilon_r, q_{\tau},
\theta_t ) .
\label{cotf}
\end{equation}
With the new angle $\theta_f$,
$\tau_a$ becomes $\tau_r \cos \theta_f$, which explains the
complicated form of $\tau_a$ as a functional of the Beutler-Fano
function in contrast to that of $\cot \delta_a$.
As a result of rewriting, we obtain four equations
\begin{eqnarray}
\cot \delta_r &=& - \epsilon_r , \nonumber
\\
\cot \theta_a &=& - \epsilon_a ,
\nonumber
\\
\cot \delta_a &=& - \epsilon_{\rm BF} (\epsilon_a , q_a ,
\Delta_{12}^0 ) ,
\nonumber \\
\cot \theta_f &=& - \epsilon_{\rm BF} (\epsilon_r , q_{\tau} ,
\theta_t ) .
\label{cot_energy}
\end{eqnarray}
The use of the geometrical parameters, $\delta_r$ and $\theta_a$, in
place of $\epsilon_r$ and $\epsilon_a$ makes the geometrical
realization of dynamic relations possible. Our aim is to obtain the
dynamic formulas for $\epsilon_{\rm BF} (\epsilon_a , q_a ,
\Delta_{12}^0 )$ and $\epsilon_{\rm BF} (\epsilon_r , q_{\tau} ,
\theta_t ) $ by converting the geometric relations containing $\delta_a$
and $\theta_f$ back into dynamic
ones. We will sometimes abbreviate $\epsilon_{\rm BF} (\epsilon_a
,q_a, \Delta_{12}^0 )$ as $\epsilon_{\rm BF,a}$ and $\epsilon_{\rm BF}
(\epsilon_r , q_{\tau} , \theta_t )$ as $\epsilon_{\rm BF,r}$. Before
ending this section, let us note the following formulas for the line
profile indices $q_a$ and $q_{\tau}$
\begin{eqnarray}
q_a &=& \frac{\cot \delta_a (\epsilon_a =0 )}{\cot \delta_a
(\epsilon_a \rightarrow -\infty )} , \nonumber\\
q_{\tau} &=& \frac{\cot \theta_f (\epsilon_r =0 )}{\cot \theta_f
(\epsilon_r \rightarrow -\infty )} .
\end{eqnarray}
They can also be expressed as
\begin{eqnarray}
q_a &=& \frac{\cot \delta_a}{\cot \Delta_{12}^0} ~~~~{\rm when}~
\theta_a = \frac{\pi}{2} ,
\nonumber\\
q_{\tau} &=& \frac{\cot \theta_f}{\cot \theta_t} ~~~~{\rm when}~
\delta_r = \frac{\pi}{2} .
\label{qaf_new_def}
\end{eqnarray}
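The first of these relations can be checked numerically from Eq. (\ref{cot_energy}): the ratio of $\cot \delta_a$ at $\epsilon_a = 0$ to its $\epsilon_a \rightarrow -\infty$ limit recovers $q_a$. A sketch with arbitrary values of $q_a$ and $\Delta_{12}^0$:

```python
import math

def cot_delta_a(eps, q, Delta12):
    # cot(delta_a) = -cot(Delta12) (eps - q) / sqrt(eps^2 + 1)
    return -(eps - q) / (math.tan(Delta12) * math.sqrt(eps ** 2 + 1))

q_a, Delta12 = 2.3, 0.6
at_zero = cot_delta_a(0.0, q_a, Delta12)
at_minus_inf = cot_delta_a(-1e9, q_a, Delta12)   # numerical stand-in for the limit
print(abs(at_zero / at_minus_inf - q_a) < 1e-6)
```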
\section{Geometrical Realization}
The geometrical realization is based on the fact that to each
unimodular unitary matrix in the complex two-dimensional space there is
an associated real orthogonal matrix representing a rotation in real
three-dimensional space\cite{Tinkham}. The general two-dimensional
unimodular unitary matrix can be written as $e^{- i \frac{\theta}{2}
\boldsymbol{\sigma} \cdot \hat{n}}$: its determinant equals
$e^{{\rm Tr} ( -i \frac{\theta}{2} \boldsymbol{\sigma} \cdot \hat{n} )}$,
which is unity since ${\rm Tr}(\boldsymbol{\sigma})$ = 0, so the matrix
is unimodular by definition. The associated real
orthogonal matrix will be denoted as $R_{\hat{n}} (\theta )$, the
rotation matrix about the vector $\hat{n}$ by an angle $\theta$
defined in an active sense.
Let us first consider the $S$ matrix. It is unitary but not unimodular
[det$(S)\ne 1$] and cannot be associated with a pure rotation
alone. But after extracting det($S$), which is equal to $e^{-i
\delta_{\Sigma} }$ for isolated resonances (a similar formula holds
for overlapping resonances, where $\delta_r$ is replaced by the sum
of the contributions from all resonances\cite{Simonius74}), the remaining part of
the scattering matrix is unimodular and may be associated with a
pure rotation. According to Eq. (\ref{S_final}), the remaining part is
$e^{-i \delta_a \boldsymbol{\sigma} \cdot \hat{n}_a'}$ and may be
associated with the rotation about the vector $\hat{n}_a'$ by an angle
2$\delta_a$. We will explore this interpretation of
the $S$ matrix below by deriving Eq. (\ref{S_final}) in a more
systematic way.
In the previous section, the $S$ matrix was diagonalized by the two
unitary matrices $U^0$ and $V$ given by Eqs. (\ref{U_0}) and
(\ref{V}). Actually, the theorem in Ref. \cite{FanoRacahBook} requires
$U^0$ and $V$ to be real orthogonal since the $S$ matrix is
symmetric. With them, the $S$ matrix is rewritten as
\begin{equation}
S = U^0 V e^{-2i \delta} \tilde{V} \tilde{U}^0 .
\label{S_U0V}
\end{equation}
Note that the two unitary transformations $U^0$ and $V$ can be written
in terms of Pauli spin matrices as
\begin{eqnarray}
U^0 &=& e^{-i \frac{\theta^0}{2} \boldsymbol{\sigma} \cdot \hat{y}}
,\nonumber \\ V &=& e^{-i \frac{\theta_a}{2} \boldsymbol{\sigma} \cdot
\hat{y}} .
\label{U0V}
\end{eqnarray}
Notice that the argument matrices of the two exponentials commute,
and therefore $U^0V$ = $e^{-i \frac{1}{2} (\theta_a + \theta^0)
\boldsymbol{ \sigma} \cdot \hat{y}}$. As before, let us denote
$\theta_a + \theta^0$ as $\theta_a'$. The diagonalized matrix $e^{-2i
\delta}$ of the $S$ matrix can be expressed in terms of Pauli matrices
as
\begin{equation}
e^{-2i \delta} = \left( \begin{array}{cc} e^{-2i \delta_{+}} & 0 \\ 0
& e^{-2i \delta_{-}} \end{array} \right) = e^{-i(\delta_{\Sigma}{\bf
1} + \delta_a \boldsymbol{\sigma } \cdot \hat{z}) } .
\label{Sdiag}
\end{equation}
Substituting Eqs. (\ref{U0V}) and (\ref{Sdiag}) into
Eq. (\ref{S_U0V}), we obtain
\begin{equation}
S = e^{-i\frac{\theta_a'}{2} \boldsymbol{\sigma } \cdot \hat{y}}
e^{-i(\delta_{\Sigma}{\bf 1} + \delta_a \boldsymbol{\sigma } \cdot
\hat{z}) } e^{i \frac{\theta_a'}{2} \boldsymbol{\sigma } \cdot
\hat{y}}.
\label{S}
\end{equation}
In order to give the geometrical interpretation to Eq. (\ref{S}), a
long preliminary exposition is necessary. Let us start with
considering $\boldsymbol{\sigma} \cdot {\bf r}$ and transform it into
a new matrix $\boldsymbol{\sigma} \cdot {\bf r}'$ by a general 2
$\times$ 2 unitary transformation $e^{-i \frac{\theta}{2}
\boldsymbol{\sigma} \cdot \hat{n}}$ as follows
\begin{equation}
e^{-i \frac{\theta}{2} \boldsymbol{\sigma} \cdot \hat{n}} \,
\boldsymbol{\sigma} \cdot {\bf r}\, e^{i \frac{\theta}{2}
\boldsymbol{\sigma} \cdot \hat{n}} = \boldsymbol{\sigma}\cdot {\bf r}' .
\label{r}
\end{equation}
The left hand side of Eq. (\ref{r}) can be calculated
using\cite{MerzbacherBook}
\begin{equation}
e^{i\hat{S}}
\hat{O} e^{-i\hat{S}} = \hat{O} + i [\hat{S},\hat{O}] +
\frac{i^2}{2!} [\hat{S},[\hat{S},\hat{O}]] +
\frac{i^3}{3!} [\hat{S},[\hat{S},[\hat{S},\hat{O}]]] + \dots
\label{exp_sim}
\end{equation}
and $(\boldsymbol{\sigma} \cdot {\bf a}) (\boldsymbol{\sigma} \cdot
{\bf b} )$ = ${\bf a} \cdot {\bf b}$ + $i \boldsymbol{\sigma} \cdot (
{\bf a} \times {\bf b} )$ and the result is that ${\bf r}'$ is just
the vector obtained from ${\bf r}$ by the three dimensional rotation
matrix $R_{\hat{n}} (\theta )$ about the vector $\hat{n}$ by $\theta$ in
an active sense as
\begin{equation}
{\bf r}' = R_{\hat{n}} (\theta ) {\bf r} .
\label{rp}
\label{rotation}
\end{equation}
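The correspondence between Eqs. (\ref{r}) and (\ref{rp}) can be verified numerically: conjugating $\boldsymbol{\sigma}\cdot{\bf r}$ by $e^{-i\frac{\theta}{2}\boldsymbol{\sigma}\cdot\hat{n}}$ reproduces $\boldsymbol{\sigma}\cdot[R_{\hat{n}}(\theta){\bf r}]$, with the rotation evaluated via Rodrigues' formula. The axis, angle, and vector below are arbitrary:

```python
import math

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def sig_dot(v):                          # sigma . v as a 2x2 matrix
    return [[v[2], v[0] - 1j * v[1]], [v[0] + 1j * v[1], -v[2]]]

theta = 0.9
n = (1 / math.sqrt(3),) * 3              # unit rotation axis
r = (0.2, -1.0, 0.5)

# exp(-i theta/2 sigma.n) = cos(theta/2) 1 - i sin(theta/2) sigma.n (exact for Pauli)
c, s = math.cos(theta / 2), math.sin(theta / 2)
Un = [[c - 1j * s * n[2], -1j * s * (n[0] - 1j * n[1])],
      [-1j * s * (n[0] + 1j * n[1]), c + 1j * s * n[2]]]
Udg = [[Un[j][i].conjugate() for j in range(2)] for i in range(2)]
lhs = mul(mul(Un, sig_dot(r)), Udg)

# Rodrigues: R_n(theta) r = r cos + (n x r) sin + n (n.r)(1 - cos), active sense
nxr = (n[1]*r[2] - n[2]*r[1], n[2]*r[0] - n[0]*r[2], n[0]*r[1] - n[1]*r[0])
ndr = sum(n[i] * r[i] for i in range(3))
rp = tuple(r[i] * math.cos(theta) + nxr[i] * math.sin(theta)
           + n[i] * ndr * (1 - math.cos(theta)) for i in range(3))
rhs = sig_dot(rp)

err = max(abs(lhs[i][j] - rhs[i][j]) for i in range(2) for j in range(2))
print(err < 1e-12)
```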
Only in the form of the similarity transformation (\ref{r}) does the
homomorphism hold whereby a 2 $\times$ 2 unimodular unitary matrix is
associated with a three-dimensional rotation. According to this
interpretation, the unitary transformations $U^0$ and $V$ for the
symmetric $S$ matrix (\ref{S_U0V}) correspond to the three-dimensional
rotations about the $y$ axis through angles $\theta^0$ and $\theta_a$,
respectively, and their overall effect $U^0V$ is equal to the rotation
about the $y$ axis by $\theta_a'$ = $\theta_a + \theta^0$. Therefore,
the original frame transformation for the symmetric $S$ matrix acts
as a rotation about the $y$ axis in a ``hypothetical'' real
three-dimensional space. This hypothetical real three-dimensional
space is different from the Hilbert space and is called the Liouville
space. It is the space spanned by the set of vectors $\sigma_x$,
$\sigma_y$, and $\sigma_z$, which are orthogonal in the sense that
\begin{equation}
{\rm Tr} (\sigma_i \sigma_j ) = 2 \delta_{ij} ,
\end{equation}
and has been extensively studied in Ref. \cite{Fano57}. Any traceless 2
$\times$ 2 Hermitian matrix, say $h$, can be expanded in
this vector space as $h$ = $x \sigma_x + y \sigma _y + z \sigma_z$ =
($x,y,z$) = $\boldsymbol{\sigma} \cdot {\bf r}$. We can lift the
tracelessness restriction on Hermitian matrices if we include the unit
matrix ${\bf 1}$ as another basis vector in addition to $\sigma_x$,
$\sigma_y$, $\sigma_z$. Then the three-dimensional Liouville space is
a subspace of this four-dimensional Liouville space. Note that the two
subspaces \{${\bf 1}$\} and \{$\sigma_x , \sigma_y , \sigma_z $\} are
orthogonal; therefore either the subspace \{$\sigma_x , \sigma_y ,
\sigma_z$\} or \{${\bf 1 }$\}, or the whole space, may be chosen freely
as the situation demands without any trouble.
Now, Eq. (\ref{r})
can be viewed in two ways. It can be viewed as a rotation of the
vector ${\bf r}$ into ${\bf r}'$ by the rotation matrix $R_{\hat{n}}
(\theta )$ as expressed in Eq. (\ref{rotation}). Or it can be viewed as
the transformation from the $xyz$ coordinate system to the $x'y'z'$
coordinate system by the rotation matrix $R_{\hat{n}} ( - \theta
)$. The latter view, though obvious, can be shown to be true using the
following mathematical transformation. Let us regard
$\boldsymbol{\sigma}$ and ${\bf r}$ as the column vectors. Then the
scalar product $\boldsymbol{\sigma} \cdot {\bf r}'$ can be written as
a matrix multiplication of the row vector $\tilde{
\boldsymbol{\sigma}}$ with the column vector ${\bf r}'$, namely, $
\boldsymbol{\sigma} \cdot {\bf r}' = \tilde{\boldsymbol{\sigma}} {\bf
r}'$. The support for the view of the coordinate transformation is
obtained by the following transformation
\begin{equation}
\tilde{\boldsymbol{\sigma}} {\bf
r}' = \tilde{\boldsymbol{\sigma}} R_{\hat{n}} (\theta ){\bf r} =
\widetilde{ [ R_{\hat{n}} (- \theta ) \boldsymbol{\sigma} ]} {\bf r} .
\end{equation}
Since the diagonalization of the operator $\boldsymbol{\sigma} \cdot
{\bf r}$ yields its eigenchannels, the vector ${\bf r}$ in the
three-dimensional Liouville space is enough to
uniquely specify the eigenchannels of the traceless Hermitian
matrix $\boldsymbol{\sigma} \cdot {\bf r}$. Conversely, eigenchannels
may be regarded as a vector in the
three-dimensional Liouville space.
Since any real orthogonal frame transformation is of the form of
Eq. (\ref{V}), it may be generally said that a real orthogonal frame
transformation in the complex two-dimensional space corresponds to a
rotation about the $y$ axis in the real three-dimensional Liouville
space. Since the matrix corresponding to any 2 $\times$ 2 Hermitian
operator is diagonal in the basis of eigenchannels by definition of
eigenchannels and can be written as $a{\bf 1} + b \boldsymbol{\sigma}
\cdot \hat{z}$, a dynamical process along an eigenchannel corresponds
to a process along the $z$ axis in the real three-dimensional
Liouville space and leads to a variation in length of the vector.
Thus the $y$ axis in the real three-dimensional Liouville space can be
regarded as the axis for the real orthogonal frame transformations and
the $z$ axis as the axis for the dynamical processes along
eigenchannels.
We have the theorem that $C e^{B} C^{-1}$ = $e^{CBC^{-1}}$, which can be
easily proved by using Eq. (\ref{exp_sim}). Using this theorem and
Eq. (\ref{r}), we have
\begin{equation}
e^{-i \frac{\theta}{2} \boldsymbol{\sigma} \cdot \hat{n}} e^{i
\boldsymbol{\sigma} \cdot {\bf r} } e^{i \frac{\theta}{2}
\boldsymbol{\sigma} \cdot \hat{n}} = e^{i \boldsymbol{\sigma} \cdot
{\bf r}' } .
\label{exp_r}
\end{equation}
Using Eq. (\ref{exp_r}) and Eq. (\ref{rp}), Eq. (\ref{S}) becomes
Eq. (\ref{S_final}) with $\hat{n}_a'$ now interpreted as
\begin{equation}
\hat{n}_a' = R_{\hat{y}} (\theta_a' ) \hat{z} .
\label{na_rot}
\end{equation}
Or $\hat{n}_a'$ can be regarded as the $z'$ axis in the $x'y'z'$
coordinate system, i.e., $\hat{n}_a'$ = $\hat{z}'$. Let $S = e^{-2i
\boldsymbol{\Delta}'}$, i.e., $\boldsymbol{\Delta}'$ = $\frac{1}{2} (
\delta_{\Sigma} \, {\bf 1} + \delta_a \boldsymbol{\sigma} \cdot
\hat{n}_a' )$. $\boldsymbol{\Delta}'$ is a vector in the
four-dimensional Liouville space. Or, if we exclude the isotropic part
in $\boldsymbol{\Delta}'$, $\frac{1}{2} \delta_a \boldsymbol{\sigma}
\cdot \hat{n}_a'$ is a vector in the three-dimensional Liouville space
which may be obtained by rotating the vector $\frac{1}{2} \delta_a
\boldsymbol{\sigma} \cdot \hat{z}$ about the $y$ axis by an angle
$\theta_a'$. The $\frac{1}{2} \delta_{\Sigma} \, {\bf 1}$ term in
$\boldsymbol{\Delta}'$ gives the phase shift owing to the isotropic
influence of the background potential scattering and the resonance on
eigenchannels. Likewise, the $\frac{1}{2} \delta_a
\boldsymbol{\sigma} \cdot \hat{n}_a'$ term gives the phase shifts
owing to the anisotropic influence of the background scattering
potential and the resonance on eigenchannels. Therefore, the length
$\frac{1}{2} \delta_a$ of
the anisotropic term in the three-dimensional Liouville space
denotes the degree of anisotropic influence on eigenphase shifts by
the background scattering and the resonance.
Let us now consider the time delay matrix. If the time delay matrix is
written in Lippmann's suggestive form\cite{Lippman66}, $Q = S^{+} \tau
S$ ($\tau$ is the time operator defined by $i \hbar
\frac{\partial}{\partial E}$), it is apparent that the unitary matrix
which gives the similarity transformation is now the $S^+$ matrix and
can be associated with the rotation matrix according to the theorem
mentioned above when det($S$) is extracted from it. Using the relation
\begin{equation}
\frac{d^r}{dz^r} \left( e^{Az} \right) = A^r e^{Az} = e^{Az} A^r ,
\end{equation}
Eq. (\ref{S}) is easily differentiated with respect to energy to yield
\begin{equation}
\frac{dS}{dE} = -i \frac{d\delta_{\Sigma}}{dE} S -i
\frac{d\delta_a}{dE} e^{-i \delta_{\Sigma}}
e^{-i \frac{\theta_a'}{2} \boldsymbol{\sigma}
\cdot \hat{y}} e^{-i \delta_a \boldsymbol{\sigma} \cdot \hat{z}}
\boldsymbol{\sigma} \cdot \hat{z} e^{i \frac{\theta_a'}{2} \boldsymbol{\sigma}
\cdot \hat{y}} + \frac{i}{2} \frac{d \theta_a'}{dE} \left( S
\boldsymbol{\sigma} \cdot \hat{y} - \boldsymbol{\sigma} \cdot \hat{y}
S \right) .
\label{dSdE}
\end{equation}
By multiplying (\ref{dSdE}) with the adjoint of (\ref{S}),
the $Q$ matrix becomes
\begin{equation}
Q = i\hbar S^+ \frac{dS}{dE} = \hbar \frac{d\delta_{\Sigma}}{dE} {\bf
1} +
\hbar \frac{d\delta_{a}}{dE} e^{-i \frac{\theta_a'}{2}
\boldsymbol{\sigma} \cdot \hat{y}} \boldsymbol{\sigma} \cdot \hat{z}
e^{i \frac{\theta_a'}{2} \boldsymbol{\sigma} \cdot \hat{y}} +
\frac{\hbar}{2} \frac{d\theta_a'}{dE} \left( S^+ \boldsymbol{\sigma}
\cdot \hat{y}S - \boldsymbol{\sigma} \cdot \hat{y} \right) ,
\label{Q_deriv1}
\end{equation}
where use is made of the fact that $S$ is unitary and thus $S^+ S$ = 1
for the first and third terms. The matrix multiplication in the second
term of the right hand side of Eq. (\ref{Q_deriv1}) is just
$\boldsymbol{\sigma} \cdot \hat{n}_a'$, as was already computed above. The sum of the
first and second terms is the time delay due to the energy derivatives
of the eigenphase shifts and is called ``the partial delay times'' by some
authors\cite{Fyodorov97}.
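The decomposition can be tested end to end: build $S(E)$ in the form of Eq. (\ref{S}) with simple, hypothetical energy dependences for $\delta_{\Sigma}$, $\delta_a$, and $\theta_a'$ (not taken from the text), compute $Q = i S^{+} dS/dE$ by central differences ($\hbar$ = 1), and compare with the analytic decomposition $\frac{d\delta_{\Sigma}}{dE} {\bf 1} + \frac{d\delta_a}{dE} \boldsymbol{\sigma} \cdot \hat{n}_a' + \sin \delta_a \frac{d\theta_a'}{dE} \boldsymbol{\sigma} \cdot \hat{n}_f'$ that follows from Eq. (\ref{Q_deriv1}):

```python
import cmath, math

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def dag(A):
    return [[A[j][i].conjugate() for j in range(2)] for i in range(2)]

def sig_dot(v):
    return [[v[2], v[0] - 1j * v[1]], [v[0] + 1j * v[1], -v[2]]]

def phases(E):                           # hypothetical smooth energy dependences
    return 0.3 * E, math.atan(E), 0.2 * E + 1.0   # delta_Sigma, delta_a, theta_a'

def S(E):                                # Eq. (S): R_y(theta_a') conjugation
    d_sig, d_a, th = phases(E)
    c, s = math.cos(th / 2), math.sin(th / 2)
    U = [[c, -s], [s, c]]
    D = [[cmath.exp(-1j * (d_sig + d_a)), 0], [0, cmath.exp(-1j * (d_sig - d_a))]]
    return mul(mul(U, D), dag(U))

E, h = 0.7, 1e-6
Sp, Sm = S(E + h), S(E - h)
dS = [[(Sp[i][j] - Sm[i][j]) / (2 * h) for j in range(2)] for i in range(2)]
Q_num = [[1j * x for x in row] for row in mul(dag(S(E)), dS)]

d_sig, d_a, th = phases(E)
dd_sig, dd_a, dth = 0.3, 1.0 / (1 + E * E), 0.2   # analytic derivatives
na = (math.sin(th), 0.0, math.cos(th))            # n_a' (the z' axis)
nf = (math.cos(d_a) * math.cos(th), -math.sin(d_a),
      -math.cos(d_a) * math.sin(th))              # n_f' (the x'' axis)
Q_ana = [[dd_sig * (1 if i == j else 0) + dd_a * sig_dot(na)[i][j]
          + math.sin(d_a) * dth * sig_dot(nf)[i][j]
          for j in range(2)] for i in range(2)]

err = max(abs(Q_num[i][j] - Q_ana[i][j]) for i in range(2) for j in range(2))
print(err < 1e-6)
```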
The parenthesized part of the third term is the time delay due to the
change in frame transformation and has an interference effect between
two contributions, one due to the change in frame transformation from
the asymptotic channels to the background eigenchannels $\langle
\psi_E^{-(k)} | \psi_E^{(l)} \rangle$ ($k,l$ = 1,2,...,n) and the
other due to the change in frame transformation from the background
eigenchannels to the asymptotic eigenchannels $\langle \psi_E^{(l)} |
\psi_E^{-(m)} \rangle$ ($l,m$ = 1,2,...,n). The change in frame
transformation takes place not in the direction of the rotation
axis $\hat{y}$ but in the rotation angle, since the rotation
axis $\hat{y}$ is fixed in the Liouville space as energy varies. The first
contribution has the term $S^{+} \boldsymbol{\sigma} \cdot \hat{y} S$
which is the similarity transformation of the operator
$\boldsymbol{\sigma} \cdot \hat{y}$ by $S$. Substituting Eq. (\ref{S})
for $S$, this term becomes
\begin{equation}
S^{+} \boldsymbol{\sigma} \cdot \hat{y} S = e^{-i \frac{\theta_a'}{2}
\boldsymbol{\sigma} \cdot \hat{y}} e^{i \delta_a \boldsymbol{\sigma}
\cdot \hat{z}} \boldsymbol{\sigma} \cdot \hat{y} e^{-i \delta_a
\boldsymbol{\sigma} \cdot \hat{z}} e^{i \frac{\theta_a'}{2}
\boldsymbol{\sigma} \cdot \hat{y}} ,
\label{Q_deriv2}
\end{equation}
where use is made of the fact that $e^{i \frac{\theta_a'}{2}
\boldsymbol{\sigma} \cdot \hat{y}}$ and $\boldsymbol{\sigma} \cdot
\hat{y}$ commute. The scalar factor $e^{- i \delta_{\Sigma}}$
in $S$ does not appear in Eq. (\ref{Q_deriv2}) as it is multiplied by
its complex conjugation in $S^{+}$ to become unity. According to the
theorem, the unitary transformations in the right hand side of
Eq. (\ref{Q_deriv2}) correspond to two consecutive rotations, at first
about the $z$ axis by $-2 \delta_a$ and then about the $y$
axis by $\theta_a$. By the first rotation, $\boldsymbol{\sigma} \cdot
\hat{y}$ becomes $\boldsymbol{\sigma} \cdot \left( \hat{x} \sin 2
\delta_a + \hat{y} \cos 2 \delta_a \right)$, i.e.,
\begin{equation}
e^{i\delta_a \boldsymbol{\sigma} \cdot \hat{z}} \boldsymbol{\sigma}
\cdot \hat{y} e^{-i \delta_a \boldsymbol{\sigma} \cdot \hat{z}} =
\boldsymbol{\sigma} \cdot \left[ R_{\hat{z}} (-2 \delta_a ) \hat{y}
\right] = \boldsymbol{\sigma} \cdot \left( \hat{x} \sin 2 \delta_a +
\hat{y} \cos 2 \delta_a \right) .
\label{rot_y_about_z}
\end{equation}
By substituting Eq. (\ref{rot_y_about_z}) into Eq. (\ref{Q_deriv2})
and replacing $\boldsymbol{\sigma} \cdot \hat{y}$ with $e^{-i
\frac{\theta_a'}{2} \boldsymbol{\sigma} \cdot \hat{y}}
\boldsymbol{\sigma} \cdot \hat{y} e^{i \frac{\theta_a'}{2}
\boldsymbol{\sigma} \cdot \hat{y}}$, we obtain
\begin{eqnarray}
S^+ \boldsymbol{\sigma} \cdot \hat{y}S - \boldsymbol{\sigma} \cdot
\hat{y} &=& e^{-i \frac{\theta_a'}{2} \boldsymbol{\sigma} \cdot \hat{y}}
\left[ \boldsymbol{\sigma} \cdot \left( \hat{x} \sin 2 \delta_a +
\hat{y} \cos 2 \delta_a \right) - \boldsymbol{\sigma} \cdot \hat{y}
\right] e^{i \frac{\theta_a'}{2} \boldsymbol{\sigma} \cdot \hat{y}} \nonumber
\\ &=& e^{-i \frac{\theta_a'}{2} \boldsymbol{\sigma} \cdot \hat{y}} \left[ 2
\sin \delta_a \boldsymbol{\sigma} \cdot \left( \hat{x} \cos \delta_a -
\hat{y} \sin \delta_a \right) \right] e^{i \frac{\theta_a'}{2}
\boldsymbol{\sigma} \cdot \hat{y}} .
\label{Q_third}
\end{eqnarray}
The bracketed part of Eq. (\ref{Q_third}) is equal to the rotation of
the $x$ axis about the $z$ axis by $- \delta_a$ multiplied
by $2 \sin \delta_a$, which is the overall effect of the
interference. Fig. \ref{fig:frmchg} shows this process of interference
as a vector addition in the three-dimensional Liouville space.
The time delay due to the change in frame transformation, the third
term of the right hand side of Eq. (\ref{Q_deriv1}), becomes
\begin{eqnarray}
&& \hbar \sin \delta_a \frac{d\theta_a'}{dE}
e^{-i \frac{\theta_a'}{2} \boldsymbol{\sigma} \cdot \hat{y}}
\boldsymbol{\sigma} \cdot \left( \hat{x} \cos \delta_a - \hat{y} \sin \delta_a \right)
e^{i \frac{\theta_a'}{2} \boldsymbol{\sigma} \cdot \hat{y}} \nonumber \\
&=& \hbar \sin \delta_a \frac{d\theta_a'}{dE}
e^{-i \frac{\theta_a'}{2} \boldsymbol{\sigma} \cdot \hat{y}}
e^{i \frac{\delta_a}{2} \boldsymbol{\sigma} \cdot \hat{z}}
\boldsymbol{\sigma} \cdot \hat{x}
e^{-i \frac{\delta_a}{2} \boldsymbol{\sigma} \cdot \hat{z}}
e^{i \frac{\theta_a'}{2} \boldsymbol{\sigma} \cdot \hat{y}}
\nonumber \\
&=& \hbar \sin \delta_a \frac{d\theta_a'}{dE}
e^{i \frac{\delta_a}{2} \boldsymbol{\sigma} \cdot \hat{z'}}
\boldsymbol{\sigma} \cdot \hat{x}'
e^{-i \frac{\delta_a}{2} \boldsymbol{\sigma} \cdot \hat{z'}} \nonumber \\
&=& \hbar \sin \delta_a \frac{d\theta_a'}{dE} \boldsymbol{\sigma} \cdot
\hat{x}'' ,
\label{Q_f2}
\end{eqnarray}
where the second equality is obtained by applying Eq. (\ref{r}) twice
to the matrix term to obtain $\boldsymbol{\sigma} \cdot
\left[R_{\hat{y}} (\theta_a' ) R_{\hat{z}} (- \delta_a ) \hat{x}
\right]$ which becomes $\boldsymbol{\sigma} \cdot R_{\hat{z}'}
(-\delta_a ) \hat{x}'$ and then by applying Eq. (\ref{r}) again. In
the last equality of Eq. (\ref{Q_f2}), we introduced another new $x''y''z''$
coordinate system which is obtained from the $x'y'z'$ coordinate
system by the rotation $R_{\hat{z}'} (\delta_a )$ in the passive sense.
In the active sense, $\hat{x}''$ = $R_{\hat{z}'} (- \delta_a )
\hat{x}'$.
Substituting
Eq. (\ref{Q_f2}) into Eq. (\ref{Q_deriv1}), the time delay matrix $Q$
is obtained as
\begin{equation}
Q = \hbar \left( {\bf 1} \frac{d \delta_{\Sigma}}{dE} +
\boldsymbol{\sigma } \cdot \hat{z}' \frac{d \delta_a} {dE} +
\boldsymbol{\sigma } \cdot \hat{x}'' \sin \delta_a \frac{d
\theta_a}{dE} \right) ,
\label{Q_final2}
\end{equation}
and is equal to Eq. (\ref{Q_af}) when $\hat{z}'$ and $\hat{x}''$ are
identified with the unit vectors $\hat{n}_a'$ and $\hat{n}_f'$
($\hat{n}_{\theta_a'}$ and $\hat{n}_{\theta_a'}^{\perp}$ in
Ref. \cite{Lee98}) and $\hbar d\delta_{\Sigma} /dE$ (= $\hbar
d\delta_r /dE$), $\hbar \boldsymbol{\sigma} \cdot \hat{z}' d\delta_a
/dE$, and $\hbar \sin \delta_a \boldsymbol{\sigma} \cdot \hat{x}''
d\theta_a /dE$ are identified with $\frac{1}{2} \tau_r $, $\frac{1}{2}
\tau_r {\bf P}_a$ and $\frac{1}{2} \tau_r {\bf P}_f$, respectively. By
substituting ${\bf P}_a$ = $\hat{z}' \cos \theta_f$ = $\hat{z}'' \cos
\theta_f$ and ${\bf P}_f$ = $\hat{x}'' \sin \theta_f$,
Eq. (\ref{Q_final2}) can also be transformed as follows
\begin{eqnarray}
Q &=& \frac{1}{2} \tau_r \left[ {\bf 1} + \boldsymbol{\sigma} \cdot
\left( \hat{z}'' \cos \theta_f + \hat{x}'' \sin \theta_f \right)
\right] \nonumber \\ &=& \frac{1}{2} \tau_r \left( {\bf 1} + e^{-i
\frac{\theta_f}{2} \boldsymbol{\sigma} \cdot \hat{y}''}
\boldsymbol{\sigma} \cdot \hat{z}'' e^{i \frac{\theta_f}{2}
\boldsymbol{\sigma} \cdot \hat{y}''} \right) \nonumber \\ &=&
\frac{1}{2} \tau_r \left( {\bf 1} + \boldsymbol{\sigma} \cdot
\hat{z}''' \right) ,
\label{Q_final3}
\end{eqnarray}
where still another $x'''y'''z'''$ coordinate system is introduced. In
the active sense, $\hat{z}'''$ = $R_{\hat{y}''} (\theta_f )
\hat{z}''$.
Eqs. (\ref{Q_final2}) and (\ref{Q_final3}) tell us that the time delay
matrices due to the avoided crossing interaction and due to the change in
frame transformations, as well as the total time delay matrix, take their
simplest forms in the $x'y'z'$, $x''y''z''$, and $x'''y'''z'''$ coordinate
systems, respectively.
Eq. (\ref{Q_final3}) equals Eq. (\ref{Q_t}) and therefore $\hat{z}'''$
equals ${\bf P}_t$. The vector $\hat{z}'''$, and accordingly ${\bf
P}_t$, can be obtained from $\hat{z}$
by successive rotations by
\begin{equation}
{\bf P}_t = \hat{z}''' = R_{\hat{y}''} (\theta_f ) R_{\hat{z}'} (-
\delta_a ) R_{\hat{y}} (\theta_a' ) \hat{z} = R_{\hat{y}''} (\theta_f
) R_{\hat{y}} (\theta_a' ) \hat{z} .
\label{nt_euler}
\end{equation}
As mentioned before, it is better to consider $\pmb{\cal P}_t$ =
$\tilde{U}^0 {\bf P}_t U^0$ rather than ${\bf P}_t$ itself since the
formula for the former is simpler than that for the latter. $\pmb{\cal
P}_t$ is the polarization vector pertaining to ${\cal Q} = \tilde{U}^0
Q U^0$ which is the time delay matrix in the basis of background
eigenchannel wavefunctions. This suggests that it may be better to
take the background eigenchannel wavefunctions rather than the
asymptotic channel wavefunctions as a starting channel basis. From now
on, let us redefine the $xyz$ coordinate system as the coordinate
system pertaining to the background eigenchannels. Let the
$x^0 y^0 z^0$ coordinate system denote the one pertaining to the
asymptotic channels. Definitions of other coordinate systems remain
unchanged. With this redefinition of notation, the formulas
for ${\cal S}$ and ${\cal Q}$ corresponding to Eqs. (\ref{S_final})
and (\ref{Q_af}) become
\begin{eqnarray}
{\cal S} &=& e^{-i\frac{\theta_a}{2} \boldsymbol{\sigma } \cdot
\hat{y}} e^{-i(\delta_{\Sigma}{\bf 1} + \delta_a \boldsymbol{\sigma }
\cdot \hat{z}) } e^{i \frac{\theta_a}{2} \boldsymbol{\sigma } \cdot
\hat{y}} = e^{-i\left( \delta_{\Sigma}{\bf 1} +\delta_a
\boldsymbol{\sigma} \cdot \hat{n}_a \right) } ,
\label{S_new_basis}
\\
{\cal Q}
&=& \frac{1}{2} (\tau_r {\bf 1} + \tau_a \boldsymbol{\sigma}\cdot
\hat{n}_a + \tau_f \boldsymbol{\sigma}\cdot \hat{n}_f ) =
\frac{1}{2}
\tau_r \left( {\bf 1} + \pmb{\cal P}_a \cdot \boldsymbol{\sigma} +
\pmb{\cal P}_f \cdot \boldsymbol{\sigma} \right)
= \frac{1}{2}
\tau_r \left( {\bf 1} + \pmb{\cal P}_t \cdot \boldsymbol{\sigma}
\right) ,
\label{Q_new_basis}
\end{eqnarray}
with
\begin{eqnarray}
\hat{n}_a &=& \hat{z} \cos \theta_a + \hat{x} \sin
\theta_a = R_{\hat{y}} (\theta_a ) \hat{z} ,\nonumber\\
\hat{n}_f &=& \hat{y} \times \hat{n}_a
\cos \delta_a - \hat{y} \sin\delta_a = R_{\hat{z}'} (- \delta_a )
\hat{x}' .
\end{eqnarray}
In place of Eq. (\ref{nt_euler}), we have
\begin{equation}
\pmb{\cal P}_t = \hat{n}_t = R_{\hat{y}''} (\theta_f ) R_{\hat{z}'} (-
\delta_a ) R_{\hat{y}} (\theta_a ) \hat{z} = R_{\hat{y}''} (\theta_f )
R_{\hat{y}} (\theta_a ) \hat{z} .
\label{nt_euler2}
\end{equation}
By substituting the relations $\cot \theta_a = - \epsilon_a$ and $\cot
\theta_f = - \epsilon_{\rm BF,r}$ into Eq. (\ref{nt_euler2}), one can
check that the same formula as Eq. (\ref{nt}) is obtained for
$\hat{n}_t$. Note that the formula (\ref{nt}) for $\hat{n}_t$ is
independent of energy in contrast to $\hat{n}_a$ ($\hat{n}_f$) which
varies from $\hat{z}$ ($\hat{x}$) through $\hat{x}$ ($-\hat{z}$) to
$-\hat{z}$ ($-\hat{x}$) as energy varies from $-\infty$ to
$\infty$. This holds generally, at least for a multichannel system in
the neighborhood of an isolated resonance, and derives from the fact
that only one type of continuum can interact with a discrete state (see
Eq. (\ref{QPa}) in Appendix \ref{App:deriv_na_nt} and
Ref. \cite{Lyuboshitz77} for more general systems).
So far, several coordinate systems have been considered: the $xyz$,
$x'y'z'$, $x''y''z''$, and $x'''y'''z'''$ coordinate systems
pertaining to the eigenchannels of $S^0$, $S$ or $\boldsymbol{\sigma}
\cdot \pmb{\cal P}_a$, $\boldsymbol{\sigma} \cdot \pmb{\cal P}_f$, and
$\boldsymbol{\sigma} \cdot \pmb{\cal P}_t$, respectively. These
coordinate systems are shown graphically in Fig. \ref{fig:euler}.
According to Eq. (\ref{theta_t}), the spherical polar coordinate of
$\hat{n}_t$ is given by (1,$\theta_t$, $-\Delta_{12}^0$) in the $xyz$
coordinate system. Since $\hat{n}_a$ lies on the $zx$ plane, the
absolute magnitude $\Delta_{12}^0$ of the azimuth of $\hat{n}_t$ is
equal to the dihedral angle between two planes whose normals are given
by $\hat{z} \times \hat{n}_a$ and $\hat{z} \times \hat{n}_t$. Let us
now consider the coordinate of $\hat{n}_t$ in the $x'y'z'$
coordinates, where $\hat{z}'$ = $\hat{n}_a$. The angle which
$\hat{n}_t$ makes with $\hat{z}'$ is $\theta_f$ and the azimuth of
$\hat{n}_t$ may be
obtained by considering the $z'x''$ (=$z''x''$) plane. Note that
$\hat{x}''$ is equal to $\hat{n}_f$ and $\hat{n}_t$ lies on the
$z'x''$ plane meaning that the azimuth of $\hat{n}_t$ is identical
with the dihedral angle which the $z'x''$ plane makes with the $z'x'$
plane. Since the $x''y''z''$ coordinate system is obtained from the
$x'y'z'$ coordinate system by rotating about the $z'$ axis by
$-\delta_a$, the dihedral angle which the $z'x''$ plane makes with the
$z'x'$ plane is $\delta_a$. Therefore, the spherical polar coordinate
of $\hat{n}_t$ in the $x'y'z'$ coordinate system is (1, $\theta_f$,
$-\delta_a$). See Fig. \ref{fig:pol_vec} to understand the explanation
graphically.
Since the dihedral angle between two planes whose normals
are $\hat{n}_a \times \hat{x}'$ and $\hat{n}_a \times \hat{n}_t$ is
$\delta_a$, the dihedral angle between two planes whose normals are
given by $\hat{n}_a \times \hat{z}$ and $\hat{n}_a \times \hat{n}_t$
is $\pi - \delta_a$. With this, we can construct a spherical triangle
$\Delta {\rm APQ}$ with vertices formed with the endpoints of
$\hat{z}$, $\hat{n}_a$, and $\hat{n}_t$, where the vertex angles
opposite to the edge angles $\theta_f$ and $\theta_t$ are
$\Delta_{12}^0$ and $\pi - \delta_a$, respectively, as shown in
Fig. \ref{fig:sph_tri}. The vertex angle opposite to the edge angle
$\theta_a$ can be shown to be $ \delta_r$ by making use of the
following relation
\begin{equation}
e^{-i \Delta_{12}^0 \boldsymbol{\sigma} \cdot \hat{z}} e^{-i \delta_r
\boldsymbol{\sigma} \cdot \hat{n}_t} = e^{-i \delta_a
\boldsymbol{\sigma} \cdot \hat{n}_a} ,
\label{na_nt}
\end{equation}
which is the spin model version of the relation ${\cal S}$ = ${\cal
S}^0 (\pi_b +
e^{-2i \delta_r } \pi_a )$ (see Appendix \ref{App:deriv_na_nt} for the
derivation). According to Appendix
\ref{App:mul_rot}, Eq. (\ref{na_nt}) can be expressed using the
rotation matrices in the Liouville space as
\begin{equation}
R_{\hat{z}} (2 \Delta_{12}^0 ) R_{\hat{n}_t} (2 \delta_r ) =
R_{\hat{n}_a} (2 \delta_a )
\end{equation}
and shows that the vertex angle
opposite to $\theta_a$ is $\delta_r$.
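As a numerical cross-check of Eq. (\ref{na_nt}) and its Liouville-space form, one can compose the two SU(2) factors, extract $\delta_a$ and $\hat{n}_a$ from the product, and verify that the corresponding rotations by twice the angles compose in the same way. A sketch with arbitrary sample values of $\Delta_{12}^0$, $\delta_r$, and $\hat{n}_t$:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def u(delta, n):
    """exp(-i delta sigma.n) = cos(delta) 1 - i sin(delta) sigma.n (unit n)."""
    return np.cos(delta) * I2 - 1j * np.sin(delta) * (n[0]*sx + n[1]*sy + n[2]*sz)

def rot(axis, angle, v):
    """Rodrigues formula: rotate v about the unit axis by angle (active)."""
    k = axis / np.linalg.norm(axis)
    return (v * np.cos(angle) + np.cross(k, v) * np.sin(angle)
            + k * np.dot(k, v) * (1.0 - np.cos(angle)))

# arbitrary sample parameters
D12, dr = 0.6, 0.9
nt = np.array([np.sin(1.2)*np.cos(-0.6), np.sin(1.2)*np.sin(-0.6), np.cos(1.2)])
zhat = np.array([0.0, 0.0, 1.0])

U = u(D12, zhat) @ u(dr, nt)

# extract delta_a and n_a from U = cos(da) 1 - i sin(da) sigma.na
da = np.arccos(np.real(np.trace(U)) / 2.0)
sn = 1j * (U - np.cos(da) * I2) / np.sin(da)          # = sigma.na
na = np.array([np.real(np.trace(s @ sn)) / 2.0 for s in (sx, sy, sz)])

# Liouville-space statement: R_z(2 D12) R_nt(2 dr) = R_na(2 da)
v = np.array([0.3, -0.7, 0.2])                        # arbitrary test vector
lhs = rot(zhat, 2*D12, rot(nt, 2*dr, v))
rhs = rot(na, 2*da, v)
print(np.allclose(lhs, rhs))                          # True
```

The equality follows from the SU(2)$\to$SO(3) homomorphism, so any non-degenerate choice of the sample parameters works equally well.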
The dual spherical triangle of $\Delta {\rm APQ}$ may be constructed
by converting vertices of the original triangle to its edges and edges
of the original triangle to its vertices, according to the rule
described in Ref. \cite{JenningsBook}. According to the rule, the
vertex angles of the dual spherical triangle are obtained as the
supplements of the corresponding edge angles of the original triangle,
namely $\pi -\theta_a$, $\pi - \theta_f$ and $\pi - \theta_t$, and its
edge angles are obtained similarly from the vertex angles as $\pi
-\delta_r$, $\pi - \Delta_{12}^0$ and
$\delta_a$. The dual spherical triangle constructed in this way is
shown in Fig. \ref{fig:dual_st}.
Before considering dynamic aspects of the laws holding for the
spherical triangle, let us comment on Eq. (\ref{na_nt}), or the
equivalent ${\cal S}^0 (\pi_b + e^{-2i \delta_r} \pi_a )$. For this
purpose, let us define phase shift matrices $\boldsymbol{\Delta}^0$,
$\boldsymbol{\Delta}_r$, $\boldsymbol{\Delta}$ by ${\cal S}^0$ =
$e^{-2i \boldsymbol{\Delta}^0}$, $\pi_b + e^{-2i \delta_r} \pi_a$ =
$e^{-2i \boldsymbol{\Delta}_r}$, ${\cal S}$ = $e^{-2i
\boldsymbol{\Delta}}$. Phase shift matrices are easily obtained as
\begin{eqnarray}
\boldsymbol{\Delta}^0 &=& \frac{1}{2} (\delta_{\Sigma}^0 {\bf 1} +
\Delta_{12}^0 \boldsymbol{\sigma} \cdot \hat{z} ) ,
\label{Delta0} \\
\boldsymbol{\Delta}_r &=& \frac{1}{2} (\delta_r {\bf 1} + \delta_r
\boldsymbol{\sigma} \cdot \hat{n}_t ) ,
\label{Deltar} \\
\boldsymbol{\Delta} &=& \frac{1}{2} ( \delta_{\Sigma}{\bf 1} + \delta_a
\boldsymbol{\sigma} \cdot \hat{n}_a ) .
\label{Delta}
\end{eqnarray}
For Eq. (\ref{Delta0}), if $\Delta_{12}^0$ = 0, the two
eigenchannels have identical background eigenphase shifts
$\delta_1^0$ = $\delta_2^0$ = $\delta_{\Sigma}^0 /2$. The
eigenchannels are then isotropic with respect to the potential
that brings about the background phase shifts. When the two
eigenchannels react differently, or anisotropically, to the
potential, they have different phase shifts $\delta_1^0$
$\ne$ $\delta_2^0$, or $\Delta_{12}^0$ $\ne$ 0. (The off-diagonal term
of $S^0$ gives the transition amplitude and is thus caused by the
channel-channel coupling. Since the off-diagonal term is implicitly
included in the eigenchannels, the potential that the eigenchannels feel
includes the channel-channel coupling effect.) The anisotropic term
of (\ref{Delta0}) contains the information on this phase difference
and the eigenchannels.
The phase shift matrix (\ref{Delta0}) is a vector whose coordinates are
($\delta_{\Sigma}^0,0,0,\Delta_{12}^0$) in the four-dimensional
Liouville space. Or, if we consider only the anisotropic term, it is a
vector in the three-dimensional Liouville space, whose magnitude is
$\frac{1}{2} \Delta_{12}^0$ and whose direction is $\hat{z}$. Though
background and resonance scattering contributions appear as a single
product term in Eq. (\ref{na_nt}) for the $S$ matrix, the two contributions
are not simply combined in the case of the phase shift matrix
$\boldsymbol{\Delta}$. For the isotropic parts of the two contributions to
$\boldsymbol{\Delta}$, the combining rule is simple and they are
simply added up to give the isotropic part of the phase shift matrix
$\boldsymbol{\Delta}$ as $\frac{1}{2}\delta_{\Sigma}$ = $\frac{1}{2}
(\delta_{\Sigma}^0 + \delta_r)$. The combining rule of anisotropic
terms is not so simple. According to the Campbell-Baker-Hausdorff
formula\cite{Weiss62}, the anisotropic part of the phase shift matrix
$\boldsymbol{\Delta}$ is expressed as a very complicated infinite sum
of multiple commutators of the anisotropic parts of
$\boldsymbol{\Delta}^0$ and $\boldsymbol{\Delta}_r$ as
\begin{eqnarray}
2\boldsymbol{\Delta} - \delta_{\Sigma} {\bf 1} &=&
\Delta_{12}^0 \boldsymbol{\sigma} \cdot \hat{z} +
\delta_r \boldsymbol{\sigma} \cdot \hat{n}_t -\frac{i}{2}
[ \Delta_{12}^0 \boldsymbol{\sigma} \cdot \hat{z} ,
\delta_r \boldsymbol{\sigma} \cdot \hat{n}_t ] \nonumber\\
&-& \frac{1}{12}
\left( \left[ [ \Delta_{12}^0 \boldsymbol{\sigma} \cdot \hat{z} ,
\delta_r \boldsymbol{\sigma} \cdot \hat{n}_t ] ,
\delta_r \boldsymbol{\sigma} \cdot \hat{n}_t \right] -
\left[ [ \Delta_{12}^0 \boldsymbol{\sigma} \cdot \hat{z} ,
\delta_r \boldsymbol{\sigma} \cdot \hat{n}_t ] ,
\Delta_{12}^0 \boldsymbol{\sigma} \cdot \hat{z} \right] \right) +
\cdots
\end{eqnarray}
But the geometrical construction in the
Liouville space provides a simple combining rule. To obtain it, we first
ignore the magnitudes of vectors in the three-dimensional Liouville
space and consider the spherical
triangle made up of endpoints of the unit vectors corresponding to the
anisotropic terms. The magnitudes of vectors corresponding to
anisotropic terms, instead, are utilized as the edge angles of the
spherical triangle. This is the procedure we take when we interpret
Eq. (\ref{na_nt}) as giving the remaining edge angle $\delta_r$.
Trigonometric laws of the spherical triangle then provide the details
of the combining rule, which are the subject of the next section.
Let us make one more comment on the avoided crossing interaction. If
we use background eigenchannels as a basis, then the $S^0$ matrix is
already diagonal by definition in that basis. For processes
occurring along these eigenchannels, there is then no channel-channel
coupling. If a
discrete state is included in the system, the background eigenchannels
are no longer decoupled and interact with each other through the indirect
continuum-continuum, or channel-channel, coupling via the discrete
state. This indirect channel-channel coupling brings about the avoided
crossing interaction in the curves of the eigenphase shifts of the $S$
matrix. Therefore, the avoided crossing interaction is not devoid of
the resonance contribution. What it lacks is the isotropic resonant
contribution; it still includes the anisotropic resonant contribution.
\section{Connection of the geometrical relation with dynamics}
Now let us describe the dynamical aspects of the geometrical laws
holding for the spherical triangle, such as the laws of sines and the
laws of cosines. Cotangent laws involving four
successive parts and laws involving five successive parts (see
Ref. \cite{ChiangBook}) are derivable from the laws of sines and
cosines but deserve treatment as separate laws.
Let us consider the cotangent
law\cite{ChiangBook}
\begin{equation}
\sin \Delta_{12}^0 \cot \delta_a = -
\sin \theta_a \cot \theta_t + \cos \theta_a \cos \Delta_{12}^0.
\label{cot_delta_a}
\end{equation}
When $\cos \theta_a$ = $-\epsilon_a / \sqrt{\epsilon_a^2 +1}$ and
$\sin \theta_a$ = $1/\sqrt{\epsilon_a^2 +1}$ are inserted (for the
sign convention, see Ref. \cite{Convention2}), Eq. (\ref{cot_delta_a}) can be
put into Beutler-Fano's formula (\ref{delta_a_bf}) for $\cot \delta_a$
as follows
\begin{eqnarray}
\cot \delta_a &=& \frac{1}{\sin \Delta_{12}^0} \left(
\cos \theta_a \cos \Delta_{12}^0 - \sin \theta_a \cot
\theta_t \right) \nonumber \\
&=& - \frac{1}{\sin
\Delta_{12}^0} \left( \frac{\epsilon_a}{\sqrt{\epsilon_a^2 +1}} \cos
\Delta_{12}^0 + \frac{1}{\sqrt{\epsilon_a^2 +1}} \cot \theta_t \right)
\nonumber\\
&=& - \cot \Delta_{12}^0 \frac{\epsilon_a -
q_a}{\sqrt{\epsilon_a^2 +1}} ,
\end{eqnarray}
where $q_a$ is identified with $-\cot \theta_t / \cos \Delta_{12}^0$
and can be easily checked to be equal to the previous definition
(\ref{q_a}) with the use of Eq. (\ref{theta_t}).
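The cotangent law (\ref{cot_delta_a}) and the identification $q_a = -\cot\theta_t / \cos\Delta_{12}^0$ can be checked numerically on the triangle constructed from the coordinates given earlier ($\hat{n}_a$ in the $zx$ plane at polar angle $\theta_a$, $\hat{n}_t$ at $(1,\theta_t,-\Delta_{12}^0)$), with the vertex angle at P computed from the spherical law of cosines. A sketch with arbitrary sample angles:

```python
import numpy as np

# triangle from the paper's coordinates: A = z axis, P = n_a, Q = n_t;
# th_a, th_t, D12 are arbitrary sample values
th_a, th_t, D12 = 0.9, 1.3, 0.7
P = np.array([np.sin(th_a), 0.0, np.cos(th_a)])
Q = np.array([np.sin(th_t)*np.cos(-D12), np.sin(th_t)*np.sin(-D12), np.cos(th_t)])

th_f = np.arccos(np.dot(P, Q))
# interior vertex angle at P (law of cosines); it equals pi - delta_a
angP = np.arccos((np.cos(th_t) - np.cos(th_a) * np.cos(th_f))
                 / (np.sin(th_a) * np.sin(th_f)))
delta_a = np.pi - angP

# cotangent (four-parts) law, Eq. (cot_delta_a)
lhs = np.sin(D12) / np.tan(delta_a)
rhs = -np.sin(th_a) / np.tan(th_t) + np.cos(th_a) * np.cos(D12)
print(np.isclose(lhs, rhs))                    # True

# Beutler-Fano form with q_a = -cot(th_t)/cos(D12), eps_a = -cot(th_a)
eps_a = -1.0 / np.tan(th_a)
q_a = -1.0 / (np.tan(th_t) * np.cos(D12))
bf = -(1.0 / np.tan(D12)) * (eps_a - q_a) / np.sqrt(eps_a**2 + 1.0)
print(np.isclose(1.0 / np.tan(delta_a), bf))   # True
```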
One of the cotangent laws for the dual
spherical triangle is given by
\begin{equation}
\cot \theta_f \sin \theta_t = - \cos (\pi - \delta_r ) \cos \theta_t +
\sin (\pi - \delta_r ) \cot \Delta_{12}^0 .
\label{cot_theta_f}
\end{equation}
With $\cos \delta_r$ = $-\epsilon_r /
\sqrt{\epsilon_r^2+1}$
and $\sin\delta_r$ = $1/\sqrt{\epsilon_r^2 +1}$,
Eq. (\ref{cot_theta_f})
can be
put into Beutler-Fano's formula (\ref{cotf}) for $\cot \theta_f$ as
follows
\begin{eqnarray}
\cot \theta_f &=& \frac{1}{\sin \theta_t} \left( \cos \delta_r \cos
\theta_t + \sin \delta_r \cot \Delta_{12}^0 \right) \nonumber\\
&=& - \frac{1}{\sin \theta_t} \left( \frac{\epsilon_r
}{\sqrt{\epsilon_r ^2 +1}} \cos \theta_t - \frac{1}{\sqrt{\epsilon_r
^2 +1}} \cot \Delta_{12}^0 \right) \nonumber\\
&=& -\cot \theta_t \frac{\epsilon_r -q_{\tau}} {\sqrt{\epsilon_r ^2
+1}} ,
\label{cotf_bf}
\end{eqnarray}
where $q_{\tau}$ is identified with $\cot \Delta_{12}^0 / \cos
\theta_t$, again equal to Eq. (\ref{qtau}).
Using Eq. (\ref{cot_energy}) and the convention\cite{Convention2}, the
sine laws,
\begin{eqnarray}
\frac{\sin \Delta_{12}^0}{\sin \theta_f} &=& \frac{\sin \delta_r}
{\sin \theta_a} , \nonumber\\
\frac{\sin \delta_r}{\sin \theta_a} &=&
\frac{\sin \delta_a}{\sin \theta_t}
\label{sin_law}
\end{eqnarray}
are translated into
\begin{eqnarray}
\epsilon_a ^2 +1 &=& \sin
^2 \Delta_{12}^0 (\epsilon_r^2 +1)(\epsilon_{{\rm BF,r}}^2 +1) ,
\label{ea2}
\\
\epsilon_r ^2 +1 &=& \sin ^2 \theta_t (\epsilon_a^2
+1)(\epsilon_{{\rm BF,a}}^2 +1),
\label{er2}
\end{eqnarray}
respectively. Eq. (\ref{ea2}) was used to derive Eq. (\ref{tauf_bf})
in Ref. \cite{Lee98}.
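The sine laws (\ref{sin_law}) can likewise be verified on the same triangle, with the vertex angles $\pi - \delta_a$ and $\delta_r$ at P and Q obtained from the spherical law of cosines (arbitrary sample angles again):

```python
import numpy as np

# same hypothetical triangle as before: A = z, P = n_a, Q = n_t
th_a, th_t, D12 = 0.9, 1.3, 0.7
P = np.array([np.sin(th_a), 0.0, np.cos(th_a)])
Q = np.array([np.sin(th_t)*np.cos(-D12), np.sin(th_t)*np.sin(-D12), np.cos(th_t)])

th_f = np.arccos(np.dot(P, Q))
# vertex angles from the spherical law of cosines
angP = np.arccos((np.cos(th_t) - np.cos(th_a) * np.cos(th_f))
                 / (np.sin(th_a) * np.sin(th_f)))
angQ = np.arccos((np.cos(th_a) - np.cos(th_t) * np.cos(th_f))
                 / (np.sin(th_t) * np.sin(th_f)))
delta_a, delta_r = np.pi - angP, angQ

r1 = np.sin(D12) / np.sin(th_f)
r2 = np.sin(delta_r) / np.sin(th_a)
r3 = np.sin(delta_a) / np.sin(th_t)
print(np.isclose(r1, r2), np.isclose(r2, r3))   # True True
```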
(Equating the $x,y,z$ components of $\hat{n}_t$ with those of
$\hat{n}_a \cos \theta_f + \hat{n}_f \sin \theta_f$ yields the law
involving 5 successive parts $\sin \theta_t \cos \Delta_{12}^0$ =
$\sin \theta_a \cos \theta_f$ + $\cos \theta_a \sin \theta_f
\cos\delta_a$, one of the laws of sines, $\sin \Delta_{12}^0 / \sin
\theta_f$ = $\sin \delta_a / \sin \theta_t$, and one of the laws of
cosines, $\cos \theta_t$ = $\cos \theta_a \cos \theta_f$ $-\sin
\theta_a \sin \theta_f \cos \delta_a$, in that order. Though such an
equality looks irrelevant to the laws of the spherical triangle at
first glance, it actually has to do with them
since it is used to obtain the vertex angle $\pi -\delta_a$
at the endpoint of $\hat{n}_a$.)
The presence of the dual spherical triangle indicates that there is a
symmetry with respect to the exchange of vertices and edges. The
comparison of the spherical triangle in Fig. \ref{fig:sph_tri} with
its dual one in Fig. \ref{fig:dual_st} shows that the exchange of
$\delta_r$, $\theta_f$, and $\Delta_{12}^0$ with $\pi -\theta_a$,
$\delta_a$, and $\pi - \theta_t$ transforms the spherical triangle
into its dual and vice versa, and thus any trigonometric law will be
invariant under this exchange. Besides the geometrical laws, other
laws containing not only geometric parameters but also other types of
parameters should remain as valid expressions with respect to this
exchange. With this requirement, in order for $\cot \delta_r$ =
$-\epsilon_r$ to remain as a valid expression with respect to the
exchange, $\epsilon_r$ should be replaced by $-\epsilon_a$. The right
hand side of $q_{\tau}$ = $\cot \Delta_{12}^0 / \cos \theta_t$ becomes
$\cot \theta_t / \cos \Delta_{12}^0$ which is equal to $-q_a$. Thus
$q_{\tau}$ is replaced by $-q_a$ under the exchange. A similar procedure
shows that $\epsilon_{{\rm BF},a}$ is transformed into $\epsilon_{{\rm
BF},r}$ under the exchange. The variables and their conjugates
are summarized in Table \ref{table:conjug}. This symmetry under the
exchange yields many relations without derivation and thus saves a lot
of effort. It can also be used to check the validity of derived
equations. Let us take a few examples. If the relation
\begin{equation}
\frac{\epsilon_r - q_{\tau}}{\sin \theta_t} = \frac{\epsilon_a +
\frac{1}{q_a}}{\sin \Delta_{12}^0} ,
\label{e_r_e_a}
\end{equation}
holds, then another valid relation,
\begin{equation}
\frac{\epsilon_a - q_a}{\sin \Delta_{12}^0} = \frac{\epsilon_r +
\frac{1}{q_{\tau}}}{\sin \theta_t} ,
\end{equation}
is obtained by exchanging variables according to Table \ref{table:conjug}.
Similarly, if
\begin{equation}
\frac{d \delta_a}{d\delta_r } = \cos \theta_f
\end{equation}
holds, then
\begin{equation}
\frac{d\theta_f}{d\theta_a} = - \cos \delta_a
\end{equation}
is obtained by the same procedure.
Geometrical realization reveals that complicated behaviors of
dynamical parameters like $\delta_a$ and $\theta_f$ as a function of
energy are nothing but the result of a simple geometrical traversal
along the great circle shown in Fig. \ref{fig:sphere}. Before
examining the behaviors of dynamic parameters as functions of energy,
it is noted that the vectors $\hat{z}$ and $\hat{n}_t$ are fixed in
the real three-dimensional Liouville space while $\hat{n}_a$ changes
its direction as energy varies. The constancy of the $\hat{z}$ vector
derives from the usual assumption of the energy insensitivity of the
background scattering. The constancy of the $\hat{n}_t$ vector derives
from the fact that the time delay matrix has only a resonant contribution, as
shown in Eq. (\ref{QPa}), and its eigenchannels consist of Fano's
energy-independent $\psi_E^{(a)}$ and continua orthogonal to it. As
the energy $\epsilon_r$ varies from $-\infty$ to $\infty$, $\theta_a$
undergoes a change from 0 to $\pi$ which corresponds to the
semicircular traversal of the point P from A to the opposite point
$-$A along the great circle while the points A and Q keep fixed in
Fig. \ref{fig:sphere}. In this semicircular traversal, the angle $\pi
- \delta_a$ varies from $\pi - \Delta_{12}^0$ when P coincides with A
to $\Delta_{12}^0$ when P coincides with $-$A. The angle $\delta_a$,
accordingly, varies from $\Delta_{12}^0$ to $\pi -
\Delta_{12}^0$. (The angle $\theta_f$ varies similarly from $\theta_t$
to $\pi - \theta_t$.) The traversal enjoys a special symmetry when
$\theta_t$ = $\pi/2$. Let the point A be taken as a polar point. Then
the side PQ becomes part of the equator at the middle of the
traversal, where $\theta_a$ becomes $\pi/2$. Since any meridian makes
a right angle with the equator, the angle $\delta_a$ which the arc
AP makes with the equator becomes a right angle, i.e., $\delta_a$ =
$\pi/2$. Now let us consider the deviation of the point P from the
equator. Let $\delta_a$ = $\pi /2 + y$ at $\theta_a$ = $\pi /2
+x$. Then, by the symmetry of the spherical triangle, $\delta_a$ =
$\pi /2 - y$ at $\theta_a$ = $\pi /2 -x$. Napier's rule
\begin{equation}
\cot \delta_a = \cos \theta_a \cot \Delta_{12}^0
\label{cot_parallel}
\end{equation}
holding for the right spherical triangle satisfies such a
symmetry\cite{NapierRule}. Obviously, $q_a$ = 0 for
Eq. (\ref{cot_parallel}). When $\theta_t$ $\ne$ $\pi/2$, $\delta_a$ is
no longer $\pi /2$ when $\theta_a$ = $\pi /2$. The occurrence of the
mismatch in the energies where $\theta_a$ and $\delta_a$ attain
$\pi /2$ amounts to the addition of a $\sin \theta_a$ term on the right
hand side of Eq. (\ref{cot_parallel}), which causes the value of $q_a$
to deviate from zero. The value of $q_a$ can be obtained as $-\cot
\theta_t /\cos \Delta_{12}^0$ by
substituting $- \cot \theta_t / \sin \Delta_{12}^0$, one of Napier's
rules holding when $\theta_a$ = $\pi /2$, for $\cot \delta_a$ into
Eq. (\ref{qaf_new_def}). It can be roughly stated that the asymmetry
of the Beutler-Fano formula for $\cot \delta_a$ derives from the
asymmetry of the geometry.
The concurrent change of $\theta_f$ with the increase of the arc
length $\theta_a$ as P traverses the great circle
can be obtained by differentiating the cosine law $\cos
\theta_f$ = $\cos \theta_a \cos \theta_t$ + $\sin \theta_a \sin
\theta_t \cos \Delta_{12}^0$ with respect to $\theta_a$ keeping
$\theta_t$ and $\Delta_{12}^0$ fixed, which becomes
\begin{equation}
- \sin \theta_f \frac{d \theta_f}{d \theta_a} = - \sin \theta_a \cos
\theta_t + \cos \theta_a \sin \theta_t \cos \Delta_{12}^0 .
\label{cos_deriv}
\end{equation}
The right hand side of Eq. (\ref{cos_deriv}) becomes $- \sin \theta_f
\cos ( \pi - \delta_a ) $ according to the law containing five
successive parts, which finally yields
\begin{equation}
\frac{d\theta_f}{d\theta_a} = - \cos \delta_a .
\end{equation}
Similar derivatives are obtained as
\begin{eqnarray}
\frac{d \delta_a}{d\delta_r} &=& \cos \theta_f ,\nonumber\\
\frac{d \delta_a}{d \theta_a} &=& \cot \theta_f \sin \delta_a ,
\nonumber\\
\frac{d \cot \theta_a}{d \cot \delta_r} &=&
\frac{\sin \Delta_{12}^0}{\sin \theta_t} .
\label{angle_deriv}
\end{eqnarray}
In a spherical triangle $\Delta ABC$, the Gauss-Bonnet theorem becomes
\begin{equation}
\angle A + \angle B + \angle C = \pi + \frac{area(\Delta ABC)}{R^2} ,
\end{equation}
which states that the sum of interior angles of a spherical triangle
exceeds $\pi$ by the solid angle $\Omega$ defined by $area(\Delta
ABC)/R^2$. In the present case, the sum of interior (vertex) angles is
$\pi + \delta_r + \Delta_{12}^0 - \delta_a $. Hence, the solid angle
$\Omega$ is
\begin{equation}
\Omega = \delta_r + \Delta_{12}^0 - \delta_a = \delta_r +
\delta_{\Sigma}^0 -2 \delta_2^0 - \delta_a = 2 (\delta_{-} -
\delta_2^0 ) .
\label{Omega}
\end{equation}
The solid angle of the spherical triangle $\Delta APQ$ is easily
calculated as $2 \Delta_{12}^0$ when the point P coincides with the
antipode of A. Then the solid angle of the spherical triangle varies
from zero to $2 \Delta_{12}^0$ as the point P varies from the point A
to the point $-$A and, accordingly,
$\delta_{-}$ varies from $\delta_2^0$
to $\delta_1^0$ as energy varies from $-\infty$ to $\infty$, which is
consistent with the result of Ref. \cite{Lee98}.
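The identification of the spherical excess with the solid angle can be tested numerically by comparing the excess $\delta_r + \Delta_{12}^0 - \delta_a$ of the triangle $\Delta$APQ with a Monte-Carlo estimate of the solid angle it subtends (arbitrary sample angles; a point lies inside the triangle when it is on the same side of each great-circle plane as the opposite vertex):

```python
import numpy as np

rng = np.random.default_rng(0)
th_a, th_t, D12 = 0.9, 1.3, 0.7            # arbitrary sample angles
A = np.array([0.0, 0.0, 1.0])
P = np.array([np.sin(th_a), 0.0, np.cos(th_a)])
Q = np.array([np.sin(th_t)*np.cos(-D12), np.sin(th_t)*np.sin(-D12), np.cos(th_t)])

def ang(U, V, W):
    """Interior vertex angle at U of the spherical triangle UVW."""
    a = np.arccos(np.dot(V, W))            # side opposite U
    b = np.arccos(np.dot(U, W))
    c = np.arccos(np.dot(U, V))
    return np.arccos((np.cos(a) - np.cos(b)*np.cos(c)) / (np.sin(b)*np.sin(c)))

# spherical excess = sum of interior angles - pi = delta_r + D12 - delta_a
excess = ang(A, P, Q) + ang(P, Q, A) + ang(Q, A, P) - np.pi

# Monte-Carlo estimate of the solid angle subtended by the triangle
v = rng.normal(size=(200000, 3))
v /= np.linalg.norm(v, axis=1)[:, None]
inside = np.ones(len(v), dtype=bool)
for V1, V2, V3 in ((A, P, Q), (P, Q, A), (Q, A, P)):
    n = np.cross(V1, V2)                   # great-circle plane of edge V1V2
    inside &= (v @ n) * np.dot(n, V3) > 0  # same side as opposite vertex V3
omega = 4.0 * np.pi * inside.mean()
print(abs(omega - excess) < 0.05)          # True within Monte-Carlo noise
```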
So far, a geometric realization of the $S$ matrix and $Q$ matrix has
been considered. Let us now go back to the original questions we had
in the beginning of Sec. \ref{Sec:Prep} and see whether we can explain
them.
The first question was why the energy variations of the
eigenvectors of the $S$ matrix are independent of $q_a$ while those of
its eigenphase shifts, or more specifically $\delta_a$, depend on
it. Let us start from the fact that the eigenvectors, or eigenchannels, of the
$S$ matrix are obtained by the frame transformation of the background
eigenchannels. Since the background eigenchannels are fixed, the
energy variations of eigenchannels of the $S$ matrix completely come
from the frame transformation which in this case is parametrized with
$\theta_a$. In Fig. \ref{fig:euler}, $\theta_a$ is the edge angle
opposite to the vertex angle $\delta_r$. If $\epsilon_r$ varies,
$\delta_r$ varies according to $\cot \delta_r$ = $-\epsilon_r$. Since
the edge angle $\theta_a$ is the opposite to $\delta_r$, $\theta_a$
may be expected to vary linearly with $\delta_r$. Such an expectation
turns out to be wrong. Instead, $\cot \theta_a$ varies linearly with
$\cot \delta_r$ according to one of the relations in
(\ref{angle_deriv}), $d \cot \theta_a / d \cot \delta_r$ = $\sin
\Delta_{12}^0 / \sin \theta_t$, which is fixed in energy. The relation
tells us that $\cot \theta_a$ has a linear relation with $\epsilon_r$
but $\cot \theta_a$ may not be zero when $\epsilon_r$ = 0, in
general. But we can always introduce a new energy scale, let us call
it $\epsilon_a$, in which $\cot \theta_a$ is zero at $\epsilon_a$ = 0, and
the proportionality constant can be set so that $\cot \theta_a = -
\epsilon_a$. The argument proves that $\cot \theta_a$ needs no
further parameter such as $q_a$. The reason why the energy variation of
$\delta_a$ needs the line profile index was already considered around
Eq. (\ref{cot_parallel}) and need not be repeated here.
If the second question, which asks why the energy behaviors of
$\delta_a$, $\tau_a$ and $\tau_f$ follow the Beutler-Fano formulas, is
rephrased as ``is it possible to show geometrically that their
behaviors follow the Beutler-Fano formula?'', the answer is yes:
their behavior is the result of the cotangent laws holding for the
spherical triangle, as we have shown in this section.
Let us answer the third question why $\tau_a$ takes the Beutler-Fano
formula in the energy scale of $\epsilon_r$ instead of $\epsilon_a$
though $\tau_a$ is obtained as the derivative of $\epsilon_a$.
Note that the question on
$\tau_a$ can be paraphrased into one on $\cot \theta_f$ since $\tau_a$
= $\tau_r (\epsilon_r ) (1+\cot^2 \theta_f )^{-1/2}$.
There are two such cotangent laws for $\cot \theta_f$ as
follows
\begin{eqnarray}
\cot \theta_f &=& \frac{1}{\sin \theta_t} \left(\cos \delta_r \cos
\theta_t + \sin \delta_r \cot \Delta_{12}^0 \right) ,
\label{cotf_delta_r}
\\
\cot \theta_f &=& \frac{1}{\sin \theta_a} \left(- \cos \delta_a \cos
\theta_a + \sin \delta_a \cot \Delta_{12}^0 \right) .
\label{cotf_delta_a}
\end{eqnarray}
Eq. (\ref{cotf_delta_r}) expresses $\cot \theta_f$ in terms of
$\epsilon_r$ while Eq. (\ref{cotf_delta_a}) expresses it in terms of
$\epsilon_a$. In Eq. (\ref{cotf_delta_r}), $\delta_r$ is the only
parameter which is a function of energy while, in
Eq. (\ref{cotf_delta_a}), not only $\delta_a$ but also $\theta_a$ are
functions of energy. Eq. (\ref{cotf_delta_r})
gives the Beutler-Fano formula as a function of $\epsilon_r$ as we
already saw in Eq. (\ref{cotf_bf}). Eq. (\ref{cotf_delta_a}) might
also give a Beutler-Fano formula as a function of $\epsilon_a$ if
$\theta_a$ were a constant of energy, but it fails to do so as
$\theta_a$ is a function of energy, too. On the other hand, if
$\theta_t$ were a function of energy, Eq. (\ref{cotf_delta_r}) could
not give the Beutler-Fano formula either. This argument reveals that the
cotangent laws of a spherical triangle alone are not sufficient to
guarantee the presence of Beutler-Fano formulas.
Let us consider the answer to the fourth question, which asks
why $| {\bf P}_t |^2$ = $|{\bf P}_a |^2 + |{\bf P}_f |^2$ = 1
is satisfied. Eq. (\ref{QPa}) shows that non-zero resonant behavior
of the time delay occurs
only when the system is in the $\psi_E^{(a)}$ state, which derives
from the fact that the $\psi_E^{(a)}$ state is the only type of continuum
that can interact with the discrete state. On the other
hand, the unit magnitude of the polarization ${\bf P}_t$ means that
only one continuum shows a resonant behavior while others do not.
Thus Eq. (\ref{QPa}) proves that $|{\bf P}_t|$ = 1.
The answer to the fifth question is provided by the identification of
$r^2$ with $\tan^2 \theta_t$ and need not be considered further.
\section{Application to the triatomic van der Waals
predissociation \label{sec:apps}}
Ref. \cite{Lee98} and the present paper have developed the theory for
the behaviors of eigenphase shifts and time delays. Let us now
consider the application of the theory to the vibrational
predissociation of triatomic van der Waals molecules. The theory can
be applied in two ways. When the $S$ matrix is known as a function of
energy either experimentally or by a theoretical calculation,
eigenphase shifts can be calculated directly by its
diagonalization. Similarly, eigentime delays can be calculated from
the $S$ matrix. For these data, the formulas for eigenphase shifts and
time delays derived from the theory can be used as models, with the
parameters in the formulas viewed as adjustable ones. One may try to
extract the best values of the parameters by fitting the data of
eigenphase shifts and eigentime delays obtained by the diagonalization
of the $S$ matrix to the theoretical models. On the other hand, by
using the formulas for the parameters themselves derived from the
theory, parameters can be directly calculated without doing data
fitting. Parameters obtained in the two different ways, namely by
data-fitting and by using the theoretical formulas, are not identical,
as the theory developed so far relies on the assumption that the
background eigenphase shifts and partial decay widths are constants of
energy, which is usually a good approximation but does not hold
exactly in an actual system.
The data-fitting will be done only for the eigentime delay sum
(\ref{eigentime_delay_sum}) and
partial delay times $2\hbar d\delta_{+}/dE$ and $2\hbar d\delta_{-}
/dE$. Eigenphase shifts will not be used for the data-fitting since
they need the information on $E_a$ and $\Gamma_a$, which is not
available before the data-fitting. The data-fitting of partial delay
times to the theoretical formulas
\begin{eqnarray}
2\hbar \frac{d\delta_{\pm}}{dE} &=& \hbar \left( \frac{d\delta_r}{dE}
\pm \frac{d\delta_a}{dE} \right) = \frac{1}{2} ( \tau_r \pm \tau_a ) =
\frac{1}{2} \tau_r (1 \pm \cos \theta_f ) \nonumber\\
&=& \frac{2\hbar}{\Gamma} \frac{1}{1+ \epsilon_r^2}
\left[ 1 \mp \frac{\epsilon_r - q_{\tau}}{\sqrt{(\epsilon_r -
q_{\tau})^2 + \tan ^2 \theta_t (1+\epsilon_r^2 )}} \right]
\label{partial_delay_times}
\end{eqnarray}
can, on the other hand, be done easily, since the information on $E_0$
and $\Gamma$, which is necessary to convert $E$ to the $\epsilon_r$ needed
in Eq. (\ref{partial_delay_times}), is easily
obtained from the data fitting of the eigentime delay sum
(\ref{eigentime_delay_sum}). (The eigenphase sum can also be used to
obtain $E_0$ and $\Gamma$.)
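As a numerical cross-check of Eq. (\ref{partial_delay_times}) (not part of the original analysis; a Python sketch with made-up values of $\Gamma$, $q_{\tau}$, and $\theta_t$), the two partial delay times should sum to the eigentime delay sum $\tau_r = (4\hbar/\Gamma)/(1+\epsilon_r^2)$ and coincide at $\epsilon_r = q_{\tau}$:

```python
import numpy as np

hbar, Gamma = 1.0, 2.0          # hypothetical units and total width
q_tau, theta_t = 1.5, 0.7       # hypothetical line profile index and theta_t
eps = np.linspace(-10, 10, 2001)

tau_r = (4 * hbar / Gamma) / (1 + eps**2)            # eigentime delay sum
cos_tf = -(eps - q_tau) / np.sqrt((eps - q_tau)**2
                                  + np.tan(theta_t)**2 * (1 + eps**2))
tau_plus  = 0.5 * tau_r * (1 + cos_tf)               # 2*hbar*d(delta_+)/dE
tau_minus = 0.5 * tau_r * (1 - cos_tf)               # 2*hbar*d(delta_-)/dE

# The two branches sum to tau_r everywhere ...
assert np.allclose(tau_plus + tau_minus, tau_r)
# ... and meet (cos theta_f = 0) exactly at eps = q_tau
i = np.argmin(np.abs(eps - q_tau))
assert abs(tau_plus[i] - tau_minus[i]) < 1e-6
```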
Graphs of the partial delay times are shown
in Fig. \ref{fig:dpm_df} for several values of the line profile
indices. Some general characteristics of the graphs can be noticed.
\begin{enumerate}
\item Graphs of partial delay times $2 \hbar d\delta_{+} / dE$ and
$2\hbar d
\delta_{-} /dE$ meet at $\epsilon_r$ = $q_{\tau}$.
\item As $ | q_{\tau} |$ $\rightarrow$ $\cot
\Delta_{12}^{\circ}$,
\begin{equation}
\cos \theta_f (\epsilon_r ) \rightarrow
\left\{ \begin{array}{rl}
1& {\rm when}~\epsilon_r \le q_{\tau} \\
-1& {\rm when}~\epsilon_r > q_{\tau} ,
\end{array} \right.
\label{cosf_qt_limit}
\end{equation}
and the partial delay times become
\begin{eqnarray}
2\hbar \frac{d\delta_{+}}{dE} \rightarrow
\left\{ \begin{array}{ll}
\tau_r (\epsilon_r ) &{\rm when}~\epsilon_r \le q_{\tau} \\
0 &{\rm when}~\epsilon_r > q_{\tau} ,
\end{array} \right.
\nonumber \\
2\hbar \frac{d\delta_{-}}{dE} \rightarrow
\left\{ \begin{array}{ll}
0 &{\rm when}~\epsilon_r \le q_{\tau} \\
\tau_r (\epsilon_r ) &{\rm when}~\epsilon_r > q_{\tau} .
\end{array} \right.
\label{td_theta_t_zero}
\end{eqnarray}
This case corresponds to $\theta_t$ $\rightarrow$ 0, or $\theta_f$
$\rightarrow$ $\theta_a$, as can be easily seen from the inspection of
Fig. \ref{fig:sphere}. But with this geometrical consideration alone,
it is hard to find the limit of $\cos \theta_f$ in
Eq. (\ref{cosf_qt_limit}). The behavior of the limit of $\cos
\theta_f$ can only be obtained when $\Gamma_a \rightarrow 0$ is taken
into account at $\theta_t \rightarrow 0$, i.e., when the strength of
the channel coupling is taken into account, which is hidden in
the geometrical consideration because of the use of $\epsilon_a$. When
$\Gamma_a \rightarrow 0$,
\begin{equation}
\epsilon_a \rightarrow
\left\{ \begin{array}{rl}
- \infty & {\rm when}~E < E_a \\
\infty & {\rm when}~E > E_a .
\end{array} \right.
\end{equation}
Only two values of $\epsilon_a$ are possible in the limit of
$\Gamma_a$ $\rightarrow$ 0 or $| q_{\tau} |$ $\rightarrow$ $\cot
\Delta_{12}^0$. $\epsilon_a \rightarrow \mp \infty$ correspond to
$\theta_a$ $\rightarrow$ 0 and $\pi$, or $\cos \theta_a $
$\rightarrow$ 1 and $-$1, respectively. Since $\cos \theta_f$
$\rightarrow$ $\cos \theta_a$, $\cos \theta_f$ satisfies the limit
(\ref{cosf_qt_limit}) as $|
q_{\tau}|$ $\rightarrow$ $\cot \Delta_{12}^0$. The energies at which
$\cos \theta_f$ and $\epsilon_a$ undergo abrupt changes look different
but are equivalent since $E = E_a$ or $\epsilon_a =0$ corresponds to
$\epsilon_r$ = $q_{\tau}$ owing to the relation (\ref{e_r_e_a}) and
$q_a \rightarrow \infty$.
\item As $| q_{\tau}| \rightarrow \infty$, the graph of $\cos
\theta_f (\epsilon_r )$ becomes symmetric about $\epsilon_r = 0$ and
is given by
\begin{equation}
\cos \theta_f (\epsilon_r ) \rightarrow
\frac{\cos \Delta_{12}^{\circ}}{\sqrt{\epsilon_r^2
\sin ^2 \Delta_{12}^{\circ}+1}} ~~~~~~~~{\rm when}~ |q_{\tau}|
\rightarrow \infty .
\label{cosf_q_infty}
\end{equation}
The derivation of Eq. (\ref{cosf_q_infty}) from
Eq. (\ref{partial_delay_times}) is not so easy. $|q_{\tau}|$
$\rightarrow$ $\infty$ arises in two cases: $\cot \Delta_{12}^0$
$\rightarrow$ $\infty$ or $\cos \theta_t$ = 0 ($\theta_t$ =
$\pi/2$). The geometric consideration is of great help when $\theta_t$
= $\pi /2$. In this case, Napier's rule gives $\cot \theta_f$ =
$\sin\delta_r \cot \Delta_{12}^0$\cite{NapierRule}. From this formula,
$\cos ^2 \theta_f$ = $\cos ^2 \Delta_{12}^0 /
(\epsilon_r^2 \sin ^2 \Delta_{12}^0 +1)$ is obtained by simple
trigonometric manipulations. Taking the square root of both sides
yields Eq. (\ref{cosf_q_infty}) up to a sign. In order to fix
the sign, let us consider the case of $\epsilon_r$ = 0 which
corresponds to $\delta_r$ = $\pi/2$. Since $\theta_t$ = $\delta_r$ =
$\pi/2$ means that the chord PQ in Fig. \ref{fig:sphere} is part of
the equator, we have $\theta_a$ = $\delta_a$ = $\pi /2$. For this
particular spherical triangle, it can be easily proved that $\theta_f$
= $\Delta_{12}^0$. This fixes the sign. The remaining case of $\cot
\Delta_{12}^0$ $\rightarrow$ $\infty$ corresponds to $\Gamma_a$
$\rightarrow$ 0 and cannot be easily handled by the geometric argument,
as mentioned above. Eq. (74) of Ref. \cite{Lee98} allows us to handle
this case and yields $\cos \theta_f$ = $\cos \Delta_{12}^0$ which is
identical with
Eq. (\ref{cosf_q_infty}) in this case.
The partial delay times at $\epsilon_r$ = 0 are
\begin{eqnarray}
2\hbar \frac{d\delta_{+}}{dE}( \epsilon_r =0 ) &\rightarrow&
\frac{4\hbar}{\Gamma} \cos ^2\left(\frac{\Delta_{12}^{\circ}}{2}
\right) , \nonumber \\
2\hbar \frac{d\delta_{-}}{dE}( \epsilon_r =0 )
&\rightarrow& \frac{4\hbar}{\Gamma} \sin
^2\left(\frac{\Delta_{12}^{\circ}}{2} \right) ,
\end{eqnarray}
which are easily obtained by substituting $\Delta_{12}^0$ for
$\theta_f$ in $2 \hbar d\delta_{\pm}/dE$ = $\frac{1}{2} \tau_r (1\pm \cos
\theta_f )$.
\end{enumerate}
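The $|q_{\tau}|\rightarrow\infty$ limit of item 3 can also be checked numerically. The sketch below assumes the relation $q_{\tau} = \cot\Delta_{12}^0/\cos\theta_t$, inferred here for illustration from the cotangent laws quoted in the Summary (the actual definition is Eq. (\ref{qtau}), not reproduced in this section); as $\theta_t \rightarrow \pi/2$, the general $\cos\theta_f$ of Eq. (\ref{partial_delay_times}) approaches the limiting form (\ref{cosf_q_infty}):

```python
import numpy as np

Delta12 = 0.9                      # hypothetical background eigenphase difference
eps = np.linspace(-5, 5, 101)

def cos_theta_f(eps, Delta12, theta_t):
    # general expression, with q_tau = cot(Delta12)/cos(theta_t) (assumed relation)
    q_tau = (np.cos(Delta12) / np.sin(Delta12)) / np.cos(theta_t)
    return -(eps - q_tau) / np.sqrt((eps - q_tau)**2
                                    + np.tan(theta_t)**2 * (1 + eps**2))

limit = np.cos(Delta12) / np.sqrt(eps**2 * np.sin(Delta12)**2 + 1)
approx = cos_theta_f(eps, Delta12, np.pi / 2 - 1e-6)  # theta_t -> pi/2, |q_tau| -> oo
assert np.allclose(approx, limit, atol=1e-4)
```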
Before doing the data fitting, let us briefly describe the system
used for the calculation and the methods of calculation. The triatomic
van der Waals molecule considered here is a rare
gas$\cdots$homonuclear halogen-like diatomic
system\cite{Delgado-Barrio95}. Let us consider the situation where the
van der Waals molecule in its ground state is excited by light
whose energy corresponds to excitation of the diatomic vibrational motion
from the $v$ = 0 to the $v$ = 1 state. This energy is sufficient to break
the van der Waals bond and produces a predissociation spectrum as the
light energy is scanned over a certain frequency range.
The following interaction potential between A and B$_2$ in the
A$\cdots$B$_2$ triatomic system,
\begin{equation}
V(R,r,\gamma)= \left\{ \begin{array}{ll} V_{\rm M} (R,r,\gamma) &
{\rm when}~ R\le R^* \\
V_{\rm vdW}(r,\gamma)+(V_{\rm M} -V_{\rm
vdW}) e^{-\rho \left( \frac{R-R^*}{R^*} \right) ^2} & {\rm when}~R
\ge R^* ,
\end{array} \right.
\label{V_Jacobi}
\end{equation}
is the one employed by Halberstadt et al. to fit the predissociation
data for the Ne$\cdots$Cl$_2$ system and is used here\cite{Halberstadt87}.
In Eq. (\ref{V_Jacobi}), $R,r,\gamma$ are the Jacobi coordinates that
denote the distance between A and the center of mass of B$_2$, the
bond distance of B$_2$, and the angle between ${\bf R}$ and ${\bf r}$,
respectively\cite{Beswick81}; $V_{\rm M} (R,r,\gamma) $ and $V_{\rm
vdW}$ are given as
\begin{eqnarray}
V_{\rm M} (R,r,\gamma) &=& D_{\rm AB} \sum_{i=1}^2
\left\{ \left[
e^{-\alpha_{\rm AB}(R_{{\rm AB}_i}-R_{\rm AB}^{(o)})}-1
\right] ^2-1 \right\} \nonumber \\
&& + D_{\rm CM} \left\{ \left[ e^{-\alpha_{\rm CM}
(R-R_{\rm CM}^{(o)})}
-1 \right] ^2 -1 \right\} ,
\end{eqnarray}
\begin{equation} V_{\rm vdW} (R,\gamma)= - { C_6
(\gamma ) \over R^6} - {C_8 (\gamma ) \over R^8 } ,
\label{Vvdw}
\end{equation}
where $R_{{\rm AB}_i}$ is the distance between A and the $i^{\rm th}$ B
atom; the other parameters are adjustable parameters determined by the
best fit to experimental values. The values
of the parameters used in this paper are given in Table \ref{potential}.
Two Legendre terms are retained for $C_6 (\gamma)$ and $C_8 (\gamma)$
in Eq. (\ref{Vvdw}),
e.g.,
\begin{equation}
C_6 (\gamma ) = C_{60} +C_{62} P_2
(\cos\gamma ) .
\end{equation}
$R^*$ in Eq. (\ref{V_Jacobi}) is chosen as the inflection point of the
center-of-mass Morse potential and is given by $R^*=R_{\rm CM}^{(o)} +
\ln 2/\alpha_{\rm CM}$.
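To make the switching construction in Eq. (\ref{V_Jacobi}) concrete, here is a minimal one-dimensional sketch (made-up parameter values, not the fitted values of Table \ref{potential}; a single Morse term stands in for $V_{\rm M}$). It checks that $V$ is continuous at $R^*$ and that $R^* = R_{\rm CM}^{(o)}+\ln 2/\alpha_{\rm CM}$ is indeed the inflection point of the Morse term:

```python
import numpy as np

# Hypothetical 1-D stand-in parameters (atomic units)
D, alpha, R0 = 1.2e-3, 1.5, 7.0        # Morse-like A...B2 stretch
C6, C8, rho  = 4.0, 60.0, 10.0         # vdW coefficients and switching exponent
Rstar = R0 + np.log(2) / alpha         # inflection point of the Morse term

VM   = lambda R: D * ((np.exp(-alpha * (R - R0)) - 1.0)**2 - 1.0)
VvdW = lambda R: -C6 / R**6 - C8 / R**8

def V(R):
    if R <= Rstar:
        return VM(R)
    # smooth switch from the Morse form to the asymptotic vdW form
    return VvdW(R) + (VM(R) - VvdW(R)) * np.exp(-rho * ((R - Rstar) / Rstar)**2)

# Continuous at R*, since the switching factor equals 1 there
assert abs(V(Rstar - 1e-9) - V(Rstar + 1e-9)) < 1e-10
# R* is the inflection point of the Morse term: V_M'' changes sign there
h = 1e-4
d2 = lambda R: (VM(R + h) - 2 * VM(R) + VM(R - h)) / h**2
assert d2(Rstar - 0.1) > 0 > d2(Rstar + 0.1)
```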
The Hamiltonian for the triatomic van der Waals molecules
A$\cdots$B$_2$ in the Jacobi coordinates is given in atomic units
by\cite{Halberstadt87}
\begin{equation}
H= - {1 \over 2m} {\partial ^2 \over \partial R^2} + {{\bf j} ^2 \over
2 \mu r^2} + { {\bf l} ^2 \over 2m R^2} + V(R,r,\gamma ) + H_{\rm B_2}
(r), \label{H}
\end{equation}
where
\begin{equation}
H_{\rm B_2}(r) = -{1 \over 2\mu r^2 }{\partial ^2 \over \partial r^2}
+ V_{\rm B_2} (r) ,
\end{equation}
denotes the vibrational Hamiltonian of B$_2$; $m$ is the reduced mass
of B$_2$; $\mu$ denotes the reduced mass of A and the center of mass
of B$_2$; ${\bf j}$ is the angular momentum operator of B$_2$; ${\bf
l}$ is the orbital angular momentum operator of the relative motion of
A and the center of mass of B$_2$. The values of diatomic molecular
parameters of B$_2$ used in this paper are given in Table
\ref{diatom}. The calculation is limited to zero total angular
momentum ${\bf J} ={\bf j} +{\bf l}$, as is usually done in this
field; this does not affect the predissociation dynamics much. Such a
limitation simplifies the Hamiltonian, since ${\bf l}$ can then be
replaced by ${\bf j}$.
Let $\Psi^{-(i)} (R,r,\gamma )$ denote the eigenfunctions of $H$ of
Eq. (\ref{H}) corresponding to the state vibronically excited
by light, which predissociates into an atom and a diatomic
fragment. Each eigenfunction is indexed by the vib-rotational quantum
numbers ($v,j$) of its diatomic photofragment, abbreviated
to $i$, i.e., $i$ = ($v,j$). When the wavefunction
$\Psi^{-(i)} (R,r,\gamma )$ for the dissociation channel
$i$ = ($v,j$) is expanded in terms of $n$ basis functions
$\Phi_{i'} (r,\gamma )=(r|v')Y_{j' o} (\gamma,0)$ ($i'$ = 1,2,...,$n$)
as
\begin{equation}
\Psi^{-(i)} (R,r,\gamma )=\sum_{i'} \Phi _{i'} (r, \gamma ) \chi_{i'i}
(R),
\label{psi_vdw_asym}
\end{equation}
the close-coupling equations for $\chi_{i'i} (R)$ are obtained as
\begin{equation}
\left[ -{1 \over 2m} {d^2 \over dR^2}- k_{i'}^2 +{{\bf j} ^2 \over 2m
R^2 } \right] \chi _{i'i} (R) +\sum_{i''} V_{i'i''} (R)\chi_{i''i}
(R)=0,
\label{cc}
\end{equation}
with
\begin{equation}
k_{i'}^2 =
2m[E-Bj'(j'+1)-(v'+\frac{1}{2} )\omega ],
\end{equation}
and
\begin{equation}
V_{i''i'} (R)=\int d\gamma \sin\gamma \int dr \Phi_{i''}(r,\gamma
)V(R,r,\gamma )\Phi _{i'}^* (r,\gamma ) .
\end{equation}
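The channel wavenumbers $k_{i'}^2$ determine which basis channels are open ($k_{i'}^2>0$) or closed at a given energy. A sketch with hypothetical values of $m$, $B$, and $\omega$ (not the fitted constants of Table \ref{diatom}) reproduces the open/closed pattern used below, with the $v=0$ channels open and the $v=1$ channels closed:

```python
# Channel classification from k_i'^2 = 2m [E - B j'(j'+1) - (v'+1/2) omega]
m, B, omega = 2.0e4, 1.0e-6, 2.0e-3     # hypothetical values, atomic units
channels = [(v, j) for v in (0, 1) for j in (0, 2)]

def k2(E, v, j):
    return 2 * m * (E - B * j * (j + 1) - (v + 0.5) * omega)

E = 2.5e-3                               # between the v=0 and v=1 thresholds
open_ch   = [c for c in channels if k2(E, *c) > 0]
closed_ch = [c for c in channels if k2(E, *c) <= 0]
assert open_ch == [(0, 0), (0, 2)] and closed_ch == [(1, 0), (1, 2)]
```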
The close-coupling equations (\ref{cc}) are solved by the De Vogelaere
algorithm\cite{Lester71}, and wavefunctions (\ref{psi_vdw_asym}) that
satisfy the incoming wave boundary condition are thereby
obtained. The $S$ matrix obtained in this process, which is identical
with (\ref{S_res}), is diagonalized and eigenphase shifts
(\ref{eigenphase_shifts}) are obtained. Two closed channels
corresponding to ($v=1,j=0$) and ($v=1,j=2$) and two open channels
corresponding to ($v=0,j=0$) and ($v=0,j=2$) are included to mimic
the system of one discrete state and two continua to which the theory
developed in this work applies. This calculation
yields the data of eigenphase shifts as functions of energy. Let us
call this method of calculation the close-coupling method.
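For orientation, radial propagation of a single uncoupled channel can be sketched with the Numerov method (a common alternative propagator, shown here only for illustration; the calculations in this paper use the De Vogelaere algorithm\cite{Lester71}). The free-channel case $\chi'' = -k^2\chi$ has the exact solution $\sin(kR)$, against which the propagation can be checked:

```python
import numpy as np

# Numerov propagation for chi'' = -g(R) chi; here g = k^2 (free channel),
# so chi(R) = sin(k R) is exact.
k, h, n = 1.3, 1e-3, 5000
g = np.full(n + 2, k**2)
chi = np.zeros(n + 2)
chi[0], chi[1] = np.sin(0.0), np.sin(k * h)   # two starting values
c = h**2 / 12.0
for i in range(1, n + 1):
    chi[i + 1] = (2 * (1 - 5 * c * g[i]) * chi[i]
                  - (1 + c * g[i - 1]) * chi[i - 1]) / (1 + c * g[i + 1])

R = np.arange(n + 2) * h
assert np.max(np.abs(chi - np.sin(k * R))) < 1e-8
```

For a coupled-channel problem, $g$ becomes the matrix $-2m[V(R)-E] - {\bf j}^2/R^2$ and the same recursion propagates a matrix of solutions.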
Note that the theory developed in Ref. \cite{Lee98} and in the present
paper relies on the presence of a discrete state embedded in
continua. Among various theories devised to describe the resonance
with explicit consideration of a discrete state, Fano's configuration
interaction theory is chosen in this work\cite{Lee98}. In its normal
use, as described in the above paragraph, the close-coupling method
cannot be connected with the configuration interaction theory, since no
discrete state is assumed in the close-coupling method. But with a
little modification in its use, it can be used to calculate the dynamic
parameters of the configuration interaction theory. A discrete state
with its resonance energy $E_0$ used in the configuration interaction
theory can be obtained by solving the close-coupling equations
(modified to incorporate the shooting method\cite{NumericalBook}) with
inclusion of closed channels alone. Wavefunctions obtained by solving
the close-coupling equations with inclusion of open channels alone
obviously diagonalize the Hamiltonian in the subspace spanned by open
channels alone and are the continuum wavefunctions $\psi_E^{-(l)}$
considered in the configuration interaction theory. The background
scattering matrix $S^0$ is obtained as a byproduct when the continuum
wavefunctions $\psi_E^{-(l)}$ are forced to satisfy the incoming wave
boundary conditions (or outgoing wave boundary conditions if a
scattering system is considered instead of photodissociation). By
diagonalizing $S^0$, background eigenphase shifts $\delta_1^0$ and
$\delta_2^0$ and the frame transformation matrix $U^0$ from the
asymptotic wavefunctions $\psi_E^{-(l)}$ to the background eigenchannel
wavefunctions $\psi_E^{(k)}$ are obtained. With $U^0$, background
eigenchannel wavefunctions can be obtained from the asymptotic ones as
\begin{equation}
\psi_E^{(k)} (R,r,\gamma ) = -i e^{i \delta_k^0} \sum_l
\tilde{U}_{kl}^0 \psi _E^{-(l)}(R,r,\gamma ) ,
\end{equation}
and can be used to calculate the partial decay widths $\Gamma_k$ as
\begin{equation}
\Gamma_k = 2 \pi \left| \left( \psi_E^{(k)} | H | \phi \right) \right|^2
= 2 \pi \left|\left( \psi_E^{(k)} | V(R,r,\gamma ) | \phi
\right)\right| ^2 ,
\label{Gamma_k}
\end{equation}
where the last equality holds for $\delta v$ = $\pm 1$ vibronic
predissociation since $V(R,r,\gamma )$ is the only term containing odd
powers of $r$ in $H$. Since $E_0$, $\delta_1^0$, $\delta_2^0$,
$\Gamma_1$, and $\Gamma_2$ are obtained, the dynamic parameters $E_a$,
$\Gamma_a$, $q_{a}$, $q_{\tau}$, and $\cot \theta_t$ are calculated
using $E_a$ = $E_0$ + $\Delta \Gamma \cot \Delta_{12}^0 /2$,
$\Gamma_a$ = $2\sqrt{\Gamma_1 \Gamma_2} / \sin \Delta_{12}^0$,
Eqs. (\ref{q_a}), (\ref{qtau}), and (\ref{theta_t}).
Though the configuration interaction theory directly calculates the
dynamic parameters $E_0$, $\Gamma$, $E_a$, $\Gamma_a$, $q_a$,
$q_{\tau}$, and $\cot \theta_t$, the close-coupling method cannot
calculate them directly. The dynamical parameters directly obtainable
from the close-coupling method are the $S$ matrix, its eigenphase
shifts $ \delta_{+}$ and $ \delta_{-}$, and partial delay times
$2\hbar d\delta_{+} /dE$ and $2\hbar d\delta_{-} /dE$ as functions of
energy. If the assumptions used in configuration interaction theory
are exact, eigenphase shifts and partial delay times should satisfy
Eq. (\ref{eigenphase_shifts}) and Eq. (\ref{partial_delay_times}),
respectively. The assumptions on which the configuration interaction
theory relies, namely that $\delta_1^0$, $\delta_2^0$, $\Gamma_1$, and
$\Gamma_2$ are constants of energy, are expected not to cause much
trouble in actual situations as long as the energy range considered is
not too wide. If eigenphase shifts and partial delay times follow
Eq. (\ref{eigenphase_shifts}) and Eq. (\ref{partial_delay_times}),
values of dynamic parameters $E_a$, $\Gamma_a$, $q_a$, $q_{\tau}$, and
$\cot \theta_t$ can be obtained by varying them so that eigenphase
shifts and partial delay times calculated by the close-coupling method
fit the formulas best. Fitting is done with the Levenberg-Marquardt
method recommended for the nonlinear models in
Ref. \cite{NumericalBook}. For the reason mentioned above, only
partial delay times will be fitted.
Data fitting is done in two steps. First, $E_{\circ}$ and
$\Gamma$ are obtained by fitting the eigentime delay sum. Then
$q_{\tau}$ and $\cot \theta_t$ are obtained by fitting the partial delay
times $2\hbar d \delta_{+} /dE$ [$= \frac{1}{2}(\tau_r + \tau_a)$] and $2\hbar
d\delta_{-} / dE$ [$= \frac{1}{2}(\tau_r - \tau_a)$] with
(\ref{partial_delay_times}). Either $2\hbar d\delta_{+} /dE$ or
$2\hbar d\delta_{-} /dE$ can be used to obtain the values of
$q_{\tau}$ and $\cot \theta_t$. Values of $q_{\tau}$ and $\cot
\theta_t$ obtained from either of them should be identical if the
assumption of the configuration interaction method, namely that
$\Gamma_1$, $\Gamma_2$, $\delta_1^0$, and $\delta_2^0$ are constants of
energy, is exact. The differences between $q_{\tau}$'s and
$\cot\theta_t$'s for $2\hbar d\delta_{+} /dE$ and $2\hbar d\delta_{-}
/dE$ may serve as a criterion of the exactness of the configuration
interaction theory.
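The first fitting step can be sketched with SciPy's curve_fit (whose "lm" method wraps MINPACK's Levenberg-Marquardt) on synthetic data. The sketch assumes the eigentime delay sum has the Lorentzian form $(4\hbar/\Gamma)/(1+\epsilon_r^2)$ with $\epsilon_r = 2(E-E_{\circ})/\Gamma$, consistent with Eq. (\ref{partial_delay_times}); the actual Eq. (\ref{eigentime_delay_sum}) is given earlier in the paper:

```python
import numpy as np
from scipy.optimize import curve_fit

hbar = 1.0

def tau_sum(E, E0, Gamma):
    # assumed Lorentzian form of the eigentime delay sum
    eps = 2 * (E - E0) / Gamma
    return (4 * hbar / Gamma) / (1 + eps**2)

rng = np.random.default_rng(0)
E0_true, Gamma_true = 0.1, 0.02
E = np.linspace(E0_true - 3 * Gamma_true, E0_true + 3 * Gamma_true, 200)
data = tau_sum(E, E0_true, Gamma_true) * (1 + 0.01 * rng.standard_normal(E.size))

(E0_fit, Gamma_fit), _ = curve_fit(tau_sum, E, data, p0=(0.11, 0.03),
                                   method="lm")   # Levenberg-Marquardt
assert abs(E0_fit - E0_true) < 1e-3
assert abs(Gamma_fit - Gamma_true) / Gamma_true < 0.05
```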
A numerical study shows that fitting the eigentime delay sum calculated
by the close-coupling method to the formula
(\ref{eigentime_delay_sum}) can be done reliably. In comparison,
fitting the partial delay times to the formula
(\ref{partial_delay_times}) is not so easy. Eigentime delays
calculated by the close-coupling method show abnormal behaviors, such
as negative values in an energy region not far from a resonance, while
the theory contends that eigentime delays are positive in the
neighborhood of a resonance; this means that the assumptions behind
the formula (\ref{partial_delay_times}) are prone to break down. It
may be argued that parameters obtained by fitting eigentime
delays calculated by the close-coupling method will deviate more from
those obtained by the configuration interaction method as the range of
energy taken for the fitting becomes wider. But we cannot always narrow
the energy range for this reason. Notice that $q_{\tau}$ is the energy
at which $\tau_{+}$ meets $\tau_{-}$. As seen in
Eq. (\ref{td_theta_t_zero}), if $q_{\tau}$ is large, the two curves are
almost identical with $\tau_{r}$ except in the neighborhood of
$q_{\tau}$. Therefore, for a good fit, the energy range taken for the
fitting should include $q_{\tau}$; if $q_{\tau}$ is large, the fitting
is thus likely to be bad. This assertion is checked by doing data
fittings for three different ranges of energy, namely, $[ E_{\circ}
-\Gamma, E_{\circ} +\Gamma ]$, $[E_{\circ} -2\Gamma, E_{\circ}
+2\Gamma ]$, and $[ E_{\circ} -3\Gamma, E_{\circ} +3\Gamma ]$.
Table \ref{tab:fit3044} shows the results calculated with the
parameters given in Table \ref{potential} and \ref{diatom}. Table
\ref{tab:fit5044} is obtained with the same parameters but with $r_e$
= 5.044 a.u. The former table corresponds to the case of large
$q_{\tau}$, while the latter corresponds to the case of small $q_{\tau}$.
The tables also show values of a fudge factor $\lambda$, which can be
used as a criterion for the goodness of fit. The fitting is done at
first by the steepest descent method with a small initial value, say
0.001, of $\lambda$. A new value of $\lambda$ is suggested for the next
iteration. If the suggested value of $\lambda$ becomes sufficiently
small, the inverse-Hessian method is used for the fitting. At the final
call, $\lambda$ is set to zero. This method assumes that the values of
$\lambda$ approach zero if everything goes well. This is true for the
data fitting of the eigentime delay sum. For the partial delay times,
the values of $\lambda$ do not go to zero. Even so, comparison of the
curves obtained by the two methods shows that the fitting is good
enough to be acceptable if the values of $\lambda$ are not too
large. In Table \ref{tab:fit3044}, the values of $q_{\tau}$ lie beyond
the first interval [$E_{\circ} - \Gamma$, $E_{\circ} +\Gamma$] and lie
in the second interval. For the reasons mentioned above, it is hard to
achieve a reliable fit in this case. The result closest to the
theoretical values is obtained for the first interval, the narrowest
one, where the fudge factor is worst, indicating that a wider interval
should be used. The calculation shows, however, that a wider interval
yields worse results, indicating that the assumptions may no longer
hold. On the other hand, in Table \ref{tab:fit5044}, the value of
$q_{\tau}$ is small; the data fitting can then be done rather reliably,
as confirmed by the calculation. This situation contrasts greatly
with the fitting of partial photodissociation cross sections to the
Fano-Beutler-like line profile formulas\cite{Lee95}
\begin{equation}
\sigma_j = \sigma_j ^{\circ}
\frac{| \epsilon + q_j |^2}{1+\epsilon ^2} =
\sigma_j ^{\circ} \frac{[ \epsilon + \Re (q_j ) ]^2}{1+\epsilon ^2} +
\sigma_j ^{\circ} \frac{[ \Im (q_j ) ]^2}{1+\epsilon ^2} ,
\end{equation}
for the same predissociating system of van der Waals
molecules, where the fitting is excellent as shown in Table
\ref{tab:partial_photodissociation_cross_section}.
\section{Summary and Discussion}
In the previous work\cite{Lee98}, eigenphase shifts for the $S$ matrix
and Smith's lifetime matrix $Q$ near a resonance were expressed as
functionals of the Beutler-Fano formulas using appropriate
dimensionless energy units and line profile indices. Parameters
responsible for the avoided crossing of eigenphase shifts and
eigentime delays and the change in frame transformation in the eigentime
delays were identified. The geometrical realization of those dynamical
parameters is carried out in this work, which allows us to give a
geometrical derivation of the Beutler-Fano formulas appearing in the
eigenphase shifts and time delays.
The geometrical realization is based on the real three-dimensional
space spanned by the Pauli matrices $\sigma_x$, $\sigma_y$, and
$\sigma_z$ as basis vectors, where vectors are orthogonal in the sense
that
\begin{equation}
{\rm Tr} (\sigma_i \sigma_j ) = 2 \delta_{ij} .
\end{equation}
Such a space is called a Liouville space\cite{Fano57}. A 2
$\times$ 2 traceless Hermitian matrix, which is generally expressed as
$\boldsymbol{\sigma} \cdot {\bf r}$ with ${\bf r}$ real, is a vector
in this Liouville space. The magnitude of a vector in the
three-dimensional Liouville space corresponds to the degree of
anisotropy in the coupling of the eigenchannels with whatever brings
about the dynamics of the dynamic operator corresponding to that
vector. The four-dimensional Liouville space, which includes the unit
matrix ${\bf 1}$ as another basis vector, is used for 2 $\times$ 2
Hermitian matrices whose trace is not zero.
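The orthogonality relation and the decomposition of a traceless Hermitian matrix as $\boldsymbol{\sigma}\cdot{\bf r}$ are easy to verify numerically; a small sketch:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
pauli = (sx, sy, sz)

# Orthogonality in the Liouville-space sense: Tr(sigma_i sigma_j) = 2 delta_ij
for i, si in enumerate(pauli):
    for j, sj in enumerate(pauli):
        assert np.isclose(np.trace(si @ sj), 2.0 * (i == j))

# Any traceless Hermitian 2x2 matrix is sigma . r with r real,
# recovered componentwise as r_i = Tr(sigma_i M) / 2
r = np.array([0.3, -1.1, 0.7])
M = sum(ri * si for ri, si in zip(r, pauli))
r_back = np.array([np.trace(si @ M).real / 2 for si in pauli])
assert np.allclose(r_back, r)
assert abs(np.trace(M)) < 1e-12 and np.allclose(M, M.conj().T)
```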
Resonant scattering can be separated from the background scattering in
the ${\cal S}$ matrix for the multichannel system around an isolated
resonance as ${\cal S}$ = ${\cal S}^0 (\pi_b + e^{-2i \delta_r } \pi_a
)$, where $\pi_a$ is the projection matrix onto $\psi_E^{(a)}$, the
only type of continuum interacting with the discrete
state, and $\pi_b = 1 - \pi_a$\cite{Fano65}. When the number of open
channels is limited to two, ${\cal S}^0$, $\pi_b + e^{-2i\delta_r}
\pi_a$, and ${\cal S}$ can be expressed in terms of the Pauli spin matrices as
\begin{eqnarray*}
{\cal S}^0 &=& e^{-i(\delta_{\Sigma}^0 {\bf 1} +
\Delta_{12}^0 \boldsymbol{\sigma} \cdot \hat{z} )} ,\\
\pi_b + e^{-2i \delta_r} \pi_a &=&
e^{-i (\delta_r {\bf 1} + \delta_r
\boldsymbol{\sigma} \cdot \hat{n}_t )} , \\
{\cal S} &=& e^{-i ( \delta_{\Sigma}{\bf 1} + \delta_a
\boldsymbol{\sigma} \cdot \hat{n}_a ) } .
\end{eqnarray*}
Phase shift matrices $\boldsymbol{\Delta}^0$, $\boldsymbol{\Delta}_r$,
and $\boldsymbol{\Delta}$ which are Hermitian may be defined for
${\cal S}^0$, $\pi_b + e^{-2i\delta_r} \pi_a$, and ${\cal S}$ as
\begin{eqnarray*}
\boldsymbol{\Delta}^0 &=& \frac{1}{2} (\delta_{\Sigma}^0 {\bf 1} +
\Delta_{12}^0 \boldsymbol{\sigma} \cdot \hat{z} ) ,
\\
\boldsymbol{\Delta}_r &=& \frac{1}{2} (\delta_r {\bf 1} + \delta_r
\boldsymbol{\sigma} \cdot \hat{n}_t ) ,
\\
\boldsymbol{\Delta} &=& \frac{1}{2} ( \delta_{\Sigma}{\bf 1} + \delta_a
\boldsymbol{\sigma} \cdot \hat{n}_a )
\end{eqnarray*}
and are vectors in the four-dimensional Liouville space.
According to the Campbell-Baker-Hausdorff formula, the phase shift
matrix $\boldsymbol{\Delta}$ is expressed as an infinite sum of
multiple commutators of $\boldsymbol{\Delta}^0$ and
$\boldsymbol{\Delta}_r$, which is difficult to use\cite{Weiss62}. The
geometric way of representing the combining rule of
$\boldsymbol{\Delta}^0$ and $\boldsymbol{\Delta}_r$ into
$\boldsymbol{\Delta}$ provides an alternative to that. We first note
that the isotropic part $\frac{1}{2} \delta_{\Sigma}$ of
$\boldsymbol{\Delta}$ is simply obtained from those of
$\boldsymbol{\Delta}^0$ and $\boldsymbol{\Delta}_r$ as the simple
sum $\frac{1}{2} (\delta_{\Sigma}^0 + \delta_r )$ and is factored
out in ${\cal S}$ = ${\cal S}^0 (\pi_b + e^{-2i \delta_r} \pi_a )$.
Then the remaining anisotropic part of $\boldsymbol{\Delta}$ is
obtained from those of $\boldsymbol{\Delta}^0$ and
$\boldsymbol{\Delta}_r$ as
\[
e^{-i \Delta_{12}^0 \boldsymbol{\sigma} \cdot \hat{z}} e^{-i \delta_r
\boldsymbol{\sigma} \cdot \hat{n}_t} = e^{-i \delta_a
\boldsymbol{\sigma} \cdot \hat{n}_a} .
\]
The above identity can be expressed
using the rotation matrices in the Liouville
space as
\[
R_{\hat{z}} (2 \Delta_{12}^0 ) R_{\hat{n}_t} (2 \delta_r ) =
R_{\hat{n}_a} (2 \delta_a ) .
\]
This relation leads to the construction of a spherical triangle whose
vertices are the endpoints of the vectors corresponding to the
anisotropic parts of the phase shift matrices, with their lengths
normalized to unity; the original lengths of the vectors are utilized
as the vertex angles of the spherical triangle. This spherical
triangle shows the rule for combining the
channel-channel couplings in the background scattering with the
resonant interaction to give the avoided-crossing interactions in the
curves of eigenphase shifts as functions of energy.
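The correspondence between the SU(2) identity and the composition of rotations by the doubled angles can be verified numerically (with made-up values of $\Delta_{12}^0$, $\delta_r$, and $\hat{n}_t$): the product of the two matrix exponentials is again of the form $e^{-i\delta_a \boldsymbol{\sigma}\cdot\hat{n}_a}$, and the extracted $\delta_a$ and $\hat{n}_a$ satisfy the rotation identity:

```python
import numpy as np
from scipy.linalg import expm
from scipy.spatial.transform import Rotation

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def sigma_dot(n):
    # sigma . n for a real 3-vector n
    return n[0] * sx + n[1] * sy + n[2] * sz

Delta12, delta_r = 0.6, 1.1                        # hypothetical angles
n_t = np.array([np.sin(0.8), 0.0, np.cos(0.8)])     # hypothetical unit vector n_t

# Product of the two SU(2) factors; it must again be of the form
# exp(-i delta_a sigma.n_a) = cos(delta_a) 1 - i sin(delta_a) sigma.n_a
P = expm(-1j * Delta12 * sz) @ expm(-1j * delta_r * sigma_dot(n_t))
delta_a = np.arccos(np.real(np.trace(P)) / 2.0)
n_a = np.array([np.trace(s @ P).imag for s in (sx, sy, sz)]) / (-2 * np.sin(delta_a))
assert np.allclose(P, expm(-1j * delta_a * sigma_dot(n_a)), atol=1e-10)

# The same composition, as rotations by the doubled angles:
# R_z(2 Delta12) R_{n_t}(2 delta_r) = R_{n_a}(2 delta_a)
Rz = Rotation.from_rotvec(2 * Delta12 * np.array([0.0, 0.0, 1.0]))
Rt = Rotation.from_rotvec(2 * delta_r * n_t)
Ra = Rotation.from_rotvec(2 * delta_a * n_a)
assert np.allclose((Rz * Rt).as_matrix(), Ra.as_matrix(), atol=1e-8)
```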
The time delay matrix ${\cal Q}$ basically derives from the energy
derivative of the phase shift matrix $\boldsymbol{\Delta}$ of the
${\cal S}$ matrix. The phase shift matrix $\boldsymbol{\Delta}$ is a
vector in the four-dimensional Liouville space and is $\frac{1}{2} (
\delta_{\Sigma} {\bf 1} + \delta_a \boldsymbol{\sigma} \cdot \hat{n}_a
)$ as stated above. The energy derivative of $\delta_{\Sigma}$ and
$\delta_a$ without a change in the direction of the vector $\hat{n}_a$
yields the ``partial delay time matrix'' given by $\hbar ( {\bf 1}\,
d\delta_{\Sigma} /dE + \boldsymbol{\sigma} \cdot \hat{n}_a \,
d\delta_a /dE )$. Discussion of the time delay matrix due to a change in
the direction of the vector $\hat{n}_a$ or in frame transformation is
greatly facilitated by the use of the formula $\frac{1}{2} \hbar (d
\theta_a /dE) ({\cal S}^+ \boldsymbol{\sigma} \cdot \hat{y} {\cal S} -
\boldsymbol{\sigma} \cdot \hat{y} )$. The formula shows that the time
delay due to the change in frame transformation is the interference of
two terms. The first term inside the parentheses comes from the energy
derivative of the frame transformation from the background
eigenchannels to the ${\cal S}$ matrix eigenchannels and the second
term from the energy derivative of the frame transformation from the
${\cal S}$ matrix eigenchannels to the background eigenchannels. The
first term corresponds to a vector rotated from the $\hat{y}$ vector
by the rotation $R_{\hat{n}_a} (-2 \delta_a )$. The net time delay
resulting from the interference is thus calculated by vector
addition in the three-dimensional Liouville space.
Going back to the spherical triangle, the laws of sines and cosines,
and other laws derivable from them such as the laws of cotangents,
can be translated into dynamical laws
by converting $\delta_r$ and $\theta_a$ into energies according to
$\cot \delta_r$ = $-\epsilon_r$ and $\cot \theta_a$ =
$-\epsilon_a$. The two cotangent laws
\begin{eqnarray*}
\sin \Delta_{12}^0 \cot \delta_a &=& -
\sin \theta_a \cot \theta_t + \cos \theta_a \cos \Delta_{12}^0, \\
\cot \theta_f \sin \theta_t &=& - \cos (\pi - \delta_r ) \cos \theta_t +
\sin (\pi - \delta_r ) \cot \Delta_{12}^0 ,
\end{eqnarray*}
can be shown to correspond to two Beutler-Fano formulas for $\cot
\delta_a$ and $\cot \theta_f$
\begin{eqnarray*}
\cot \delta_a &=& - \cot \Delta_{12}^0 \frac{\epsilon_a -
q_a}{\sqrt{\epsilon_a^2 +1}} ,\\
\cot \theta_f &=&
-\cot \theta_t \frac{\epsilon_r -q_{\tau}} {\sqrt{\epsilon_r ^2
+1}} ,
\end{eqnarray*}
with such conversion.
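The equivalence of the second cotangent law and the Beutler-Fano form for $\cot\theta_f$ can be checked numerically. The sketch assumes $q_{\tau} = \cot\Delta_{12}^0/\cos\theta_t$ (inferred here for illustration; the actual definition is Eq. (\ref{qtau})) and takes $\delta_r \in (0,\pi)$ with $\cot\delta_r = -\epsilon_r$:

```python
import numpy as np

Delta12, theta_t = 0.8, 0.5          # hypothetical angles
eps = np.linspace(-8, 8, 401)
delta_r = np.arctan2(1.0, -eps)      # cot(delta_r) = -eps, with delta_r in (0, pi)

# left-hand side: the cotangent law solved for cot(theta_f)
lhs = (-np.cos(np.pi - delta_r) * np.cos(theta_t)
       + np.sin(np.pi - delta_r) / np.tan(Delta12)) / np.sin(theta_t)
# right-hand side: Beutler-Fano form, with the assumed q_tau relation
q_tau = (1.0 / np.tan(Delta12)) / np.cos(theta_t)
rhs = -(1.0 / np.tan(theta_t)) * (eps - q_tau) / np.sqrt(eps**2 + 1)
assert np.allclose(lhs, rhs)
```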
Other laws also yield interesting relations among the dynamical
parameters.
The presence of the dual of the spherical triangle indicates
that we can make a one-to-one correspondence between edge angles and
vertex angles, so that from one valid relation another valid relation
can be obtained by interchanging each angle with its one-to-one
corresponding angle. In other words, for each edge angle we have a
conjugate vertex angle, and vice versa. This conjugation relation can
be extended to $\epsilon_r$ and $\epsilon_a$ by making use of their
relations with $\delta_r$ and $\theta_a$, respectively. The full
conjugation relations among the geometrical and dynamical parameters
are listed in Table \ref{table:conjug}. The duality of the spherical
triangle thus explains the symmetry found in the dynamical relations
and provides us with a systematic approach and complete symmetric
relations. Besides this use of the trigonometric laws of the spherical
triangle, the geometric construction in the Liouville space
facilitates other useful considerations.
Note that the geometrical laws holding for the geometrical objects in
the real three-dimensional Liouville space deal only with the
intrinsic nature of the dynamic couplings, independent of the
characteristics of the individual system. This derives from the fact
that the reduced energies hide the specific characteristics of the
dynamic couplings of the individual system, such as the strengths of
the dynamic couplings between the discrete state and the continua, the
indirect couplings between continua via discrete states, the resonance
positions, and the avoided-crossing energy. The intrinsic nature of the
dynamic couplings is concerned with the relations among the
eigenchannels of various dynamic operators, the anisotropy in the
channel couplings in the $S^0$ and $S$ scattering, and the anisotropy
in the channel coupling with the discrete state. This shows both the beauty and the
limitations of the geometrical construction in the Liouville space. In
the actual application, we have to be careful when considering the
case close to the limits in coupling strength, where abnormal
behaviors take place in actual dynamic quantities, but where no
abnormality shows up in the Liouville space.
The present theory is developed for the system of one discrete state
and two continua. It will be desirable to extend the theory to more
than two open channels and to overlapping resonances for which the
results of Refs. \cite{Lyuboshitz77,Simonius74} will be a great help.
It might also be valuable to apply the present theory to MQDT. In
connection with the latter, it might be interesting to use the
present theory to extend the Lu-Fano plot to the multi-open-channel case.
\acknowledgements This work was supported
by KOSEF under contract No. 961-0305-050-2 and by
Korean Ministry of
Education through Research Fund No. 1998-015-D00186.
\section{Introduction}
The dirty boson problem, a problem of repulsively interacting bosons in
a random potential, has been the subject of much theoretical
work\cite{dirtyboson}. In the
zero temperature quantum problem, the system can undergo a phase
transition between a Bose glass phase and a superfluid phase.
An action that may be used to describe this transition is
\begin{equation}
\label{orig}
\int d^dx \, dt \, \Bigl(
\partial_{x} \overline \phi(x,t) \partial_{x} \phi(x,t)
+\partial_{t} \overline \phi(x,t) \partial_{t} \phi(x,t)
+w(x)\overline\phi(x,t)\partial_t\phi(x,t) -
U(x) \overline\phi\phi +g(\overline\phi\phi)^2 \Bigr) .
This action describes a system with a number of phases. In the pure case,
in which $U(x)$ is a constant, and $w(x)$ vanishes everywhere (the commensurate
case), there is a phase transition from a gapped Mott insulator phase
to a superfluid phase as $U$ increases. If we consider a
case in which $U(x)$ is not constant,
but $w(x)$ still vanishes everywhere, there will be a sequence of transitions
from a gapped Mott insulator, to a gapless (but with exponentially vanishing
low energy density of states) Griffiths phase, and then to
a superfluid phase. The Griffiths phase occurs due to the possibility of
large, rare regions in which fluctuations in $U(x)$ make the system locally
appear superfluid.
If we consider the case in which both $U(x)$ and $w(x)$ are fluctuating,
there will also appear a Bose glass phase in which there
is a density of states tending to a constant at low energy and an infinite
superfluid susceptibility\cite{boseglass}. The physical basis for this
phase is the existence of localized states, in which a competition between
chemical potential and repulsion causes the system to desire a certain
number of particles to occupy each localized state. There exist excitations
involving adding or removing one particle from these states, and these
excitations lead to the diverging susceptibility. However, it is clear that
the Bose glass phase is very similar to the Griffiths phase, in that both
involve regions of finite size. In the Griffiths phase one needs regions
of arbitrarily large size, while in the Bose glass phase one only needs
regions large enough to support a localized state, with a nonvanishing
number of particles occupying that state, in order to produce the diverging
susceptibility characteristic of the phase. In a system in which the
disorder is irrelevant at the pure fixed point, so that the fluctuations in
$U(x)$ and $w(x)$ scale to zero, one will still find a Bose glass phase
as there will, with low probability, exist regions that can give rise to
these localized states. Thus, the interesting question to answer is not
whether the Bose glass phase exists, but whether there exists a fixed point
at which fluctuations in $w(x)$ are weak, so that the critical exponents are
near those of the pure MI-SF transition. The most likely alternative
would be governed by the scaling theory of Fisher et al.,
which has very different critical exponents\cite{dirtyboson,herbut}. We will
refer to this scaling theory as the phase-only transition, as one
assumes fluctuations in the amplitude of the order parameter are
irrelevant at the critical point.
Recently, a large $N$ generalization of equation (\ref{orig}) was considered
in the restricted case $w(x)=0$\cite{me}. We will refer to this
case as the strongly commensurate case, while the situation in
which $w(x)$ vanishes on average, but has nonvanishing fluctuations
will be known as the weakly commensurate case. We consider a system defined
by the partition function
\begin{equation}
\label{orN}
\int \Bigl(\prod\limits_{x,t}\delta(\overline\phi_i(x,t)\phi^i(x,t)-
N\sigma^2(x))\Bigr) [d\phi_i] e^{-S}
\end{equation}
where
\begin{equation}
S=\int d^dx \, dt
(\partial_{x} \overline \phi_i(x,t) \partial_{x} \phi^i(x,t)+
\partial_{t} \overline \phi_i(x,t) \partial_{t} \phi^i(x,t)
+w(x)\overline\phi_i(x,t)\partial_t\phi^i(x,t))
\end{equation}
Here, we have, for technical simplicity later, replaced the quartic
interaction by a $\delta$-function. For most of the paper, a $\delta$-function
interaction will be used. However, for generality in the last
section, we will return to quartic interactions.
The disorder in $U(x)$ will be replaced, in the $\delta$-function case,
by weak fluctuations in $\sigma^2$. We will consider $\sigma^2=
\sigma_0^2+\delta\sigma^2$ where $\sigma_0^2$ is a constant piece used to
drive the system through the phase transition and $\delta\sigma^2$ is
a fluctuating piece.
The advantage of the large $N$ formulation of the problem is that,
for any fixed realization of the disorder, one may solve the
system exactly by finding the solution of the self-consistency equation
\begin{equation}
\label{sc}
\sigma^2(x)=\langle x,t=0|(-\partial_{x}^2-\partial_t^2+w(x)\partial_t
+\lambda(x))^{-1}|x,t=0\rangle
\end{equation}
or
\begin{equation}
\sigma^2(x)=\int d\omega \,
\langle x,t=0|(-\partial_{x}^2+\omega^2+i w(x)\omega
+\lambda(x))^{-1}|x,t=0\rangle
\end{equation}
where $\lambda(x)$ is a Lagrange multiplier field enforcing the
$\delta$-function constraint on the length of the spins.
After solving the
self-consistency equation, any correlation function can be found simply
by finding the Green's function of a non-interacting field $\phi$ with
action $\int d^dx \, dt \, \overline\phi (-\partial_{x}^2-\partial_t^2
+w(x)\partial_t+\lambda(x))\phi$.
In equation (\ref{sc}), we assume that the Green's function on the right-hand
side has been renormalized by subtracting a divergent quantity. Specifically,
we will take a Pauli-Villars regularization for the Green's function,
and take the regulator mass to be very large, while adding an appropriate
divergent constant to $\sigma^2$ on the left-hand side. The cutoff for the
regulator is completely different from the cutoff for fluctuations in
$\delta\sigma^2$ that will be used for the RG later; the cutoff for the
regulator will be much larger than the cutoff for fluctuations in
$\delta\sigma^2$ and $w(x)$, and will be unchanged under the RG.
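As a concrete illustration of this subtraction, in $D=3$ Euclidean dimensions the Pauli-Villars-regularized tadpole is finite and equals $(M-m)/4\pi$ for physical mass $m$ and regulator mass $M$. The sketch below checks this numerically (the function name, grid, and parameter values are illustrative choices, not from the text):

```python
import numpy as np

def pv_tadpole(m, M, pmax=1e5, n=2_000_001):
    # Pauli-Villars regularized tadpole in D = 3 Euclidean dimensions:
    #   int d^3p/(2 pi)^3 [ 1/(p^2 + m^2) - 1/(p^2 + M^2) ]
    # The subtraction renders the linearly divergent integral finite;
    # the exact answer is (M - m)/(4 pi).
    p = np.linspace(0.0, pmax, n)
    f = p*p * (1.0/(p*p + m*m) - 1.0/(p*p + M*M))
    # angular factor 4 pi / (2 pi)^3 = 1/(2 pi^2); trapezoidal rule in |p|
    return np.sum(0.5*(f[1:] + f[:-1])*np.diff(p)) / (2.0*np.pi**2)
```

The integrand of the difference falls off as $1/p^2$, so a large but finite momentum cutoff suffices for the numerical check.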
In the previous work, exact results were obtained for the
critical exponents for average quantities. In the present paper, I
will first consider the problem in which $w(x)\neq0$, although $w(x)$ vanishes
on average (the weakly commensurate dirty boson problem). I will
consider general problems of the large $N$ system with terms linear in
the time derivative. Next, a lowest order perturbative RG treatment
will be used to consider critical behavior. Instanton corrections to
the perturbative treatment will be briefly discussed after that.
Returning to the strongly commensurate case, previous results on the
fixed point will be extended to give results for higher moments of
the correlation functions.
Finally, as a technical aside, we consider the large $N$ self-consistency
equation in frustrated systems, and demonstrate that the
self-consistency equation always has a unique solution, as well as considering
the number of spin components needed to form a classical ground
state in frustrated systems.
\section{Bose Glass in the Large $N$ Limit}
Consider the following simple $0+1$ dimensional problem, at
zero temperature ($\beta\rightarrow\infty$):
\begin{equation}
\int [d\phi_i(t)] \prod\limits_t\delta(\overline\phi_i(t)\phi^i(t)-N\sigma^2)
e^{\int_{-\beta/2}^{\beta/2}
\{ -\partial_t\overline\phi_i\partial_t\phi^i+A
\overline\phi_i\partial_t\phi^i \} dt}
\end{equation}
The solution of this problem in the large $N$ limit via a self-consistency
equation requires finding a $\lambda$ such that
\begin{equation}
\sigma^2=\int \frac{d\omega}{2\pi} \frac{1}{iA\omega+\omega^2+\lambda}
\end{equation}
Then, by contour integration, for $\lambda>0$, we find
$\sigma^2=\frac{1}{\sqrt{A^2+4\lambda}}$. Then,
\begin{equation}
\lambda=\frac{1}{4}(\frac{1}{\sigma^4}-A^2)
\end{equation}
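The contour-integration result can be verified by direct numerical integration of the frequency integral; a minimal sketch, with the tangent substitution and grid size as illustrative choices:

```python
import numpy as np

def sigma2_numeric(A, lam, n=200_001):
    # Evaluate (1/2pi) * int dw / (i*A*w + w^2 + lam) over the real line,
    # using the substitution w = tan(t) to compactify the domain.
    t = np.linspace(-np.pi/2 + 1e-6, np.pi/2 - 1e-6, n)
    w = np.tan(t)
    f = (1.0/np.cos(t)**2) / (1j*A*w + w*w + lam)
    # trapezoidal rule; the odd imaginary part integrates to zero
    return (np.sum(0.5*(f[1:] + f[:-1])*np.diff(t)) / (2.0*np.pi)).real

def sigma2_exact(A, lam):
    # contour-integration result quoted in the text (valid for lam > 0)
    return 1.0/np.sqrt(A*A + 4.0*lam)
```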
Unfortunately, this result for $\lambda$ leads to $\lambda$ becoming negative
for sufficiently large $A$. Although perturbation theory will not see a
problem, this is the signal for the Bose glass phase. One must separate the
self-consistency equation into two parts, one containing an integral over
non-zero $\omega$, and one containing the term for zero $\omega$. One finds
(for finite $\beta$)
\begin{equation}
\sigma^2=\int \frac{d\omega}{2\pi} \frac{1}{iA\omega+\omega^2+\lambda}+\frac{1}{\beta}\frac{1}
{\lambda}
\end{equation}
Then, the self-consistency equation can always be solved using positive
$\lambda$, but in the zero-temperature limit of the problem one will find
that one needs $\lambda$ to be of order $\frac{1}{\beta}$, and at
zero-temperature, there will appear a zero energy state.
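The split self-consistency equation can be solved by a simple monotone bisection; the sketch below (function name and parameter values are illustrative) confirms that when the naive formula would give $\lambda<0$, the solution including the zero-frequency term instead has $\lambda$ of order $1/\beta$:

```python
import math

def solve_lambda(sigma2, A, beta):
    # Solve  sigma^2 = 1/sqrt(A^2 + 4*lam) + 1/(beta*lam)  for lam > 0.
    # The right-hand side decreases monotonically in lam, so geometric
    # bisection over many decades converges to the unique root.
    def rhs(lam):
        return 1.0/math.sqrt(A*A + 4.0*lam) + 1.0/(beta*lam)
    lo, hi = 1e-12, 1e12
    for _ in range(200):
        mid = math.sqrt(lo*hi)
        if rhs(mid) > sigma2:
            lo = mid
        else:
            hi = mid
    return math.sqrt(lo*hi)
```

For $\sigma^2=1$, $A=2$, the naive branch gives $\lambda=\frac{1}{4}(1-4)<0$, while the root above scales as $1/\beta$, signaling the zero energy state.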
Considering the original statistical mechanics problem of equations
(\ref{orig},\ref{orN}), one will expect to see some non-zero
density of these zero energy states, indicating the presence of a gapless
Bose glass phase, with diverging superfluid susceptibility. Even if the
fluctuations in $w(x)$ vanish at the fixed point, when the critical exponents
are unchanged from the strongly commensurate problem, such zero energy states
will exist as Griffiths effects leading to the appearance of
a Bose glass phase near the superfluid transition.
To perform a renormalization group treatment of the model, we will first
proceed in a perturbative fashion in the next section, ignoring such zero
energy effects. For small fluctuations in $w(x)$, these zero energy states
will be exponentially suppressed as will be
discussed in the section after that.
\section{Perturbative RG}
We will follow the RG techniques used in previous work on the large
$N$ problem\cite{me}. We will work near $2+1$ dimensions, specifically
we will have $d=2+\epsilon$ space dimensions and 1 time dimension. We
will at first work to one-loop in perturbation theory, which will, in
some cases, correspond to lowest order in $\epsilon$. Some results
will be extended to all orders. A fixed line is found for the large $N$
system. The fixed points are destroyed by
$1/N$ corrections.
The RG is defined as follows: start with a system containing fluctuations
in $\delta\sigma^2$ and $w(x)$ up to some wavevector $\Lambda$. Remove the
high wavevector fluctuations in these $\delta\sigma^2$ and $w(x)$ to
obtain a new system, with renormalized gradient terms
$\partial_x^2$ and $\partial_t^2$ in the action, as well as renormalized
low wavevector $\delta\sigma^2$ and $w(x)$ terms. Do this procedure so
as to preserve the average low momentum Green's function, as well
as the low wavevector fluctuations in $\lambda(x)$. See previous work\cite{me}
for more details.
If we are working in $2+\epsilon$ space dimensions, and 1 time dimension,
we can easily work out the naive scaling dimensions of the disorder. One
finds that if we assume Gaussian fluctuations in the disorder, with
\begin{equation} \langle\delta\sigma^2(p)\delta\sigma^2(q)\rangle=(2\pi)^2\delta(p+q)S\end{equation}
\begin{equation} \langle w(p)w(q)\rangle=(2\pi)^2\delta(p+q)W\end{equation}
then $S$ scales as length to the power $d-2=\epsilon$, while
$W$ scales as length to the power $2-d=-\epsilon$. So, for $d>2$
we find that disorder in $\sigma^2$ is relevant at the pure fixed point,
while disorder in $w(x)$ is irrelevant. For $d<2$ this is reversed.
Previously, a lowest order in $\epsilon$ calculation\cite{me} considering
only disorder in $\delta\sigma^2$ gave the following results. For a given
problem, with fluctuations in $\delta\sigma^2$ up to wavevector $\Lambda$,
and self-consistency equation
\begin{equation}
\label{sc1}
\sigma^2(x)=\int d\omega \,\langle x,t=0|
\Bigl(-\partial_{x}^2+\omega^2+\lambda(x)\Bigr)^{-1}|x,t=0\rangle
\end{equation}
one could define another problem, with fluctuations in $\delta\sigma^2$
only up to $\Lambda-\delta\Lambda$, with self-consistency equation
\begin{equation}
\label{nsc}
(1-\frac{\delta\Lambda}{\Lambda}c_3L)
\sigma^2(x)=\int d^{(D-d)}\omega \,\langle x,t=0|
\Bigl(-(1+\frac{\delta\Lambda}{\Lambda}c_2L)
\partial_{x}^2+
(1+\frac{\delta\Lambda}{\Lambda}c_3L)
(\omega^2
+\lambda(x))\Bigr)^{-1}|x,t=0\rangle
\end{equation}
such that the Green's function computed from the second self-consistency
equation agrees with the Green's function computed from the first
self-consistency equation averaged over disorder at large wavevector.
Here we define $L=c_1^2\Lambda^{8-2D}S$ where
$c_1=\pi^{-D/2} \frac{\Gamma(D-2)}{\Gamma(2-D/2)\Gamma^2(D/2-1)}$,
$c_2=(1-4/d)c_3$, and $c_3=2\frac{\pi^{d/2}}{\Gamma(d/2)}\Lambda^{d-4}$.
The results above, to one loop, were obtained by considering the large
wavevector fluctuations in $\lambda$ due to the large wavevector fluctuations
in $\delta\sigma^2$, and then finding how they renormalize the self-energy
and vertex. To lowest order, one obtains the fluctuations in $\lambda$
by inverting a polarization bubble. That is, one expands the self-consistency
equation to linear order in $\lambda$ to solve for large wavevector
fluctuations in $\lambda$ as a function of fluctuations in $\delta\sigma^2$.
One finds then that
\begin{equation}
\label{pol}
\delta\sigma^2(p)=c_1^{-1}p^{D-4}\lambda(p)+...
\end{equation}
From this, we obtain fluctuations in $\lambda$ at wavevector $\Lambda$
which, to lowest order, are Gaussian with mean-square $L$. See
figures 1, 2, and 3.
For more details, see previous work\cite{me}.
It may easily be seen that, to lowest order, the addition
of the term $w(x)$ does not produce any additional large wavevector
fluctuations in $\lambda$, as equation (\ref{pol}) is still true to
lowest order in $w(x)$ and $\lambda(x)$. However, the term $w(x)$ can produce
a renormalization of the self-energy. See figure 4. The result is
to produce a term in the self-energy equal to
\begin{equation}
\Sigma(p,\omega)=
-\delta\Lambda\omega^2\int\limits_{k^2=\Lambda^2}d^{d-1}k
\frac{1}{(p+k)^2+\omega^2}W
\end{equation}
This is equal to
\begin{equation}
-\frac{\delta\Lambda}{\Lambda}\omega^2Wc_4+...
\end{equation}
where $c_4=\Lambda^{d-2}2\frac{\pi^{d/2}}{\Gamma(d/2)}$.
There is one other term that must be included in the RG flow equations
at this order. The fluctuations in $\lambda$ due to the fluctuations
in $\delta\sigma^2$ can renormalize the vertex involving $w(x)$. See
figure 5.
This will change the term $w(x)\partial_t$ in the self-consistency
equation to $w(x)\partial_t (1+\frac{\delta\Lambda}{\Lambda}c_3L)$.
Note that the renormalization of the $w(x)$ term is equal to the renormalization
of the $\omega^2$ and $\lambda(x)$ terms in the self-consistency equation.
Putting all the terms together, we find that with a lowered cutoff
$\Lambda-\delta\Lambda$, the renormalized theory is described by the
new self-consistency equation
\begin{equation}
(1-\delta_3) \sigma^2(x)=
\int d\omega \,\langle x,t=0|
\Bigl(-(1+\delta_2)\partial_{x}^2+(1+\delta_4)\omega^2
+i(1+\delta_3) w(x)\omega
+(1+\delta_3) \lambda(x)\Bigr)^{-1}|x,t=0\rangle
\end{equation}
where $\delta_3=\frac{\delta\Lambda}{\Lambda}c_3L$,
$\delta_2=\frac{\delta\Lambda}{\Lambda}c_2L$, and
$\delta_4=\delta_3+\frac{\delta\Lambda}{\Lambda}Wc_4$.
Rescaling $\omega$ by $(1+\frac{\delta_2-\delta_4}{2})$ to make the
coefficients in front of the
$\omega^2$ and $\partial_x^2$ terms the same, rescaling $\lambda$,
and then rescaling the spatial
scale to return the cutoff to $\Lambda$ we find
\begin{equation}
\tilde\sigma^2(x)=\int d\omega \,\langle x,t=0|
\Bigl(-\partial_{x}^2+\omega^2+
i\tilde w(x)\omega+
\lambda(x)\Bigr)^{-1}|x,t=0\rangle
\end{equation}
where
\begin{equation}
\tilde \sigma^2=
(1-\delta_3+\delta_2+\frac{\delta_4-\delta_2}{2}+(1+\epsilon)
\frac{\delta\Lambda}{\Lambda}) \sigma^2(x)
\end{equation}
\begin{equation}
\tilde w(x)=
(1+\delta_3-\delta_2+\frac{\delta_2-\delta_4}{2}) w(x)
\end{equation}
From this, we extract RG flow equations for $\sigma_0^2$, $S$, and $W$.
The result is
\begin{equation}
\frac{d{\rm ln}\sigma_0^2}{d{\rm ln}\Lambda}=1+\epsilon-c_3L+c_2L+W\frac{c_4}{2}
\end{equation}
\begin{equation}
\frac{d{\rm ln}S}{d{\rm ln}\Lambda}=\epsilon-2c_3L+2c_2L+Wc_4
\end{equation}
\begin{equation}
\frac{d{\rm ln}W}{d{\rm ln}\Lambda}=-\epsilon+2c_3L-2c_2L-Wc_4
\end{equation}
The renormalization group flow has a fixed line, as the product
$SW$ is invariant under the RG flow. It may be verified that the ratio
$S/W$ has a stable fixed point under RG flow for any $\epsilon$ and
any value of $SW$. Further, it may be seen that the critical exponent
$\nu$ on the fixed line is given by $\nu d=2$, since if $S$ is constant
under the RG flow, then $\sigma_0^2$ obeys
$\frac{d{\rm ln}\sigma_0^2}{d{\rm ln}\Lambda}=1+\frac{\epsilon}{2}$.
Later, we will consider the effect of $1/N$ corrections.
First, note that the line is peculiar to having one time dimension. For
fewer than one time dimension, there will be a stable fixed point at $W=0$,
which is attractive in the $W$ direction. Thus, in the framework of
a double-dimensional expansion, one may not see problems at low orders,
as the fixed point has nice behavior for small numbers of time dimensions.
Compare to results in the double-dimensional expansion\cite{dd}.
Further, the presence of the fixed line only required that the renormalization
of the $w(x)\partial_t$ vertex was equal to the renormalization of the
vertex on the left-hand side of the self-consistency equation defining
$\sigma^2$. This equality will persist to all orders in a loopwise
expansion via a Ward identity. Thus, we expect that the fixed line is an
exact property of the large $N$ theory.
Let us consider the effect of $1/N$ corrections on this line. To lowest
order in $1/N$, for weak disorder, the $1/N$ corrections
only modify the naive scaling dimensions in the RG flow. The scaling dimension
of $\overline\phi\partial_t\phi$ is not changed under $1/N$ corrections.
However, the scaling dimension of $\overline\phi\phi$ is changed by
an amount $\eta=\frac{32}{3\pi^2}\frac{1}{2N}$. Thus, $1/N$ corrections
will change the RG equations to
\begin{equation}
\frac{d{\rm ln}S}{d{\rm ln}\Lambda}=2\eta+\epsilon-2c_3L+2c_2L+Wc_4
\end{equation}
\begin{equation}
\frac{d{\rm ln}W}{d{\rm ln}\Lambda}=-\epsilon+2c_3L-2c_2L-Wc_4
\end{equation}
Then, we find that $SW$ is growing under the RG flow, and the system
goes off to a different fixed point. The most reasonable guess then is that
in a system with finite $N$ (including
physical systems with $N=1$), the transition is not near the
MI-SF transition, but is instead of another type, perhaps the phase-only
transition. Other authors have shown
that, in some cases, the phase-only transition is stable against
weak commensuration effects\cite{herbut}.
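The behavior of $SW$ under these flows, both at large $N$ and with the $2\eta$ shift from $1/N$ corrections, can be checked by direct numerical integration. In this sketch the constants $a$ and $b$ lump the combinations of $c_1,\dots,c_4$ at fixed cutoff and are purely illustrative:

```python
def run_flow(S0, W0, eps=0.1, a=1.0, b=1.0, eta=0.0, l_max=2.0, n=200_000):
    # Euler integration of the one-loop flows, with a and b standing in
    # for the combinations of c_1..c_4 at fixed cutoff (illustrative):
    #   d ln S / d ln(Lambda) = 2*eta + eps - a*S + b*W
    #   d ln W / d ln(Lambda) =       -(eps - a*S + b*W)
    dl = l_max/n
    S, W = S0, W0
    for _ in range(n):
        beta = eps - a*S + b*W          # shared one-loop combination
        S, W = S*(1.0 + dl*(2.0*eta + beta)), W*(1.0 - dl*beta)
    return S, W
```

Since the same combination appears with opposite signs in the two flows, $SW$ is conserved at $\eta=0$ and grows as $e^{2\eta\,{\rm ln}\Lambda}$ otherwise, independent of $a$ and $b$.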
\section{Instanton Calculations}
Unfortunately, the ability to carry out instanton calculations in
this system is rather limited. It will not be possible to calculate
the action for the instanton with any precision, but we will at least
present some arguments about the behavior of the instanton. The idea
of the calculation is to look for configurations of $w(x)$ and $\sigma^2(x)$
(these configurations are the ``instantons"), such that the self-consistency
equation cannot be solved without including contributions from zero energy
states, as discussed in section 2. Let us first consider the case in
$2+1$ dimensions. Let us assume that we try to produce such
states in a region of linear size $L$.
Looking at the lowest energy state in this region, one would expect that
the contribution of spatial gradient terms in the action would lead to
an energy scale of order $L^{-1}$. The linear term $w(x)\omega$ in the action
will become important, and produce such a zero energy state,
when $w(x)\omega$ becomes of order $\omega^2$.
This occurs when $w(x)$ is of order $\omega$, which implies
$w(x)\approx L^{-1}$. For some appropriate
configuration of $w(x)$, assuming Gaussian fluctuations in $w(x)$ with
strength of order $W$,
we will have an action $S_{\rm instanton}\propto \int \frac{w^2(x)}{W} d^2x$.
Thus, these configurations will occur with exponentially small probability
$e^{-S_{\rm instanton}}$ for weak disorder in $w(x)$.
Away from $2+1$ dimensions, one will find that the action for the
instanton, ignoring fluctuation corrections, is dependent on scale. For
$d>2$ it is increasing as the scale increases, indicating that large
instantons are not present. For $d<2$, it is decreasing as the scale
increases, indicating that large instantons are easy to produce. This is
simply a way of restating the fact that fluctuations in $w(x)$ are, at the pure
fixed point, irrelevant for $d>2$ and relevant for $d<2$. It is
to be expected that corrections due to fluctuations as considered in
the renormalization group of the previous section will make the action
for the instanton scale invariant. However, since we do not fully understand
how to calculate instanton corrections even in the simplest $d=2$ case,
the task of combining instanton and fluctuation corrections is
presently hopeless.
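The statements above about scale dependence follow from simple power counting. With $w(x)\sim L^{-1}$ over a region of linear size $L$, the bare instanton action scales as
\begin{equation}
S_{\rm instanton}\sim \frac{1}{W}\int_{L^d} w^2(x)\,d^dx \sim \frac{L^{d-2}}{W},
\end{equation}
which is scale invariant at $d=2$, increases with $L$ for $d>2$, and decreases for $d<2$, matching the relevance counting for $W$ given earlier.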
\section{Higher Moments of the Green's Function}
Having considered the weakly commensurate case, and found no fixed
point in physical systems, we return to the strongly commensurate
case with $W=0$, and consider the behavior of higher moments of the Green's
function. A lowest order calculation will show log-normal fluctuations in the
Green's function.
Let us first consider the second moment of the Green's function. That
is, we would like to compute the disorder average of the square of the
Green's function between two points, which we may
write as $\langle G(0,x)^2 \rangle$. We may Fourier transform the
square to obtain $\langle G^2(p,\omega) \rangle$. Now, one
may, when averaging over disorder, include terms in which disorder
averages connect the two separate Green's functions. At lowest order,
there is no low-momentum renormalization of the two-Green's-function propagator,
beyond that due to the renormalization of each Green's function separately.
See figure 6.
That is, if one imagines the two Green's functions entering some diagram,
with both Green's functions at low momentum, going through a sequence of
scatterings, and exiting, again with both Green's functions at low momentum,
one does not, to lowest order, find any contribution with lines
connecting the two Green's functions. The reason for this is that
at this order we will only join the Green's functions with a single
line, along which one must have momentum transfer of order $\Lambda$.
This then requires that some of the ingoing or outgoing
momenta must be of order $\Lambda$.
One does, however, find a contribution which we may call a renormalization of
the vertex. See figure 7.
In order to find the second moment of the Green's function,
one must start both Green's functions at one point, and end both
Green's functions at another point. Near the point at which both Green's
functions start, one may connect both lines with a single scattering
off of $\lambda$, at high wavevector of order $\Lambda$. Then, one
can have a large momentum of order $\Lambda$ circulating around the
loop formed, while the two lines that leave to connect to the rest of
the diagram still have low momentum. This then replaces the two
Green's function vertex, which we will refer to as $V_2$, by a
renormalized vertex.
The result of the above contribution is that the two Green's function
vertex $V_2$ is renormalized under RG flow as
\begin{equation}
\frac{d{\rm ln}V_2}{d{\rm ln}\Lambda}=
c_3L=c_3 c_1^2\Lambda^{8-2D}S
\end{equation}
Then, the second moment of the Green's function, at momentum scale $p$
is given in terms of the first moment by
\begin{equation}
\langle G^2(p,\omega) \rangle \propto \langle G(p,\omega) \rangle^2 p^{-2c_3L}
\end{equation}
Note the factor of 2 in front of $c_3 L$, as the second moment of the
Green's function gets renormalized at both vertices. One must insert
the value of $L$ at the fixed point into the above equation to
obtain the behavior of the second moment.
For higher moments, the calculation is similar. In this case, one must,
for the $n$-th moment, renormalize a vertex $V_n$. The result is
\begin{equation}
\frac{d{\rm ln}V_n}{d{\rm ln}\Lambda}=
\frac{n(n-1)}{2}c_3L=\frac{n(n-1)}{2}c_3 c_1^2\Lambda^{8-2D}S
\end{equation}
The factor $\frac{n(n-1)}{2}$ arises as at each stage of the RG one may
connect any one of the lines in the vertex $V_n$ to any other line in the
vertex. There are $\frac{n(n-1)}{2}$ ways to do this.
Then one finds
\begin{equation}
\langle G^n(p,\omega) \rangle \propto \langle G(p,\omega) \rangle^n
p^{-n(n-1)c_3L}
\end{equation}
This result for the behavior of the higher moments of the Green's function
is quite typical for disordered systems. Compare for example to the results
on 2-dimensional Potts models\cite{ludwig}. From the results for the
moments of the Green's function one may, under mild assumptions,
determine the distribution function of the Green's function. This distribution
function is the probability that, for a given realization of disorder, the
Green's function between two points assumes a specific value. From the result
for the moments given above one finds that the distribution function is
log-normal. That is, the log of the function has Gaussian fluctuations.
Physically this should be expected from any lowest order calculation, as
lowest order calculations generally treat momentum scales hierarchically, and
one is simply finding that at each scale there are random multiplicative
corrections to the Green's function, causing the log of the Green's function
to obey a random walk as length scale is increased.
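The log-normal statement can be illustrated with a quick Monte Carlo check of the exact log-normal moment identity, whose $n(n-1)$ structure mirrors the exponent found above (the function names, seed, and parameter values are illustrative choices):

```python
import numpy as np

def moment_ratio(n, s, samples=1_000_000, seed=0):
    # Monte Carlo estimate of <G^n>/<G>^n for log-normal G = exp(s*xi),
    # xi a standard Gaussian (a toy stand-in for the scale-by-scale
    # multiplicative corrections to the Green's function).
    rng = np.random.default_rng(seed)
    G = np.exp(s*rng.standard_normal(samples))
    return np.mean(G**n) / np.mean(G)**n

def lognormal_prediction(n, s):
    # exact log-normal moment identity: <G^n>/<G>^n = exp(n*(n-1)*s^2/2)
    return np.exp(0.5*n*(n - 1)*s*s)
```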
\section{Glassy Behavior in the Large $N$ Limit}
First, we would like to demonstrate that, in the large $N$ limit, the
self-consistency equation always has a unique solution. For
generality, we consider here the case of quartic interactions instead
of $\delta$-function interactions.
In the absence of terms linear in $\omega$, uniqueness is clear on physical
grounds, for the models considered above in which the coupling between
neighboring fields $\phi$ is ferromagnetic and unfrustrated. However,
we will show this to be true for any coupling between neighboring
fields and in the presence of terms linear in $\omega$.
Note that, for finite $N$, the terms linear in $\omega$ lead to frustration.
Consider an $N=1$ system with a finite number of sites.
Assume that there is no hopping between
sites, but there is some repulsion between sites due to a quartic term.
Let there be terms linear in $\omega$ in the action, but no terms
quadratic in $\omega$. The
states of the theory are then determined by how many particles occupy
each site. The repulsion leads to an effective
anti-ferromagnetic interaction, in the case in which each site has
zero or one particles and we imagine one particle to represent
spin up and no particles to represent spin down.
This can then produce frustration. Compare to the Coulomb gap
problem in localized electron systems\cite{efros}.
However, physically speaking, as
$N \rightarrow \infty$, the discreteness of particle number on each
site disappears, and the system becomes unfrustrated. We will now
show this precisely.
Consider a problem at non-zero temperature, so that there is a
sum over frequencies $\omega$. Consider an arbitrary single particle
Hamiltonian $H_0$, defined on a $V$-site lattice, so that the
self-consistency equation involves finding $\lambda_i$, where
$i$ ranges from $1$ to $V$, such that
\begin{equation}
\label{arbsc}
\sigma_i^2+\sum\limits_{j} M_{ij}\lambda_j
=\sum\limits_{\omega}\langle i|(H_0+\lambda_i+iA_i\omega
+B_i\omega^2)^{-1}|i\rangle
\end{equation}
Here, $\sigma_i^2$ is a function of site $i$, and $A_i$ and $B_i$
are functions of site $i$ defining the local value of the linear
and quadratic terms in the frequency. The matrix $M_{ij}$ is included
to represent the effects of a quartic interaction. For the problem
to be physically well defined, $M_{ij}$ must be positive definite.
The proof that equation (\ref{arbsc}) has only one solution
proceeds in two steps. First we note that if $H_0$ vanishes, then
the equation obviously only has one solution with
$\lambda\geq 0$. Next, we will show
that as $H_0$ varies, $\lambda_i$ varies smoothly, and therefore
any arbitrary $H_0$ can be deformed smoothly into a vanishing $H_0$,
leading to a unique solution for $\lambda_i$ for arbitrary $H_0$.
Consider small changes $\delta H_0$ and $\delta\lambda_i$. In order
for the self-consistency equation to remain true, if we define
\begin{equation}
v_i= -\sum\limits_{\omega}\langle i|
(H_0+\lambda_i+iA_i\omega+B_i\omega^2)^{-1}
\delta H_0
(H_0+\lambda_i+iA_i\omega+B_i\omega^2)^{-1}|i\rangle
\end{equation}
we must have
\begin{equation}
\label{linop}
v_i=\sum\limits_{\omega}\langle i|
(H_0+\lambda_i+iA_i\omega+B_i\omega^2)^{-1}
\delta\lambda_i
(H_0+\lambda_i+iA_i\omega+B_i\omega^2)^{-1}|i\rangle
+\sum\limits_{j} M_{ij} \delta\lambda_{j}
\end{equation}
The right hand side of equation (\ref{linop}) defines a linear function
on $\delta\lambda_i$. If it can be shown that this function is invertible,
then the theorem will follow.
However, we have that
\begin{equation}
{\rm Tr}(
\delta\lambda_i
\sum\limits_{\omega}
(H_0+\lambda_i+iA_i\omega+B_i\omega^2)^{-1}
\delta\lambda_i
(H_0+\lambda_i+iA_i\omega+B_i\omega^2)^{-1})>0
\end{equation}
due to the well known fact that second order perturbation theory always
reduces the free energy of a quantum mechanical system with a Hermitian
Hamiltonian at finite temperature.
We also have, as discussed above, that $M_{ij}$ is positive definite.
Thus, the linear function on $\delta\lambda_i$ defined above is a sum of
positive definite functions, and hence positive definite.
Therefore, it is invertible and the desired result follows.
This result is interesting considering the phenomenon of replica
symmetry breaking. It has been noticed by several authors that large
$N$ infinite-range spin glass models do not exhibit replica symmetry breaking
within a meanfield approximation, both in the classical and
quantum cases\cite{rsb}. Although those calculations were based on the
absence of unstable directions, in the large $N$ limit, for fluctuations
about the replica symmetric state, it is possible that the real reason
for the absence of replica symmetry breaking is the uniqueness of the
solution of the self-consistency equation, as shown above.
A second interesting question, having begun to consider possible glassy
behavior in the large $N$ limit, has to do with the nature of the ground
state in the classical limit. If we drop all terms in $\omega$, to
produce a classical problem, and ask for the classical ground state,
for some arbitrary bare Hamiltonian $H_0$, one may ask how many of
the $N$ available spin components will be used.
In this case, consider Hamiltonian $H_0$, which is a $V$-by-$V$ matrix
in the case where there are $V$ sites. First consider the case in
which $H_0$ is a real Hermitian matrix. Since we are considering arbitrary
Hamiltonians $H_0$, we can, without loss of generality, constrain all
spins to be the same length. We can find the classical ground
state by looking for solutions of the self-consistency equation
\begin{equation}
\sigma^2=\langle i|(H_0+\lambda_i)^{-1}|i\rangle
\end{equation}
in the limit as $\sigma^2\rightarrow\infty$.
In this limit, the right-hand side will be dominated by zero energy
states (more precisely, states that tend to zero energy as $\sigma^2$ tends to
infinity) of the operator $H_0+\lambda$. If the system has $k$ of these
states, the ground state of the system will use $k$ of the spin components.
If the system {\it needs} to use all $k$ of these components to form
a ground state, that is, ignoring the case in which a state using $k$
spin components is degenerate with a state using fewer components, then
even under small deformations of $H_0$ the system will use $k$ spin
components in the ground state. Then, under these small deformations,
$H_0+\lambda$ will still have $k$ zero eigenvalues. To produce $k$
zero eigenvalues for all real Hermitian matrices in a neighborhood
of a given Hermitian matrix $H_0$ requires
$k(k+1)/2$ free parameters. The elements of $\lambda$ provide these
parameters. Since there are $V$ of these elements, we find that
$k(k+1)/2\leq V$, and the number of spin components needed to form
the classical ground state is at most $\sqrt{2V}$.
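The bound $k(k+1)/2\leq V$ is easy to evaluate in closed form; a minimal sketch (the function name is an illustrative choice):

```python
import math

def max_spin_components(V):
    # Largest k with k*(k+1)/2 <= V: the bound on the number of spin
    # components a classical ground state on V sites can require.
    # Inverting k*(k+1)/2 <= V gives k = floor((sqrt(8V+1) - 1)/2).
    return (math.isqrt(8*V + 1) - 1)//2
```

For example, $V=10$ sites allow at most $k=4$ components ($4\cdot5/2=10$), consistent with the $\sqrt{2V}$ bound.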
If $H_0$ were an arbitrary Hermitian matrix, with complex elements, or
a symplectic matrix, one would find a similar result, with $k$ still
at most order $\sqrt{V}$, although the factor $2$ would change. This
is analogous to the different universality classes in random matrix
theory\cite{rmt}. Finally, we make one note on the number of parameters
available to solve the self-consistency equation. There are $V$ free
parameters. However, self-consistency requires solving $V$ independent
equations, so the number of variables matches the number of equations. By
considering the number of parameters required to produce zero eigenvalues
of $H_0+\lambda$, we were able to obtain a bound on the number of zero
eigenvalues. Still, one might wonder if there are enough free parameters
to produce multiple zero eigenvalues and still solve the self-consistency
equations, as it appears that one would then need $k(k+1)/2+V$ free
parameters. However, if there are $k$ zero eigenvalues, by considering the
different ways of populating the zero energy states (that is, considering
the different ways in which the eigenvalues tend towards zero as
$\sigma^2$ tends toward infinity) one obtains an additional $k(k+1)/2$
parameters, so the number of parameters available always matches the number
of equations.
We can extend this theorem to look at metastable states. Suppose
a configuration of spins is a local extremum of the energy $H_0$, for
fixed length of spins. Then, since the derivative of the energy vanishes,
one finds that a matrix $(H_0+\lambda_i)$ must have a number
of zero eigenvalues equal to the number of spin components used.
Suppose that for small deformations of $H_0$ there is still a nearby local
minimum, as one would like to require for a stable state.
Then, one can argue that the number of spin components $k$
used in the state obeys $k(k+1)/2\leq V$.
This second theorem may be of interest in considering the onset of
replica symmetry breaking. If we have a system in a large volume and large
$N$ limit, one must ask in which order the limits are taken.
If the $N\rightarrow\infty$ limit is taken first, there will be no
replica symmetry breaking. However, if the infinite volume limit is
taken first, there may be replica symmetry breaking.
If one has $N \geq 2k_{\rm max}$, where $k_{\rm max}$ is the largest $k$ such
that $k(k+1)/2 \leq V$, then there are no local minima other than
the ground state. This follows because, as shown above, a local extremum of
the energy, $\phi_i$,
will use at most $k_{\rm max}$ spin components, and the ground state,
$\phi^{\rm gr}_i$, can be constructed using a different set of
$k_{\rm max}$ spin components. Then, starting from $\phi_i$,
one finds that deforming the state along the path
$\sqrt{1-\delta^2}\phi_i+\delta\phi^{\rm gr}_i$ as $\delta$ goes
from 0 to 1 provides an unstable direction for fluctuations.
\section{Conclusion}
In conclusion, we have considered the large $N$ dirty boson model,
including the effects of local incommensuration (the terms linear in
$\omega$). In the large $N$ limit, a fixed line under RG is found,
but is destabilized by including $1/N$ corrections. This suggests that
the phase transition in experimental ($N=1$) systems is of the phase-only
type, instead of the MI-SF type.
There is a problem with local incommensuration in a perturbative
approach, as discussed in the section on the
Bose glass in the large $N$ limit and the section on instanton calculations.
One would like a quantitative method of assessing the results of the
instantons, although this is largely a technical issue, as it appears
that there are no accessible fixed points in the RG using this approach.
In the strongly commensurate case, it has been shown that one can
calculate higher moments of the correlation functions. The result shows
that the correlation functions have a log-normal distribution.
The two theorems proved in the last section give useful information
on the relevance of the large $N$ expansion in frustrated
problems. It would be interesting to use these results as a starting
point for a better understanding of replica symmetry breaking.
The large $N$ approximation has been a useful approximation for pure,
unfrustrated systems. It is hoped that it may become as useful for
disordered interacting systems.
\section{Acknowledgements}
The result for the number of spin components needed
to form a ground state was obtained in collaboration with David Huse,
who I would also like to thank for many useful discussions on other
results in this work. I would like to thank the ICTP, in Trieste, Italy,
for their hospitality while some of this work was in progress.
\section{Introduction}
The standard Kolmogorov $k^{-5/3}$ scaling law for the energy cascade in
3D turbulence and the corresponding $k^{-3}$ scaling law for the enstrophy
cascade in 2D turbulence are still debated. Direct numerical
calculations of the full Navier-Stokes equation are by and large still
impossible for high Reynolds number $( > 100-200)$ flows. However, the
cascading mechanisms and their multifractal nature can be analyzed in
reduced wave-number models for very high Reynolds numbers with high
accuracy. In this paper we investigate the GOY shell
model \cite{Gledzer,GOY} where the spectral velocity or vorticity is
represented by one complex variable for each shell evenly spaced in
$\log(k)$ in spectral space. For this type of model the Kolmogorov
scaling arguments can be applied as for real flow regardless of how
realistically they mimic the dynamics of the Navier-Stokes equation.
The scaling behavior of the fields depends on the inviscid invariants
of the model. In the simple model we are able to control which
symmetries and conserved integrals of the dynamics are present in
the inviscid and force-free limit. In the models we interpret as
simulating 3D turbulence there are 2 inviscid invariants, similar to
energy and helicity \cite{benzi}, of which the first is positive definite and the
second is not. For the models we interpret as 2D turbulence the 2
inviscid invariants, similar to energy and enstrophy, are both positive
definite. We will mainly be concerned with an investigation of the 2D
like models. The specific parameter choice previously assigned to
simulating 2D turbulence are such that the GOY model does not show
enstrophy cascading but rather a statistical equilibrium where the
enstrophy is transported through the inertial sub-range by
diffusion \cite{aurell}. We show that this is a borderline case for
which, on one side, the model behaves as a cascade model and, on the
other side, it behaves as a statistical equilibrium model, where the
enstrophy spectrum is characterized by a simple equipartitioning among
the degrees of freedom of the model. The difference in behavior is
connected with the different typical timescales of the shell velocities
as a function of shell number. This probably also influences the
(non-universal) multifractal behavior of the shell velocities. If
timescales in the viscous sub-range are not smaller than at the beginning
of the inertial sub-range, the low wave-number end, the model does not
have a multifractal spectrum.
\section{The GOY model}
The GOY model is a simplified analogy to the spectral Navier-Stokes
equation for turbulence. The spectral domain is represented as shells,
each of which is defined by a wavenumber $k_n = k_0 \lambda^n$, where
$\lambda$ is a scaling parameter defining the shell spacing; in our
calculations we use the standard value $\lambda=2$. The reduced phase
space enables us to cover a large range of wavenumbers, corresponding
to large Reynolds numbers. We have $2 N$ degrees of freedom, where $N$
is the number of shells, namely the generalized complex shell
velocities or vorticities, $u_n$ for $n=1,\ldots,N$. The dynamical equation
for the shell velocities is,
\begin{equation}
\dot{u}_n=i k_n (a
u^*_{n+2}u^*_{n+1}+\frac{b}{\lambda}u^*_{n+1}u^*_{n-1}
+\frac{c}{\lambda^2}u^*_{n-1}u^*_{n-2}) -\nu k_n^{p_1} u_n - \nu'
k_n^{-p_2} u_n+ f_n,
\label{dyn}
\end{equation}
where the first term represents the non-linear wave interaction or
advection, the second term is the dissipation, the third term is a
drag term, specific to the 2D case, and the fourth term is the forcing. Throughout this paper
we use $p_1=p_2=2$. We will for convenience set $a=k_0=1$, which can
be done in (\ref{dyn}) by a rescaling of time and the units in $k$ space.
A real form of the GOY
model, as originally proposed by Gledzer \cite{Gledzer}, can be
obtained trivially by having purely imaginary velocities and forcing.
The GOY model in its original real form contains no information about phases between waves, thus
there cannot be assigned a flow field in real space to the spectral
field. The complex form of the GOY model and extensions in which there
are more shell variables in each shell introduce some degrees of
freedom, which could be thought of as representing the phases among
waves. However, it seems as if these models do not behave differently
from the real form of the model in regard to the conclusions in the
following \cite{euro,aurell}. The key issue for the behavior of the
model is the symmetries and conservation laws obeyed by the model.
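As a purely illustrative aside (our own sketch, not part of the original paper), the right-hand side of the dynamical equation above is straightforward to code. The helper below is a hypothetical Python implementation with $a=k_0=1$ and the boundary shells padded with zeros; the trailing check verifies that the inviscid, unforced dynamics conserves both quadratic invariants for the standard 2D choice $\epsilon=5/4$ (for which $z_2=4$).

```python
import numpy as np

def goy_rhs(u, eps, nu=0.0, nu_drag=0.0, f=None, lam=2.0):
    """du_n/dt of the GOY equation with a = k_0 = 1 and p1 = p2 = 2."""
    N = len(u)
    b, c = -eps, -(1.0 - eps)              # 1 + b + c = 0, i.e. z_1 = 1
    k = lam ** np.arange(1, N + 1)         # k_n = k_0 * lam**n
    up = np.zeros(N + 4, dtype=complex)
    up[2:N + 2] = u                        # u_n sits at up[n+2]; ends padded with 0
    nl = 1j * k * np.conj(up[3:N + 3] * up[4:N + 4]
                          + (b / lam) * up[1:N + 1] * up[3:N + 3]
                          + (c / lam ** 2) * up[0:N] * up[1:N + 1])
    force = 0.0 if f is None else f
    return nl - nu * k ** 2 * u - nu_drag * k ** (-2.0) * u + force

# Inviscid, unforced dynamics conserves E^1 and E^2 (here eps = 5/4, z_2 = 4):
rng = np.random.default_rng(1)
u = rng.standard_normal(12) + 1j * rng.standard_normal(12)
du = goy_rhs(u, eps=1.25)
w = 4.0 ** np.arange(1, 13)               # z_2**n weights for the enstrophy
assert abs(np.sum(np.real(np.conj(u) * du))) < 1e-6
assert abs(np.sum(w * np.real(np.conj(u) * du))) < 1e-4 * np.sum(w * np.abs(u) ** 2)
```

The same routine can be handed to any standard ODE integrator; only the triad structure and the condition $1+bz+cz^2=0$ matter for the conservation checks.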
\subsection{Conservation laws}
The GOY model has two conserved integrals, in the case of no forcing
and no dissipation $(\nu = f = 0)$.
We denote the two conserved integrals by
\begin{equation}
E^{1,2}=\sum_{n=1}^N E^{1,2}_n=\frac{1}{2}\sum_{n=1}^N
k_n^{\alpha_{1,2}}|u_n|^2=
\frac{1}{2}\sum_{n=1}^N \lambda^{n\alpha_{1,2}}|u_n|^2
\end{equation}
By setting $\dot{E}^{1,2}=0$ and using $\dot{u}_n$ from (\ref{dyn})
we get
\[ \left\{ \begin{array}{ll}
1 + b z_1 + c z_1^2 = 0 \\
1 + b z_2 + c z_2^2 = 0,
\end{array} \right. \]
\begin{equation}\label{bc}\end{equation}
where the roots $z_{1,2}=\lambda^{\alpha_{1,2}}$ are the generators
of the conserved integrals. In the case of negative values of $z$ we
can use the complex formulation, $\alpha=(\log|z|+i\pi)/\log\lambda$.
The parameters $(b, c)$ are determined from (\ref{bc}) as
\[ \left\{ \begin{array}{ll}
b=-(z_1+z_2)/z_1z_2 \nonumber \\
c=1/z_1z_2.
\end{array} \right. \]
\begin{equation}\label{2}\end{equation}
In the $(b,c)$ parameter plane the curve $c=b^2/4$ represents models
with only one conserved integral, see figure 1. Above the parabola the
generators are complex conjugates, and below it they are real and
different. Any conserved integral represented by a real nonzero
generator $z$ defines a line in the $(b, c)$ parameter plane, which is
tangent to the parabola at the point $(b,c)=(-2/z,1/z^2)$. In the rest of
our analysis we will focus on the line defined by $z_1=1$. The
conserved integral,
\begin{equation}
E^1=\frac{1}{2} \sum_{n=1}^N |u_n|^2,
\end{equation}
is the usual definition of the energy for the GOY model \cite{benzi}.
The parameters are then determined by $1+b+c=0$, which with the
definitions $b=-\epsilon$ and $c=-(1-\epsilon )$ agrees with the notation
of ref. \cite{biferale}. The generator of the other conserved
integral is from (\ref{2}) given as,
\begin{equation}
z_2=\frac{1}{\epsilon-1}.
\label{z2}
\end{equation}
For $\epsilon < 1$ the second conserved integral is not positive
definite and is of the form,
\begin{equation}
E^2=H=\frac{1}{2} \sum_{n=1}^N (-1)^n |z_2|^n |u_n|^2,
\label{helicity}
\end{equation}
which can be interpreted as a generalized helicity. For $\epsilon=1/2$,
$z_2 =-2=-\lambda$ the model is the usual 3D shell model and H is the
helicity as defined in ref. \cite{benzi}. By choosing $\lambda$ such
that $\lambda = 1/(1-\epsilon)$ we get $E^2= \sum (- 1)^n\lambda^n
|u_n|^2$. This form was argued in ref. \cite{benzi} to be the proper
form for the helicity. In this paper we will alternatively use the
definition (\ref{helicity}) for the helicity.
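As a small consistency check (our own sketch, not part of the paper), the relations above can be inverted numerically: given the generators, one recovers $(b,c)$, verifies the conservation conditions, and confirms $z_2=1/(\epsilon-1)$ for the standard 3D and 2D parameter choices.

```python
# Hypothetical sketch (ours): recover (b, c) from the generators and check
# the conservation condition 1 + b*z + c*z**2 = 0 for 3D and 2D cases.
def bc_from_generators(z1, z2):
    b = -(z1 + z2) / (z1 * z2)
    c = 1.0 / (z1 * z2)
    return b, c

# 3D: z1 = 1 (energy), z2 = -2 (helicity)  ->  eps = -b = 1/2
# 2D: z1 = 1 (energy), z2 = 4  (enstrophy) ->  eps = -b = 5/4
for z1, z2, eps in [(1.0, -2.0, 0.5), (1.0, 4.0, 1.25)]:
    b, c = bc_from_generators(z1, z2)
    assert abs(b + eps) < 1e-12 and abs(c + (1.0 - eps)) < 1e-12
    for z in (z1, z2):
        assert abs(1.0 + b * z + c * z * z) < 1e-12
    assert abs(z2 - 1.0 / (eps - 1.0)) < 1e-12     # z_2 = 1/(eps - 1)
```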
For $\epsilon >
1$ the second conserved integral is positive definite and of the form,
\begin{equation}
E^2=Z=\frac{1}{2}\sum_{n=1}^N z_2^n |u_n|^2,
\end{equation}
which can be interpreted as a generalized enstrophy. For
$\epsilon=5/4$, $z_2=4 =\lambda^2$ the model is the usual 2D shell
model and Z is the enstrophy as defined in ref. \cite{aurell}. The
sign of $c$, which is the interaction coefficient for the smaller
wavenumbers, changes when going from the 3D to the 2D case. This
could be related to the different role of backward cascading in the two
cases. To see this, consider the non-linear transfer of $E^i$ through
the triad interaction between shells $n-1,n,n+1$. This is simply
given by,
\begin{eqnarray}
\dot{E}^i_{n-1}&=&k^{\alpha_i}_{n-1}\Delta_n \nonumber \\
\dot{E}^i_{n}&=&bz_ik^{\alpha_i}_{n-1}\Delta_n\nonumber \\
\dot{E}^i_{n+1}&=&cz_i^2k^{\alpha_i}_{n-1}\Delta_n,
\end{eqnarray}
with
\begin{equation}
\Delta_n=k_{n-1}Im(u_{n-1}u_nu_{n+1}).
\end{equation}
The detailed conservation of $E^i$ in the triad interaction is
reflected in the identity, $1+bz_i+cz_i^2=0$. Using (\ref{2}) and
(\ref{z2}) we have for the exchange of energy, $E^1$, with
$\alpha_1=0$;
\begin{eqnarray}
\dot{E}^1_{n-1}&=&\Delta_n\nonumber \\
\dot{E}^1_{n}&=&-\epsilon\Delta_n\nonumber \\
\dot{E}^1_{n+1}&=&(\epsilon-1)\Delta_n
\end{eqnarray}
and for helicity/enstrophy, $E^2$, with $\alpha_2=\alpha$;
\begin{eqnarray}
\dot{E}^2_{n-1}&=&k^{\alpha}_{n-1}\Delta_n\nonumber \\
\dot{E}^2_{n}&=&-(\epsilon/(\epsilon-1))k^{\alpha}_{n-1}\Delta_n \nonumber \\
\dot{E}^2_{n+1}&=&(1/(\epsilon-1))k^{\alpha}_{n-1}\Delta_n.
\end{eqnarray}
We have $\epsilon<1$ for 3D like models and $\epsilon>1$ for 2D like
models, the two situations are depicted in figure 2, where the
thickness of the arrows symbolize the relative sizes of the exchanges
in the cases of $\epsilon=1/2$ and $\epsilon=5/4$.
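These exchange relations can be checked in a few lines (our sketch, not from the paper): for any $\epsilon\neq 1$ the three fractions sum to zero, expressing detailed conservation within a single triad, with the sign pattern differing between the 3D-like and 2D-like cases.

```python
# Hypothetical sketch (ours): the triad exchange fractions for E^1 and E^2
# sum to zero for any eps != 1, i.e. both integrals are conserved
# detail-by-detail within each triad.
for eps in (0.5, 1.1, 1.25, 1.5, 2.0):
    e1 = (1.0, -eps, eps - 1.0)                         # energy exchange
    e2 = (1.0, -eps / (eps - 1.0), 1.0 / (eps - 1.0))   # helicity/enstrophy exchange
    assert abs(sum(e1)) < 1e-12
    assert abs(sum(e2)) < 1e-12
```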
\subsection{Scaling and inertial range.}
The inertial sub-range is defined as the range of shells where the
forcing and the dissipation are negligible in comparison with the
non-linear interactions among shells. Since we apply the forcing at the
small shell numbers and the dissipation at the large shell numbers, the
inertial range (of forward cascade) is characterized by the constant
cascade of one of the conserved quantities. The classical Kolmogorov
scaling analysis can then be applied to the inertial range. There is,
however, in the shell model, long range influences of the dissipation
and forcing into the inertial subrange. This is an artifact of the
modulus 3 symmetry, see equation (\ref{ggg}), and the truncation of the
shell model which is not expected to represent any reality. These
features are treated in great detail in ref. \cite{schorghofer}.
Denoting $\eta_{1,2}$ as the average dissipation of $E^{1,2}$, this is
then also the amount of $E^{1,2}$ cascaded through the inertial range.
The spectrum of $E^{1,2}$ does then, by the Kolmogorov hypothesis, only
depend on $k$ and $\eta_{1,2}$.
From dimensional analysis we have,
$[ku]=s^{-1}$, $[\eta_{1,2}]=[E^{1,2}]s^{-1}$, $[E^{1,2}]=
[k^{\alpha_{1,2}}u^2]=[k]^{\alpha -2}s^{-2}$, and we get,
\begin{equation}
E^{1,2} \sim \eta_{1,2}^{2/3}k^{(\alpha_{1,2} -2)/3}.
\label{k41}
\end{equation}
For the generalized velocity, $u$, we then get the "Kolmogorov scaling",
\begin{equation}
|u| \sim \eta_{1,2}^{1/3}k^{-(Re(\alpha_{1,2})+1)/3}.
\end{equation}
The non-linear cascade, or flux, of the conserved quantities defined by $z_{1,2}$
through shell number $n$ can be expressed directly as,
\begin{eqnarray}
\Pi_n^{1,2} = \sum_{m=1}^n\dot{E}^{1,2}(m)&=&\frac{1}{2}\sum_{m=1}^n
z_{1,2}^m (u^*_m\dot{u}_m+c.c.)\nonumber \\
&=&z_{1,2}^n(-\Delta_n /z_{2,1}+
\Delta_{n+1}).
\label{cascade1}
\end{eqnarray}
In the inertial range the cascade is constant,
$\Pi_n^{1,2}=\Pi_{n+1}^{1,2}$, so from (\ref{cascade1}) we get
following ref. \cite{biferale}
\begin{eqnarray}
z_1z_2 \Delta_{n+2}-(z_1+z_2)\Delta_{n+1}+\Delta_{n}=0 \Rightarrow
\nonumber \\
q_n+z_1z_2/q_{n+1}=z_1+z_2
\label{cascade3}
\end{eqnarray}
where we have defined
\begin{equation}
q_n=\Delta_n/\Delta_{n+1}.
\label{cascade2}
\end{equation}
The inertial range scaling requires $q_n=q_{n+1}=q$ to be
independent of $n$. Solving (\ref{cascade3}) for $q$ and using
(\ref{cascade2}) gives,
\[ q= \left\{ \begin{array}{ll}
z_1 \Rightarrow u_n \sim k_n^{-(\alpha_1+1)/3} &
\mbox{Kolmogorov for $E^1$} \\
z_2 \Rightarrow u_n \sim k_n^{-(\alpha_2+1)/3} &
\mbox{Kolmogorov for $E^2$}.
\end{array} \right. \label{cascade4} \]
Inserting this into (\ref{cascade1}) gives for the cascade of $E^1$
in the two solutions,
\[ \Pi^1\sim \left\{\begin{array}{ll}
1-z_2/z_1&\mbox{Kolmogorov for $E^1$}\\
0&\mbox{fluxless for $E^1$,} \end{array} \right. \]
\begin{equation} \label{pi1} \end{equation}
and correspondingly for $E^2$,
\[ \Pi^2\sim \left\{\begin{array}{ll}
0&\mbox{fluxless for $E^2$}.\\
1-z_1/z_2&\mbox{Kolmogorov for $E^2$}.\end{array} \right. \]
These are the two scaling fixed points for the model. The Kolmogorov
fixed point for the first conserved integral corresponds to the
fluxless fixed point for the other conserved integral and vice versa.
This is of course reflected in the fact that (\ref{cascade3}) is
symmetric in the indices 1 and 2. That these points in phase space are
fixed points, in the case of no forcing and dissipation, is trivial,
since $\Pi_n=\Pi_{n+1}\Rightarrow \dot{E}_{n+1}=0 \Rightarrow
\dot{u}_{n+1}=0$. It should be noted that the Kolmogorov fixed point,
\begin{equation}
u \sim k^{-(\alpha +1)/3},
\label{cascade_scaling}
\end{equation}
obtained from this analysis is in agreement with the dimensional
analysis (\ref{k41}).
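A quick way to see the two roots (a hypothetical sketch of ours, not in the paper) is to treat the constant-$q$ recursion as a quadratic, $q^2-(z_1+z_2)q+z_1z_2=0$, whose roots are exactly $z_1$ and $z_2$:

```python
import numpy as np

# Constant-ratio solutions of q + z1*z2/q = z1 + z2 are the roots of
# q**2 - (z1 + z2)*q + z1*z2 = 0, i.e. exactly q = z1 or q = z2.
z1, z2 = 1.0, 4.0                          # 2D-like case, eps = 5/4
roots = np.sort(np.roots([1.0, -(z1 + z2), z1 * z2]))
assert np.allclose(roots, [z1, z2])
```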
The scaling fixed points can be obtained directly from the
dynamical equation as well. For $u_n \sim k_n^{-\gamma} g(n) =
\lambda^{-n\gamma} g(n)$, where $g(n+3)=g(n)$ is any period 3
function, we get by inserting into (\ref{dyn}) with $a=1$,
\begin{equation}
g(n-1)g(n)g(n+1)\lambda^{n(1-\gamma )+3\gamma }(1+b\lambda^{3\gamma -
1}+c(\lambda^{3\gamma -1})^2)=0
\label{ggg}
\end{equation}
and the generators reemerge,
$z_{1,2}=\lambda^{\alpha_{1,2}}=\lambda^{3\gamma_{1,2} -1}$, giving the
Kolmogorov fixed points for the two conserved integrals, $\gamma_{1,2}=
(\alpha_{1,2}+1)/3$.
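To confirm this numerically (a hypothetical sketch of ours, not from the paper), one can insert the pure power law $u_n=k_n^{-\gamma}$ with $\gamma=(\alpha+1)/3$ into the bracket of nonlinear terms of the dynamical equation for the 2D choice $\epsilon=5/4$ and verify that it vanishes on interior shells:

```python
import numpy as np

# Hypothetical check (ours): the nonlinear bracket annihilates the pure
# power law u_n = k_n**(-gamma) with gamma = (alpha+1)/3 (here alpha = 2).
lam, eps, N = 2.0, 1.25, 20
b, c = -eps, -(1.0 - eps)
alpha = np.log(1.0 / (eps - 1.0)) / np.log(lam)   # z_2 = lam**alpha = 4
gamma = (alpha + 1.0) / 3.0                       # Kolmogorov exponent for E^2
k = lam ** np.arange(1, N + 1)
u = k ** (-gamma)
for n in range(2, N - 2):                         # interior shells only
    bracket = (u[n + 1] * u[n + 2]
               + (b / lam) * u[n - 1] * u[n + 1]
               + (c / lam ** 2) * u[n - 2] * u[n - 1])
    assert abs(bracket) < 1e-12 * abs(u[n]) ** 2
```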
The period 3 symmetry seems to have little effect on the numerical
integration of the model, except perhaps in accurately
determining the structure function.
The stability of the fixed point for energy cascade in the 3D case,
$\epsilon= 1/2$, is characterized by a few unstable directions, whose
corresponding eigenmodes mainly project onto the high shell numbers,
and a large number of marginally stable directions which mainly
project onto the inertial range. This also holds in the case with
forcing and dissipation \cite{OY}. When dissipation and forcing are
applied, the Kolmogorov fixed point can become stable for some values
of $\epsilon$. Biferale et al. \cite{biferale} show that
there is a transition in the GOY model, for $\nu=10^{-6}$ and $f=5
\times 10^{-3} \times (1+i)$, as a function of $\epsilon$ from stable
fixed point $(\epsilon < 0.38..)$, through Hopf bifurcations and via a
Ruelle-Takens scenario to chaotic dynamics $(\epsilon > 0.39..)$.
\section{Forward and backward cascades}
Until this point we have not specified which of the two conserved
quantities will cascade. Assume, in the chaotic regime where
the Kolmogorov fixed points are unstable, that there is, on average, an
input of the same size of the two quantities, $E^1$ and $E^2$, at the
forcing scale, this can of course always be done by a simple rescaling
of one of the quantities. If $N_d$ is a shell number at the beginning
of the viscous subrange, we have $u_{N_d}/k_{N_d}\approx \nu$, and the dissipation, $D^i$, of the conserved quantity, $E^i$, can
be estimated as
\begin{equation}
D^i\sim \nu k_{N_d}^{\alpha_i+2}|u_{N_d}|^2.
\label{diss}
\end{equation}
The ratio of dissipation of $E^1$ and $E^2$ scales with
$k_{N_d}$ as $D^1/D^2\sim k_{N_d}^{\alpha_1-\alpha_2}$, so that, in
the limit $Re \rightarrow \infty$ when $\alpha_1 < \alpha_2$, there
will be no dissipation in the viscous sub-range of $E^1$ where $E^2$
is dissipated. Therefore, a forward cascade of $E^1$ is prohibited and we
should expect a forward cascade of $E^2$. For the backward cascade
the situation is reversed, so we should expect a backward cascade of
$E^1$.
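As a numerical illustration of this limit argument (our sketch, with the dissipation shell $N_d$ chosen arbitrarily), one can estimate both dissipations on the enstrophy-cascade slope, fixing $\nu$ from $u_{N_d}/k_{N_d}\approx\nu$: the enstrophy dissipation stays $O(1)$ while the energy dissipation vanishes as $N_d\to\infty$.

```python
# Hypothetical estimate (ours): D^i ~ nu * k**(alpha_i + 2) * |u|**2 at the
# dissipation shell, with |u| on the enstrophy-cascade slope gamma = 1 and
# nu fixed by u_{N_d}/k_{N_d} ~ nu.
alpha1, alpha2, lam = 0.0, 2.0, 2.0
gamma = (alpha2 + 1.0) / 3.0              # enstrophy-cascade slope, gamma = 1
for Nd in (10, 20, 30):
    kd = lam ** Nd
    u = kd ** (-gamma)
    nu = u / kd                           # viscous balance nu*k**2*u ~ k*u**2
    D1 = nu * kd ** (alpha1 + 2) * u ** 2
    D2 = nu * kd ** (alpha2 + 2) * u ** 2
    assert abs(D2 - 1.0) < 1e-12          # enstrophy dissipated at O(1) rate
    assert D1 <= lam ** (-2 * Nd)         # energy dissipation -> 0
```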
The situation is completely different in the 2D like and the 3D like
cases. In the 3D like models $E^2$ is not positive definite, $E^2$
(helicity) is generated also in the viscous sub-range and for the usual
GOY model we do not see a forward cascade of helicity, see, however, ref.
\cite{pdd1}. This is in agreement with the $k^{-5/3}$ energy
spectrum observed in real 3D turbulence, corresponding to the forward
cascade of energy. In the 2D case we observe the direct cascade of enstrophy,
while the inverse cascade of energy is still debated. In the rest of
this paper we will concentrate on 2D like models where we will
implicitly think of $E^1=E$, with $\alpha_1=0$, as the energy and
$E^2=Z$, with $\alpha_2=\alpha>0$, as the enstrophy. With regard to the
inverse cascade of energy one must bear in mind that in 2D turbulence
the dynamics involved is probably related to the generation of large
scale coherent structures, vortices, and vortex interactions. Vortices
are localized spatially, thus delocalized in spectral space. This is
in agreement with the estimate that 2D is marginally delocalized in
spectral space \cite{Kraichnan2}. In the GOY model there is no spatial
structure and the interactions are local in spectral space. The model
is therefore probably not capable of showing a realistic inverse energy
cascade. We will thus only consider the forward cascade in this paper.
Figure 3 shows the scaling in the inertial sub-range of the model with
$\epsilon=5/4$ corresponding to $\alpha=2$. The cascades of the
enstrophy and energy are shown in figure 4. It is seen that enstrophy
is forward cascaded while energy is not.
\section{Statistical description of the model}
In a statistical equilibrium of an ergodic dynamical system we will have
a probability distribution among the
(finite) degrees of freedom, assuming an ultraviolet cutoff,
of the form, $P_i\sim \exp(-BE_i^1-AE_i^2)$, where $E^1$ and
$E^2$ are the conserved quantities, energy and enstrophy.
Thus, the temporal mean of any quantity, which is
a function of the shell velocities is given as
\begin{equation}
\overline{g}=\int \prod_i du_i g(u_1,...,u_N) \exp(-BE_i^1-AE_i^2)/
\int \prod_i du_i \exp(-BE_i^1-AE_i^2).
\end{equation}
$A$ and $B$ are Lagrange multipliers, reflecting
the conservation of energy and enstrophy when maximizing the entropy of
the system, corresponding to inverse temperatures, denoted as inverse
"energy-" and "enstrophy-temperatures" \cite{Kraichnan}. The shell
velocities themselves will in this description be independent and gaussian
distributed variables with standard deviation
$\sigma(u_i)=1/(2(Bk_i^{\alpha_1}+Ak_i^{\alpha_2}))$. The average
values of the energy and enstrophy becomes,
\begin{eqnarray}
\overline{E_i^1}=k_i^{\alpha_1}\overline{|u_i|^2}=(B+Ak_i^{\alpha_2-
\alpha_1})^{-1}\nonumber \\
\overline{E_i^2}=k_i^{\alpha_2}\overline{|u_i|^2}=(Bk_i^{\alpha_1-
\alpha_2}+A)^{-1}.
\label{stat}
\end{eqnarray}
For $k\rightarrow 0$ we will have equipartitioning of energy,
$k_i^{\alpha_1}\overline{|u_i|^2}=B^{-1}$ and the scaling $|u_i|\sim
k_i^{-\alpha_1/2}$ and for the other branch, $k\rightarrow \infty$, we
will have equipartitioning of enstrophy
$k_i^{\alpha_2}\overline{|u_i|^2}=A^{-1}$ and the scaling $|u_i|\sim
k_i^{-\alpha_2/2}$. In the case of no forcing and no viscosity the
equilibrium will depend on the ratio $A/B$ between the initial
temperatures $A^{-1},B^{-1}$. To illustrate this we ran the model
without forcing and viscosity but with 2 different initial spectral
slopes of the velocity fields, the larger the slope the higher the
ratio of the energy temperature to the enstrophy temperature. Figure 5
shows the equilibrium spectra for $\epsilon=5/4, \nu=f=0$, in the cases
of initial slopes -1, -0.8. The full lines are the equilibrium
distribution given by (\ref{stat}) for $A/B=10^2$ and $A/B=10^{-2}$
respectively.
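To make the two equipartition branches of (\ref{stat}) concrete, the following hypothetical sketch (ours, with arbitrarily chosen temperatures $A^{-1}$, $B^{-1}$) evaluates $\overline{|u_n|^2}\propto 1/(B+Ak_n^{\alpha})$ for $\alpha_1=0$, $\alpha_2=\alpha=2$ and checks the local log-log slope: flat (energy equipartition) at small $k$, slope $-\alpha$ (enstrophy equipartition) at large $k$.

```python
import numpy as np

# Hypothetical sketch (ours): equilibrium spectrum <|u_n|^2> ~ 1/(B + A*k**alpha)
# crosses over from energy equipartition (flat) to enstrophy equipartition
# (slope -alpha) as k grows; A, B are arbitrary illustrative temperatures.
lam, alpha, A, B = 2.0, 2.0, 1e-2, 1e2
n = np.arange(1, 31)
k = lam ** n
u2 = 1.0 / (B + A * k ** alpha)
slope = -np.diff(np.log(u2)) / np.diff(np.log(k))   # local slope of |u|^2
assert slope[0] < 0.05                  # flat: energy equipartition branch
assert abs(slope[-1] - alpha) < 0.05    # slope alpha: enstrophy branch
```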
\section{Distinguishing cascade from statistical equilibrium}
For the forward enstrophy cascade the spectral slope is $-(\alpha
+1)/3$ and the enstrophy equipartitioning branch has spectral slope
$-\alpha/2$. Thus for the 2D case where $\alpha=2$ we cannot distinguish
between statistical (quasi-) equilibrium and cascading. This was
pointed out by Aurell et al. \cite{aurell} and it was argued that the
model can be described as being in statistical quasi-equilibrium with
the enstrophy transfer described as a simple diffusion rather than an
enstrophy cascade. This coinciding scaling is a caveat of the GOY
model not present in the real 2D flow where the statistical
equilibrium energy spectrum scales as $k^{-1}$ and the cascade energy
spectrum scales as $k^{-3}$. For other values of $\alpha$ the scaling
of the two cases differs, see figure 6. This figure represents
the main message of this paper. The first axis is the parameter $\epsilon$,
along the line shown in fig. 1, defining the spectral ratio between the
two inviscid invariants. The second axis is the scaling exponent $\gamma$.
The horizontal dashed line $\gamma=1/3$ is the Kolmogorov scaling
exponent for energy cascade. The full curve is the scaling exponent for
the enstrophy cascade, and the dotted curve corresponds to the
enstrophy equipartitioning.
All the 3D like models (asterisks in figure 6) are near energy
cascade scaling
(dashed line). Statistical equilibrium corresponds
to the line $\gamma=0$. The bold line piece, $0<\epsilon < 0.39...$,
represents parameter values where the Kolmogorov fixed point is
stable \cite{biferale}. The scaling for $\epsilon > 0.39...$ is
slightly steeper than the Kolmogorov scaling, which is attributed to
intermittency corrections originating from the viscous
dissipation \cite{p+v+j}. It seems as if there is a slight trend showing
increasing spectral slopes for increasing $\epsilon$.
For the 2D like models the scaling slope is also everywhere on or
slightly above both the cascade - and the equilibrium slopes (diamonds
in the figure). The classical argument for a cascade is that given an
initial state with enstrophy concentrated at the low wave-number end of
the spectrum, the enstrophy will flow into the high wave-numbers in
order to establish statistical equilibrium. The ultra-violet
catastrophe is then prevented by the dissipation in the viscous
sub-range. Therefore, we cannot have a non-equilibrium distribution
with more enstrophy in the high wave-number part of the spectrum than
prescribed by statistical equilibrium since enstrophy in that case
would flow from high - to low wave-numbers. This means that the
spectral slope in the inertial sub-range is always above the slope
corresponding to equilibrium (dotted line in figure 6). Consequently,
the 2D model with $\epsilon=5/4$ separates two regimes,
$1<\epsilon<5/4$ where enstrophy equilibrium is achieved and
$5/4<\epsilon<2$ where the enstrophy is cascaded through the inertial
range.
In figure 7 the spectra and the cascades are shown for different values
of $\epsilon$. The model was run with 50 shells and forcing on shell
number 15 for $2 \times 10^4$ time units and averaged. Even then there
are large fluctuations in the cascades not reflected in the spectra.
The large differences in the absolute values of the cascades, $\Pi$,
are a reflection of the scaling relation (\ref{diss}).
We interpret the peaks around the forcing scale for $\epsilon=11/10$ as
statistical fluctuations, and the model shows no cascade. For
$\epsilon>5/4$ we see an enstrophy cascade and what seems to be an
inverse energy cascade. However, we must stress that we do not see a
second scaling regime for small $n$ corresponding to the inverse
cascade. Note that for $\epsilon=2$ energy and enstrophy are identical
and we have only one inviscid invariant. So if a regime of inverse
energy cascading existed in parameter space near $\epsilon=2$
the scaling exponents would be almost identical and coincide at
$\epsilon=2$.
The two regimes corresponding to equipartitioning and cascade can be
understood in terms of timescales for the dynamics of the shell
velocities. A rough estimate of the timescales for a given shell $n$,
is from (\ref{dyn}) given as $T_n\sim (k_nu_n)^{-1}\sim
k_n^{\gamma-1}$. Again $\epsilon=5/4$, corresponding to $\gamma=1$,
becomes marginal where the timescale is independent of shell number.
For $\epsilon< 5/4$ the timescale grows with $n$ and the fast
timescales for small $n$ can equilibrate enstrophy among the degrees of
freedom of the system before the dissipation, at the "slow" shells, has
time to be active. Therefore these models exhibit statistical
equilibrium. For $\epsilon>5/4$ the situation is reversed and the
models exhibit enstrophy cascades. Time evolutions of the shell
velocities are shown in figure 8, where the left columns show the
evolution of a shell in the beginning of the inertial subrange and the
right columns show the evolution of a shell at the end of the inertial
subrange. This timescale scaling might also explain why no inverse
cascade branch has been seen in the GOY model. The timescales at the
small wave-number end of the spectrum, within the dissipation or drag
range for the inverse cascade, are long in comparison with the timescales
of the inertial range of the inverse cascade. Therefore a statistical
equilibrium will have time to form. The analysis suggests that
parameter choices $\epsilon > 5/4$ might be more realistic than
$\epsilon=5/4$ for mimicking enstrophy cascade in real 2D turbulence.
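The timescale estimate can be made quantitative in a short sketch (ours, not from the paper): for each $\epsilon$ compute $\alpha$ from $z_2=\lambda^{\alpha}=1/(\epsilon-1)$, the cascade slope $\gamma=(\alpha+1)/3$, and the exponent $\gamma-1$ governing $T_n\sim k_n^{\gamma-1}$. The sign of this exponent separates the equilibrium and cascade regimes at $\epsilon=5/4$.

```python
import numpy as np

# Hypothetical sketch (ours): eddy-turnover estimate T_n ~ (k_n*u_n)**(-1)
# ~ k_n**(gamma - 1) on the enstrophy-cascade slope gamma = (alpha+1)/3.
lam = 2.0

def timescale_exponent(eps):
    alpha = np.log(1.0 / (eps - 1.0)) / np.log(lam)   # z_2 = lam**alpha
    gamma = (alpha + 1.0) / 3.0
    return gamma - 1.0                                # T_n ~ k_n**(gamma-1)

assert timescale_exponent(11.0 / 10.0) > 0.0          # timescales grow: equilibrium
assert abs(timescale_exponent(5.0 / 4.0)) < 1e-9      # marginal case
assert timescale_exponent(3.0 / 2.0) < 0.0            # timescales shrink: cascade
```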
\section{Intermittency corrections}
The numerical result that the inertial range scaling has a slope
slightly higher than the K41 prediction is not fully understood.
This is attributed to intermittency corrections originating from
the dissipation of enstrophy in the viscous subrange.
The evolution of the shell velocities in the viscous sub-range is
intermittent for $\epsilon>5/4$, where the PDF's are non-gaussian,
while the PDF's for $\epsilon=5/4$ are
gaussian in both ends of the inertial sub-range, see figure 9.
The deviation from the Kolmogorov scaling is expressed through the
structure function, $\zeta (q)$ \cite{p+v+j}. The structure
function is defined through the scaling of the moments of the
shell velocities;
\begin{equation}
\overline{|u_n|^q}\sim k_n^{\zeta(q)}=k_n^{-q\gamma -\delta\zeta (q)}
\end{equation}
where $\delta\zeta (q)$ is the deviation from Kolmogorov scaling. The
structure function, $\zeta (q)$, and $\delta\zeta (q)$ for
$\epsilon=11/10,5/4,3/2,7/4,2$ are shown in figure 10. For
$\epsilon>5/4$ there are intermittency corrections to the scaling in
agreement with what the PDF's show.
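As a sanity check on this definition (our sketch, using synthetic data rather than GOY output), a field whose shell-to-shell scaling carries no intermittency yields $\zeta(q)=-q\gamma$ exactly, i.e. $\delta\zeta(q)=0$:

```python
import numpy as np

# Hypothetical sketch (ours): fit zeta(q) from moments of synthetic shell
# amplitudes u_n = X * k_n**(-gamma), with one overall random factor X per
# sample and hence no shell-to-shell intermittency: delta zeta(q) = 0.
rng = np.random.default_rng(2)
lam, gamma, N, M = 2.0, 1.0, 15, 400
k = lam ** np.arange(1, N + 1)
X = rng.lognormal(0.0, 0.5, size=(M, 1))
u = X * k ** (-gamma)
for q in (2, 6, 10):
    mom = np.mean(u ** q, axis=0)                    # <|u_n|^q>
    zeta = np.polyfit(np.log(k), np.log(mom), 1)[0]  # log-log slope
    assert abs(zeta + q * gamma) < 1e-8              # zeta(q) = -q*gamma
```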
We know of no analytic way to predict the intermittency corrections
from the dynamical equation. Our numerical calculations suggest that
the intermittency corrections are connected with the differences in
typical timescales from the beginning of the inertial sub-range, where
the model is forced, to the viscous sub-range. The ratio of timescales
between the dissipation scale and the forcing scale can be estimated
by $T_{\nu}/T_f\approx \lambda^{\Delta N (\gamma -1)}$, consistent with
$T_n\sim k_n^{\gamma-1}$, where $\Delta N$ is the number of shells between the two. Figure 11 (a) shows the
numerical values of $\delta\zeta (10)$ as a function of $\epsilon$ and
figure 11 (b) shows $log_2(T_{\nu}/T_f)$ as a function of $\epsilon$.
The vertical line indicates the crossover between statistical
equilibrium and cascading.
We must stress that caution should be taken upon drawing conclusions
from this, since the authors have no physical explanation of the
apparent relationship.
\section{Summary}
The GOY shell model has two inviscid invariants, which govern the
behavior of the model. In the 2D like case these correspond to the
energy and the enstrophy of 2D turbulent flow. In the model we can
change the interaction coefficient, $\epsilon$, and tune the spectral
ratio of enstrophy to energy, $Z_n/E_n=k_n^\alpha$. For $\alpha>2$ we
can describe the dynamics as being in statistical equilibrium with two
scaling regimes corresponding to equipartitioning of energy and
enstrophy respectively. The reason for the equipartitioning of
enstrophy in the inertial range (of forward cascading of enstrophy) is
that the typical timescales, corresponding to eddy turnover times, are
growing with shell number, thus the timescale of viscous dissipation is
large in comparison with the timescales of non-linear transfer. Thus,
this choice of interaction coefficient is completely unrealistic for
mimicking cascades in 2D turbulence. For $\alpha<2$ the model shows
forward cascading of enstrophy, but we have not identified a backward
cascade of energy. The usual choice $\epsilon=5/4$, $\alpha=2$ is a
borderline case, and we suggest that $\alpha<2$ might be more realistic
with respect to mimicking the enstrophy cascade. We observe that the dynamics
becomes more intermittent when $\alpha<2$, in the sense that the
structure function deviates more and more from the Kolmogorov
prediction. For $\epsilon=2$ we have $\alpha=0$, thus energy and
enstrophy degenerate into only one inviscid invariant; this point
could then be interpreted as a model of 3D turbulence. However, as is
seen from (\ref{pi1}), in this case the fluxless fixed point is the one
surviving, but as is seen in figure 7, bottom panels, this model also
shows cascading. This choice for 3D turbulence model could shed some
light on the dispute of the second inviscid invariant (helicity) being
important \cite{benzi} or not \cite{procaccia} for the deviations from
Kolmogorov theory, work is in progress on this point.
\section{Acknowledgements}
We would like to thank Prof.~A.~Wiin-Nielsen
for illuminating discussions. This work was supported by the
Carlsberg Foundation.
\section{Introduction}\label{sec1}
The stability of matter problem concerns the question whether the
minimal energy of a system of particles is bounded from below
(stability of the first kind), and whether it is bounded from
below by a constant times the number of particles (stability of
the second kind). Stability of the second kind for
non-relativistic quantum-mechanical electrons and nuclei was first
proved in 1967 by Dyson and Lenard \cite{DysonLenard1967I,
DysonLenard1967II}. Since the new proofs of Lieb and Thirring, and
Federbush in 1975 stability of matter is a subject of ongoing
interest dealing with more and more realistic models of matter
such as systems with a classical or quantized magnetic field
included or with relativistic electrons (see \cite{Liebetal1997}
and the references therein). Stability with relativistic electrons
is more subtle because of the uniform 1/length scaling behavior of
the energy, which holds for massless particles (the high
particle-energy limit). The minimal energy is then either
non-negative or equal to $-\infty$, so that stability of the second
kind becomes equivalent to the statement that stability of the
first kind holds for any given number of particles. We simply call
this stability henceforth.
This paper is about a pseudo-relativistic model of matter which is
stable, but which becomes unstable when the electrons are allowed
to interact with the self-generated magnetic field. The
self-generated magnetic field may be described using either an
effective potential (the Breit-potential), an external magnetic
field over which the energy is minimized, or the quantized
radiation field. In all these cases we find instability for all
positive values of the fine-structure constant. In contrast to
most other models, where the collapse of the system, if it occurs,
is due to the attraction of electrons and nuclei
\cite{Liebetal1986, LiebLoss1986, LiebYau1988, Loss1997} (there
would be no collapse without this interaction), the instability
here is due to the attraction of parallel currents.
The model we study is based on a pseudo-relativistic Hamiltonian
sometimes called no-pair or Brown-Ravenhall Hamiltonian
describing $N$ relativistic electrons and $K$ fixed nuclei
interacting via Coulomb potentials. The electrons are vectors in
the positive energy subspace of the free Dirac operator and their
kinetic energy is described by this operator. For a physical
justification of this model see the papers of Sucher
\cite{Sucher1980, Sucher1987}, for applications of the model in
computational atomic physics and quantum chemistry see Ishikawa
and Koc \cite{IshikawaKoc1994, IshikawaKoc1997}, and
Pyykk\"o~\cite{Pyykkoe1988}. The Brown-Ravenhall Hamiltonian
yields stability for sufficiently small values of the fine
structure constant and the charge of the
nuclei~\cite{Evansetal1996,Tix1997, Tix1997b,
BalinskyEvans1998b,Liebetal1997}; there are further rigorous
results concerning the virial theorem~\cite{BalinskyEvans1998a}
and eigenvalue estimates~\cite{GriesemerSiedentop1997}.
We are interested in the minimal energy of this model when it is
corrected to account for the interaction of the electrons with the
self-generated magnetic field. This correction may be done for
instance by introducing an external magnetic field
$\nabla\times\vektor{A}$ to which the electrons are then minimally
coupled and whose field energy is added to the energy of the
system. The field $\vektor{A}$ is now considered part of the system and
hence the energy is to be minimized w.r.t. $\vektor{A}$ as well. The
minimizing $\vektor{A}$ for a given electronic state is the
self-generated one (to avoid instability for trivial reasons the
gauge of $\vektor{A}$ has to be fixed). The energy of this system is
unbounded from below if $N\alpha^{3/2}$ is large, $\alpha$ being
the fine structure constant, even if the vector potential is
restricted to lie in a two parameter class
\(\{\gamma\vektor{A}_0(\delta\vektor{x}):\gamma,\delta\in \field{R}_+\}\) where $\vektor{A}_0$
is fixed and obeys a weak condition requiring not much more than
$\vektor{A}_0\not\equiv 0$. This is our first main result. It extends a
previous result of Lieb et al. \cite{Liebetal1997} and is
reminiscent of the fact that a static non-vanishing classical
magnetic field in QED is not regular, in the sense that the
dressed electron-positron emission and absorption operators do not
realize a representation of the CAR on the Fock space of the free
field \cite{NenciuScharf1978}.
Alternatively the energy-shift due to the self-generated magnetic
field may approximately be taken into account by including the
Breit potential in the energy. The resulting model is unstable as
well. That is, the energy is unbounded from below if
$N\alpha^{3/2}$ is large, no matter how small $\alpha$ is. This is
our second main result. It concerns a Hamiltonian that is closely
related to the Dirac-Coulomb-Breit or Dirac-Breit Hamiltonian, which
is the basis for most calculations of relativistic effects in
many-electron atoms \cite{Sucher1980, Pyykkoe1988}. We mention that for
$\alpha=1/137$ the energy is bounded below if $N\leq 39$ and
unbounded below if $N\geq 3.4\cdot 10^7$ (Theorem~\ref{theorem3}
and Theorem~\ref{theorem2}).
A third way of accounting for the self-generated field is to
couple the electrons to the quantized radiation field. From a
simple argument using coherent states (Lemma~\ref{lemma}) it
follows that the instability of this model is rather worse than
the instability of the model first discussed.
As mentioned above the instability with external magnetic field
was previously found by Lieb et al. \cite{Liebetal1997}. Our
result extends their result and our proof is simpler. The model
with Breit interaction corresponds to the classical system
described by the Darwin Hamiltonian, which has been studied in the
plasma physics literature (see \cite{AppelAlastuey1998} and the
references therein). This classical model is thermodynamically
unstable as well \cite{Kiessling1998}.
In Sections~\ref{sec2}, \ref{sec3} and \ref{sec4} we introduce the
models with external magnetic field, with Breit potential, and
with quantized radiation field and prove their instability
(Theorem~\ref{theorem1}, Theorem~\ref{theorem2}, and
Lemma~\ref{lemma}). In Section~\ref{sec3} we also discuss dynamic
nuclei for the model with Breit potential. There is an appendix
where numerical values for stability bounds on $N\alpha^{3/2}$
given in the main text are computed.
\section{Instability with Classical Magnetic Field}\label{sec2}
We begin with the model of matter with external magnetic field. For
simplicity the electrons are assumed to be non-interacting and no nuclei
are present. We could just as well treat a system of interacting
electrons and static nuclei and would obtain essentially
the same result (see Remark 4 below).
Consider a system of $N$ non-interacting electrons in the external
magnetic field \(\nabla\times \vektor{A}\). The energy of this system
is
\begin{equation*}
\mathcal{E}_N(\psi,\vektor{A})=\sprod{\psi}{\sum_{\mu=1}^{N}D_{\mu}(\vektor
{A})\psi}+\frac{1}{8\pi}\int |\nabla\times\vektor{A}(\vektor{x})|^2d\vektor{x}
\end{equation*}
where \(D_{\mu}(\vektor{A})\) is the Dirac operator
\(D(\vektor{A})=\vektor{\alpha}\cdot(-i\nabla +\alpha^{1/2}\vektor{A}(\vektor{x}))
+ \beta m\) acting on the $\mu$-th particle, and the vector $\psi$,
describing the state of the electrons, belongs to the Hilbert space
\begin{eqnarray*}
\H_N &=& \bigwedge_{\mu=1}^N \Lambda_{+}L^2(\field{R}^3,\field{C}^4)\\
\Lambda_{+} &=& \chi_{(0,\infty)}(D(\vektor{A}\equiv 0)),
\end{eqnarray*}
or rather the dense subspace \(\mathfrak{D}_N=\H_N\cap H^1[(\field{R}^3\times\{1,\ldots,4\})^N]\).
That is, an electron is by definition a vector in the positive energy
subspace of the free Dirac operator. We will always assume the
vector potential $\vektor{A}$ belongs to the class $\mathcal{A}$ defined by
the properties:
\begin{eqnarray*}
i) & & \nabla\cdot\vektor{A} = 0,\\
ii)& & \vektor{A}(\vektor{x}) \rightarrow 0\makebox[2cm]{as}|\vektor{x}|\rightarrow \infty,\\
iii) & & \int_{\field{R}^3}|\nabla\times\vektor{A}|^2 < \infty.
\end{eqnarray*}
Notice that $\H_N$ is not invariant under multiplication with
smooth functions, in particular it is not invariant under gauge
transformations of the states. It follows that the minimal energy
for fixed $\vektor{A}$ is gauge-dependent. It can actually be driven to
$-\infty$ by a pure gauge transformation (see Remark 3 below). To
avoid this trivial instability we fixed the gauge of $\vektor{A}$ by
imposing conditions i) and ii).
The constants $\alpha>0$ and $m\geq0$ in the definition of
\(D(\vektor{A})\) are the fine structure constant and the mass of the
electron respectively. In our units \(\hbar=1=c\), so that
\(\alpha =e^2\) which is about $1/137$ experimentally. We denote the
Fourier transform of a function $f$ by $\widehat{f}$ or $\mathfrak{F}(f)$ and use $\vektor{p}$ or
$\vektor{k}$ for its argument rather than $\vektor{x}$ or $\vektor{y}$. Our first result is
\begin{theorem}\label{theorem1}
Suppose $\vektor{A}\in \mathcal{A}$ is such that \(\mathrm{Re}[\vektor{e}\cdot
\widehat{\vektor{A}}(\vektor{p})] < 0\) in $B(0,\varepsilon)$ for some $\vektor{e}\in\field{R}^3$ and
$\varepsilon>0$. Then there exists a constant $C_{\vektor{A}}$ such that for all
$\alpha > 0$, $m\geq 0$ and \(N\geq C_{\vektor{A}}\alpha^{-3/2}\)
\[ \inf_{\psi\in\mathfrak{D}_N,\|\psi\|=1,\ \gamma,\delta\in \field{R}_{+}} \mathcal{E}_N(\psi,\gamma\vektor{A}(\delta\vektor{x}))= -\infty.\]
\end{theorem}
\noindent
{\em Remarks.}
\begin{enumerate}
\item It is sufficient that \(\vektor{A}\in \mathcal{A}\cap L^1\) and
\(\int_{\field{R}^3}\vektor{A}(\vektor{x})d\vektor{x} \neq 0\), since $\widehat{\vektor{A}}$ is then
continuous and \(\widehat{\vektor{A}}(0)\neq 0\). Thus we have instability
for virtually all non-vanishing $\vektor{A}\in\mathcal{A}$.
\item The smallness of $N\alpha^{3/2}$ is not only necessary but also
sufficient for stability (see \cite[Section 4]{Liebetal1997}).
\item If the condition ii) that $\vektor{A}$ vanishes at infinity (and thus
the gauge fixing) is dropped there is instability even for $N=1$ and
the theorem becomes trivial. In fact for $N=1$ and \(\vektor{A}(\vektor{x})\equiv
\vektor{a}\neq 0\), \(\mathcal{E}_{N=1}(\psi,\gamma\vektor{A}) =
\sprod{\psi}{D(0)\psi}+ \gamma\alpha^{1/2}\vektor{a}\int
\psi^+(\vektor{x})\vektor{\alpha}\psi(\vektor{x})d\vektor{x}\) which, as a function of $\gamma$,
is unbounded from below for suitable
\(\psi\in\Lambda_{+}L^2(\field{R}^3,\field{C}^4)\).
\item The statement of the theorem also holds for the system of
electrons and static nuclei with energy
\(\mathcal{E}_N(\psi,\vektor{A})+\alpha\sprod{\psi}{V_c\psi}\) where
\begin{equation}\label{Coulomb}
V_c:=
-\sum_{\mu=1}^{N}\sum_{\kappa=1}^{K} \frac{Z_{\kappa}}{|\vektor{x}_{\mu}-
\vektor{R}_{\kappa}|}
+ \sum_{\mu <\nu}^{N}\frac{1}{|\vektor{x}_{\mu}- \vektor{x}_{\nu}|} +
\sum_{\kappa<\sigma}^{K} \frac{Z_{\kappa}
Z_{\sigma}}{|\vektor{R}_{\kappa}-\vektor{R}_{\sigma}|}
\end{equation}
if both $N$ and $\sum Z_{\kappa}$ are bigger than
\(C_{\vektor{A}}\alpha^{-3/2}\) and if the energy is in addition minimized
with respect to the pairwise distinct nuclear positions
$\vektor{R}_{\kappa}$ (see the proof of Theorem~\ref{theorem2}).
\item Quantizing the radiation field does not improve the stability of
the system (see Section~\ref{sec4}).
\end{enumerate}
The only way we know of to restore stability is to replace $\H_N$
by the $\vektor{A}$-dependent Hilbert space
\[ \H_{N,\vektor{A}}= \bigwedge_{\mu=1}^N \chi_{(0,\infty)}(D(\vektor{A}))L^2(\field{R}^3,\field{C}^4).\]
Obviously \(\mathcal{E}_N(\psi,\vektor{A})\geq 0\) for \(\psi\in \H_{N,\vektor{A}}\). In
fact even \(\mathcal{E}_N(\psi,\vektor{A})+\alpha\sprod{\psi}{V_c\psi}\) is
non-negative for $Z_{\kappa}$ and $\alpha$ small enough
\cite{Liebetal1997}.
\begin{proof}[Proof of Theorem~\ref{theorem1}] We will only work with
Slater determinants and the following representation of
one-particle orbitals. If \(u\in L^2(\field{R}^3;\field{C}^2)\) then
\begin{equation}\label{apf1}
\widehat{\psi}(\vektor{p})=\left(\frac{E(\vektor{p})+m}{2E(\vektor{p})}\right)^{1/2}
\begin{pmatrix}u(\vektor{p})\\ \frac{\vektor{\sigma}\cdot\vektor{p}}{E(\vektor{p})+m}u(\vektor{p})
\end{pmatrix},
\end{equation}
with \(E(\vektor{p})=\sqrt{\vektor{p}^2+m^2}\), is the Fourier
transform of a vector $\psi\in\Lambda_{+}L^2$, and the map
\(u\mapsto\psi,
L^2(\field{R}^3;\field{C}^2)\rightarrow\Lambda_{+}L^2(\field{R}^3;\field{C}^4)\) is unitary.
It suffices to consider the case $m=0$ and find a Slater determinant
\(\psi=\psi_1\wedge\ldots\wedge\psi_N\) and $\gamma,\delta\in \field{R}_{+}$ such
that \(\mathcal{E}_N(\psi,\gamma\vektor{A}(\delta\vektor{x}))<0\). In fact by the scaling
\(\psi\mapsto\psi_{\delta},\ \vektor{A}\mapsto\vektor{A}_{\delta}\) defined by
\(u_{\mu,\delta} =\delta^{-3/2}u_{\mu}(\delta^{-1}\vektor{p})\) and
\(\vektor{A}_{\delta}(\vektor{x}) =\delta\vektor{A}(\delta\vektor{x})\) we can then drive the
energy with $m>0$ to $-\infty$ because \(\mathcal{E}(\psi_{\delta},\vektor{A}_{\delta},m)=
\delta\mathcal{E}(\psi,\vektor{A},m/\delta)\) and \(\mathcal{E}(\psi,\vektor{A},m/\delta)\rightarrow
\mathcal{E}(\psi,\vektor{A},m=0)\) for \(\delta\rightarrow\infty\).
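The scaling relation is a direct computation; substituting $\vektor{p}\mapsto\delta\vektor{p}$ in each term (and using that $|\widehat{\psi}_{\mu}(\vektor{p})|^2=|u_{\mu}(\vektor{p})|^2$ pointwise for the representation (\ref{apf1})) one finds
\begin{align*}
\int\sqrt{\vektor{p}^2+m^2}\,|u_{\mu,\delta}(\vektor{p})|^2\,d\vektor{p}
&=\delta\int\sqrt{\vektor{p}^2+(m/\delta)^2}\,|u_{\mu}(\vektor{p})|^2\,d\vektor{p},\\
\frac{1}{8\pi}\int|\nabla\times\vektor{A}_{\delta}(\vektor{x})|^2\,d\vektor{x}
&=\frac{\delta}{8\pi}\int|\nabla\times\vektor{A}(\vektor{x})|^2\,d\vektor{x},
\end{align*}
and the interaction term scales in the same way, since in position space $\psi_{\mu,\delta}(\vektor{x})=\delta^{3/2}\psi_{\mu}(\delta\vektor{x})$ (at mass $m/\delta$) and hence $\vektor{J}_{\mu,\delta}(\vektor{x})=\delta^{3}\vektor{J}_{\mu}(\delta\vektor{x})$.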
{\em Choice of $\psi$.} Let $Q$ be the unit cube \(\{\vektor{p}\in\field{R}^3|0\leq
p_i\leq 1\}\), \(u(\vektor{p})=(\chi_Q(\vektor{p}), 0)^T\), and $\vektor{e}\in\field{R}^3$ an
arbitrary unit vector. Set
\begin{equation}\label{apf1.5}
u_{\mu}(\vektor{p})= u(\vektor{p}-\lambda N^{1/3}\vektor{e}-\vektor{n}_{\mu}),\hspace{3em}
\mu=1,\ldots,N
\end{equation}
where $\lambda$ is a positive constant to be chosen sufficiently large
later on, and \((\vektor{n}_{\mu})_{\mu=1\ldots N}\subset\field{Z}^3\) are
the $N$ lattice sites nearest to the origin, i.e.,
\(\max_{\mu=1\ldots N}|\vektor{n}_{\mu}|\) is minimal. We define
\(\psi=\psi_1\wedge\ldots\wedge\psi_N\) by
\begin{equation}\label{apf2}
\widehat{\psi}_{\mu}(\vektor{p})=\frac{1}{\sqrt{2}}\begin{pmatrix}u_{\mu}(\vektor{p})\\
\vektor{\sigma}\cdot\vektor{\omega}_{\vektor{p}}u_{\mu}(\vektor{p})
\end{pmatrix},\hspace{2em} \vektor{\omega}_{\vektor{p}}=\frac{\vektor{p}}{|\vektor{p}|},
\end{equation}
which is (\ref{apf1}) for $m=0$. Then $\psi\in\H_N$ and
\(\sprod{\psi_{\mu}}{\psi_\nu}= \sprod{u_{\mu}}{u_\nu} =
\delta_{\mu\nu}\). Notice that
\begin{equation}\label{apf3}
|\vektor{p}-\lambda N^{1/3}\vektor{e}|\leq N^{1/3}\makebox[5em]{for all}\vektor{p}\in\mbox{supp}(u_{\mu})
\end{equation}
at least for large $N$ (see the appendix), i.e., in Fourier space all
electrons are localized in a ball with radius $N^{1/3}$ and a distance
from the origin which is large compared to the radius (since $\lambda$
will be large).
Since \(\psi=\psi_1\wedge\ldots\wedge\psi_N\) and $m=0$ we have
\begin{equation}\label{apf4}\begin{split}
\mathcal{E}_N(\psi,\vektor{A}) = &\sum_{\mu=1}^{N}\sprod{\psi_{\mu}}{|\nabla|\psi_{\mu}} +
\alpha^{1/2}\sum_{\mu=1}^{N}\int \vektor{J}_{\mu}(\vektor{x})\vektor{A}(\vektor{x})d\vektor{x}\\
& + \frac{1}{8\pi}\int |\nabla\times\vektor{A}(\vektor{x})|^2d\vektor{x}
\end{split}\end{equation}
where \(\vektor{J}_{\mu}(\vektor{x})= \psi_{\mu}^{*}(x)\vektor{\alpha} \psi_{\mu}(x)\).
By definition of \(\psi_{\mu}\)
\begin{equation}\label{apf5}
\widehat{\vektor{J}}_{\mu}(\vektor{p})=
\frac{1}{2}(2\pi)^{-3/2}\int
u_{\mu}^{\ast} (\vektor{k}-\vektor{p})
\left[\vektor{\sigma}
(\vektor{\omega}_{\vektor{k}}\cdot\vektor{\sigma}) +
(\vektor{\omega}_{\vektor{k-p}}\cdot\vektor{\sigma})\vektor{\sigma}
\right]u_{\mu}(\vektor{k})d\vektor{k}.
\end{equation}
Replace here $u_{\mu}$ by its defining expression and substitute
\((\vektor{k}-\lambda N^{1/3}\vektor{e} -\vektor{n}_{\mu}) \mapsto\vektor{k}\). Since
\(\vektor{\omega}_{\vektor{k}+\lambda N^{1/3}\vektor{e}+\vektor{n}_{\mu}} \rightarrow\vektor{e}\) as
\(\lambda\rightarrow\infty\) and since $u$ has compact support, it
follows that \(\widehat{\vektor{J}}_{\mu}(\vektor{p})\) converges to the current
\begin{equation}\label{apf6}
\widehat{\vektor{J}}_{0}(\vektor{p})= \vektor{e}\,(2\pi)^{-3/2}\int
u^{\ast} (\vektor{k}-\vektor{p}) u(\vektor{k})d\vektor{k}
\end{equation}
as $\lambda\rightarrow\infty$. More precisely
\(|\widehat{\vektor{J}}_{\mu}(\vektor{p})-\widehat{\vektor{J}}_{0}(\vektor{p})|\leq
C\lambda^{-1} |\widehat{\vektor{J}}_{0}(\vektor{p})|\) for \(\lambda\geq\lambda_0\)
where $\lambda_0$ and $C$ are independent of $\mu$ and $N$. From
\(\widehat{\vektor{J}}_{0}(\vektor{p})|\vektor{p}|^{-1},\ \widehat{\vektor{A}}(\vektor{p})|\vektor{p}|\in L^2\)
it follows that
\begin{equation}\label{apf7}
\int\widehat{\vektor{J}}_{\mu}^{\ast}(\vektor{p})\widehat{\vektor{A}}(\vektor{p})d\vektor{p} =
\int\widehat{\vektor{J}}_{0}^{\ast}(\vektor{p})\widehat{\vektor{A}}(\vektor{p})d\vektor{p} +
O(\lambda^{-1}), \hspace{2em}\lambda\rightarrow\infty.
\end{equation}
After a scaling \(\vektor{A}\mapsto\vektor{A}_{\delta}\) we may assume \(\mathrm{Re}[\vektor{e}\cdot\widehat{\vektor{A}}(\vektor{p})]<0\) in the support of
$\widehat{\vektor{J}}_{0}$ rather than in \(B(0,\varepsilon)\), so that
(\ref{apf7}) is bounded from above by some $-c_1<0$ for
$\lambda\geq\lambda_0$ where $c_1$ and $\lambda_0$ are independent of
$\mu$ and $N$. Observing finally that
\begin{equation}\label{apf8}
\sprod{\psi_{\mu}}{|\nabla|\psi_{\mu}} = \int |\widehat{\psi}_{\mu}(\vektor{p})|^2|\vektor{p}|d\vektor{p}
\leq (\lambda+1)N^{1/3}
\end{equation}
for all $\mu$, we conclude
\begin{eqnarray*}
\mathcal{E}_N(\psi,\gamma\vektor{A}) &\leq& (\lambda_0+1)N^{4/3} - \alpha^{1/2}\gamma N
c_1 + \gamma^2 c_2\\
&=& (\lambda_0+1)N^{4/3} - \alpha \frac{c_1^2}{4c_2}N^2
\end{eqnarray*}
which is negative for $N\alpha^{3/2}$ large enough. In the last step we
inserted the optimal $\gamma$.
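Explicitly, the minimum of the quadratic in $\gamma$ is
\begin{equation*}
\min_{\gamma>0}\big(-\alpha^{1/2}\gamma Nc_1+\gamma^2c_2\big)
=-\alpha\frac{c_1^2}{4c_2}N^2
\qquad\text{at}\quad \gamma=\frac{\alpha^{1/2}Nc_1}{2c_2},
\end{equation*}
so the energy is negative as soon as $N\alpha^{3/2}>\big(4c_2(\lambda_0+1)/c_1^2\big)^{3/2}$.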
\end{proof}
The theorem has the obvious corollary
\begin{corollary}\label{corollary}
There is a constant $C$ such that for all $\alpha>0$, $m\geq 0$ and \(N\geq C\alpha^{-3/2}\),
\[ \inf_{\psi\in\mathfrak{D}_N,\|\psi\|=1; \vektor{A}\in\mathcal{A}} \mathcal{E}_N(\psi,\vektor{A}) = -\infty.\]
\end{corollary}
This result is due to Lieb, Siedentop, and Solovej
\cite{Liebetal1997}.
\noindent {\em Remark.} It is sufficient that \(C=1.4\cdot 10^5\)
or that \(N\geq 3.4\cdot 10^7\) for \(\alpha^{-1}=137\), see the
appendix.\\
To conclude this section we compute
\(\min_{\vektor{A}\in\mathcal{A}}\mathcal{E}_N(\psi,\vektor{A})\). This will provide a link
to the instability with Breit-potential discussed in the next section. To
exhibit the $\vektor{A}$-dependence we write the energy as
\begin{equation*}
\mathcal{E}_N(\psi,\vektor{A}) = \mathcal{E}_N(\psi,\vektor{A}\equiv 0) + \alpha^{1/2} \int
\vektor{J}(\vektor{x})\vektor{A}(\vektor{x}) + \frac{1}{8\pi}\int |\nabla\times\vektor{A}(\vektor{x})|^2d\vektor{x},
\end{equation*}
where $\vektor{J}(\vektor{x})$ is the probability current density associated
with $\psi$. Its functional dependence on $\psi$ is not crucial
here. A straightforward computation shows that the Euler-Lagrange
equation for $\vektor{A}$ is \(-\Delta \vektor{A} = 4\pi \alpha^{1/2} \vektor{J}_T\)
where $\vektor{J}_T$ is the divergence-free (or transversal) part of
$\vektor{J}$. Comparing this equation with the Maxwell-equation for $\vektor{A}$
in Coulomb gauge, which is \(\square \vektor{A} = 4\pi\alpha^{1/2}
\vektor{J}_T\), we find that the minimizing magnetic field is the
self-generated one up to effects of retardation. Solving the
Euler-Lagrange equation gives
\begin{equation}\label{minimum}
\min_{\vektor{A}\in\mathcal{A}}\mathcal{E}_N(\psi,\vektor{A}) = \mathcal{E}_N(\psi,\vektor{A}\equiv 0) - \frac{\alpha}{2}\int
\frac{\vektor{J}_T(\vektor{x})\vektor{J}_T(\vektor{y})}{|\vektor{x}-\vektor{y}|}d\vektor{x} d\vektor{y}.
\end{equation}
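The last identity can also be obtained, in a sign-safe way, by completing the square in Fourier space (a sketch; Plancherel and transversality of $\vektor{A}$ are used):
\begin{equation*}
\alpha^{1/2}\!\int\vektor{J}\cdot\vektor{A}\,d\vektor{x}
+\frac{1}{8\pi}\int|\nabla\times\vektor{A}(\vektor{x})|^2d\vektor{x}
=\int\Big[\alpha^{1/2}\,\mathrm{Re}\big(\widehat{\vektor{J}}_T^{\,\ast}(\vektor{p})\cdot\widehat{\vektor{A}}(\vektor{p})\big)
+\frac{\vektor{p}^2}{8\pi}\,|\widehat{\vektor{A}}(\vektor{p})|^2\Big]d\vektor{p},
\end{equation*}
and minimizing each Fourier mode separately (a quadratic in $\widehat{\vektor{A}}(\vektor{p})$, minimal at $\widehat{\vektor{A}}=-4\pi\alpha^{1/2}\widehat{\vektor{J}}_T/\vektor{p}^2$) gives
\begin{equation*}
-\frac{\alpha}{2}\int\frac{4\pi}{\vektor{p}^2}\,|\widehat{\vektor{J}}_T(\vektor{p})|^2\,d\vektor{p}
=-\frac{\alpha}{2}\int
\frac{\vektor{J}_T(\vektor{x})\vektor{J}_T(\vektor{y})}{|\vektor{x}-\vektor{y}|}\,d\vektor{x}\,d\vektor{y}.
\end{equation*}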
\section{Instability with Breit Potential}\label{sec3}
\subsection{Static nuclei}
We now consider a system of $N$ (interacting) electrons in the
external electric field of $K$ static nuclei. There is no external
magnetic field but a self-generated one which is approximately
accounted for by the Breit potential. The energy is now
\begin{equation}\label{energy}
\mathcal{E}_N(\psi,\vektor{R})=\sprod{\psi}{(\sum_{\mu=1}^{N}D_{\mu}+\alpha (V_c-B))\psi}
\end{equation}
where
\begin{equation}\label{Breit}
B= \sum_{\mu<\nu}^{N}\frac{1}{2|\vektor{x}_{\mu}-\vektor{x}_{\nu}|}
\left(\sum_i\alpha_{i,\mu}\otimes\alpha_{i,\nu} +
\frac{\vektor{\alpha}_{\mu}\cdot(\vektor{x}_{\mu}-\vektor{x}_{\nu})
\otimes \vektor{\alpha}_{\nu}\cdot(\vektor{x}_{\mu}-\vektor{x}_{\nu})}{|\vektor{x}_{\mu}-\vektor{x}_{\nu}|^2}\right)
\end{equation}
and $V_c$ is the Coulomb potential defined in (\ref{Coulomb}).
$\vektor{R}$ denotes the $K$-tuple \((\vektor{R}_1,\ldots,\vektor{R}_K)\) of pairwise
different nuclear positions and $D_{\mu}=D_{\mu}(\vektor{A}\equiv 0)$. As
before $\psi$ belongs to \(\mathfrak{D}_N\subset\H_N\). The interaction
$-\alpha B$ is usually derived from the corresponding interaction
in the Darwin Hamiltonian by the quantization
\(\vektor{p}/m\mapsto\vektor{\alpha}\) \cite{LandauLifshitz1971} or from QED:
treating the interactions of the electrons with the quantized
radiation field in second order perturbation theory leads to a
shift of the bound state energy levels approximately given by
\(-\alpha\sprod{\psi}{B\psi}\) \cite{BetheSalpeter1957}. Important
for our purpose is that
\begin{equation}\label{Breit-cc}
\sprod{\psi}{B\psi} + \left(\begin{array}{c}\mbox{self-energy \&}\\
\mbox{exchange terms}\end{array}\right)= \frac{1}{2}\int
\frac{\vektor{J}_T(\vektor{x})\vektor{J}_T(\vektor{y})}{|\vektor{x}-\vektor{y}|} d\vektor{x} d\vektor{y}
\end{equation}
for any Slater determinant \(\psi=\psi_1\wedge\ldots\wedge\psi_N\)
of orthonormal functions $\psi_{\mu}$ (see the proof of
Theorem~\ref{theorem2}).
We are interested in the lowest possible energy
\begin{equation*}
E_{N,K} = \inf \mathcal{E}_N(\psi,\vektor{R})
\end{equation*}
where the infimum is taken over all \(\psi\in \mathfrak{D}_N\) with $\|\psi\|=1$
and all $K$-tuples \((\vektor{R}_1,\ldots,\vektor{R}_K)\) with \(\vektor{R}_j\neq\vektor{R}_k\)
for $j\neq k$. Our second main result is
\begin{theorem}\label{theorem2}
There exists a constant $C$ such that for all \(\alpha>0,\ m\geq 0, K\in\field{N}\) and
\(Z_1,\ldots,Z_K\in\field{R}_{+}\) \[E_{N,K}=-\infty\] whenever
\(N,\sum Z_{\kappa}\geq C\max(\alpha^{-3/2},1)\). If \(\sum
Z_{\kappa}^2\geq 1\) it suffices that
\(C=5\cdot 10^4\) or, when $\alpha^{-1}=137$, that \(N=\sum Z_\kappa\geq 3.4\cdot 10^7\).
\end{theorem}
\noindent
{\em Remarks.}
\begin{enumerate}
\item As in Section~\ref{sec2}, $V_c$ and hence the condition
on $\sum Z_{\kappa}$ may be dropped. Then there is instability for
$N\geq C\max(\alpha^{-3/2},1)$. We keep $V_c$ in this section for
completeness of the model.
\item Without $B$ the energy is proven to be non-negative if \(\alpha
Z_{\kappa}\leq2/\pi\) for all $\kappa$ and if \(\alpha\leq 1/94\)
\cite{LiebYau1988} (see also \cite{Liebetal1997}). One expects,
however, stability even for \(\alpha Z_{\kappa} \leq
2\left(\frac{2}{\pi}+\frac{\pi}{2}\right)^{-1}\) and $\alpha\leq 0.12$
\cite{Evansetal1996, BalinskyEvans1998b}, which would cover the
atomic numbers of all known elements.
\end{enumerate}
At least partly, this theorem can be understood from
Corollary~\ref{corollary}, Equation~(\ref{minimum}) and Equation
(\ref{Breit-cc}). \medskip
\begin{proof}[Proof of Theorem~\ref{theorem2}]
To begin with we prove (\ref{Breit-cc}). Let \(\psi =
\psi_1\wedge\ldots\wedge\psi_N\) with
\(\sprod{\psi_{\mu}}{\psi_{\nu}}= \delta_{\mu\nu}\) and let
\(\vektor{J}(\vektor{x})= \sum_{\mu=1}^{N}
\psi_{\mu}^{+}(x)\vektor{\alpha}\psi_{\mu}(x)\) be the current
density of $\psi$. Note that \(\widehat{J}_{T,i}(\vektor{p})=
\sum_{j=1}^{3}(\delta_{ij}- \frac{p_ip_j}{p^2})\widehat{J}_{j}(\vektor{p})\) and that
\begin{equation*}
\mathfrak{F}\frac{4\pi}{p^2}(\delta_{ij}-\frac{p_ip_j}{p^2})
= \frac{1}{2|x|}\left(\delta_{ij}+\frac{x_ix_j}{x^2}\right).
\end{equation*}
With $B(\vektor{x})$ defined by
\begin{equation*}
B(x)= \frac{1}{2|x|}\sum_{i,j}\alpha_i
\left(\delta_{ij}+\frac{x_ix_j}{x^2}\right)\alpha_j =
\frac{1}{2|x|}\left(\sum_i\alpha_i\otimes\alpha_i+
\frac{\vektor{\alpha}\cdot\vektor{x}\otimes
\vektor{\alpha}\cdot\vektor{x}}{|\vektor{x}|^2}\right)
\end{equation*}
it follows that
\begin{equation}\begin{split}\label{Breit=cc}
\frac{1}{2}\int
\frac{\vektor{J}_T(\vektor{x})\vektor{J}_T(\vektor{y})}{|\vektor{x}-\vektor{y}|}
d\vektor{x} d\vektor{y}
&= \frac{1}{2}
\sum_{\mu,\nu}\sprod{\psi_{\mu}\otimes\psi_{\nu}}{B(x-y)\psi_{\mu}\otimes\psi_{\nu}}\\
&= \sprod{\psi}{B\psi}+ \frac{1}{2}
\sum_{\mu,\nu}\sprod{\psi_{\mu}\otimes\psi_{\nu}}{B(x-y)\psi_{\nu}\otimes\psi_{\mu}}.
\end{split}
\end{equation}
which is equation (\ref{Breit-cc}). As in the proof of
Theorem~\ref{theorem1} it suffices to consider the case $m=0$ and to
find a Slater determinant \(\psi=\psi_1\wedge\ldots\wedge\psi_N\) and
nuclear positions \(\vektor{R}_1,\ldots,\vektor{R}_K\) such that
\(\mathcal{E}_N(\psi,\vektor{R})<0\).
{\em Choice of the nuclear positions.} A beautiful argument given in
\cite{Liebetal1997} shows that, after moving some electrons or nuclei
far away from all others,
\begin{equation*}
\sprod{\psi}{V_c\psi}\leq \varepsilon+\frac{1}{2N^2}\sum_{\mu,\nu} \int
\frac{|\psi_{\mu}(\vektor{x})|^2 |\psi_{\nu}(\vektor{y})|^2}{|\vektor{x}-\vektor{y}|}
d\vektor{x} d\vektor{y}
\end{equation*}
for suitably chosen nuclear positions. Here $\varepsilon>0$ is the (arbitrarily
small) contribution of the particles moved away. The second term can
be dropped if \(\sum_{\kappa=1}^{K}Z_{\kappa}^2\geq 1\). We use the
inequality obtained in \cite{Tix1997} to estimate it from above and find
\begin{equation}\label{bpf1}
\sprod{\psi}{V_c\psi}\leq \varepsilon + \mbox{const}\frac{1}{N}\sum_{\mu=1}^{N}
\sprod{\psi_{\mu}}{D\psi_{\mu}}.
\end{equation}
The number $N$ of remaining electrons obeys $N<\sum Z_{\kappa}+1$
which is the reason for the assumption on \(\sum Z_{\kappa}\). Of
course the choice of the nuclear positions depends on $\psi$, which
has not been specified yet.
Define one-particle orbitals $\psi_{\mu}$ and currents $\vektor{J}_{\mu}$ and
$\vektor{J}_{0}$ exactly as in the proof of Theorem~\ref{theorem1} with
$\vektor{e}$ being an arbitrary unit vector in $\field{R}^3$. The convergence
\(\widehat{\vektor{J}}_{\mu}(\vektor{p})\rightarrow\widehat{\vektor{J}}_{0}(\vektor{p})\) as
$\lambda\rightarrow\infty$ now implies that
\begin{equation}\label{bpf2}\begin{split}
\frac{1}{2}\int \frac{\vektor{J}_T(\vektor{x})\vektor{J}_T(\vektor{y})}{|\vektor{x}-\vektor{y}|}
d\vektor{x} d\vektor{y} &= N^2\left[\frac{1}{2} \int \frac{\vektor{J}_{0,T}(\vektor{x})\vektor{J}_{0,T}(\vektor{y})}{|\vektor{x}-\vektor{y}|}
d\vektor{x} d\vektor{y} + O(\lambda^{-1})\right] \\ &\geq c_2 N^2
\end{split}
\end{equation}
for \(\lambda\geq\lambda_0\), where $\lambda_0$ and $c_2>0$ are independent of $N$.
To estimate the sum of exchange- and self-energy terms in
(\ref{Breit=cc}) notice that
\begin{equation}\label{bpf3}
\sprod{\psi_{\mu}\otimes\psi_{\nu}}{B(x-y)\psi_{\nu}\otimes\psi_{\mu}} =
\int\frac{4\pi}{p^2}|\widehat{\vektor{J}}_{\mu\nu,T}(\vektor{p})|^2 d\vektor{p},
\end{equation}
where \(\vektor{J}_{\mu\nu}(\vektor{x})= \psi_{\mu}^{\ast}(x)\vektor{\alpha}\psi_{\nu}(x)\).
After writing \(\widehat{\vektor{J}}_{\mu\nu}(\vektor{p})\) as an integral
in Fourier space in terms of $u_{\mu}$ and $u_{\nu}$ similar as in
(\ref{apf5}) it is easily seen, using the support properties of $u_{\mu}$ and
$u_{\nu}$, that
\begin{equation}\label{bpf4}
|\widehat{\vektor{J}}_{\mu\nu,T}(\vektor{p})|^2\leq|\widehat{\vektor{J}}_{\mu\nu}(\vektor{p})|^2\leq
3(2\pi)^{-3}\chi(|\vektor{p}+\vektor{n}_{\mu}-\vektor{n}_{\nu}|
\leq\sqrt{3}).
\end{equation}
The $N$ balls \(B(\vektor{n}_{\nu},\sqrt{3}),\ \nu=1,\ldots,N\) all lie in the ball
$B(0,N^{1/3})$ and cover a given point at most, say, $4^3=64$ times
(replace the balls by cubes with side $2\sqrt{3}$). Therefore (\ref{bpf4})
implies
\begin{equation*}
\sum_{\nu=1}^{N} |\widehat{\vektor{J}}_{\mu\nu}(\vektor{p})|^2\leq
192(2\pi)^{-3}\chi(|\vektor{p}+\vektor{n}_{\mu}|<N^{1/3})
\leq \frac{24}{\pi^3}\chi(|\vektor{p}|<2N^{1/3})
\end{equation*}
which in conjunction with (\ref{bpf3}) gives
\begin{equation}\label{bpf5}
\frac{1}{2}\sum_{\mu,\nu}\sprod{\psi_{\mu}\otimes\psi_{\nu}}{B(x-y)\psi_{\nu}\otimes\psi_{\mu}}
\leq\frac{384}{\pi}\ N^{4/3}.
\end{equation}
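The numerical constant in (\ref{bpf5}) comes from a radial integration: since $\int_{|\vektor{p}|<R}|\vektor{p}|^{-2}d\vektor{p}=4\pi R$,
\begin{equation*}
\frac{1}{2}\sum_{\mu,\nu}\int\frac{4\pi}{\vektor{p}^2}\,
|\widehat{\vektor{J}}_{\mu\nu,T}(\vektor{p})|^2\,d\vektor{p}
\leq\frac{N}{2}\cdot\frac{24}{\pi^3}\cdot 4\pi\cdot\big(4\pi\cdot 2N^{1/3}\big)
=\frac{384}{\pi}\,N^{4/3}.
\end{equation*}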
Rewriting the energy using (\ref{Breit=cc}) and inserting the
estimates (\ref{bpf1}), (\ref{apf8}), (\ref{bpf2}) and (\ref{bpf5}) we arrive at
\begin{equation*}
\mathcal{E}_N(\psi,\vektor{R})\leq c_1 (1+\alpha)N^{4/3}-c_2\alpha N^{2},\hspace{3em}c_2>0
\end{equation*}
which is negative for \(N>\mbox{const}\ \max(\alpha^{-3/2},1)\). This proves the theorem.
\end{proof}
\noindent
For small $N$ and small $\alpha$ there is stability. A similar result
for the energy in Section 1 was proved in \cite{Liebetal1997}.
\begin{theorem}\label{theorem3}
Suppose \(\tilde{\alpha}\leq 1/94\), \(\max_\kappa\ Z_{\kappa}\leq
2/\pi\,\tilde{\alpha}^{-1}\) and \(N-1\leq
2(2/\pi+\pi/2)^{-1}(\alpha^{-1}-\tilde{\alpha}^{-1})\). Then
\(E_{N,K}\geq 0\). Inserting \(\tilde{\alpha}= 1/94\) and
\(\alpha=1/137\) we find stability for $N\leq 39$ and \(\max\
Z_{\kappa}\leq 59\).
\end{theorem}
\begin{proof} Since \(B(x)\leq 2/|x|\) on \(\field{C}^4\otimes\field{C}^4\) and
\(1/|x|\leq \delta^{-1}D\) on \(\Lambda_{+}L^2(\field{R}^3;\field{C}^4)\)
where \(\delta=2(2/\pi+\pi/2)^{-1}\) \cite{Tix1997} one has by the
symmetry property of the states in $\H_N$
\begin{equation}\label{spf1}
B\leq \frac{N-1}{\delta}\sum_{\mu=1}^{N}D_{\mu}\hspace{3em}\hbox{on}\ \H_N.
\end{equation}
Furthermore
\begin{equation}\label{spf2}
V_c\geq -\frac{1}{\tilde{\alpha}}\sum_{\mu=1}^{N}D_{\mu}\hspace{3em}\hbox{on}\ \H_N
\end{equation}
for all $\tilde{\alpha}>0$ with \(\tilde{\alpha}\max\ Z_{\kappa}\leq
\frac{2}{\pi}\) and \(\tilde{\alpha}q\leq 1/47\) by \cite{LiebYau1988},
where the number $q$ of spin states may be set equal to 2
\cite{Liebetal1997}. Inserting (\ref{spf1}) and (\ref{spf2}) in the
energy proves the theorem.
\end{proof}
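The numerical values in Theorem~\ref{theorem3} follow by direct arithmetic: with $\tilde{\alpha}^{-1}=94$, $\alpha^{-1}=137$ and the critical coupling $\delta=2(2/\pi+\pi/2)^{-1}\approx 0.906$ from \cite{Tix1997},
\begin{equation*}
N-1\leq 0.906\,(137-94)\approx 38.96,\qquad
\max_{\kappa}Z_{\kappa}\leq \frac{2}{\pi}\cdot 94\approx 59.8,
\end{equation*}
so that $N\leq 39$ and $\max_{\kappa}Z_{\kappa}\leq 59$.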
\subsection{Dynamic nuclei}
\label{subsec3}
Making the nuclei dynamical would improve stability if their
kinetic energy were the only term we added to (\ref{energy}).
However if the nuclei are relativistic spin 1/2 particles like the
electrons and if the Breit-potential couples all pairs of
particles, taking their charges into account, then the instability
will actually become worse.
Let us illustrate this for a system of $N$ electrons and $K$ identical
nuclei of spin 1/2 and atomic number $Z>0$. These nuclei are described
by vectors in the positive energy subspace of the free Dirac operator
with the mass $M>0$ of the nuclei. To prove instability we
adopt the strategy of the proof of Theorem~\ref{theorem2} and thus
assume $M=0$ and $m=0$. As a trial-wave function we take
\begin{equation*}
\psi = (\psi_1\wedge\ldots\wedge\psi_{N})\otimes(\phi_1\wedge\ldots\wedge\phi_{K})
\end{equation*}
where $\psi_{\mu}$ is defined by equations (\ref{apf1.5}) and
(\ref{apf2}) and $\phi_{\kappa}$ is defined like $\psi_{\kappa}$
except that $\vektor{e}$ and $N$ are replaced by $-\vektor{e}$ and $K$ respectively.
It follows that in the limit \(\lambda\rightarrow\infty\) we get $N+K$
(charge-) currents, the nuclear ones being larger than the electronic
ones by a factor of $Z$ but otherwise identical. The Breit
interactions thus give a negative contribution to the energy of order
\(\alpha(N+ZK)^2\). While the parallel currents of the $N+K$ particles
add up, the opposite charges of the electrons and nuclei cancel
each other. In fact for $\psi$ defined as above
\begin{equation}\begin{split}
\sprod{\psi}{V_c\psi} &\leq \sum_{\mu<\nu}^N\int d\vektor{x}
d\vektor{y}\frac{|\psi_{\mu}(\vektor{x})|^2|\psi_{\nu}(\vektor{y})|^2}{|\vektor{x}-\vektor{y}|} \\
& \quad + Z^2\sum_{\kappa<\sigma}^K\int d\vektor{R}_1
d\vektor{R}_2\frac{|\phi_{\kappa}(\vektor{R}_1)|^2|\phi_{\sigma}(\vektor{R}_2)|^2}{|\vektor{R}_1-\vektor{R}_2|}
\\
& \quad- Z\sum_{\kappa=1}^K\sum_{\mu=1}^N\int d\vektor{x}
d\vektor{R}\frac{|\psi_{\mu}(\vektor{x})|^2|\phi_{\kappa}(\vektor{R})|^2}{|\vektor{x}-\vektor{R}|}\\
&=
\left[\frac{N(N-1)}{2}+Z^2\frac{K(K-1)}{2}-NKZ\right](I+O(\lambda^{-1}))\\
&= \left[(KZ-N)^{2}-KZ^2-N\right](I/2+O(\lambda^{-1})),
\end{split}
\end{equation}
where $I$ is the limit of the above double integrals as
$\lambda\rightarrow \infty$. Hence \(\sprod{\psi}{V_c\psi}\) is
negative, e.g., if $KZ=N$ and \(\lambda\) is large. To achieve
this in the static case we had to choose the nuclear positions
properly. It is instructive to recall how this was done. The total
energy is bounded from above by
\(c_1(N^{4/3}+K^{4/3})-c_2\alpha(N+KZ)^2,\ c_2>0,\) for $N=KZ$ and
$\lambda$ large, and is therefore negative for $N=KZ$ large
enough.
\section{Stability and Instability with Quantized Radiation Field}\label{sec4}
Instability for the model with classical magnetic field implies
instability for the model with quantized radiation field without
UV-cutoff. In fact, for each classical magnetic field there is a
coherent state of photons which reproduces the classical field as
far as the energy is concerned. If an UV cutoff is introduced the
relativistic scale invariance of the energy is broken and
stability of the first kind is restored. The lower bound depends
on the cutoff and goes to $-\infty$ as the cutoff is removed.
The state of the system is now described by a vector \(\Psi\in
\H_N\otimes\mathcal{F}\) where $\mathcal{F}$ denotes the bosonic Fock-space over
\(L^2(\field{R}^3)\otimes\field{C}^2\), the factor $\field{C}^2$ accounting for the two
possible polarizations of the transversal photons, and the total
energy of $\Psi$ is
\begin{align*}
\mathcal{E}_N^{\text{qed}}(\Psi) &=
\sprod{\Psi}{\sum_{\mu=1}^{N}[\vektor{\alpha}_{\mu}\cdot(-i\nabla_{\mu} +
\alpha^{1/2}\vektor{A}(\vektor{x}_{\mu}))+\beta_{\mu}m]\Psi} \\
&\quad+\sprod{\Psi}{(1\otimes H_f)\Psi}\\
H_f &= \sum_{\lambda=1}^{2}\int d\vektor{k} |\vektor{k}|
a_{\lambda}^{\dagger}(\vektor{k}) a_{\lambda}(\vektor{k}),
\end{align*}
where
\begin{align*}
\vektor{A}(\vektor{x}) &:= \sum_{\lambda=1}^{2}\int dk
\left[\vektor{e}_{\lambda}(\vektor{k})e^{i\vektor{kx}}\otimes
a_{\lambda}(\vektor{k}) + \vektor{e}_{\lambda}(\vektor{k})e^{-i\vektor{kx}}\otimes
a_{\lambda}^{\dagger}(\vektor{k})\right]\\
&=: \vektor{A}^{+}(\vektor{x})+\vektor{A}^{+}(\vektor{x})^{*}
\end{align*}
is the quantized vector potential in Coulomb gauge. The operators $a_{\lambda}(\vektor{k})$
and $a_{\lambda}^{\dagger}(\vektor{k})$ are annihilation and creation
operators acting on $\mathcal{F}$ and obeying the CCR
\begin{equation*}
[a_{\lambda}(\vektor{k}_1),a_{\mu}^{\dagger}(\vektor{k}_2)] =
\delta_{\lambda\mu}\delta(\vektor{k}_1-\vektor{k}_2),\hspace{3em}
[a_{\lambda}^{\sharp}(\vektor{k}_1),a_{\mu}^{\sharp}(\vektor{k}_2)] = 0
\end{equation*}
where $a_{\lambda}^{\sharp}=a_{\lambda}$ or
$a_{\lambda}^{\dagger}$, and the two polarization vectors
$\vektor{e}_{\lambda}(\vektor{k})$ are orthonormal and perpendicular to $\vektor{k}$
for each $\vektor{k}\in \field{R}^3$. We use $dk$ as a short hand for
\((2\pi)^{-3/2}(2|\vektor{k}|)^{-1/2}d\vektor{k}\), and the subindex of
\(\vektor{\alpha}_{\mu},\ \nabla_{\mu}\) and $\beta_{\mu}$ indicates that
these one-particle operators act on the $\mu$-th particle. While
we used Gaussian units in Sections~\ref{sec2} and \ref{sec3}, we now
work with Heaviside-Lorentz units.
\begin{lemma}\label{lemma}
For each \(\vektor{A}_{cl}\in\mathcal{A}\cap L^2(\field{R}^3)\) there exists a
vector \(\Theta\in \mathcal{F}\) (coherent state) such that
\[ \mathcal{E}_N^{\text{qed}}(\psi\otimes\Theta) = \mathcal{E}_N(\psi,\vektor{A}_{cl}) \]
for all \(\psi\in \mathfrak{D}_N.\)
\end{lemma}
\begin{proof}
Pick \(\vektor{A}_{cl}\in \mathcal{A}\cap L^2(\field{R}^3)\) and define
\(\eta_{\lambda}(\vektor{k})=(|\vektor{k}|/2)^{1/2} \vektor{e}_{\lambda}(\vektor{k})\cdot\widehat{\vektor{A}}_{cl}(\vektor{k})\)
so that \(\vektor{A}_{cl}(\vektor{x})=\vektor{A}_{cl}^{+}(\vektor{x})+\vektor{A}_{cl}^{+}(\vektor{x})^{\ast}\) with
\begin{equation}\label{qapf1}
\vektor{A}_{cl}^{+}(\vektor{x})=\sum_{\lambda=1}^{2}\int dk \eta_{\lambda}(\vektor{k})\vektor{e}_{\lambda}(\vektor{k})e^{i\vektor{k}\vektor{x}}.
\end{equation}
Next set
\begin{equation*}
\Pi(\eta) := i\sum_{\lambda=1}^{2}\int d\vektor{k}
\left[\overline{\eta_{\lambda}(\vektor{k})}a_{\lambda}(\vektor{k})+
\eta_{\lambda}(\vektor{k})a_{\lambda}^{\dagger}(\vektor{k})\right]
\end{equation*}
and \(\Theta = e^{-i\Pi(\eta)}\Omega\in\mathcal{F}\). $\Theta$ is called a
coherent state; it is normalized and, most importantly, it is an
eigenvector of all annihilation operators:
\begin{equation}\label{qipf2}
a_{\lambda}(\vektor{k})\Theta = \eta_{\lambda}(\vektor{k})\Theta.
\end{equation}
From (\ref{qapf1}), (\ref{qipf2}) and the
definition of $\eta_{\lambda}(\vektor{k})$ it follows that
\begin{equation*}
\vektor{\alpha}_{\mu}\vektor{A}^{+}(\vektor{x}_{\mu})\psi\otimes\Theta =
\left(\vektor{\alpha}_{\mu}\vektor{A}^{+}_{cl}(\vektor{x}_{\mu})\otimes\vektor{1}\right) \psi\otimes\Theta
\end{equation*}
and
\begin{equation*}
\sprod{\Theta}{H_f\Theta} = \int d\vektor{k} |\vektor{k}|
\sum_{\lambda}|\eta_{\lambda}(\vektor{k})|^2=\frac{1}{2}\int d\vektor{k} k^2|\widehat{\vektor{A}}_{cl}(\vektor{k})|^2.
\end{equation*}
Inserting this in the energy proves the lemma.
\end{proof}
If an ultraviolet cutoff is introduced in the field operator
$\vektor{A}(\vektor{x})$ then stability of the first kind is restored for all
$N$ and a certain range of values for $\alpha$ and $Z_{\kappa}$.
This follows from \cite[Lemma I.5]{Bachetal1998a} and
\cite[Theorem 1]{Liebetal1997}.
\section*{Acknowledgement}
\label{sec:ack}
It is a pleasure to thank Heinz Siedentop for many discussions,
and Arne Jensen, Jan Philip Solovej and Erik Skibsted for the
hospitality at Aarhus University in August 97, where this work was
begun. M.~G.~ also thanks Michael Loss for clarifying discussions.
This work was partially supported by the European Union under
grant ERB4001GT950214 and under the TMR-network grant FMRX-CT
96-0001.
\section{Quantum Computers}
A quantum computer is a collection of 2-level systems (qubits). Thus
the quantum computer (QC) is described by a vector in a Hilbert space
which is the tensor product of 2-dimensional Hilbert spaces. With $l$
qubits this space has dimension $2^l$.
The state of an $l$-qubit quantum register can be written as a
superposition of the ``computational basis states''. These are the
states where each qubit is in one of its two basis states $\ket{0}$ or
$\ket{1}$. We label these basis states by the integer which they
represent in binary. Thus:
\begin{equation} \label{QCstate}
\ket{\mbox{register}}=\sum_{n=0}^{2^l-1} c_n \ket{n}
\end{equation}
To compute, a quantum computer makes a sequence of unitary
transformations. Each unitary transformation acts on a small set
of qubits by using ``exterior fields'' which can effectively be
treated classically. Thus if, e.g., a qubit is realized by the 2-level
approximation of an atom, we can induce some U(2) transformation by
applying an electromagnetic field with the right frequency for a
certain time and with a specific phase. If no such field is applied we
assume that the state of the QC doesn't change.
A special case of unitary transformations are permutations of the
basis states; e.g., a NOT which flips the $\ket{0}$ and $\ket{1}$
states of a qubit or a so-called controlled-NOT acting as follows on
the basis states of 2 qubits:
\begin{displaymath}
U_{\mbox{\small CNOT}}~ \ket{a,b} = \ket{a,a~ \mbox{XOR}~ b} \qquad a,b=0,1
\end{displaymath}
Together with the controlled-controlled-NOT (CCNOT or Toffoli gate)
\begin{displaymath}
\mbox{CCNOT}: \quad \ket{a,b,c} \quad \to
\quad \ket{a,b,(a~ \mbox{AND}~ b)~ \mbox{XOR}~ c} \quad ,
\end{displaymath}
we get a so-called ``universal set'' of gates for what is called
``reversible computation''. In reversible computation every gate has
as many output bits as input bits and because the gate has to be
reversible (1 to 1), only permutations of the possible input states
are allowed. It is not difficult to see that with the above 3 gates
we can compute any function of a binary input, just as we can
with conventional computation. (The CCNOT can be used as AND.)
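As a classical sketch (Python; function names are mine), the three permutation gates act on bits as follows:

```python
def not_(a):
    # NOT: flips a single bit
    return 1 - a

def cnot(a, b):
    # controlled-NOT: |a,b> -> |a, a XOR b>
    return a, a ^ b

def ccnot(a, b, c):
    # Toffoli gate: |a,b,c> -> |a, b, (a AND b) XOR c>
    return a, b, (a & b) ^ c
```

With the target bit $c$ initialized to 0, the Toffoli leaves $a$ AND $b$ in the target, which is why this set suffices for universal reversible computation.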
When starting with a superposition of all ``computational'' basis
states, such as (\ref{QCstate}) the quantum computer can compute a
superposition of outputs of a function for all possible $2^l$
inputs. Now, at the end of the quantum computation we have to make a
measurement on the quantum computer. We measure for each qubit whether
it is in state $\ket{0}$ or $\ket{1}$. Thereby we collapse the state
of the QC onto a computational basis state $\ket{n}$ with probability
$|c_n|^2$.
Doing this to a superposition of function values is not by itself
of interest. The trick is to look for interference between the
computational basis states. For this we have to add ``non-classical''
gates to the above ones, that is, gates which transform computational
basis states into a superposition thereof.
Shor's quantum algorithm for factoring large integers \cite{shor1}
first computes a superposition of functional values and then applies a
number of ``non-classical'' gates before measuring the QC. The
non-classical gates bring the QC into a state where only about square
root of the computational basis states have a sizable amplitude
(coefficient $c_n$), thus the final measurement will pick one of
them. The observed basis state $\ket{n}$ will thus have a ``random
component'' but also carries some information which can be used to
solve the mathematical problem at hand. Shor's algorithm is described
in more detail in section \ref{shor}.
\section{Possible technical realizations: ions in an electromagnetic trap}
One proposal \cite{cirac} for building a quantum computer is to use a
linear ion trap. In such a trap a number of ions will line up along
the $z$-axis. Along the $x$- and $y$-axis they are strongly confined
by a high-frequency electric field that switches between being
confining in the $x$-direction and deconfining in the $y$-direction
and vice versa. The net effect is that the ions are confined strongly
in both directions. In the $z$-direction the ions are confined by a
relatively weak harmonic electric potential. Due to their mutual
electrostatic repulsion the ions will form a string with separations
of the order of micrometers.
Each ion represents a qubit. By shining laser light at an individual
ion, U(2) transformations can be applied to that qubit. For
``universal quantum computation'' we also need, at the least, to be able to
induce unitary transformations on a pair of ions such that initial
product states become non-product (``entangled'') states.
To do this, the ions have to be cooled so that they occupy the lowest
energy state of their motional degrees of freedom in the confining
potential.
It is not so difficult to do this for the strongly confining $x$- and
$y$-directions. For the $z$-direction, instead of looking at the
individual motions of the ions, one looks at collective motions,
``normal modes'' which are like uncoupled harmonic oscillators. The
lowest such mode is the ``center-of-mass'' mode where the ions can be
imagined to oscillate synchronously without changing their spacings.
The idea now is to use the two lowest states of this center-of-mass
oscillator as a ``bus-qubit''. The internal state of an ion can be
coupled to the ``center-of-mass'' degree of freedom, e.g., by shining
laser light at it with a frequency which will take the ground state of
the ion and the center-of-mass motion to the first excited state of
both.
This has been done experimentally with a single ion. Also pictures of
some 30 ions aligned in a linear ion trap (and fluorescing in laser
light) have been obtained, but presently the main problem is to cool
such a string of ions to the motional ground state. One tries to
achieve this with laser cooling (Doppler cooling and others); see, e.g.,
\cite{monroe}.
\section{Quantum algorithms}
\subsection{factoring large integers} \label{shor}
The quantum factoring algorithm arguably is the only known case where
a quantum computer could solve an interesting problem much faster than
a conventional computer. Actually the computation time is a small
power ($2^{nd}$ or $3^{rd}$) of the number of digits of the integer to
be factored. The fastest known classical integer-factoring algorithms
use super-polynomial time, and it is believed that no polynomial-time
classical algorithm exists.
Shor's quantum factoring algorithm relies on the fact that factoring
can be ``reduced'' to the problem of finding the period of the
following periodic function:
\begin{displaymath}
f(x)=a^x~ \mbox{mod}~ N \quad ,
\end{displaymath}
where $N=p\cdot q$ is the number to be factored and $a$ is essentially
an arbitrary constant. Note that Euler's generalization of Fermat's
little theorem states that \mbox{$a^{(p-1)(q-1)} \bmod (pq)
=1$} for $a$ coprime to $pq$. Thus $(p-1)(q-1)$ is a multiple of the period of $f(x)$. It
should therefore be plausible that there are efficient ways to get the
factors $p$ and $q$ from the period.
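The classical reduction can be sketched in Python (a toy instance; the period itself is what the quantum computer would supply, and the brute-force `period` below only stands in for it):

```python
from math import gcd

def period(a, N):
    # classically find the period of f(x) = a^x mod N (exponentially slow
    # in general; this is the step the quantum computer speeds up).
    # Assumes gcd(a, N) = 1 so that the period exists.
    x, y = 1, a % N
    while y != 1:
        x, y = x + 1, (y * a) % N
    return x

def factors_from_period(a, N):
    r = period(a, N)
    if r % 2:                      # need an even period
        return None
    y = pow(a, r // 2, N)
    if y == N - 1:                 # trivial square root of 1: try another a
        return None
    return gcd(y - 1, N), gcd(y + 1, N)
```

For example, with $N=15$ and $a=7$ the period is 4, and the gcd's of $7^2 \pm 1$ with 15 yield the factors 3 and 5.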
After computing a superposition of the form $\sum \ket{x,f(x)}$, the
period can be found by employing the ``quantum Fourier transform''
which applies the discrete Fourier transform to the $2^l$
amplitudes of a quantum register.
To obtain $\ket{x,f(x)}$ with reversible computation we have to make a
little detour. By using additional qubits in the state 0, we can use
CNOT and CCNOT to compute XOR and AND, but we also produce unwanted
output bits. In a superposition, the unwanted qubits will be
quantum-correlated (entangled) with the wanted qubits. Observing and
resetting the unwanted qubits doesn't work, as we thereby also
collapse the wanted part of the QC. The trick is to first compute
$f(x)$ including the garbage $g(x)$, then copy $f(x)$ into a ``save''
register and then undo the first step, which is of course possible in
reversible computation:
\begin{displaymath}
\ket{x,0,0,0} ~\to~ \ket{x,f(x),g(x),0} ~\to~ \ket{x,f(x),g(x),f(x)}
~\to~ \ket{x,0,0,f(x)}
\end{displaymath}
The copying of $f(x)$ in the second step can be done with a sequence of
CNOT's.
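The three-step detour can be mimicked classically (Python sketch; `compute_with_garbage` is a hypothetical subroutine standing in for the first reversible stage):

```python
def compute_with_garbage(x):
    # stand-in for the first reversible stage: produces f(x) plus garbage g(x)
    f = pow(7, x, 15)
    g = x ^ f          # some arbitrary garbage left in the work register
    return f, g

def clean_compute(x):
    f, g = compute_with_garbage(x)   # |x,0,0,0>  ->  |x,f(x),g(x),0>
    save = 0 ^ f                     # CNOT-copy f into the save register -> |x,f,g,f>
    f, g = 0, 0                      # run the first stage backwards      -> |x,0,0,f>
    return x, f, g, save
```

The work registers end in the state 0 regardless of $x$, so in a superposition they factor out instead of staying entangled with the wanted qubits.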
So by starting with a superposition with equal amplitudes of all $x$,
we get:
\begin{displaymath}
\frac{1}{\sqrt{2^l}} \sum_{x=0}^{2^l-1} \ket{x,0} \quad \to \quad
\frac{1}{\sqrt{2^l}} \sum_x \ket{x,a^x \bmod N}
\end{displaymath}
For the following it is easier to imagine that now we measure the
second register, but it is not necessary. After such a measurement the
first register will be in a superposition of all $x$ that give the
measured output value. The value of the smallest such $x$ is random,
but the spacing of the following values is the period of $f(x)$ which
we want to know. Thus the amplitudes in the first register are peaked
with constant spacings between the peaks, but everything is shifted by
a random value. When applying the quantum Fourier transform to the
first register we will again get amplitudes that are peaked at regular
intervals, but now the first peak is at the origin and the random
shift in the previous peaks only shows up as some complex phase of the
peaks. (To get sharp peaks we choose the size of the $x$-register such
that $2^l$ is at least of the order of the period squared.) By
measuring the Fourier transformed register, and repeating the whole
quantum computation a few times, one obtains the spacing of the peaks.
The quantum Fourier transform is done by using the fast Fourier
transform algorithm (FFT), which applies very naturally to
transforming the amplitudes of a quantum register. Actually it can be
done so efficiently that it is negligible for the overall computation
time.
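A small numerical illustration (Python/NumPy; the classical FFT stands in for the quantum Fourier transform, and the "measurement" of the second register is imagined as in the text): simulate the first register, Fourier transform the amplitudes, and read the period off the peak spacing.

```python
import numpy as np

size, a, N = 256, 7, 15                 # 2^l = 256 is well above period**2
measured = 1                            # suppose the second register gave f(x) = 1
amps = np.zeros(size, dtype=complex)
xs = [x for x in range(size) if pow(a, x, N) == measured]
amps[xs] = 1 / np.sqrt(len(xs))         # comb of equally spaced peaks
spectrum = np.abs(np.fft.fft(amps))**2  # peaks now at multiples of size/period
first_peak = 1 + np.argmax(spectrum[1:])
found_period = size // first_peak       # the period of 7^x mod 15
```

The spectrum has sharp peaks at $k = 0, 64, 128, 192$, i.e. spaced by $2^l/r$, from which the period $r = 4$ follows.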
\subsection{unstructured search}
To search through $N$ cases, there is a simple quantum algorithm
\cite{grover} that takes some $\sqrt{N}$ steps. This is not a very
strong improvement over the classical case with $N$ steps. Furthermore
it can be shown that for this problem no better quantum algorithm
exists \cite{bennett, zalka3}. To prove this, the unstructured search
is formalized with a so-called ``oracle'', a black-box subroutine
which gives output 1 only for one out of all possible inputs.
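The $\sqrt{N}$ behaviour is easy to see in an amplitude-level simulation (Python/NumPy sketch; the oracle is modelled as a phase flip on the marked item, and the names are mine):

```python
import numpy as np

def grover_success(n_items, marked, iterations):
    amps = np.full(n_items, 1 / np.sqrt(n_items))
    for _ in range(iterations):
        amps[marked] *= -1.0              # oracle call: phase-flip the marked state
        amps = 2 * amps.mean() - amps     # "inversion about the mean" (diffusion)
    return amps[marked] ** 2              # probability of measuring the marked item

n = 1024
k = int(np.pi / 4 * np.sqrt(n))           # ~ (pi/4) sqrt(N) oracle calls suffice
```

After $k \approx 25$ iterations the success probability is close to 1, whereas a classical search of 1024 cases needs of order 1024 queries.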
\subsection{simulating arbitrary quantum systems}
The amplitudes of the computational basis states of a QC can be made
to follow the time evolution of the amplitudes of essentially any
quantum system \cite{zalka1, wiesner}. Obtaining information about the
quantum state is then of course restricted by the same fundamental
quantum principles as it is for the original quantum system.
Say we have a quantum mechanical Hamilton operator that is a sum of
a kinetic and a potential term, thus a sum of a term which is a function
of momentum operators and a term that is a function of position
operators. We now discretize the wave function of this quantum
mechanical system and ``store'' it as the amplitudes of the
computational basis states of the QC. The point is that we can go to
momentum space by applying the quantum Fourier transform. Then we will
have the discretized wavefunction in momentum space. Time evolution is
implemented by evolving the wavefunction for a short time only
according to the potential term, then go to momentum space and evolve
according to the kinetic term, and so on.
Evolving the wavefunction for a short time according to e.g. only the
potential term, amounts to multiplying with a complex phase. We have
to carry out a transformation of the form $\ket{x} \to e^{i f(x)}
\ket{x}$, which can be done quite easily.
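A minimal split-operator step along these lines (Python/NumPy sketch; units $\hbar = m = 1$ are my choice, and the classical FFT again stands in for the quantum Fourier transform):

```python
import numpy as np

def split_step(psi, x, k, V, dt):
    # one Trotter step: half potential phase, kinetic phase in momentum space,
    # half potential phase; every factor is a diagonal phase |x> -> e^{i f(x)} |x>
    psi = np.exp(-0.5j * V(x) * dt) * psi
    psi = np.fft.fft(psi)                  # go to momentum space
    psi = np.exp(-0.5j * k**2 * dt) * psi  # kinetic term, diagonal in momentum
    psi = np.fft.ifft(psi)
    psi = np.exp(-0.5j * V(x) * dt) * psi
    return psi

# smoke test: a Gaussian packet in a harmonic potential
n = 256
x = np.linspace(-10, 10, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=x[1] - x[0])
psi = np.exp(-x**2 / 2)
psi /= np.linalg.norm(psi)
for _ in range(100):
    psi = split_step(psi, x, k, lambda x: 0.5 * x**2, 0.01)
```

Since every factor is unitary, the norm of the discretized wavefunction is preserved to machine precision.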
\section{Quantum error correcting codes}
To carry out a long quantum computation seems to require very precise
operations and low noise. From imprecisions in the applications of the
exterior fields (lasers, etc.) quantum gates will always be somewhat
different from the intended unitary operations. In this respect we have the
same problems as with an analog computer where, contrary to digital
hardware, slightly deviating values are not automatically reset to a
standard value. Also it is very difficult to isolate the degrees of
freedom of a quantum computer from the environment.
Therefore quantum computation might well be practically impossible,
were there not the possibility of quantum error correction.
Interestingly, it is possible to correct (continuous) quantum
amplitudes much better than e.g. continuous classical quantities in an
analog computer.
To simplify things, the following discussion will not encompass the
most general possibilities for quantum error correction. I will
describe how the 5-qubit error correcting code \cite{cesar} works, which
has been shown to be the shortest possible quantum code.
So we want to encode (and thus protect) a qubit in 5 qubits. The code
is a 2-dimensional subspace of the 32-dimensional Hilbert space of the
5 qubits. How can we correct for errors? Making a full (no degenerate
eigenvalues) measurement of the code in order to correct for errors is
not good as this will collapse the (generally unknown) encoded
qubit. The trick is to only measure what error has affected the code
without learning anything about the encoded qubit. The eigenspaces of
the error measurement have to be at least 2 dimensional so that the
encoded quantum information will not collapse.
An error correcting code can of course only correct for some errors,
ideally the most probable ones. The standard assumption is that the
probabilities of errors affecting different physical qubits are
independent. Then it makes sense to correct for all errors which
affect just one qubit.
For every qubit there are 3 such errors, namely bit flips, phase flips
and bit-phase flips, corresponding to the 3 Pauli matrices. We must
also take into account that no error may have happened. Then for the
5-qubit code we have $5 \cdot 3 +1$ possible ``errors''. After having
been exposed to noise the code will in general be in a superposition
of the 16 resulting states, but a measurement of the error will
collapse it to one of these states. Thus for convenience we can simply
imagine that one of the 16 errors has happened.
Now let's see what conditions the code (= the 2-dim. code subspace)
has to fulfill. The condition is that the 16 images under the error
operators of the 2-dim. code subspace have to be pairwise
orthogonal. This makes it possible to construct an ``error
observable'' with these eigenspaces. Note that the 16 mutually
orthogonal 2-dim. subspaces ``fill'' the 32 dimensional code Hilbert
space, which is why the 5 qubit code is called ``perfect''.
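The counting behind ``perfect'' is a one-liner (Python sketch): $n$ physical qubits admit $3n+1$ correctable error classes, each needing a 2-dimensional eigenspace, so one needs $2(3n+1)\leq 2^n$ -- and $n=5$ is the smallest integer that works, with equality.

```python
def fits(n):
    # quantum Hamming bound for one encoded qubit and single-qubit errors:
    # (3n + 1) error classes, each with a 2-dimensional eigenspace
    return 2 * (3 * n + 1) <= 2 ** n

smallest = min(n for n in range(1, 10) if fits(n))   # 5, with 2*16 = 32 = 2**5
```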
Once we have determined which error has happened we simply undo this
error (it's a unitary transformation). It may technically not be
feasible to directly measure the error observable, but one can apply a
series of quantum gates such that the error observable then
corresponds to simply measuring 4 of the 5 qubits.
Such codes can, e.g., be used to store an (unknown) quantum state or
transmit it through a noisy transmission line. In quantum computing we
also must be concerned about the noisiness of the error correction
operations. Also, we never want to decode the processed quantum
information, as this would expose it to noise. Schemes have been
developed for ``fault tolerant quantum computing'' where error
correction operations can increase the probability of having an
undisturbed state even though they are noisy themselves, and
operations (quantum gates) can be applied to encoded qubits such that
again some level of noise can be tolerated. If we imagine one of our
standard errors $E_i$ to happen suddenly with some probability, then
such schemes can tolerate a certain number of such errors before the
quantum computation goes wrong, provided that the errors don't
cluster too much. Without fault tolerant quantum computing, a single
error would already be too much.
One can think of iterating the encoding procedure, thus, e.g., one
qubit would be encoded in five qubits which in turn would each be
encoded in five qubits. It has been shown that with such a scheme
arbitrarily long quantum computations could be carried out once the
rate of errors per physical qubit is below a certain level of the
order of $10^{-4}$ per operation (see e.g. \cite{zalka2}).
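The qualitative effect of concatenation can be sketched as follows (Python; this double-exponential suppression formula is the standard heuristic for concatenated codes, not a result taken from the text, and the threshold value is illustrative):

```python
def logical_error(p, p_th=1e-4, levels=1):
    # each concatenation level roughly squares the error relative to the
    # threshold: p_L ~ p_th * (p / p_th) ** (2 ** L); so p < p_th is driven
    # to zero with the number of levels, while p > p_th blows up
    return p_th * (p / p_th) ** (2 ** levels)
```

For a physical error rate below threshold, a few levels of encoding already suppress the logical error rate by many orders of magnitude.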
\section{Recommended papers}
{\bf introductions:} \\
\cite{cipra} is a semi-popular short introduction. \\
Also John Preskill gives a lecture on quantum computation at Caltech.
The Web site \cite{preskill} contains the lecture notes and other
useful information and links. \\ \\
{\bf factoring:} \\
Shor's paper \cite{shor1} is well written but long. A short account of
Shor's algorithm is also given in \cite{laf}. \\ \\
{\bf error correction:} \\
In \cite{shor3} Shor describes how his 9-qubit quantum error
correcting code works. This was the first quantum code. \\ \\
{\bf fault tolerant quantum computing:} \\
In \cite{shor2} Shor describes how to carry out fault tolerant error
correction and fault tolerant operations on the 7-qubit and related
codes.
\section{Introduction}
The possibility of the photon splitting process in strong
magnetic fields around $B_c = 4.414\times 10^{13}$~Gauss
was first noted in the early seventies (\cite{adler,bb70}).
Adler (1971) found that only one type of splitting is allowed,
namely $\perp \rightarrow \parallel +\parallel$, where a
perpendicularly polarized photon splits into two parallel
polarized photons. This calculation took into account the
vacuum dispersion and also the influence of matter with density
that of a pulsar magnetosphere. The normal modes are polarized
linearly in such conditions. The photon splitting absorption
coefficient for this allowed reaction is
\begin{equation}
\kappa = 0.12 \left( {B\over B_c}\sin\theta\right)^6
\left({\hbar\omega\over
mc^2}\right)^5\, .
\label{adler}
\end{equation}
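For orientation, the scaling in equation~(\ref{adler}) is trivial to evaluate numerically (Python sketch; the function name is mine, and the result carries whatever units the prefactor 0.12 does):

```python
import math

def kappa_split(b, e, theta):
    """Photon-splitting coefficient of eq. (adler):
    b = B/B_c, e = hbar*omega/(m c^2), theta = angle to the field."""
    return 0.12 * (b * math.sin(theta)) ** 6 * e ** 5
```

The very steep $B^6 \omega^5$ dependence is what makes the process negligible at ordinary pulsar fields but important for magnetar-strength fields.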
Since then a number of authors have derived the matrix elements,
and absorption coefficients for this process using different
approaches (Papanyan \& Ritus 1972, Stoneham 1979) and also
recently (Baier, Milshtein \& Shaisultanov 1996, Baring \&
Harding 1997). These papers have settled the controversy
sparked by Mentzel, Berg \& Wunner (1994) who
suggested that the photon splitting rate may actually be a few
orders of magnitude higher than previously thought.
This exotic process has found a number of astrophysical
applications. A natural environment where it may play a role is
provided by neutron stars, since a number of them have magnetic
fields in excess of $10^{12}$~Gauss, and about a dozen of radio
pulsars has spin down fields larger than $10^{13}$~Gauss. it
has been found that photon splitting plays an important role in
the formation of the gamma ray spectrum of PSR~1509-58, where it
inhibits emission above 1~MeV (\cite{HBG97}). Soft gamma-ray
repeaters (SGR) form another class of objects where photon
splitting has been suggested to play an important role. There
were three firmly established SGRs, and a fourth source
was recently discovered (\cite{Kouv97}). All of them are
characterized by short durations, below a second, and spectra
with cut-offs around $30$~keV. A number of arguments have been
presented (\cite{TD95}) pointing that these sources are actually
{\em magnetars}, i.e. neutron stars with magnetic fields
reaching $10^{15}$~Gauss, for which the main pool of energy is
the magnetic field. Baring (1995) suggested that photon
splitting may be responsible for the spectral cutoffs in the
spectra of SGRs, invoking a possibility of photon splitting
cascades. Thompson and Duncan (1995) showed that even in the
absence of cascades photon splitting influences the shape of the
spectra of SGRs.
In this work we investigate the process of photon splitting
not only in vacuum but also in the presence of matter with the
density like that in a neutron star atmosphere, when the plasma
dispersion may be important and for some range of propagation
directions the normal modes are polarized circularly.
Plasma effects can have a very substantial
influence on the absorption coefficients. For example, Bulik
and Miller (1997) have shown that plasma effects lead to a
broad absorption-like feature below $\approx 10$~keV for a
large range of SGR emission models. In section 2 we analyze the
polarization of the normal modes in the strongly magnetized
plasma, in section 3 we calculate the photon splitting matrix
elements and absorption coefficients, in section 4 we present
the results and we discuss them in section 5.
\section{Polarization of the normal modes, refraction coefficient}
We use the normal mode formalism to describe
opacities in strong magnetic field.
The dispersion equation is
\begin{equation}
\vec k \times \left[ \mu^{-1} \left( \vec k \times \vec E\right)\right] +
\left({\omega\over c}\right)^2 \epsilon \vec E =0 \, ,
\label{maxwell}
\end{equation}
where $\epsilon$ is the electric permittivity tensor, $\mu$ is
the magnetic permeability tensor and $\vec E$ is the electric
field of the wave.
Equation~(\ref{maxwell}) has been discussed by,
e.g., \cite{Ginzburg} and Meszaros (1992). In general
equation~(\ref{maxwell}) has
three solutions: one representing the longitudinal
plasma oscillations that do not propagate (Meszaros 1992, p.69),
and two perpendicular representing the electromagnetic waves.
In the presence of the magnetic field there exist
two nondegenerate wave solutions of equation~(\ref{maxwell}),
and therefore the medium is birefringent.
We follow the convenient
method introduced by Gnedin~\&~Pavlov~(1974) for the
case of tenuous plasma to solve equation~(\ref{maxwell}).
The roots of the dispersion equation are
\begin{equation}
n_j = n_I \pm \sqrt{ n_L^2 + n_C^2}\, , \label{disp-root}
\end{equation}
where $j=1,2$ indicates which normal mode is selected. The
refraction coefficients are given by the real part of $n_j$, and
the total absorption coefficients are proportional to the
imaginary part of $n_j$; $\xi_j= (2\omega/c){\rm Im}(n_j)$, and
\begin{equation}
\begin{array}{rcl}
n_I \equiv& 1 + {1\over 4} (\epsilon_{xx} + \epsilon_{yy}\cos^2\theta
+\epsilon_{zz}\sin^2\theta-\epsilon_{xz}\sin 2\theta\\[3mm]
& -\mu^{-1}_{xx} -\mu^{-1}_{yy}\cos^2\theta-\mu^{-1}_{zz}\sin^2\theta +
\mu^{-1}_{xz}\sin 2\theta ) \; ,
\end{array}
\label{nI-def}
\end{equation}
\begin{equation}
\begin{array}{rcl}
n_L \equiv& {1\over 4}(\epsilon_{xx} - \epsilon_{yy}\cos^2\theta
-\epsilon_{zz}\sin^2\theta+\epsilon_{xz}\sin 2\theta\\[3mm]
& +\mu^{-1}_{xx} -\mu^{-1}_{yy}\cos^2\theta-\mu^{-1}_{zz}\sin^2\theta +
\mu^{-1}_{xz}\sin 2\theta ) \; ,
\end{array}\label{nL-def}
\end{equation}
\begin{equation}
n_C \equiv {i\over 2} ({ \epsilon_{xy} \cos\theta
+\epsilon_{xz}\sin\theta})\, .
\label{nC-def}
\end{equation}
Here $\mu^{-1}_{ab}$ is the $ab$ component of the inverse
$\mu$ tensor, and we use the system of coordinates
with the magnetic field along the $z$-axis, and
the wave vector in the $yz$ plane at an angle $\theta$ to the
magnetic field.
We describe the polarization of the normal modes by the position angle
$\chi_j$ between the major axis of the polarization ellipse and the projection
of the magnetic field $\vec B$ on the plane perpendicular to the wave vector
$\vec k$, and the ellipticity ${\cal P}$, whose modulus is equal to the ratio of
the minor axis to the major axis of the polarization ellipse, and whose sign
determines the direction of rotation of the electric field:
\begin{eqnarray*}
{\cal P}_j = {r_j -1\over r_j+1}\, , &
r_j\exp(2i\chi_j) = {\displaystyle
-n_C \pm \sqrt{ n_L^2 + n_C^2}\over \displaystyle n_L}\,.
\end{eqnarray*}
The polarization vectors can be described by a complex variable
$b$, or by two real parameters $q$ and $p$ (Pavlov, Shibanov, \&
Yakovlev 1980)
\begin{equation}
b \equiv q+ ip = {n_L\over n_C}\, . \label{qpdef}
\end{equation}
Let us consider a wave travelling in the
direction described by $\theta$ and $\phi$
in the coordinate system with the magnetic field along the
$z$-axis, and the $x$-axis chosen arbitrarily.
The wave vector is $\vec k = k (\sin\theta\cos\phi,
\sin\theta\sin\phi, \cos\theta)$.
In the rotating coordinate system
$e_\pm = 2^{-1/2}(e_x \pm i e_y )$, $e_0 =e_z$,
the polarization vectors are
\begin{equation}
\begin{array}{rcl}
e_\pm^j &=& i^{j+1} {1\over\sqrt{2}} C_j e^{\mp i\phi}\left( K_j\cos\theta \pm
1\right)\\[3mm]
e_0^j &= & C_j K_j \sin\theta
\end{array}
\label{vectors}
\end{equation}
where
\begin{equation}
K_j = b \left[ 1 + (-1)^j \left(1 + b^{-2}\right)^{1/2}\right]
\label{K-def}
\end{equation}
and $C_j = (1+ |K_j|^2)^{-1/2}$. In general $K_j$ are complex.
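A quick numerical consistency check on (\ref{K-def}) (Python sketch; function names are mine): the two modes always satisfy the algebraic identity $K_1 K_2 = -1$, for any complex $b\neq 0$.

```python
import cmath

def K(j, b):
    # eq. (K-def): K_j = b [ 1 + (-1)^j sqrt(1 + b^{-2}) ], j = 1, 2
    return b * (1 + (-1) ** j * cmath.sqrt(1 + b ** -2))

def C(j, b):
    # normalization constant C_j = (1 + |K_j|^2)^{-1/2}
    return (1 + abs(K(j, b)) ** 2) ** -0.5
```

The identity follows from $(1+s)(1-s) = -b^{-2}$ with $s = (1+b^{-2})^{1/2}$.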
\subsection{Vacuum polarization effects}
We use the convention where we label the polarization
state by the direction of the electric field vector of the
wave in relation to the external magnetic field. Adler (1971)
used a different convention using the magnetic,
not electric field of the wave to define polarization.
The refraction indices in strong magnetic field have been derived
by Adler (1971). The result is
\begin{equation}
n_{\parallel,\perp}^{vac} = 1 - {1\over 2}\sin^2\theta A^{\parallel,\perp}
(\omega\sin\theta, B)\, ,
\label{adler-n}
\end{equation}
where the functions $A^{\parallel,\perp}$ are rather lengthy double integrals
given by equation~(51) in Adler (1971):
\begin{equation}
A^{\parallel,\perp} (\omega, B) =
{\alpha\over 2\pi}
\int_0^\infty {ds\over s^2} \exp(-s) \int_0^s dt \exp[\omega^2 R(s,t)]
J^{\parallel,\perp}(s,v)
\end{equation}
and $v= 2t s^{-1} -1$. The remaining functions $J^{\parallel,\perp}$ are
\begin{equation}
J^\perp (s,v) = - {Bs \cosh (Bsv)\over \sinh(Bs)}
+ {Bsv \sinh(Bsv)\coth(Bs)\over \sinh(Bs)}
- {2Bs[\cosh(Bsv) - \cosh(Bs)] \over \sinh^3(Bs)}
\end{equation}
\begin{equation}
\displaystyle J^{\parallel} = \displaystyle {Bs\cosh(Bsv)\over \sinh(Bs)}
\displaystyle -Bs\coth(Bs)\left[ 1- v^2 + v {\sinh(Bsv)\over \sinh(Bs)}
\right] \, .
\end{equation}
The function $R$ is given by
\begin{equation}
R(s,t) = {1\over 2} \left[ 2t \left( 1- {t\over s} \right) +
{\cosh(Bsv) -\cosh(Bs) \over B \sinh(Bs)}\right] \, .
\end{equation}
The refraction indices $n_{1,2}^{vac} = 1+ \eta \sin^2\theta$
have been calculated for
$\hbar\omega\ll m_ec^2$
and arbitrary magnetic field by Tsai \& Erber (1974).
In the low-field limit, $B\ll B_c$,
\[
\begin{array}{rcl}
\eta_\parallel(h)&\approx & \displaystyle{14\over 45} h^2 - {13\over 315} h^4 \\[3mm]
\eta_\perp(h)&\approx &\displaystyle {8\over 45} h^2 - {379\over 5040} h^4 \, ,
\end{array}
\]
and when $B\gg B_c$
\[
\begin{array}{rcl}
\eta_\parallel (h)&\approx & \displaystyle
{2\over 3} h + \left( {1\over 3} + {2\over 3}\gamma
-8L_1\right) \\[3mm]
\eta_\perp (h) &\approx &\displaystyle {2\over 3} - h^{-1}
\ln(2h)\, ,
\end{array}
\]
where $h=B/B_c$, $L_1 = 0.249\ldots$, and $\gamma = 0.577\ldots$ is
Euler's constant (Tsai \& Erber 1974).
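The limiting expressions above are straightforward to transcribe into code. The sketch below is a Python transcription of the quoted Tsai \& Erber expansions (the function names are ours, not from the literature); it makes the linear growth of $\eta_\parallel$ and the saturation of $\eta_\perp$ at strong fields explicit.

```python
import math

EULER_GAMMA = 0.5772156649015329   # Euler's constant
L1 = 0.249                         # constant quoted by Tsai & Erber (1974)

def eta_weak(h):
    """Weak-field expansions (B << B_c), with h = B/B_c."""
    eta_par = 14/45 * h**2 - 13/315 * h**4
    eta_perp = 8/45 * h**2 - 379/5040 * h**4
    return eta_par, eta_perp

def eta_strong(h):
    """Strong-field asymptotics (B >> B_c)."""
    eta_par = 2/3 * h + (1/3 + 2/3 * EULER_GAMMA - 8 * L1)
    eta_perp = 2/3 - math.log(2 * h) / h
    return eta_par, eta_perp
```

In both limits the parallel mode refracts more strongly than the perpendicular one, and for $h\gg 1$ the perpendicular coefficient approaches the constant $2/3$.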
Thus, in the limit of a strong magnetic field, refraction in the
parallel mode grows linearly with the field, while that in the
perpendicular mode is nearly constant. We present the dependence
of the vacuum refraction coefficients on the magnetic field in
Figure~\ref{n-of-b}.
We have evaluated the integrals of equation~(\ref{adler-n})
numerically.
Figure~\ref{n-of-om} shows the dependence
of the refraction coefficients on the photon energy for a few
magnetic fields.
\subsection{Refraction coefficient in the system of plasma and vacuum}
Plasma in a strong magnetic field
can be described by the dielectric tensor
\[
\epsilon_{ab} = \epsilon_{ab}^{vac} +\epsilon_{ab}^{p}- \delta_{ab}\, ,
\]
and the vacuum permeability tensor. The plasma dielectric tensor
can be expressed as $\epsilon_{ab}^{p} = \displaystyle\delta_{ab} -
\left(\omega_p^2\over \omega^2\right)
\Pi_{ab}$, where $\omega_p = \displaystyle \left( {4\pi N e^2\over m}
\right)^{1/2} $ is the plasma frequency and $\Pi_{ab}$ is
the plasma polarization tensor.
The plasma polarization tensor is diagonal in the rotating coordinates,
and for a cold electron plasma it is given by
\begin{equation}
\Pi_{\alpha\alpha}= {\omega\over \omega + \alpha\omega_B -i \gamma_r}
= {\omega \over \omega_t +\alpha\omega_B} \, ,
\label{coldPi}
\end{equation}
where $\alpha=-1,0,+1$, and $\gamma_r= (2/3)(e^2/mc^3)\omega^2$ is
the radiative width, and we denote $\omega_t = \omega -i\gamma_r$.
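A minimal numerical transcription of equation~(\ref{coldPi}) might look as follows (a Python sketch with our own function names; we assume natural units $\hbar=c=m_e=1$, in which $e^2$ equals the fine-structure constant).

```python
ALPHA_FS = 1.0 / 137.036   # fine-structure constant (hbar = c = m_e = 1)

def gamma_r(omega):
    """Radiative width, gamma_r = (2/3) e^2 omega^2 in natural units."""
    return (2.0 / 3.0) * ALPHA_FS * omega**2

def Pi_cold(omega, omega_B, a):
    """Diagonal cold-plasma polarization tensor of eq. (coldPi);
    a = -1, 0, +1 labels the rotating components."""
    omega_t = omega - 1j * gamma_r(omega)
    return omega / (omega_t + a * omega_B)
```

Far from the cyclotron resonance the components are close to unity, while at $\omega=\omega_B$ the $a=-1$ component is limited only by the radiative width.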
Inserting equation~(\ref{coldPi}) into
equations~(\ref{nL-def}) and~(\ref{nC-def}), we obtain
\begin{eqnarray}
n_I &=& 1 - {1\over 4}{\omega_p^2\over \omega} \left[
(1+\cos^2\theta) {\omega_t\over \omega_t^2-\omega_B^2}
+ \sin^2\theta {1\over\omega_t}
\right] + {1\over 4} (A^{\parallel}(\omega \sin\theta, B)+
A^{\perp}(\omega \sin\theta, B)) \, ,
\label{nI-sys}
\end{eqnarray}
\begin{equation}
\begin{array}{l}
\displaystyle n_L = -{\sin^2\theta\over 4} \times
{\displaystyle\left\{{\omega_p^2 \over \omega\omega_t}
{\omega_B^2\over \omega_t^2-\omega_B^2 }
+[A^{\parallel}(\omega \sin\theta, B) -
A^{\perp}(\omega \sin\theta, B) ]\right\}}
\label{nL-sys}
\end{array}
\end{equation}
and
\begin{equation}
n_C= -{1\over 2} {\omega_p^2 \over \omega} \left(\omega_B\over
\omega_t^2 - \omega_B^2\right)\cos\theta\, .
\label{nC-sys}
\end{equation}
Inserting equations~(\ref{nL-sys}) and~(\ref{nC-sys}) into
equation~(\ref{qpdef}), and neglecting terms proportional
to $\gamma_r^2$ we obtain
\begin{equation}
\begin{array}{l}
\displaystyle q = {\sin^2\theta \over 2\cos\theta}{\omega_B\over\omega}
\displaystyle \left[ 1-(A^{\parallel}(\omega \sin\theta, B) -
A^{\perp}(\omega \sin\theta, B))
{\omega^2\over\omega_p^2} \left( {\omega^2\over\omega_B^2} -1\right)
\right]
\end{array}
\label{q-sys}
\end{equation}
and
\begin{equation}
\begin{array}{l}
\displaystyle p= {\sin^2\theta \over 2\cos\theta} \times
{\displaystyle {\omega_B\gamma_r\over \omega^2}
\left[ 1 + 2 (A^{\parallel}(\omega \sin\theta, B) -
A^{\perp}(\omega \sin\theta, B)){\omega^2\over\omega_p^2}
\right] \, .}
\label{p-sys}
\end{array}
\end{equation}
The polarization vectors and refraction coefficients are determined
by equations~(\ref{nI-sys}), (\ref{nL-sys}), and~(\ref{nC-sys}),
combined with~(\ref{disp-root}) and~(\ref{vectors}).
The presence of matter also influences refraction because
of the electron cyclotron resonance at $\omega_B$.
These effects increase for propagation along the
magnetic field, and are proportional to the matter density.
For a detailed discussion see e.g. M{\'e}sz{\'a}ros (1992).
\section{Photon splitting rate}
We will use two coordinate
systems, U and U'. The $z$-axis
in the system U lies along the magnetic field, and the
wave vector of the initial photon is in the $zx$-plane.
The $z$-axis in the system U' lies along $\vec k$, the wave vector
of the initial photon, and
the magnetic field lies in the $xz$-plane.
The system U is convenient for the calculation of the matrix element
$M$ and the refraction indices $n_j$, while U' will be used for integration.
In U the wave vector of the initial photon is
$\vec k_0 = k_0(\cos\theta_0,0,\sin\theta_0)$.
The vector $\vec k_1$ in U' is $\vec k_1 = ( \cos\theta',
\sin\theta'\sin\phi', \sin\theta' \cos\phi' )$, and in U
it is $\vec k_1 = k_1 (
\cos\theta_0\cos\theta' - \sin\theta_0\sin\theta'\cos\phi',$
$\sin\theta'\sin\phi',$
$\sin\theta_0\cos\theta' + \cos\theta_0\sin\theta'\cos\phi'
)$.
Using momentum conservation we obtain in U'
$\vec k_2 = \vec k - \vec k_1$.
The polarization vector of a photon is given by
$\vec e(\vec k) = \vec e (\omega,\theta,\varphi)$. We calculate
the polarization vectors for each photon separately, given its
frequency and direction of propagation.
The photon splitting
absorption coefficient can be found by integrating the squared
$S$-matrix element over the phase space of the final states,
\begin{equation}
r = \int {1\over 2 }{1\over 2\omega} {d^3k_1\over (2\pi)^3 2 \omega_1}
{d^3k_2\over (2\pi)^3 2 \omega_2} {|S|^2 \over VT}\, .
\label{int-general}
\end{equation}
This can be expressed as
\[
r = {2\alpha^6\over (2\pi)^3} \int |{\cal M}|^2 \omega\omega_1
\omega_2 d^3k_1 d^3k_2 \delta(\sum\omega_i)
\delta( \sum\k_i) \, ,
\]
where the matrix element $\cal M$ is a function of energies
and polarization vectors of the incoming and outgoing photons.
\subsection{Matrix elements}
We use the system of units in which $\hbar=c=m_e=1$,
so the fine structure
constant is $\al=e^2$.
The magnetic field is expressed in the units of the
critical field $B_c$.
The effective Lagrangian of the electromagnetic field
with QED corrections is given by
(Berestetskii, Lifshits and Pitaevskii 1982)
\begin{equation}
\displaystyle L_{\rm eff} =
\frac{1}{8\pi^2} \int_0^\infty
\frac{e^{-\lam} \, {\rm d} \lam}{\lam^3}
{\displaystyle\times\left\{ -(\xi \cot \xi) (\eta \coth \eta) + 1 -
\frac{\xi^2 - \eta^2}{3} \right\}\, .}
\end{equation}
Here \begin{eqnarray*}
\xi &=& -\lam\frac{ie}{\sqrt{2}} \left\{
( {\cal F} + i {\cal J} )^{1/2} - ( {\cal F} - i {\cal J} )^{1/2}
\right\},
\\
\eta &=& \lam \frac{e}{\sqrt{2}} \left\{
( {\cal F} + i {\cal J} )^{1/2} + ( {\cal F} - i {\cal J} )^{1/2}
\right\} ,
\end{eqnarray*}
and $
{\cal F} = \frac{1}{2} \left( \B^2 - \E^2 \right), \;
{\cal J} = \B \E $
are the field invariants. In the low frequency approximation
the $S$-matrix element can be calculated as
\[
S_{\gamma \rightarrow \gamma_1 + \gamma_2} =
\langle \gamma \,
| \int {\rm d} \vv{r} \, {\rm d} t \, V_{\rm int} | \,
\gamma_1 \, \gamma_2 \rangle ,
\]
where the interaction operator is $V_{\rm int} = L_{\rm eff}$.
We find that the two lowest order terms of the $S$-matrix
correspond to the square and hexagon diagrams:
\begin{equation}
S_6 =
-i \frac{ 2 \alpha^3 B^3 }{(2\pi)^2\, 315 } (4\pi)^{3/2}
(2\pi)^4 \,
\om \om_1 \om_2 \,
\delta( \sum\om_i) \,
\delta( \sum\k_i ) \, M_6 \, ,
\end{equation}
and
\begin{equation}
S_4= i {\displaystyle2\alpha^2\over \displaystyle 45(4\pi)^2}
(4\pi)^{3/2}B \omega\omega_1 \omega_2 \delta( \sum\om_i) \,
\delta( \sum\k_i )\ M_4\, .
\end{equation}
The matrix elements are
\begin{eqnarray*}
M_6 & = & 48 {\cal A} +26{\cal B} + 13{\cal C} + 16\cal{D} \, ,\\
M_4 & = & 8{\cal C} + 14 {\cal D}\, ,
\end{eqnarray*}
and, denoting by $n_i$, $\n_i$, and $\e_i$ the index of
refraction, the direction of propagation, and the polarization
vector of the $i$-th photon, respectively, we obtain:
\[
{\cal A} =n_0 n_1 n_2
(\n_0 \times \e_0)_z (\n_1 \times \e_1^\ast)_z (\n_2 \times \e_2^\ast)_z
\]
\begin{eqnarray*}
{\cal B}& =&n_0 (\n_0\times \e_0)_z e_{1z}^\ast e_{2z}^\ast +
n_1 (\n_1 \times \e_1^\ast)_z e_{0z} e_{2z}^\ast +
n_2 (\n_2 \times \e_2^\ast)_z e_{0z} e_{1z}^\ast .
\end{eqnarray*}
\begin{eqnarray*}
{\cal C} & = & n_0 (\n_0 \times \e_0)_z
\{ [ n_1n_2 (\n_1\n_2) - 1 ] (\e_1^\ast \e_2^\ast) -
n_1n_2 (\n_1 \e_2^\ast) (\n_2 \e_1^\ast) \} +
\nonumber \\
&\;\;& n_1 (\n_1 \times \e_1^\ast)_z
\{ [ n_0 n_2 (\n_0 \n_2) - 1 ] (\e_0 \e_2^\ast) -
n_0 n_2 (\n_0 \e_2^\ast) (\n_2 \e_0) \} +
\nonumber \\
&\;\;& n_2 (\n_2 \times \e_2^\ast)_z
\{ [ n_0n_1 (\n_0 \n_1) - 1 ] (\e_0 \e_1^\ast) -
n_0n_1 (\n_0 \e_1^\ast) (\n_1 \e_0) \}
\end{eqnarray*}
and
\begin{eqnarray*}
{\cal D}&=&e_{0z} (n_1\n_1 - n_2\n_2 ) (\e_1^\ast \times \e_2^\ast) +
e_{1z}^\ast (n_0\n_0 - n_2\n_2 ) (\e_0 \times \e_2^\ast) +
e_{2z}^\ast (n_0\n_0 - n_1\n_1 ) (\e_0 \times \e_1^\ast)
\end{eqnarray*}
The matrix element can be expressed as
\begin{equation}
S= -i {2\alpha^3 \over (2\pi)^2} (4\pi)^{3/2} \omega\omega_1 \omega_2
{\cal M} (2\pi)^4 \delta (\sum\omega_i)
\delta(\sum\k_i)
\label{smatrix}
\end{equation}
where
\begin{equation}
{\cal M} = {B^3\over 315} M_6 + {B\over 45 \alpha^2} M_4\, .
\label{m-element}
\end{equation}
\subsection{Polarization selection rules}
In vacuum, the polarization of the normal modes is linear:
\begin{eqnarray}
\n &=& (\cos\theta, 0, \sin\theta) \nonumber \\
\e_\parallel & = & (\sin\theta, 0 ,-\cos\theta) \label{linpol}\\
\e_\perp &=& (0,1,0)\, .\nonumber
\end{eqnarray}
The functions $\cal C$ and $\cal D$
vanish when we neglect refraction; otherwise
they are of the order of $n -1$.
Using equations (\ref{linpol}) and neglecting refraction we find that
$M_4$ vanishes while
\begin{eqnarray*}
M_6 (\perp \rightarrow \perp +\perp) & =& 48 \sin^3\theta \nonumber \\
M_6 (\perp \rightarrow \parallel + \parallel ) & =& 26 \sin^3\theta \\
M_6 (\parallel \rightarrow \perp + \parallel ) & =& 26 \sin^3\theta \nonumber
\, ,
\end{eqnarray*}
and $ M_6 = 0$ for all other transitions.
In the presence of matter the functions $\cal C$ and $\cal D$ no
longer vanish. This is due to the fact that, in general, the
refraction coefficients differ from unity, polarization vectors
are elliptical rather than linear, photons in the process are
not exactly collinear, and there exists a small degree of
longitudinal polarization. The polarization-related effects are
strongest for high matter density and for propagation
along the field.
\subsection{Kinematic selection rules}
The kinematic selection rules arise from the fact that
for some transitions energy conservation
cannot be satisfied.
The kinematic selection rules can be discussed analytically
using the dispersion relation $k=n(\omega,\mu) \omega\equiv (1+\xi(\omega,\mu))\omega$.
We note that the refraction coefficient is very close to unity
(see Figures~\ref{n-of-b} and \ref{n-of-om}),
and therefore for a given $\mu$ one can
invert the dispersion relation to obtain
\begin{equation}
\omega = \left( 1- \xi(k,\mu) + O(\xi^2) \right) k \, .
\label{om-of-k}
\end{equation}
We note that for a fixed initial photon the function
$f \equiv \omega_0 -\omega_1 -\omega_2 $
is monotonically decreasing with $\mu'$, the cosine of the angle between
the initial photon and one of the final photons.
Thus, it suffices to verify that equation $f=0$
has no solution for the collinear photons, to be sure
that there are no solutions for non-collinear photons.
Using momentum conservation and equation~(\ref{om-of-k}), one
obtains for the collinear photons:
\begin{equation}
f_{\rm collinear} = -\xi(k_0,\mu_0) k_0 +\xi(k_1,\mu_0)k_1
+\xi(k_2,\mu_0)k_2 \, .
\label{f-check}
\end{equation}
The condition for the kinematic selection rules to be satisfied is
$f_{\rm collinear} > 0$.
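The collinear check is simple to implement. In the following Python sketch the mode-dependent refraction excesses $\xi_j=n_j-1$ are supplied by the caller (a hypothetical interface; in practice they would come from the refraction coefficients computed above), and a transition is kinematically possible only when $f_{\rm collinear}>0$.

```python
def f_collinear(xi0, xi1, xi2, k0, x):
    """Collinear energy mismatch of eq. (f-check) for the split
    k0 -> k1 + k2 with k1 = x*k0, k2 = (1 - x)*k0, 0 < x < 1.
    xi_j(k) is the refraction excess n_j - 1 of the j-th photon's mode."""
    k1, k2 = x * k0, (1.0 - x) * k0
    return -xi0(k0) * k0 + xi1(k1) * k1 + xi2(k2) * k2
```

With energy-independent $\xi_\perp < \xi_\parallel$, as in the weak-field vacuum limit, this reproduces the expected pattern: $\perp\rightarrow\parallel+\parallel$ gives $f_{\rm collinear}>0$, while $\parallel\rightarrow\perp+\perp$ does not.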
In vacuum in the limit ${\omega\over m} \ll 1$ the refraction
coefficients are functions of the magnetic field and the
dependence on photon energy $\omega$ is weak, see
Figure~\ref{n-of-om}. In this case condition (\ref{f-check}) can
be rewritten as $-\xi_0+ x\xi_1 +(1-x)\xi_2 > 0$, where $x$ is a
number between $0$ and $1$. It is clear that the transitions
$\parallel \rightarrow \parallel + \perp$ and $\parallel
\rightarrow \perp + \perp $ are not allowed. Since, to first
order in $\omega\over m$, the functions $\xi$ increase with
$\omega$, the transitions $\parallel \rightarrow
\parallel + \parallel$ and $\perp \rightarrow \perp + \perp$
are also forbidden. Thus in vacuum only two transitions are
kinematically allowed: $\perp \rightarrow \perp + \parallel $
and $\perp \rightarrow \parallel + \parallel$.
Combining the polarization selection rules and the kinematic
selection rules we conclude, in agreement with Adler, that only
the transition $\perp \rightarrow \parallel + \parallel$ is
allowed in vacuum.
In the presence of matter the discussion of the kinematic selection
rules becomes more involved, since the refraction indices are
rather complicated functions of energy. The
kinematic selection rules become complicated when the field is
strong enough that the electron cyclotron resonance
influences refraction significantly. Around the resonance the
refraction coefficients are non-monotonic functions of energy.
In this case there may be a fraction of final state space that
becomes kinematically allowed because of the influence of
plasma on refraction coefficients.
\subsection{Absorption coefficient}
In the case of propagation in vacuum we ignore dispersion and calculate the
integral of equation~(\ref{int-general})
\begin{eqnarray*}
r = {2\alpha^6\over (2\pi)^3} |{\cal M}|^2
\int k k_1 (k-k_1) k_1^2 \, dk_1 \, d\cos\theta \, d\phi \,
\delta(k -k_1 -|\vec k-\vec k_1|)
= {\alpha^ 6 \over 2 \pi^2} |{\cal M}|^2 {\omega^5 \over 30}\, .
\end{eqnarray*}
Due to the selection rules discussed above, the only non-vanishing photon
splitting absorption coefficient is
\[
r(\perp \rightarrow \parallel + \parallel ) = {\alpha^6\over 2\pi^2}
B^6 \sin^6\theta {\omega^5\over 30} \left({26 \over 315}\right)^2
\, ,
\]
in full agreement with the result obtained by Adler (1971).
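For orientation, the closed-form vacuum rate above is trivial to evaluate numerically. The sketch below (our function name; natural units $\hbar=c=m_e=1$, $\omega$ in units of $m_ec^2/\hbar$ and $B$ in units of $B_c$) makes the $\omega^5$ and $B^6\sin^6\theta$ scalings explicit.

```python
import math

ALPHA_FS = 1.0 / 137.036   # fine-structure constant

def r_perp_to_par_par(omega, B, theta):
    """Vacuum splitting rate r(perp -> par + par), natural units;
    B in units of the critical field B_c."""
    return (ALPHA_FS**6 / (2.0 * math.pi**2)
            * B**6 * math.sin(theta)**6
            * omega**5 / 30.0 * (26.0 / 315.0)**2)
```

Doubling the photon energy raises the rate by $2^5$, doubling the field by $2^6$.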
In general, when the polarization modes are not linear and we take into
account the dispersion relation, equation~(\ref{int-general}) can be written as
\begin{equation}
\displaystyle r = {2\alpha^6 \over (2\pi)^3}
{\int |{\cal M}|^2 \omega\omega_1
\omega_2 k_1^2 dk_1 d\cos\theta' d\phi' \delta\left( \omega-\omega_1-\omega_2
\right) }
\label{toiowo}
\end{equation}
where we have written explicitly the variables in the system U', and the
$\omega$'s are functions of $k$ and $\mu$.
Defining $f\equiv \omega_0 -\omega_1 -\omega_2$, we
can integrate over $\cos\theta'$ and obtain
\begin{equation}
r = {2\alpha^6 \over (2\pi)^3}
\int_0^k dk_1 k_1^2
\int_0^{2\pi} d\phi'\omega \omega_1 \omega_2 { |{\cal M}|^2 }
\left|{df \over d\mu}\right|^{-1}_{f=0} \, ,
\label{int-final}
\end{equation}
which can be evaluated numerically.
The integrand in equation~(\ref{int-final}) is understood to vanish
whenever there is no solution of equation $f=0$.
\section{Results}
We evaluate equation~(\ref{int-final}) numerically to obtain the
photon splitting absorption coefficient for different polarization
channels, and various photon energies, magnetic fields and
plasma densities. In the calculation we use the refraction
indices of equation~(\ref{disp-root}), the polarization vectors
given by equation~(\ref{vectors}), and calculate the matrix
element using equation~(\ref{m-element}). At each integration
point we evaluate the function $f$ and find whether the energy
conservation is satisfied.
We present the results in Figures \ref{rhob1}, \ref{rhob2},
\ref{omb1}, and \ref{omb2}. Each figure consists of four panels
which show the splitting rates for four angles of propagation
with respect to the magnetic field: $10^\circ$, $20^\circ$,
$30^\circ$, and $70^\circ$. Figures \ref{rhob1}, \ref{rhob2} show the
splitting absorption coefficient as a function of matter
density of a photon with the energy $\hbar\omega= 1.5 m_e c^2$;
Figure \ref{rhob1} for the case $B=B_c$ and Figure \ref{rhob2}
for the case $B= 2 B_c$. We present the photon splitting absorption
coefficients as functions of energy, for a plasma density of
$100$g~cm$^{-3}$, for $B=B_c$ in Figure \ref{omb1} and for
$B=2B_c$ in Figure \ref{omb2}. We show the photon splitting
absorption coefficients for energies below $2m_ec^2$, since
above this energy the opacity is dominated by single photon
pair production.
We first consider the effects of matter density on the photon
splitting rates. At low densities the influence of matter is
negligible, and we recover the vacuum case when all the
polarization vectors are linear, and the kinematic selection
rules are determined by the magnetic vacuum refraction. With
the increase of density polarization of the normal modes
becomes elliptical starting at the photons propagating near to
the direction of the field. At small angles this effects are
pronounced already at the density of $0.1$g~cm$^{-3}$. However
comparing Figures 4 and 5 we see that the dominant effect is due
to the kinematic selection rules and the influence of the
electron cyclotron resonance is crucial. This is also clearly
seen in Figure 6, where a number of splitting channels that are
forbidden in vacuum suddenly turns on for energies above the
cyclotron resonance.
At low values of the magnetic field the process rapidly
becomes unimportant, since the rate scales as $B^6$. With
increasing magnetic field the plasma effects start
to be important when the electron cyclotron resonance falls
right around the electron mass, i.e. when the field is close
to the critical field. In this case, when the matter is
sufficiently dense, the effects of the electron cyclotron
resonance influence the kinematic selection rules
significantly, thus allowing more photon splitting polarization
channels. When the value of the magnetic field is higher than $2
B_c$ the electron cyclotron resonance falls above the region of
integration over the final states and does not influence
the splitting rate. Moreover, with increasing magnetic field,
refraction becomes dominated by the vacuum terms, and the
effects of plasma become less and less important.
Figures \ref{omb1} and \ref{omb2} show the effects of photon
energy on the splitting absorption coefficient. When the
electron cyclotron resonance effects are ignored, only small
frequencies and small angles of propagation are influenced,
since that is where the refraction coefficients and the
polarization of the normal modes are most affected by
plasma. However, this is also the region where the photon
splitting coefficient is smallest, see
equation~(\ref{adler-n}).
\section{Discussion}
In this work we extend the results of Adler (1971) to the case
of propagation in a magnetized plasma, and concentrate on the
case of densities typical of a neutron star atmosphere. Our
approach is accurate for magnetic fields up to
approximately the critical field, since we use a low-field
approximation in the calculation of the matrix element. We
calculate the refraction coefficients accurately, and thus the
kinematic selection rules do not suffer from this limitation.
We have calculated the photon splitting absorption coefficient
as a function of magnetic field, plasma density, and the photon
energy and direction. We have found a region of the parameter
space (density $\rho > 1\,$g~cm$^{-3}$, the magnetic field $0.1
B_c < B < 2 B_c$, propagation angles $\theta < 30^\circ$) where
the effects of plasma are the most pronounced. Part of this
region is where the photon splitting absorption is small, so
the region of importance is limited to photon energies above
$m_e c^2$. We find that the photon splitting rate is well
described by the vacuum approximation in the remaining part of
the parameter space.
The photon splitting absorption coefficient is small when compared
to other processes that may play a role in plasma; for example,
the electron scattering opacity is $k_{scat} = 0.4
(\hbar\omega/m_e c^2)^2 (B_c/B)^2 (\rho/$g~cm$^{-3})$cm$^{-1}$ for the
extraordinary mode, a value a few orders of magnitude higher
than that for photon splitting. Thus photon splitting can play a
significant role only in very special astrophysical cases. An
example of such an environment could be the deep layers of a
neutron star atmosphere. Therefore the results presented here may
apply to soft gamma-ray repeaters, where a large amount of
energy is deposited in the crust of a neutron star. Photon
splitting in a high density plasma may be a way of producing a
large number of soft X-ray photons which later escape. Our
results can also be applied to the high energy radiation from
isolated neutron stars, provided that we see radiation from the
surface and not the magnetosphere. However, the main conclusion is
that, surprisingly, the effects due to the presence of plasma are
important in a rather small fraction of the parameter space, and
the vacuum approximation can be used in most calculations.
Acknowledgements. This work was supported by the following
grants: KBN-2P03D00911, NASA NAG 5-4509, and NASA NAG 5-2868. The
author thanks Victor Bezchastnov for assistance in calculating
the matrix elements, and George Pavlov, Don Lamb, and Cole
Miller for many helpful discussions during this work.
\section{Introduction}
\lbl{sec.intro}
Given an oriented link $L$ in (oriented) $S^3$, one can associate to it
a family of (oriented) 3-manifolds, namely its $p$-fold {\em
cyclic branched covers}
$\Sigma^p_L$, where $p$ is a positive integer. Using these 3-manifolds, one
can associate a family of integer-valued invariants of the link $L$,
namely its $p$-{\em signatures},
$\sigma_p(L)$. These signatures, being concordance invariants,
play a key role in the approach
to link theory via surgery theory.
On the other hand, any numerical invariant of 3-manifolds, evaluated
at the $p$-fold branched cover, gives numerical invariants of oriented links.
The seminal ideas of mathematical physics, initiated by Witten \cite{Wi}
have recently
produced two axiomatizations (and constructions) of numerical invariants of
links and 3-manifolds; one under the name of topological quantum field theory
(e.g. \cite{At, RT1, RT2}) and another under the name of finite
type invariants (e.g. \cite{Oh, LMO, BGRT}). Moreover, each of these two
approaches
offers a conceptual unification of previously known numerical invariants
of links and 3-manifolds, such as the {\em Casson invariant} \cite{AM}
and the {\em Jones
polynomial} \cite{J}.
It turns out that the Casson invariant $\lambda$, extended
to all rational homology 3-spheres by Walker \cite{Wa}, and further
extended to all 3-manifolds by Lescop \cite{Le}, equals (up to a sign)
twice
the degree 1 part of a graph-valued invariant of 3-manifolds \cite{LMO, LMMO},
which turns out to be a
universal finite type invariant of integral homology 3-spheres \cite{L}.
Recently D. Mullins \cite{Mu} discovered a beautiful relation
expressing the value of the
Casson-Walker invariant of the 2-fold branched cover of a link $L$ in $S^3$
in terms of the link's (2)-signature $\sigma(L)=\sigma_2(L)$
and the value of its Jones polynomial $J_L$
at $-1$, under the assumption that the 2-fold branched cover is a rational
homology 3-sphere. It is a natural question to ask whether this assumption
is really needed.
We can now state our result (where the nullity $\nu(L)$ of a link $L$
can be defined as the first Betti number of $\Sigma^2_L$):
\begin{theorem}
\lbl{thm.1}
For an oriented link $L$ in $S^3$ we have:
\begin{equation}
\lbl{eq.blah}
i^{\sigma(L)+ \nu(L)} \lambda(\Sigma^2_L)
= \frac{1}{6} J'_L(-1) + \frac{1}{4}J_L(-1) \sigma(L)
\end{equation}
\end{theorem}
A few remarks are in order:
\begin{remark}
\lbl{rem.mullins}
In case $\Sigma^2_L$ is a rational homology 3-sphere, the above formula
is Mullins' theorem, as expected. See also Remark \ref{rem.vj}.
\end{remark}
\begin{remark}
For the class of links such that $\Sigma^2_L$ is a rational
homology 3-sphere (e.g. for all knots), there is a skein theory
relation of the signature, see \cite{Li,Mu}. However, the literature
on signatures
seems to be avoiding the rest of the links. Our result shows that such
a restriction is not necessary.
\end{remark}
\begin{remark}
Mullins' proof uses the Kauffman bracket definition of the Jones polynomial
\cite{Ka}, and oriented, as well as unoriented, smoothings of the link
that are special to the Jones polynomial. Our proof, simpler and derived from
first principles, does not use any of
these special properties of the Jones polynomial. In addition, it provides
a hint for general relations between link signatures and finite type link and
3-manifold invariants.
\end{remark}
\begin{corollary}
\lbl{cor.1}
If $L$ is a link with nullity at least 4, then $J_L(-1)=J'_L(-1)=0$.
\end{corollary}
It is natural to ask whether Theorem \ref{thm.1} can be extended to the
case of more general covers (such as $p$-fold branched covers), as well
as the case of more general 3-manifold invariants, such as the LMO invariant
$Z^{LMO}$, or its degree at most $n$ part $Z^{LMO}_n$.
In this direction, we have the following partial result. Let $D_mK$
denote the $m$-fold twisted double of a knot $K$ in $S^3$, see Figure
\ref{double}.
\begin{figure}[htpb]
$$ \eepic{double}{0.03} $$
\caption{The $m$-twisted double of a trefoil. In the region marked by
$X$ are $m-3$ full twists.}\lbl{double}
\end{figure}
\begin{theorem}
\lbl{thm.2}
Fix a knot $K$ in $S^3$ and integers $p,m,n$.
Then $Z_n^{LMO}(\Sigma^p_{D_mK})$ depends only on $p,m$ and the degree
$2n$ part of the Kontsevich integral \cite{Ko} of $K$.
\end{theorem}
With the above notation, setting $n=1$ we obtain that:
\begin{corollary}
\lbl{cor.n=1}
$$ \lambda(\Sigma^p_{D_mK})= a_{p,m} \Delta^{''}(K)(1) + b_{p,m}
$$
where $\Delta(K)$ is the Alexander-Conway polynomial of $K$, \cite{C,Ka}
and $a_{p,m}, b_{p,m}$ are constants depending on $p,m$.
\end{corollary}
The above corollary was obtained independently by Ishibe \cite{I}
for general $m,p$, Hoste \cite{Ho} for $m=0$ and Davidow \cite{Da}
for $m=-1, p \equiv \pm 1 \bmod 6$.
We wish to thank Daniel Ruberman for
numerous encouraging,
enlightening and clarifying conversations.
\section{A reduction of Theorem \ref{thm.1}}
\lbl{sec.skein}
Before we get involved in calculations, we should mention that
the proof of Theorem \ref{thm.1} is an application of {\em skein theory}
and the following two properties, together with their philosophical proof:
\begin{itemize}
\item[\bf{P1}]
The Casson-Walker-Lescop invariant satisfies a 3-term relation.
This holds since the Casson-Walker-Lescop invariant is a finite type
invariant of degree 3 (at least when restricted to the set of rational
homology 3-spheres, \cite{GO}).
\item[\bf{P2}]
A crossing change or a smoothing of a link $L$ results in (two) surgeries
along the same knot in $S^3$. This holds since
the 2-fold branched cover of a disk $D^2$ (branched along
two points) is an annulus; thus the 2-fold branched cover of $D^2 \times I$
branched along two arcs is a solid torus $T$.
\end{itemize}
All links and 3-manifolds in this paper are {\em oriented}.
With an eye in equation \eqref{eq.blah}, we define
for a link $L$ in $S^3$,
$$ \alpha(L) \overset{\text{def}}{=} 1/6 J'_L(-1), \text{\hspace{0.2cm}} \beta(L)
\overset{\text{def}}{=} i^{\sigma(L)+\nu(L)}\lambda(\Sigma^2_L), \text{\hspace{0.2cm} and \hspace{0.2cm}}
\gamma(L) \overset{\text{def}}{=} 1/4 J_L(-1) \sigma(L).$$
A triple of links $(L^+, L^-, L^0)$
is called {\em bordered} if there is an embedded disk $D^3$
in $S^3$ that locally intersects them as in figure \ref{crossing}.
\begin{figure}[htpb]
$$ \eepic{crossing}{0.03} $$
\caption{A bordered triple of links $(L^+, L^-, L^0)$.}\lbl{crossing}
\end{figure}
For a bordered triple $(L^+, L^-, L^0)$, the skein
relation $tJ_{L^+}(t) - t^{-1}J_{L^-}(t) =(t^{1/2}-t^{-1/2}) J_{L^0}(t)$
of the Jones polynomial
implies that:
\begin{equation}
\lbl{eq.a}\begin{split}
\alpha(L^+) - \alpha(L^-) = - 2 i \alpha(L^0)
+ \frac{J_{L^+}(-1)}{6} + \frac{J_{L^-}(-1)}{6}
\end{split}
\end{equation}
Thus, the following claim:
\begin{claim}
\lbl{claim1}
For a bordered triple $(L^+, L^-, L^0)$, we have:
\begin{equation}
\begin{split}
\beta(L^+) - \beta(L^-) + 2 i \beta(L^0)= \gamma(L^+) - \gamma(L^-) + 2 i \gamma(L^0)
+ \frac{J_{L^+}(-1)}{6} + \frac{J_{L^-}(-1)}{6}
\end{split}
\end{equation}
\end{claim}
\noindent
together with the initial condition $\alpha(\text{unknot})=\beta(\text{unknot})
-\gamma(\text{unknot})=0$,
proves Theorem \ref{thm.1}. The rest of the paper is devoted to the proof
of the above claim.
\begin{remark}
\lbl{rem.vj}
Jones and Mullins use a
similar but different skein theory for the Jones polynomial, namely
$t^{-1}V_{L^+}(t) - tV_{L^-}(t) =(t^{1/2}-t^{-1/2}) V_{L^0}(t)$.
The polynomials $V_L$ and $J_L$ are easily seen to be related by:
$J_L(t)=(-1)^{|L|-1}V_L(t^{-1})$, where $|L|$ is the number of components of
$L$. With our choice, it turns out that
$J_L(1)=2^{|L|-1}$, a positive integer, which is natural from the point of
view of quantum groups and perturbative Chern-Simons theory.
Furthermore,
Mullins evaluates $V_L$ at $-1$ with the rather nonstandard convention
that $\sqrt{-1}=-i$, whereas we evaluate $J_L$ at $-1$ with
the convention that $\sqrt{-1}=i$.
\end{remark}
\section{Some linear algebra}
We begin by reviewing three important invariants of symmetric matrices $A$.
All matrices considered have real entries, and $B^T$ denotes the transpose
of the matrix $B$. Two matrices $B$ and $B'$ are called {\em similar}
if $B'=P B P^T$, for a nonsingular matrix $P$. Given a symmetric matrix $A$,
we denote by $\nu(A)$, $\sigma(A)$ its {\em nullity} and {\em signature}
respectively. A lesser known invariant, the {\em sign} $\text{sgn}'(A)$
of $A$, can be obtained as follows:
bring $A$ to the form
$PAP^T=\tByt {A'} 0 0 0 $
where $A', P$ are
nonsingular, \cite{Ky}. Then, we can define $\text{sgn}'(A)=\text{sgn}(\text{det}(A'))$,
with the understanding that the sign of the determinant of a $0 \times 0$
matrix is 1.
It is easy to see that the result is independent of $P$;
moreover, it coincides with Lescop's
definition \cite[Section 1.3]{Le}. Notice that the signature, nullity
and sign of a matrix do not change under similarity transformations.
We call a triple of symmetric matrices $(A_+, A_-, A_0)$ {\em bordered} if
$$ A_+ = \tByt {a} {\rho} {\rho^T} {A_0} \text{ and }
A_- = \tByt {a+2} {\rho} {\rho^T} {A_0},
$$ for a row vector $\rho$.
The signatures and nullities of a bordered triple are related as follows,
\cite{C}:
\begin{equation}
| \nu(A_{\pm})-\nu(A_0)| + | \sigma(A_{\pm})-\sigma(A_0)| =1
\end{equation}
Thus, in a bordered triple, the nullity determines the signature, up to
a sign. A more precise relation is the following:
\begin{lemma}
\lbl{lem.sign}
The sign, nullity and signature in a bordered triple are related
as follows:
\begin{eqnarray}
\lbl{eq.s}
\sigma(A_{\pm})-\sigma(A_0) =
\begin{cases}
0 & \text{ if\hspace{0.2cm} } | \nu(A_{\pm})-\nu(A_0)|=1 \\
\text{sgn}'(A_{\pm})\text{sgn}'(A_0) & \text{ otherwise. }
\end{cases} \\ \lbl{eq.n}
\nu(A_{\pm})-\nu(A_0) =
\begin{cases}
0 & \text{ if\hspace{0.2cm} } | \sigma(A_{\pm})-\sigma(A_0)|=1 \\
\text{sgn}'(A_{\pm})\text{sgn}'(A_0) & \text{ otherwise. }
\end{cases}
\end{eqnarray}
Moreover, if $\epsilon_x \overset{\text{def}}{=} \text{sgn}'(A_x) i^{\sigma(A_x)+\nu(A_x)}$ for
$x \in \{ +, -, 0 \}$, then we have:
\begin{equation}
\lbl{eq.eta}
\epsilon_+=\epsilon_-= i \epsilon_0.
\end{equation}
\end{lemma}
\begin{proof}
By similarity transformations, we can assume that:
$$
A_+= \tByt {a} {\rho} {\rho^T} {0} \oplus D,
A_-= \tByt {a+2} {\rho} {\rho^T} {0} \oplus D, A_0= D \oplus [0]^r
$$
where $D$ is a nonsingular, diagonal matrix, $[0]^r$ is the zero $r \times
r$ matrix,
$\rho$ is a $1\times r$ vector and $a$ a real number.
Since the nullity, signature and sign
of the matrix $\tByt {a} {\rho} {\rho^T} {0}$ are given by:
\begin{center}
\begin{tabular}{|l|c|c|c|}
\hline
& $\rho=a=0$ & $\rho=0, a \neq 0$ & $ \rho \neq 0$ \\ \hline
nullity & $r+1$ & $r$ & $r-1$ \\ \hline
signature & $0$ & $\text{sgn}(a)$ & $0$ \\ \hline
sign & $1$ & $\text{sgn}(a)$ & $-1$ \\ \hline
\end{tabular}
\end{center}
\noindent
the result follows by a case-by-case argument.
\end{proof}
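Equation \eqref{eq.eta} can be spot-checked numerically on random bordered triples. The snippet below (an illustration only, not part of the proof) builds a bordered triple as in the definition and compares the quantities $\epsilon_x$:

```python
import numpy as np

def eps(A, tol=1e-9):
    """eps(A) = sgn'(A) * i**(signature + nullity) for a real symmetric A."""
    w = np.linalg.eigvalsh(np.atleast_2d(A).astype(float))
    pos = int((w > tol).sum()); neg = int((w < -tol).sum())
    nullity = len(w) - pos - neg
    signature = pos - neg
    return (-1) ** neg * 1j ** (signature + nullity)

rng = np.random.default_rng(7)
for _ in range(100):
    A0 = rng.integers(-2, 3, (3, 3)); A0 = A0 + A0.T   # random symmetric A0
    rho = rng.integers(-2, 3, (1, 3)); a = int(rng.integers(-3, 4))
    Ap = np.block([[np.array([[a]]), rho], [rho.T, A0]])
    Am = np.block([[np.array([[a + 2]]), rho], [rho.T, A0]])
    assert abs(eps(Ap) - eps(Am)) < 1e-9
    assert abs(eps(Ap) - 1j * eps(A0)) < 1e-9
print("eq.eta verified on 100 random bordered triples")
```

Integer entries keep the nonzero eigenvalues well away from the zero-tolerance, so the eigenvalue-counting in `eps` is reliable here.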
\begin{remark}
\lbl{rem.det}
For future reference, we mention that the determinants of a bordered triple
of matrices are related by:
\begin{equation}
\lbl{eq.det}
\text{det}(A_+) - \text{det}(A_-) + 2 \text{det}(A_0) =0.
\end{equation}
This follows easily by expanding the first two determinants along the
first column.
\end{remark}
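A quick numerical check of \eqref{eq.det} (again illustrative, not part of the argument): build random bordered triples and verify that the alternating sum of determinants vanishes.

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(50):
    A0 = rng.integers(-3, 4, (4, 4)); A0 = A0 + A0.T   # random symmetric A0
    rho = rng.integers(-3, 4, (1, 4)); a = 5
    Ap = np.block([[np.array([[a]]), rho], [rho.T, A0]])
    Am = np.block([[np.array([[a + 2]]), rho], [rho.T, A0]])
    lhs = np.linalg.det(Ap) - np.linalg.det(Am) + 2 * np.linalg.det(A0)
    assert abs(lhs) < 1e-6
print("det(A+) - det(A-) + 2 det(A0) = 0 holds on all samples")
```

The identity is visible in the code as well: the two big determinants differ only in the $(1,1)$ entry, so their first-column expansions differ by exactly $2\,\text{det}(A_0)$.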
Given an oriented link $L$ in $S^3$, choose a Seifert surface for it, together
with a basis for its homology, and consider the associated Seifert matrix
$E_L$. Recall that the {\em nullity} $\nu(L)$, {\em signature}
$\sigma(L)$ and sign $\text{sgn}'(E_L)$ of $L$ are defined as the nullity, signature
and sign of the symmetrized Seifert matrix $E_L + E_L^T$. It turns out that
the signature and nullity of a link are independent of the Seifert surface
chosen, and that $\nu(L)=\beta_1(\Sigma^2_L)$, where $\beta_1$ is the first Betti number.
On the other hand, $\text{sgn}'(E_L)$ depends on the Seifert matrix.
It is easy to see that given a bordered triple $(L^+, L^-, L^0)$ of links,
one can construct a triple of
Seifert matrices so that the associated triple of symmetrized Seifert
matrices is bordered, \cite{C}.
\section{Proof of Theorem \ref{thm.1}}
\lbl{sec.lescop}
\subsection{The Casson-Walker-Lescop invariant of 3-manifolds}
\lbl{sub.gen}
Given an (integrally) framed oriented r-component link $\mathcal L$ in $S^3$
(with ordered components),
let $S^3_{\mathcal L}$ denote the closed 3-manifold obtained by Dehn surgery
on $\mathcal L$. Its linking matrix, $F(\mathcal L)$ gives a presentation
of $H_1(S^3_{\mathcal L}, \mathbb Z)$. Notice that $\nu(F(\mathcal L))= \beta_1(S^3_{\mathcal L})$.
The Casson-Walker-Lescop invariant $\lambda$ of $S^3_{\mathcal L}$ is defined by:
$$
\lambda(S^3_{\mathcal L})=\text{sgn}'(F(\mathcal L))(D(\mathcal L)+ H_0(\mathcal L)+ H_1(\mathcal L) + H_2(\mathcal L)), \text{ where }
$$
\begin{eqnarray*}
D(\mathcal L) & = & \sum_{\emptyset \neq \mathcal L' \subseteq \mathcal L}
\text{det}(F(\mathcal L \setminus \mathcal L')) \zeta(\mathcal L'), \\
H_0(\mathcal L) & = & \frac{\text{det}(F(\mathcal L))}{4}\sigma(F(\mathcal L)) \\
H_1(\mathcal L) & = & - \frac{1}{6} \sum_{j=1}^r \text{det}(F(\mathcal L \setminus {j})) \\
H_2(\mathcal L) & = & \frac{1}{12} \sum_{\emptyset \neq \mathcal L' \subseteq \mathcal L}
\text{det}(F(\mathcal L \setminus \mathcal L')) (-1)^{|\mathcal L'|} L_8(\mathcal L'),
\end{eqnarray*}
$\zeta(\mathcal L)$ is a special value of (a derivative of) the multivariable
Alexander polynomial of $\mathcal L$ and $L_8(\mathcal L)$ is a polynomial in the linking
numbers $l_{ab}$ ($a,b=1 \dots r$) of $\mathcal L$ given explicitly by:
$$
L_8(\mathcal L)=\sum_{j=1}^r\sum_{\sigma \in \text{Sym}_r}
l_{j\sigma(1)} l_{\sigma(1)\sigma(2)} \dots l_{\sigma(r-1)\sigma(r)}l_{\sigma(r)j},
$$
where $\text{Sym}_r$ is the symmetric group on $r$ letters.
Notice also that since the links $\mathcal L$ that we consider will be
integrally framed, the Dedekind sums
appearing in \cite[definition 1.4.5]{Le} vanish.
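To make the formula for $L_8$ concrete, here is a direct Python transcription (ours, for illustration) that sums over all permutations of the component indices; the linking matrix is passed as a nested list:

```python
from itertools import permutations

def L8(l):
    """L_8 for an r x r linking matrix l, summing over Sym_r as in the text."""
    r = len(l)
    total = 0
    for j in range(r):                      # outer sum over components
        for sigma in permutations(range(r)):
            prod = l[j][sigma[0]]           # l_{j sigma(1)}
            for t in range(r - 1):
                prod *= l[sigma[t]][sigma[t + 1]]
            prod *= l[sigma[-1]][j]         # l_{sigma(r) j}
            total += prod
    return total

print(L8([[2]]))              # r = 1: a single term l_11**2 = 4
print(L8([[1, 1], [1, 1]]))   # r = 2: 2 * 2! terms, each equal to 1 -> 4
```

Each summand is a closed walk of $r+1$ linking numbers starting and ending at component $j$, which matches the displayed formula term by term.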
\subsection{A construction of 2-fold branched covers}
\lbl{sub.branched}
In this section we review the details of a well-known construction
of 2-fold branched covers of links in $S^3$. For a general reference,
see \cite{Ka,AK}. Given an (oriented) link $L$ in $S^3$,
choose a Seifert surface $F_L$ of $L$, and a basis of its first
homology and let $E_L$ be its Seifert matrix.
Push a bicollar of $F_L$ into the interior of $D^4$
(the 4-manifold obtained is still diffeomorphic to $D^4$), and glue
two copies of the obtained 4-manifold along $F_L$ according to the pattern of
\cite[p. 281]{Ka}; let $N_{F_L}$ denote the resulting 4-manifold, which
is a 2-fold cover of $D^4$ branched along $F_L$. Its boundary is
$\Sigma^2_L$, the 2-fold cover of $S^3$ branched along $L$.
In \cite[section 2]{AK}, Akbulut-Kirby showed that $N_{F_L}$ is a
4-dimensional handlebody (i.e. the result of attaching 2-handles to $D^4$),
and that the intersection form with respect to some basis of these
2-handles is the symmetrized Seifert matrix $E_L + E_L^T$ of $L$.
Let $\mathcal L$ denote the cores in $S^3$ of the 2-handles. Thus, $\mathcal L$ is
a framed link in $S^3$ with linking matrix $E_L + E_L^T$, such that
Dehn surgery on $\mathcal L$ is $\Sigma^2_L$. Of course, the link $\mathcal L$ depends on
the choice of Seifert surface of $L$ as well as on a choice of basis
on its homology. Akbulut-Kirby \cite{AK} describe an algorithm for
drawing $\mathcal L$ and implement it with beautiful pictures; however, we will not
need the precise picture of the link $\mathcal L$!
Assume now that $(L^+, L^-, L^0)$ is a bordered triple with
admissible Seifert surfaces.
Property {\bf P1} of Section \ref{sec.skein} implies that
there is a solid torus $T$ in
$\Sigma^2_{L^0}$ and three simple curves $\alpha^+, \alpha^-, \alpha^0$ in its boundary
so that $\Sigma^2_{L^x}$ is diffeomorphic to the result of surgery on the solid torus $T$
in $\Sigma^2_{L^0}$ along $\alpha^x$, for $x \in \{+,-,0 \}$. Using the argument
of \cite[p. 429]{Mu}, it follows that there is
a choice of a standard symplectic basis $\{ x_1, x_2 \}$ for $H_1(\partial T,
\mathbb Z)$ so that
$\alpha^+= x_2, \alpha^-= 2 x_1 + x_2, \alpha^0= x_1$. In other words, we have:
$\langle \alpha^0, \alpha^+ \rangle =1$ and $\alpha^-= 2 \alpha^0 + \alpha^+$, where $\langle \cdot, \cdot \rangle$
is the intersection form.
From the above discussion, it follows that there is a triple
of framed
oriented links (not necessarily bordered!)
$(\mathcal L^+, \mathcal L^-, \mathcal L^0)$ in $S^3$, an oriented knot $K$, and an integer $n$
so that $\Sigma^2_{L^x}=S^3_{\mathcal L^x}$ for all $x \in \{ +, -, 0 \}$,
and so that $\mathcal L^+$ (resp. $\mathcal L^-$)
is the disjoint union of $\mathcal L^0$ with the framed knot
$(K,n)$ (resp. $(K, n+2)$). Thus, the triple
$(F(\mathcal L^+), F(\mathcal L^-), F(\mathcal L^0))$ of linking matrices
is bordered, with nullity, signature and sign equal to that of
the triple $(L^+, L^-, L^0)$.
\subsection{Proof of Claim \ref{claim1}}
Let us define $\epsilon_x=\text{sgn}'(F(\mathcal L^x))i^{\sigma(L^x)+\nu(L^x)}$ for $x \in \{ -, +, 0 \}$.
Since $\sigma(\mathcal L^x)=\sigma(L^x)$,
Lemma \ref{lem.sign} implies that $\epsilon_{\pm}= i \epsilon_0$.
We can now calculate as follows:
\begin{eqnarray*}
\beta(L^+)-\beta(L^-)+2i\beta(L^0) & = &
\big\{ \epsilon_+ D(\mathcal L^+) - \epsilon_- D(\mathcal L^-) +2i \epsilon_0 D(\mathcal L^0) \big\} \\
& + &\sum_{k=0}^2
\big\{ \epsilon_+ H_k(\mathcal L^+) - \epsilon_- H_k(\mathcal L^-) +2i \epsilon_0 H_k(\mathcal L^0) \big\}
\end{eqnarray*}
\noindent
We claim that:
\begin{eqnarray}
\lbl{eq.1}
\epsilon_+ D(\mathcal L^+) - \epsilon_- D(\mathcal L^-) +2i \epsilon_0 D(\mathcal L^0) & = & 0 \\ \lbl{eq.2}
\epsilon_+ H_0(\mathcal L^+) - \epsilon_- H_0(\mathcal L^-) +2i \epsilon_0 H_0(\mathcal L^0) & = &
\gamma(L^+) - \gamma(L^-) +2 i \gamma(L^0) \\ \lbl{eq.3}
\epsilon_+ H_1(\mathcal L^+) - \epsilon_- H_1(\mathcal L^-) +2i \epsilon_0 H_1(\mathcal L^0) & = & 0 \\ \lbl{eq.4}
\epsilon_+ H_2(\mathcal L^+) - \epsilon_- H_2(\mathcal L^-) +2i \epsilon_0 H_2(\mathcal L^0) & = &
\frac{J_{L^+}(-1)}{6} + \frac{J_{L^-}(-1)}{6}
\end{eqnarray}
Before we show the above equations, we let
$\mathcal L^0 \cup K^{\pm}= \mathcal L^{\pm}$, and $I$ denote an arbitrary nonempty
sublink of $\mathcal L^0$ with complement $I' \overset{\text{def}}{=} \mathcal L^0 \setminus I$.
Using equation \eqref{eq.eta}, it follows that
the left hand side of \eqref{eq.1} equals to:
\begin{eqnarray*}
\sum_{I} & & \big\{ \epsilon_+ \text{det}(F(I' \cup K^+)) - \epsilon_-
\text{det}(F(I' \cup K^-)) +2 i \epsilon_0 \text{det}(F(I')) \big\} \zeta(I) + \\
\sum_{I} & &
\big\{ \epsilon_+ \text{det}(F(I')) - \epsilon_-
\text{det}(F(I')) \big\} \zeta(I \cup K)
\end{eqnarray*}
Using equation \eqref{eq.eta},
and the fact that $(F(I' \cup K^+), F(I' \cup K^-), F(I'))$
is a bordered triple of matrices, it follows by Remark \ref{rem.det}
that the first and second sums shown above vanish,
thus showing equation \eqref{eq.1}.
In order to show equation \eqref{eq.2}, use the fact
that for a link $L$ in $S^3$
we have:
$$|J_L(-1)|=|H_1(\Sigma^2_L, \mathbb Z)|= i^{-\sigma(L)- 2 \nu(L)}J_L(-1)$$
(compare with
\cite[Theorem 2.4]{Mu},
and with Kauffman \cite{Ka}, with the understanding that the order
of an infinite group is $0$, and keeping in mind Remark \ref{rem.vj}). Thus,
\begin{equation}
\lbl{eq.h1}
\epsilon_x \text{det}(F(\mathcal L^x)) = i^{\sigma(L^x) + \nu(L^x)} |H_1(\Sigma^2_{L^x}, \mathbb Z)|=
i^{-\nu(L^x)} J_{L^x}(-1) = J_{L^x}(-1)
\end{equation}
for all $x \in \{ +, -, 0 \}$ (where the last equality above follows
from the fact that if $\nu(L^x) \neq 0$, then $J_{L^x}(-1)=0$).
Since $\sigma(\mathcal L^x)=\sigma(L^x)$ for all $x$, it follows
that $\epsilon_x H_0(\mathcal L^x) = \gamma(L^x)$ for all $x$, which proves equation
\eqref{eq.2}.
Equation \eqref{eq.3} follows in the same way as equation \eqref{eq.1}
shown above.
Using \eqref{eq.eta} and the definition of $H_2$, it follows that
the left hand side of equation \eqref{eq.4} equals to:
\begin{eqnarray*}
& i \epsilon_0 & (H_2(\mathcal L^+) - H_2(\mathcal L^-) + 2 H_2(\mathcal L^0)) \\
= & \frac{i \epsilon_0}{12} & \big\{
\sum_I \text{det}(F(I')) (-1)^{|I\cup K|} L_8(I \cup K^+)
+ \sum_I \text{det}(F(I' \cup K^+)) (-1)^{|I|} L_8(I) \\
& - &
\sum_I \text{det}(F(I')) (-1)^{|I\cup K|} L_8(I \cup K^-)
- \sum_I \text{det}(F(I' \cup K^-)) (-1)^{|I|} L_8(I) \\
& + &
2 \sum_I \text{det}(F(I')) (-1)^{|I|} L_8(I) \big\} \\
= & \frac{i \epsilon_0}{12} & \big\{
\sum_I \text{det}(F(I')) (-1)^{|I|} ( - L_8(I \cup K^+) + L_8(I \cup K^-))
\big\}
\end{eqnarray*}
\noindent
It is easy to see that if $(l^{\pm}_{ab})=F(\mathcal L^{\pm})$ and $K^{\pm}$
is the first ordered component of $\mathcal L^{\pm}$, then
\begin{eqnarray*}
- L_8(I \cup K^+) + L_8(I \cup K^-)
&
= &
2 \sum_{\sigma''}
l^+_{1\sigma''(1)}l^+_{\sigma''(1)\sigma''(2)} \dots l^+_{\sigma''(r-1)\sigma''(r)} \\
& + & 2 \sum_{\sigma'}
l^-_{\sigma'(1)\sigma'(2)} l^-_{\sigma'(2)\sigma'(3)} \dots l^-_{\sigma'(r+1)1}
\end{eqnarray*}
where the summation is over all $\sigma' \in \text{Sym}_{r+1}$ (resp. $\sigma'' \in \text{Sym}_{r+1}$)
such that $\sigma'(1)=1$ (resp. $\sigma''(r+1)=1$). Combined with the above,
and with \eqref{eq.eta}, \eqref{eq.h1},
the left hand side of equation \eqref{eq.4} equals to
\begin{eqnarray*}
\frac{i \epsilon_0}{6}(\text{det}(F(\mathcal L^+))+\text{det}(F(\mathcal L^-)))=
\frac{1}{6}(\epsilon_+ \text{det}(F(\mathcal L^+)) + \epsilon_- \text{det}(F(\mathcal L^-)))=
\frac{1}{6}(J_{L^+}(-1)+ J_{L^-}(-1))
\end{eqnarray*}
which concludes the proof of equation \eqref{eq.4} and of Claim
\ref{claim1}.
Corollary \ref{cor.1} follows from the fact that the Casson-Walker-Lescop
invariant of a manifold with first Betti number at least 4 vanishes, \cite{Le}.
Remark \ref{rem.mullins} follows from Theorem \ref{thm.1} and
\eqref{eq.h1}.
\begin{remark}
The fourth roots of unity in \eqref{eq.eta} work out in such a way
as to obtain
the cancellations in equations \eqref{eq.1}, \eqref{eq.2}, \eqref{eq.3}
and \eqref{eq.4}.
The philosophical reason for the cancellation in
equation \eqref{eq.1} is property {\bf P2} of Section \ref{sec.skein}.
\end{remark}
\section{Proof of Theorem \ref{thm.2}}
Fix a knot $K$ in $S^3$ and integers $p,m,n$. Our first goal is to give a
Dehn surgery description of the $p$-fold branched cover
$\Sigma^p_{D_mK}$ of the $m$-twisted double $D_mK$ of $K$.
\begin{figure}[htpb]
$$ \eepic{Dehn}{0.03} $$
\caption{A Dehn twist along a $+1$ framed unknot
intersecting two arcs (in an arbitrary
3-manifold) on the left
hand side gives a diffeomorphic 3-manifold with a different
embedding of the two arcs.}\lbl{Dehn}
\end{figure}
We begin with a definition: a link $L$ in $S^3$ is called $K$-{\em unknotting}
if it is unit-framed and algebraically split
(i.e., the linking numbers vanish and the framing on each component is $\pm 1$),
it lies in a standard solid torus in $S^3$,
Dehn surgery $S^3_L$ on $L$ is diffeomorphic to $S^3$, and
the image of a meridian of the solid torus in $S^3_L$ is isotopic to
the knot $K$ in $S^3$.
A $K$-unknotting link $L$ can be obtained by projecting $K$ to
a generic plane, choosing a set of crossings that unknot $K$,
and placing an unknot with framing $\pm 1$ around each crossing. The union
of these unknots is the desired link $L$, as follows from Figure \ref{Dehn}.
We represent a $K$-unknotting link $L$
in the standard solid torus in $S^3$ on
the left hand side of Figure \ref{step1}. The right hand side of the
same figure shows the $m$-twisted double of $K$.
\begin{figure}[htpb]
$$ \eepic{step1}{0.03} $$
\caption{On the left, a $K$-unknotting link $L$.
On the right, the $m$-twisted double $D_mK$. In the box marked $X$ are
$m$ twists.}\lbl{step1}
\end{figure}
Next, we construct a $D_mK$-unknotting link:
using Figure \ref{Dehn}, we introduce an extra unit-framed unknot $C$
to unknot $D_mK$ as shown on the left hand side of Figure \ref{step2},
and we isotope the result as shown on the right hand side of the figure.
Then, $L \cup C$ is a $D_mK$-unknotting link.
\begin{figure}[htpb]
$$ \eepic{step2}{0.03} $$
\caption{Two isotopic views of $L \cup C$, which is
$D_mK$-unknotting.}\lbl{step2}
\end{figure}
Cutting the meridian disk that $D_mK$ bounds on the right hand side of
Figure \ref{step2}, and gluing $p$ of them side by side as in Figure
\ref{pfold}, gives a framed link $L(p,m)$ in $S^3$, which is a
Dehn surgery presentation of $\Sigma^p_{D_mK}$.
\begin{figure}[htpb]
$$ \eepic{step4}{0.03} $$
\caption{The framed link $L(4,m)$.}\lbl{pfold}
\end{figure}
Next, we review some well-known facts about the LMO invariant, and its
cousin, the Aarhus integral, \cite{BGRT}. We will assume some familiarity
with the standard definitions of finite type invariants of links and
3-manifolds, \cite{B-N,Ko,LMO,BGRT}. Both the LMO invariant and
the Aarhus integral of a 3-manifold obtained by Dehn surgery of a link
in $S^3$ are defined in terms of the Kontsevich integral of the link.
For rational homology 3-spheres, the LMO invariant, properly
normalized, equals the Aarhus integral; the normalization
factor is the order of the first homology group with integer coefficients.
The manifolds in question, namely $\Sigma^p_{D_mK}$, are rational homology
3-spheres, and the order of the first homology group depends only on
$m$ and $p$ (since it is determined by the Alexander
polynomial of $D_mK$, which depends only on $m$, evaluated at $p$th
roots of unity).
Thus, it suffices to show that the degree at most $n$
part of the Aarhus integral
$Z_n^A(\Sigma^p_{D_mK})$ of $\Sigma^p_{D_mK}$ depends only on the degree at most $2n$
part of the Kontsevich integral $Z_{2n}^K(K)$ of $K$.
Since the degree at most $d$ part of the Kontsevich integral is the universal
Vassiliev invariant of type $d$ \cite{Ko, B-N}, it suffices to show that
the knot invariant $ K \to Z^{A}_n(\Sigma^p_{D_mK})$ is a Vassiliev
invariant of type $2n$.
A crossing change on $K$ corresponds to adding an extra
component to the $K$-unknotting link $L$, and hence $p$ extra components to the
link $L(p,m)$. Thus the alternating sum over
$2n+1$ double points on $K$ corresponds to
an alternating sum over the links $L_{\text{alt}} \cup L(p,m)$, where
$L_{\text{alt}}$ has $(2n+1)p$ components
and where we alternate by
including or omitting each group of $p$ components. We then consider the Kontsevich
integral of $L_{\text{alt}} \cup L(p,m)$, expressed in terms of disjoint union
of uni-trivalent graphs, the legs of which we glue according to the
definition of the Aarhus integral. Since the linking numbers between
the components of $L_{\text{alt}}$ and $L_{\text{alt}} \cup L(p,m)$ are zero,
a standard counting
argument (compare with \cite{L}, \cite[part II, Section 4, proof of
Theorem 2]{BGRT}
and with
\cite[proof of Theorem 3]{GH}) shows that after alternating $2n+1$ terms,
the degree $n$ part of $Z^A_n(L_{\text{alt}} \cup L(p,m))$ vanishes.
Corollary \ref{cor.n=1} follows immediately from Theorem \ref{thm.2},
using the fact that the degree $1$ part of the LMO invariant equals
the Casson-Walker-Lescop invariant, \cite{LMMO,LMO}.
\section{Introduction}
\noindent
The Casimir effect \cite{CAS} is one of the fundamental effects
of Quantum Field Theory. It tests the importance of the zero point
energy. One considers two infinitely extended
conducting parallel plates at the positions $x_3=0$ and $x_3= a$.
These conducting plates change the vacuum energy of Quantum
Electrodynamics in such a way that a measurable attractive
force between both plates can be observed
\cite{EXP}. This situation does not change essentially if
a nonvanishing temperature \cite{MF} is taken into account.
The thermodynamics
of the Casimir effect \cite{BML} \cite{GREIN}
and related problems
\cite{BARTO}
is well investigated.\\
Here we shall treat the different regions separately.
We assume a temperature $T$ for the space between
the plates and a temperature $ T' $ for the space outside the plates.
Thereby we consider the right plate at $ x_3=a $ as movable, so
that different thermodynamic processes, such as isothermal or
isentropic motions, can be studied.
At first we investigate the thermodynamics of the space between
the two plates by setting $T'=0$. This can be viewed as the black
body radiation
(BBR) for a special geometry. The surprising effect is that for
vanishing distance ($a\rightarrow 0$) in isentropic processes
the temperature approaches a finite value, which is completely
determined by the fixed entropy. This is in contrast to the
expected behaviour of the standard BBR, if the
known expression derived for a large volume is extrapolated
to a small volume.
For large values of $a$ the BBR takes the standard form.
As a next topic we consider the Casimir pressure assuming
that the two physical regions, i.e. the spaces between and
outside the two plates, possess different temperatures.
Depending on the choices of $T$ and $T'$
a different
physical behaviour is possible.
For $T'<T$ the external pressure is reduced in comparison with the
standard case $T'=T$. Therefore we expect the existence of
an equilibrium point, where the pure Casimir
attraction ($T=0$ effect) and the
differences of the radiation pressures compensate each other.
This point is unstable, so that for isothermal processes
the movable plate moves either to
$a\rightarrow 0$ or to $a \rightarrow \infty$. However, an
isentropic motion reduces the internal radiation pressure
for growing distances, so that in this case
there is an additional stable equilibrium point.
\section{Thermodynamic Functions}
The thermodynamic functions are already determined by different
methods \cite{MF} \cite{BML}. We recalculate them by
statistical mechanics including the zero-point energy and cast
them in a simpler form which can be studied in detail \cite{MR}.
For technical reasons the
system is embedded in a large cube (side L). As space between the
plates
we consider the volume $L^2a$, the region outside is given by
$L^2(L-a)$. All extensive thermodynamic functions are defined per area.
\\
Free energy $\phi = F/L^2$:
\begin{eqnarray}
\label{1}
\phi_{int} &=& [\frac{\hbar c \pi^2}{a^4}(-\frac{1}{720} + g(v))
+\frac{3\hbar c}{\pi^2} \frac{1}{\lambda^4} ]a ,\\
\label{2}
\phi_{ext} &=& [\frac{3\hbar c}{\pi^2} \frac{1}{\lambda^4}
-\frac{\hbar c \pi^6}{45}(\frac{v'}{a})^4](L-a).
\end{eqnarray}
Energy $e = E/L^2$:
\begin{eqnarray*}
e_{int} &=& [\frac{\hbar c \pi^2}{a^4}(-\frac{1}{720} + g(v)
-v \partial_v g(v))
+\frac{3\hbar c}{\pi^2} \frac{1}{\lambda^4} ]a ,\\
e_{ext} &=& [\frac{3\hbar c}{\pi^2} \frac{1}{\lambda^4}
+\frac{3 \hbar c \pi^6}{45}(\frac{v'}{a})^4](L-a).
\end{eqnarray*}
Pressure:
\begin{eqnarray}
\label{3}
p_{int} &=& [\frac{\hbar c \pi^2}{a^4}(-\frac{1}{240} + 3g(v)
-v\partial_v g(v))
-\frac{3\hbar c}{\pi^2} \frac{1}{\lambda^4} ],\\
\label{4}
p_{ext} &=& [\frac{3\hbar c}{\pi^2} \frac{1}{\lambda^4}
-\frac{\hbar c \pi^6}{45}(\frac{v'}{a})^4].
\end{eqnarray}
Entropy $\sigma = S/(k L^2)$:
\begin{eqnarray}
\label{5}
\sigma_{int} = -\frac{ \pi}{a^3} \partial_v g(v) a;\;\;\,
\sigma_{ext} = \frac{4\pi^5}{45} (\frac{v'}{a})^3 (L-a),
\end{eqnarray}
$\lambda$ regularizes ($\lambda \rightarrow 0 $)
the contributions from the zero-point
energy. The thermodynamics is governed by the function $g(v)$.
We list two equivalent expressions:
\begin{eqnarray}
\label{6}
g(v) = -v^3[\frac{1}{2}\zeta (3) + k(\frac{1}{v})] =
\frac{1}{720} -\frac{\pi^4}{45}v^4 -
\frac{v}{4\pi^2}[\frac{1}{2} \zeta(3) + k(4\pi^2 v)].
\end{eqnarray}
The function $k(x)$ is given by
\begin{eqnarray}
\label{7}
k(x) = (1- x\partial_x)\sum_{n=1}^{\infty}
\frac{1}{n^3}\frac{1}{\exp(nx) - 1}.
\end{eqnarray}
It is strongly damped for large arguments. $v$ is the dimensionless variable
$v = a T k/(\hbar \pi c)$; the variable $v'$ contains
the temperature $T'$ instead of $T$.
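The equality of the two expressions for $g(v)$ in (\ref{6}) can be checked numerically; the following sketch (ours, not from the original) truncates the rapidly converging series (\ref{7}):

```python
import math

zeta3 = 1.2020569031595943          # zeta(3)

def k(x, nmax=500):
    """k(x) = (1 - x d/dx) sum_{n>=1} 1/(n^3 (exp(nx) - 1)), eq. (7)."""
    s = d = 0.0
    for n in range(1, nmax + 1):
        if n * x > 60:               # remaining terms are below ~1e-26
            break
        e = math.exp(n * x)
        s += 1.0 / (n**3 * (e - 1.0))
        d += e / (n**2 * (e - 1.0)**2)   # minus the x-derivative of a summand
    return s + x * d

def g_left(v):                       # first form in eq. (6)
    return -v**3 * (0.5 * zeta3 + k(1.0 / v))

def g_right(v):                      # second form in eq. (6)
    return (1.0 / 720 - math.pi**4 / 45 * v**4
            - v / (4 * math.pi**2) * (0.5 * zeta3 + k(4 * math.pi**2 * v)))

print(abs(g_left(0.3) - g_right(0.3)))   # tiny: the two forms agree
```

The first form converges fastest for small $v$, the second for large $v$, which is exactly how the two regimes are treated below.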
\section{Black Body Radiation}
\noindent
As a first topic we consider the space between the two plates as
a generalization of the usual black body radiation (BBR) for a
special geometry $L \times L \times a $. Contrary to the
standard treatment we include here both the internal and
the external zero-point energy.
Thereby parameter-dependent divergent contributions compensate
each other, whereas the physically irrelevant
term $\sim L/\lambda^4$ can be omitted \cite{MR}.
If we approximate the function $g$ for large $v$
by $g \simeq {1}/{720} - (\pi^4/45) v^4
- \zeta(3)/(8\pi^2) v $, we obtain
\begin{eqnarray}
\label{8}
\phi_{as} &=& \frac{\pi^2 \hbar c}{a^3}[-\frac{\pi^4}{45}v^4
-\frac{\zeta(3) }{8\pi^2} v],\;\;\;
\sigma_{as} = \frac{\pi}{a^2} [\frac{4\pi^4}{45}v^3
+\frac{\zeta(3) }{8\pi^2} ],\\
\label{9}
p_{as}&=& \frac{\pi^2 \hbar c}{a^4} [\frac{\pi^4}{45}v^4
-\frac{\zeta(3) }{4\pi^2} v],\;\;\;\
e_{as}= \frac{\pi^2 \hbar c}{a^3} \frac{3\pi^4}{45}v^4.
\end{eqnarray}
These expressions contain the large-volume
contributions
corresponding to the standard BBR (first term)
and corrections.
In the other limit of small $v$,
we have to use $g(v) = - v^3 \zeta(3)/2 $ and get
\begin{eqnarray}
\label{10}
\phi_{o} &=& \frac{\pi^2 \hbar c}{a^3}[-\frac{1}{720}
-\frac{\zeta(3) }{2} v^3],\;\;\;
\sigma_{o} = \frac{\pi}{a^2}
\frac{3 \zeta(3) }{2} v^2, \\
\label{11}
p_{o}&=& \frac{\pi^2 \hbar c}{a^4} [-\frac{1}{240}],\;\;\;\
e_{o}= \frac{\pi^2 \hbar c}{a^3} [-\frac{1}{720}
+\zeta(3) v^3].
\end{eqnarray}
In this case the contributions of the zero point energy
dominate. It is known that nondegenerate vacuum states
do not contribute to the entropy, which indeed vanishes at
$T=0$.\\
Let us now consider isentropic processes. This means
that we fix the values of the entropy for the internal
region (\ref{5}) during the complete process.
Technically we
express this fixed value according to
the value of the variable $v$ either through the approximation
(\ref{8}) or (\ref{10}).
Large distances
and/or
high temperatures lead to large values of $v$ so we have to
use $\sigma_{as}$. Constant entropy means
\begin{eqnarray}
\label{12}
\sigma = {\rm const.} =
\sigma_{as} = \frac{4\pi^2k^3}{45(\hbar c)^3} a T^3
+\frac{\zeta(3)}{8\pi} \frac{1}{a^2 }.
\end{eqnarray}
Asymptotically this is the standard BBR relation
$S = L^2 \sigma_{as} = {\rm const.} \times V T^3 $, here valid for large $T$ and
$V$. If we now consider smaller values of $a$, then, because of
$V$. If we now consider smaller values of $a$, then, because of
eq.(\ref{5}),
also $-\partial_v g(v)$
takes smaller values. It is possible to prove \cite{MR}
the inequalities
$ g <0 $, $ \partial_v g(v) <0 $ and $ (\partial_v)^2 g(v) <0 $.
This monotonic behaviour of $\partial_v g(v)$ leads to the
conclusion that also the corresponding values of $v$ become
smaller. Consequently, we have to apply the
other representation (\ref{10}) for small $v$
and obtain
\begin{eqnarray}
\label{13}
\sigma =\sigma_{as}=\sigma_{o}
=\frac{k^2}{\hbar^2 c^2 \pi}
\frac{3 \zeta(3) }{2} T^2.
\end{eqnarray}
This means that for $ a \rightarrow 0 $ the temperature
does not tend to infinity, but approaches the finite value
\begin{eqnarray}
\label{14}
T = \left(\sigma \,\, 2 \hbar^2 c^2 \pi/(3 \zeta(3) k^2)
\right)^{1/2}.
\end{eqnarray}
This is in contrast to the expectation: if we apply the
standard expression of BBR, fixed entropy implies
$VT^3 = {\rm const.} $, so that the temperature tends to
infinity for vanishing volume.
However this standard expression for BBR,
derived for a continuous frequency spectrum, is not valid
for small distances. The reduction of the degrees of freedom,
i.e. the transition from a continuous frequency spectrum to
a discrete spectrum, is the reason for our result.
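In natural units ($\hbar=c=k=1$, so $v=aT/\pi$), both the $a$-independence of the small-$v$ entropy (\ref{10}) and the resulting limiting temperature (\ref{14}) are easy to check numerically; this sketch is ours, not from the original:

```python
import math

zeta3 = 1.2020569031595943

def sigma_small_v(a, T):
    """Small-v entropy sigma_o of eq. (10), in units hbar = c = k = 1."""
    v = a * T / math.pi
    return (math.pi / a**2) * 1.5 * zeta3 * v**2

# The a-dependence cancels: sigma_o = 3 zeta(3) T**2 / (2 pi).
s1 = sigma_small_v(0.1, 2.0)
s2 = sigma_small_v(0.001, 2.0)
print(abs(s1 - s2) < 1e-12)          # True: entropy independent of a

# Inverting sigma_o for T reproduces eq. (14):
T = math.sqrt(2 * math.pi * s1 / (3 * zeta3))
print(round(T, 10))                  # 2.0
```

Since $\sigma_o$ no longer depends on $a$, holding the entropy fixed while $a\rightarrow 0$ pins the temperature at the finite value of eq.~(\ref{14}).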
\section{Equilibrium Points of the Casimir Pressure}
\noindent
The Casimir pressure results from the contributions of
the internal and the external regions acting on the right
movable plate.
\begin{eqnarray}
\label{15}
P(a,T,T') = P_{ext}(T') + P_{int}(a,T)
= \frac{\pi^2 \hbar c}{a^4}p(v)
+\frac{\pi^2 k^4}{45 (\hbar c)^3}(T^4 - {T'}^4),
\end{eqnarray}
where
\begin{eqnarray}
p(v) = -\frac{1}{4\pi^2} v[\zeta(3) +(2 - v\partial_v)k(4\pi^2v)]
=-\frac{1}{240} +3g(v) - v\partial_{v}g(v)
-\frac{\pi^4}{45} v^4.\nonumber
\end{eqnarray}
Usually one considers the case $T=T'$, so that the Casimir
pressure is prescribed by $p(v)$ alone.
It is known that
$P(a,T,T'=T)$ is a negative but monotonically rising function
from $-\infty$ (for $ a\rightarrow 0 $) to $ \; 0\; $ (for
$a\rightarrow \infty $).
It is clear that the addition of a positive pressure
$ \frac{\pi^2 k^4}{45(\hbar c)^3}(T^4 - {T'}^4) $ for $T>T'$
stops the Casimir attraction at a finite value of $ v$.
The question is whether this equilibrium point is
stable or not. The answer
follows from the monotonically rising behaviour of the standard
Casimir pressure.
\begin{eqnarray}
\label{16}
\frac{d}{da}P(a,T,T') =\frac{d}{da}P(a,T,T'=T) >0.
\end{eqnarray}
Consequently this equilibrium point is unstable
(see also \cite{MR}). \\
Next we consider the space between the two plates not for fixed
temperature but as a thermodynamically closed system with
fixed entropy. In the
external region we assume again a fixed
temperature $T'$.
To solve this problem in principle, it is sufficient to discuss
our system for large $v$ (by large $v$ we mean such
values of $v$ for which the asymptotic approximations (\ref{8}),
(\ref{9}) are valid; this region starts at
$ v> 0.2 $ ).
Using our asymptotic
formulae (\ref{8}),(\ref{9}) we write the Casimir pressure
as
\begin{eqnarray}
\label{17}
P(a,v,T') = \frac{\pi^2 \hbar c}{a^4}[\frac{\pi^4}{45}v^4
-\frac{\zeta(3)}{4\pi^2 }v
- \frac{\pi^4}{45}{v'}^4 ],
\end{eqnarray}
with $v' = aT' k/(\hbar c \pi)$ where $v$ has to be determined
from the condition $ \sigma_{as}=\sigma = {\rm const.} $ or
\begin{eqnarray}
\label{18}
\pi v^3 = [ a^2 \sigma - \zeta(3)/(8\pi)] 45/(4\pi^4).
\end{eqnarray}
Then we may write
\begin{eqnarray}
\label{19}
P(a,v,T') = \frac{\pi^2 \hbar c}{a^4}[\frac{\sigma a^2}
{4\pi} -\frac{9\zeta(3)}{32\pi^2 }]
\{\frac{45}{4\pi^5}( \sigma a^2
- \frac{\zeta(3)}{8\pi}) \}^{1/3}
-\frac{\pi^2 \hbar c}{a^4} \frac{\pi^4}{45}{v'}^4.
\end{eqnarray}
At first we consider the case $T'=0$.
We look for the possible
equilibrium points $P(a,v,T'=0) =0$. The result is
$ v^3 = 45\zeta(3)/(4 \pi^6)$. This corresponds to $v=0.24$.
For this value of $v$ the approximation used is not very good, but
acceptable.
A complete numerical estimate \cite{MR} gives the same value.
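For reference, this value follows from one line of arithmetic (our check, not part of the paper):

```python
import math

zeta3 = sum(1.0 / n**3 for n in range(1, 100001))      # zeta(3) ~ 1.2020569
v_eq = (45 * zeta3 / (4 * math.pi**6)) ** (1.0 / 3.0)
print(round(v_eq, 3))   # 0.241
```

So the equilibrium indeed sits just inside the large-$v$ regime quoted above.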
Now we express the temperature $ T$ included in $v$ with the
help of the equation for isentropic motions
(\ref{18}) and obtain
$ a^2 = 9\zeta(3)/(8 \pi \sigma)$.
The instability of this point can be directly seen by looking
at
\begin{eqnarray}
\label{20}
\frac{d}{da}P(a,T,T'=0) &=& - \frac{4}{a} P(a,T,T'=0)
+ \frac{\pi^2 \hbar c}{a^4}
[\frac{4 \pi^4}{45}v^3
-\frac{\zeta(3)}{4\pi^2 }]
(\frac{dv}{da })_{\sigma} |_{P=0} \nonumber\\
&= & \frac{\pi^2 \hbar c}{a^4}\frac{3\zeta(3)}{4\pi^2}
(\frac{dv}{da })_{\sigma}.
\end{eqnarray}
It is intuitively clear that
$(\frac{dv}{da })_{\sigma}$ is positive; an explicit proof
is given in \cite{MR}.
So it is clear that this point is unstable
as in the isothermal case. If we consider, in
eq.(\ref{17}), the variable $v= aTk/(\hbar c \pi) $ at
fixed $T$, there is no further equilibrium point.
This result for isothermal processes is, however, not
valid for isentropic processes. In this case we obtain
according to eq.(\ref{19}) a second trivial equilibrium point
at $a \rightarrow \infty $ for vanishing external temperature
($v'=0$).
Between both zeroes we have one maximum. So we conclude:
For isentropic processes there must be
two equilibrium points; the left one is unstable, the
right one at $ a \rightarrow \infty $ corresponds to a
vanishing derivative. If we now
add a not too high external pressure with the help of an external
temperature $T'$, then this second equilibrium point
- present for isentropic processes - becomes stable.
So, in principle we may observe oscillations
at the position of the second equilibrium point.
\section*{Acknowledgments}
We would like to thank C. B. Lang and N. Pucker
for their constant support and K. Scharnhorst, G. Barton and
P. Kocevar for discussions on the present topic.
\section*{References}
\section{Introduction}
\label{sec.introduction}
The dynamics of gauge theories is a very wide area of research because
many fundamental physical theories are gauge theories. The basic
ingredients are the variational principle, which derives the dynamics
out of variations of an action functional, and the gauge principle,
which is the driving principle for determining interactions based on a
Lie group of internal symmetries. The gauge freedom exhibited by the
complete theory becomes a redundancy in the physical description. The
classical treatment of these systems was pioneered by Dirac (1950,
1964), Bergmann (1949), and Bergmann and Goldberg (1955). Dirac's
classical treatment in phase space (the cotangent bundle of
configuration space) has been shown (Gotay and Nester 1979, 1980,
Batlle \etal 1986) to be equivalent to the Lagrangian formulation in
configuration-velocity space (the tangent bundle). One ends up with a
constrained dynamics with some gauge degrees of freedom. One may
choose, as is customary in many approaches (Pons and Shepley 1995)
to introduce new constraints in the formalism to eliminate these
unwanted --- spurious --- degrees of freedom. This is the gauge
fixing procedure.
There are approaches other than gauge fixing. For instance, the
method of Faddeev and Jackiw (1993) and Jackiw (1995) is to
attempt to reduce the system to its physical degrees of
freedom by a process of directly substituting the constraints
into the canonical Lagrangian. It has been proved
(Garc\'\i{}a and Pons 1997) that, as long as ineffective constraints
--- functions that vanish in the constraint surface and whose
differentials also vanish there --- are not present, the
Faddeev-Jackiw method is equivalent to Dirac's.
A reduction procedure (Abraham and Marsden 1978, Sniatycki 1974,
Lee and Wald 1990) which seems to be particularly appealing from
a geometric point of view consists in quotienting out the kernel of
the presymplectic form in configuration-velocity space in order to get
a reduced space, the putative physical space, with a deterministic
dynamics in it, that is, without gauge degrees of freedom. One must
be careful that these techniques do not change the physics, for
example by dropping degrees of freedom, and that they are applicable
in all situations of physical interest. For example, we know of no
treatment of this technique which applies to the important case when
there are secondary constraints --- one purpose of this paper is to
provide this treatment.
In this paper we present a complete reduction method based on
quotienting out the kernel of the presymplectic form. We establish a
systematic algorithm and prove its equivalence with Dirac's method,
but only so long as ineffective constraints do not appear. Our
procedure turns out to be equivalent to Dirac's extended method, which
enlarges the Hamiltonian by including all first class constraints. It
differs from the ordinary Dirac method (supplemented by gauge fixing)
when ineffective constraints occur. Since the ordinary Dirac method
is equivalent to the Lagrangian formalism, it is to be preferred in
classical models.
We will consider Lagrangians with gauge freedom. Thus they must be
singular: The Hessian matrix of the Lagrangian, consisting of its
second partial derivatives with respect to the velocities, is
singular, or equivalently, the Legendre transformation from
configuration-velocity space to phase space is not locally invertible.
Singular also means that the pullback under this map of the canonical
form $\bomega$ from phase space to configuration-velocity space is
singular.
In order to proceed, we first compute, in section \ref{sec.kernel}, in
a local coordinate system, a basis for the kernel of the presymplectic
form. Our results will be in general local; global results could be
obtained by assuming the Lagrangian to be almost regular (Gotay and
Nester 1980). In section \ref{sec.obstructions}, we will single out
the possible problems in formulating the dynamics in the reduced space
obtained by quotienting out this kernel. In section
\ref{sec.physical} we will solve these problems and will compare our
results with the classical Dirac method. It proves helpful to work in
phase space here, and we end up with a reduced phase space complete
with a well-defined symplectic form. In section \ref{sec.example} we
illustrate our method with a simple example (which contains
ineffective constraints). We draw our conclusions in section
\ref{sec.conclusions}.
\section{The kernel of the presymplectic form}
\label{sec.kernel}
We start with a singular Lagrangian $L(q^i,\dot q^i)$
$(i=1,\cdots,N)$. The functions
\[\hat p_i(q,\dot q):=\partial L/\partial\dot q^i
\]
are used to define the Hessian $W_{ij}=\partial\hat p_i/\partial\dot
q^j$, a singular matrix that we assume has a constant rank $N-P$. The
Legendre map ${\cal F}\!L$ from configuration-velocity space (the
tangent bundle) $TQ$ to phase space $T^*\!Q$, defined by $p_i=\hat
p_i(q,\dot q)$, defines a constraint surface of dimension $2N-P$.
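As a concrete illustration (a SymPy sketch of our own, borrowing the singular Lagrangian that will serve as the example of section \ref{sec.example}), the momenta, the Hessian, and its null space can be computed directly:

```python
import sympy as sp

# Singular Lagrangian L = xd**2/2 + yd**2/(2*z), with z != 0
x, y, z, xd, yd, zd = sp.symbols('x y z xd yd zd')
qdot = [xd, yd, zd]
L = xd**2/2 + yd**2/(2*z)

# Momenta p_i = dL/d(qdot^i) and Hessian W_ij = d p_i / d qdot^j
phat = [sp.diff(L, v) for v in qdot]
W = sp.Matrix(3, 3, lambda i, j: sp.diff(phat[i], qdot[j]))

print(W)              # diag(1, 1/z, 0): singular
print(W.rank())       # 2, i.e. N - P with N = 3, P = 1
print(W.nullspace())  # spanned by (0, 0, 1)
```

The single null vector of $W$ announces one primary constraint ($p_z=0$ in this example), in line with the counting above.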
The initial formulation of the dynamics in $TQ$ uses the Lagrangian
energy
\[ E_{\rm L} := \hat p_i\dot q^i-L
\]
and {\bf{}X}, the dynamical vector field on $TQ$,
\begin{equation}
i_\mathbf{X} \bomega_{\rm L} = \mathbf{d}(E_{\rm L}) \ ,
\label{x}
\end{equation}
where
\[ \bomega_{\rm L} := \mathbf{d}q^s \wedge \mathbf{d}\hat p_{s}
\]
is the pullback under the Legendre map of the canonical form
$\bomega=\mathbf{d}q^{s}\wedge\mathbf{d}p_{s}$ in phase space.
$\bomega_{\rm L}$ is a degenerate, closed two-form, which is called
the presymplectic form on $TQ$. In fact there is an infinite number
of solutions for equation \eref{x} if the theory has gauge freedom,
but they do not necessarily exist everywhere (if there are Lagrangian
constraints). In addition, {\bf{}X} must obey the second order
condition:
\[ \mathbf{X} q^{i} = \dot q^{i}\ \Longleftrightarrow \mathbf{X}
= \dot q^{s}{\partial\over\partial q^{s}} + A^{s}(q,\dot q)
{\partial\over\partial \dot q^{s}}\ ,
\]
where $A^{s}$ is partially determined by equation \eref{x}.
At first sight, the kernel of $\bomega_{\rm L}$ describes, in
principle, the arbitrariness in the solutions {\bf{}X} of equation
\eref{x}. Therefore it is tempting to think that in order to
construct a physical phase space, we must just quotient out this
kernel. The complete implementation of this procedure, which we
present in this paper, is, first, far from obvious and, second, as we
will see, fraught with danger.
Let us first determine a basis for
\[ {\cal K} := {\rm Ker}(\bomega_{\rm L})
\]
in local coordinates. We look for vectors {\bf{}Y} satisfying
\begin{equation}
i_\mathbf{Y} \bomega_{\rm L} = 0 \ .
\label{kern}
\end{equation}
With the notation
\[ \mathbf{Y} = \epsilon^k {\partial\over\partial q^k} +
\beta^k {\partial\over\partial\dot q^k} \ ,
\]
equation \eref{kern} implies
\numparts\label{ab}
\begin{eqnarray}
\epsilon^i W_{ij} & = & 0\ ,
\label{a} \\
\epsilon^i A_{ij} - \beta^i W_{ij} & = & 0\ ,
\label{b}
\end{eqnarray}
\endnumparts
where
\[ A_{ij} := {\partial\hat p_i\over\partial q^j}
- {\partial \hat p_j\over\partial q^i}\ .
\]
Since {\bf W} is singular (this is what causes the degeneracy of
$\bomega_{\rm L}$), it possesses null vectors, and to determine them
it is very advantageous to use information from phase space. It is
convenient to use a basis for these null vectors, $\gamma^i_\mu$,
$(\mu=1,\dots,P)$, which is provided from the knowledge of the $P$
primary Hamiltonian constraints of the theory, $\phi^{(1)}_\mu$.
Actually, one can take (Batlle \etal 1986),
\begin{equation}
\gamma^i_\mu\
= {\cal F}\!L^*
\left({\partial\phi^{(1)}_\mu\over\partial p_i}\right)
= {\partial\phi^{(1)}_\mu\over\partial p_i}(q,\hat p) \ ,
\label{gamma}
\end{equation}
where ${\cal F}\!L^*$ stands for the pullback of the Legendre map
${\cal F}\!L: T Q \longrightarrow T^*\!Q$. According to equation
\eref{a}, $\epsilon^i$ will be a combination of these null vectors:
$\epsilon^i = \lambda^\mu \gamma^i_\mu$. Notice that we presume that
these primary constraints are chosen to be effective.
To have a solution for $\beta^i$ we need, after contraction of
equation \eref{b} with the null vectors $\gamma^j_\nu$,
\[ 0 = \lambda^\mu \gamma^i_\mu A_{ij} \gamma^j_\nu
= \lambda^\mu {\cal F}\!L^*
\left({\partial\phi^{(1)}_\mu\over\partial p_i}\right)
\left({\partial\hat p_i\over\partial q^j}
- {\partial\hat p_j\over\partial q^i}\right)
{\cal F}\!L^*
\left({\partial\phi^{(1)}_\nu\over\partial p_j}\right)\ ,
\]
which is to be understood as an equation for the $\lambda^\mu$s.
We then use the identity
\begin{equation}
{\cal F}\!L^*
\left({\partial\phi^{(1)}_\mu\over\partial p_j}\right)
{\partial{\hat p}_j\over \partial q^i}
+ {\cal F}\!L^*
\left({\partial\phi^{(1)}_\mu\over\partial q^i}\right)
= 0\ ,
\label{idenx}
\end{equation}
which stems from the fact that $\phi^{(1)}_\mu (q,\hat p)$ vanishes
identically; we get
\begin{eqnarray}
0 & = &
\lambda^\mu {\cal F}\!L^*
\left({\partial\phi^{(1)}_\mu\over\partial p_i}
{\partial\phi^{(1)}_\nu\over\partial q^i}
- {\partial\phi^{(1)}_\mu\over\partial q^j}
{\partial\phi^{(1)}_\nu\over\partial p_j}\right)
\nonumber \\
& = & \lambda^\mu {\cal F}\!L^*
\left(\{\phi^{(1)}_\nu,\phi^{(1)}_\mu\}\right) \ .
\label{fc}
\end{eqnarray}
Condition \eref{fc} means that the combination
$\lambda^\mu\phi^{(1)}_\mu$ must be first class. Let us split the
primary constraints $\phi^{(1)}_\mu$ into first class
$\phi^{(1)}_{\mu_1}$ and second class $\phi^{(1)}_{\mu'_1}$ at the
primary level, and we presume that second class constraints are second
class everywhere on the constraint surface (more constraints may
become second class if we include secondary, tertiary, etc,
constraints). They satisfy
\begin{equation}
\{ \phi^{(1)}_{\mu_1},\,\phi^{(1)}_{\mu} \} = pc\ , \quad
\det |\{\phi^{(1)}_{\mu'_1},\phi^{(1)}_{\nu'_1} \}| \not= 0\ ,
\label{fc-sc}
\end{equation}
where $pc$ stands for a generic linear combination of primary
constraints. Equation \eref{fc} then simply enforces
\[ \lambda^{\mu'_1} = 0 \ .
\]
Consequently a basis for the $\epsilon^i$ will be spanned by
$\gamma_{\mu_1}$, so that
\[ \epsilon^i=\lambda^{\mu_1}\gamma^i_{\mu_1}
\]
for $\lambda^{\mu_1}$ arbitrary. Once $\epsilon^i$ is given,
solutions for $\beta^i$ will then be of the form
\[ \beta^i
= \lambda^{\mu_1}\beta^i_{\mu_1} + \eta^\mu \gamma^i_\mu\ ,
\]
where the $\eta^\mu$ are arbitrary functions on $TQ$. We will now
determine the $\beta^j_{\mu_1}$.
To compute $\beta^j_{\mu_1}$ it is again very convenient to use
Hamiltonian tools. Consider any canonical Hamiltonian $H_{\rm c}$
(which is not unique), that is, one satisfying $E_{\rm L} = {\cal
F}\!L^*(H_{\rm c})$. Since we know from the classical Dirac analysis
that the first class primary constraints $\phi^{(1)}_{\mu_1}$ may
produce secondary constraints,
\[ \phi^{(2)}_{\mu_1}= \{\phi^{(1)}_{\mu_1},H_{\rm c} \}\ ,
\]
we compute (having in mind equation \eref{b})
\begin{eqnarray}
\gamma^i_{\mu_1} A_{ij} +
{\partial\phi^{(2)}_{\mu_1}\over\partial p_i}(q,\hat p) W_{ij}
&=& \gamma^i_{\mu_1} A_{ij}
+ {\partial\phi^{(2)}_{\mu_1}\over\partial p_i}(q,\hat p)
{\partial\hat p_i\over\partial \dot q^j}
\nonumber\\
&=& \gamma^i_{\mu_1} A_{ij}
+ {\partial{\cal F}\!L^*(\phi^{(2)}_{\mu_1})
\over\partial\dot q^j}
\nonumber\\
&=& \gamma^i_{\mu_1} A_{ij}
+ {\partial (K\phi^{(1)}_{\mu_1})\over\partial\dot q^j}\ ,
\label{nozero}
\end{eqnarray}
where we have used the operator $K$
defined (Batlle \etal 1986, Gr\`acia and Pons 1989) by
\[ K f := \dot q^i {\cal F}\!L^*
\left({\partial f\over\partial q^i}\right)
+ {\partial L \over \partial q^i}
{\cal F}\!L^*
\left({\partial f\over\partial p_i}\right) \ .
\]
This operator satisfies (Batlle \etal 1986, Pons 1988)
\begin{equation}
K f = {\cal F}\!L^*
\left(\{f,H_{\rm c} \}\right)
+ v^\mu(q,\dot q) {\cal F}\!L^*
\left(\{f,\phi^{(1)}_\mu \}\right) \ ,
\label{prop}
\end{equation}
where the functions $v^\mu$ are defined through the identities
\begin{equation}
\dot q^i = {\cal F}\!L^* \left(\{q^i,H_{\rm c}\}\right)
+ v^\mu(q, \dot q)
{\cal F}\!L^*\left(\{q^i, \phi_\mu^{(1)} \}\right)\ .
\label{propv}
\end{equation}
Property \eref{prop} implies, for our first class constraints,
\[ K \phi^{(1)}_{\mu_1}
= {\cal F}\!L^*\left(\phi^{(2)}_{\mu_1}\right) \ ,
\]
which has been used in equation \eref{nozero}. Let us continue with
equation \eref{nozero}:
\begin{eqnarray}
\gamma^i_{\mu_1} A_{ij} +
{\partial (K\phi^{(1)}_{\mu_1})\over\partial\dot q^j} & = &
- {\cal F}\!L^*
\left({\partial\phi^{(1)}_{\mu_1}\over\partial q^j}\right)
- {\cal F}\!L^*
\left({\partial\phi^{(1)}_{\mu_1}\over\partial p_i}\right)
{\partial\hat p_j\over\partial q^i}
\nonumber \\
& & + {\partial\over\partial\dot q^j}
\left(\dot q^{i}{\cal F}\!L^*
\left({\partial \phi^{(1)}_{\mu_1}\over
\partial q^{i}}\right)
+{\partial L \over \partial q^{i}}{\cal F}\!L^*
\left({\partial \phi^{(1)}_{\mu_1}\over
\partial p_{i}}\right)\right)
\nonumber\\
&= & W_{ij} K {\partial\phi^{(1)}_{\mu_{1}}\over
\partial p_{i}} \ ,
\label{ppprop}
\end{eqnarray}
where we have omitted some obvious steps to produce the final
result. We can read off from this computation the solutions for
equation \eref{b}:
\begin{equation}
\beta^j_{\mu_1} = K {\partial\phi^{(1)}_{\mu_1}\over\partial p_j}
- {\cal F}\!L^*\left({\partial\phi^{(2)}_{\mu_1}
\over\partial p_j}\right) \ .
\end{equation}
Therefore, a basis for $\cal K$ is provided by:
\numparts\label{nul}
\begin{equation}
\bGamma_\mu := \gamma^j_\mu
{\partial\over\partial{\dot q}^j}
\label{nul1}
\end{equation}
and
\begin{equation}
\bDelta_{\mu_1} := \gamma^j_{\mu_1}
{\partial\over\partial q^j}
+\beta^j_{\mu_1} {\partial\over\partial{\dot q}^j} \ .
\label{nul2}
\end{equation}
\endnumparts Vectors $\bGamma_\mu$ in equation \eref{nul1} form a
basis for ${\rm Ker}(T{\cal F}\!L)$, where $T{\cal F}\!L$ is the
tangent map of ${\cal F}\!L$ (also often denoted by ${\cal
F}\!L_{*}$). They also span the vertical subspace of $\cal K$: ${\rm
Ker}(T{\cal F}\!L) = {\rm Ver}({\cal K})$. This is a well known
result (Cari\~{n}ena \etal 1988), but as far as we know equations
(\ref{nul1}, \ref{nul2}) provide the first explicit local expression for
$\cal K$ itself.
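As a check of equations \eref{a} and \eref{b}, consider again the singular Lagrangian of section \ref{sec.example}. The following SymPy sketch (our own illustration, not part of the formalism) verifies that $\epsilon=\gamma=(0,0,1)$ together with $\beta=(0,\dot y/z,0)$, the value obtained from the formula for $\beta^j_{\mu_1}$, solve both kernel conditions, so that $\bDelta=\partial/\partial z+(\dot y/z)\,\partial/\partial\dot y$ lies in $\cal K$:

```python
import sympy as sp

x, y, z, xd, yd, zd = sp.symbols('x y z xd yd zd')
q, qdot = [x, y, z], [xd, yd, zd]
phat = [xd, yd/z, sp.Integer(0)]   # momenta of L = xd**2/2 + yd**2/(2*z)

W = sp.Matrix(3, 3, lambda i, j: sp.diff(phat[i], qdot[j]))
A = sp.Matrix(3, 3, lambda i, j: sp.diff(phat[i], q[j]) - sp.diff(phat[j], q[i]))

# Primary constraint p_z = 0 is first class: gamma^i = (0, 0, 1);
# the beta formula gives beta = (0, yd/z, 0)
gamma = sp.Matrix([[0, 0, 1]])
beta = sp.Matrix([[0, yd/z, 0]])

# First kernel condition: epsilon^i W_ij = 0
assert gamma * W == sp.zeros(1, 3)
# Second kernel condition: epsilon^i A_ij - beta^i W_ij = 0
assert sp.simplify(gamma * A - beta * W) == sp.zeros(1, 3)
```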
All other results for $\cal K$ (Cari\~{n}ena 1990), obtained on
geometrical grounds, become obvious once a basis for this kernel is
displayed, as in equations (\ref{nul1}, \ref{nul2}). For
instance, it is clear that $\dim{\cal K}\leq2\dim{\rm Ver}({\cal
K})$. Also, defining the vertical endomorphism
\[ \mathbf{S} = {\partial\over\partial\dot q^i}
\otimes \mathbf{d}q^i\ ,
\]
we have $\mathbf{S}({\cal K}) \subset {\rm Ver}({\cal K})$. The case
when
\[ \mathbf{S}({\cal K}) = {\rm Ver}({\cal K})\ ,
\]
corresponds, in the Hamiltonian picture, to the case when all primary
constraints are first class (indices $\mu$ = indices ${\mu_1}$).
These are the so-called Type II Lagrangians (Cantrjin \etal 1986).
$\mathbf{S}({\cal K}) = \{0\} $ corresponds to the case when all
primary constraints are second class (indices $\mu$ = indices
${\mu'_1}$).
\Eref{nul1} implies, for any function $f$ on $T^{*}\!Q$,
\begin{equation}
\bGamma_{\mu}\left({\cal F}\!L^{*}(f)\right) = 0\ .
\end{equation}
The corresponding equation for $\bDelta_{\mu_{1}}$ is:
\begin{equation}
\bDelta_{\mu_{1}}\left({\cal F}\!L^*(f)\right)
= {\cal F}\!L^*\left(\{f,\phi^{(1)}_{\mu_1}\}\right) \ .
\label{delprop}
\end{equation}
Since we will make use of this property below, we now prove this
result. The action of $\bDelta_{\mu_{1}}$ is
\begin{eqnarray*}
\bDelta_{\mu_{1}}\left({\cal F}\!L^*(f)\right)
& = &\gamma^j_{\mu_1} \left({\cal F}\!L^*
\left({\partial f\over\partial q^{j}}\right)
+ {\partial\hat p_{i}\over\partial q^{j}}
{\cal F}\!L^*
\left({\partial f\over\partial p_{i}}\right)\right) \\
& & \quad +\beta^j_{\mu_1} W_{ji}{\cal F}\!L^*
\left({\partial f\over\partial p_{i}}\right)\ .
\end{eqnarray*}
We use equations \eref{b}, \eref{gamma}, and \eref{idenx} to get
\begin{eqnarray*}
\bDelta_{\mu_{1}}\left({\cal F}\!L^*(f)\right)
& = & {\cal F}\!L^*
\left({\partial\phi^{(1)}_{\mu_{1}}\over\partial p_{j}}
{\partial f\over\partial q^{j}}\right)
- {\cal F}\!L^*
\left({\partial\phi^{(1)}_{\mu_{1}}\over\partial q^{j}}
{\partial f\over\partial p_{j}}\right) \\
& = & {\cal F}\!L^*\left(\{f,\phi^{(1)}_{\mu_{1}}\}\right) \ .
\end{eqnarray*}
The commutation relations (Lie Brackets) for the vectors in equations
(\ref{nul1}, \ref{nul2}) are readily obtained, and we present these
new results here for the sake of completeness. We introduce the
notation
\begin{eqnarray*}
\{ \phi_{\mu_1},\phi_{\mu} \} & = &
A_{{\mu_1}\mu}^\nu\phi_{\nu} \ , \\
\{ \phi_{\mu_1},\phi_{\nu_1} \} & = &
B_{{\mu_1}{\nu_1}}^{\rho_1}\phi_{\rho_1}
+{1\over2}B_{{\mu_1}{\nu_1}}^{\rho \sigma}
\phi_{\rho}\phi_{\sigma}
\end{eqnarray*}
(commutation of first class constraints is also first class). We
arrive at
\numparts\label{comm22}
\begin{eqnarray}
~[\bGamma_\mu,\bGamma_\nu] & = & 0 \ ,
\label{comm22a} \\
~[\bGamma_\mu,\bDelta_{\mu_1}] & = &
{\cal F}\!L^*
\left(A_{{\mu_1}\mu}^\nu\right) \bGamma_{\nu} \ ,
\label{comm22b} \\
~[\bDelta_{\mu_1},\bDelta_{\nu_1}] & = &
{\cal F}\!L^* \left(B_{{\nu_1}{\mu_1}}^{\rho_1}\right)
\bDelta_{\rho_1}
+ v^{\delta'_1}{\cal F}\!L^*
\left(B_{{\nu_1}{\mu_1}}^{\rho{\sigma'_1}}
\{\phi_{\sigma'_1},\phi_{\delta'_1} \}\right)
\bGamma_\rho \ ,
\label{comm22c}
\end{eqnarray}
\endnumparts where the $v^{\delta'_1}$ are defined in equation
\eref{propv}. Observe that the number of vectors in equations
(\ref{nul1}, \ref{nul2}) is even because $|\mu'_1| = |\mu| - |\mu_1|$
is the number of second class primary constraints (at the primary
level), which is even.
Because the algebra of $\cal K$ is closed, the distribution $\cal K$
is integrable, and its orbits define an equivalence relation on
$TQ$. We can form the quotient space
$TQ/{\cal K}$ and the projection
\[ \pi : TQ \longrightarrow TQ/{\cal K}\ .
\]
$TQ/{\cal K}$ is endowed with a symplectic form obtained by
quotienting out the null vectors of $\bomega_{\rm L}$ (that is,
$\bomega_{\rm L}$ is projectable to $TQ/{\cal K}$). The space
$TQ/{\cal K}$ is not necessarily the final physical space, however,
because we have not yet formulated the dynamics of the system: We
now turn to the question of the projectability of the Lagrangian
energy.
\section{Obstructions to the projectability of the Lagrangian energy}
\label{sec.obstructions}
In order to project the dynamical equation \eref{x} to $TQ/{\cal K}$,
we need $E_{\rm L}$ to be projectable under $\pi$. However, in order
for $E_{\rm L}$ to be projectable we must check whether it is constant
on the orbits generated by $\cal K$ as defined by the vector fields of
equations (\ref{nul1}, \ref{nul2}). Indeed $\bGamma_\mu (E_{\rm
L})=0$, but from equation \eref{delprop},
\[ \bDelta_{\mu_1} (E_{\rm L})
= - {\cal F}\!L^*\left(\phi^{(2)}_{\mu_1}\right) \ ,
\]
where
\[ \phi^{(2)}_{\mu_1}:=\{\phi^{(1)}_{\mu_1},H_{\rm c}\}\ .
\]
If ${\cal F}\!L^*(\phi^{(2)}_{\mu_1})\neq 0$ for some $\mu_1$, then
$\phi^{(2)}_{\mu_1}$ is a secondary Hamiltonian constraint. As a side
remark, note that in this case ${\cal F}\!L^*(\phi^{(2)}_{\mu_1})$ is
a primary Lagrangian constraint. In fact it can be written (Batlle
\etal 1986) as
\[ {\cal F}\!L^*\left(\phi^{(2)}_{\mu_1}\right)
= [L]_i \gamma^i_{\mu_1} \ ,
\]
where $[L]_i$ is the Euler-Lagrange functional derivative of $L$.
We see that there is an obstruction to the projectability of
$E_{\rm L}$ to $TQ/{\cal K}$ as long as there exist secondary
Hamiltonian constraints or equivalently if there exist
Lagrangian constraints.
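For the example of section \ref{sec.example} this identity can be verified directly (a SymPy sketch; there $H_{\rm c}={1\over2}p_x^2+{1\over2}zp_y^2$, the first class primary constraint is $p_z$, and $\gamma^i=(0,0,1)$):

```python
import sympy as sp

yd, z = sp.symbols('yd z')

# For L = xd**2/2 + yd**2/(2*z): dL/d(zd) = 0, so the Euler-Lagrange
# derivative contracted with gamma = (0, 0, 1) is just [L]_z = dL/dz
EL_z = sp.diff(yd**2/(2*z), z)        # -yd**2/(2*z**2)

# Pullback of the secondary constraint phi2 = {p_z, H_c} = -p_y**2/2
# under the Legendre map, where p_y = yd/z
phi2_pullback = -(yd/z)**2/2

assert sp.simplify(EL_z - phi2_pullback) == 0
```

Both sides give $-\dot y^2/(2z^2)$: the secondary Hamiltonian constraint pulls back to the primary Lagrangian constraint, as claimed.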
One way to remove this problem (Ibort and Mar\'{\i}n-Solano 1992,
Ibort \etal 1993) is to use the coisotropic embedding theorems
(Gotay 1982, Gotay and Sniatycki 1981) and look for an extension of
the tangent space possessing a regular Lagrangian that extends the
singular one and leads to a consistent theory once the extra degrees
of freedom are removed. This method is equivalent to Dirac's, but
only if there are no secondary Hamiltonian constraints. However, if
there are, which is precisely our case, the dynamics becomes modified
and thus changes the original variational principle. Instead of using
this technique we will try to preserve the dynamics.
\section{Physical space}
\label{sec.physical}
In the cases where secondary Hamiltonian constraints do exist (for
instance, Yang-Mills and Einstein-Hilbert theories), we must find an
alternative reduction of $TQ$ in order to obtain the projectability of
$E_{\rm L}$.
The initial idea was to quotient out the orbits defined by equations
(\ref{nul1}, \ref{nul2}). Since \hbox{$\bGamma_\mu (E_{\rm L})=0$} we
can at least quotient out the orbits defined by equation \eref{nul1}.
But this quotient space, ${TQ/{\rm Ker}(T{\cal F}\!L)}$, is already
familiar to us: It is isomorphic to the surface $M_{1}$ defined by the
primary constraints in $T^*\!Q$. In fact, if we define $\pi_1$ as the
projection
\[ \pi_1: TQ \longrightarrow {TQ/{\rm Ker}(T{\cal F}\!L)}\ ,
\]
we have the decomposition of the Legendre map
${\cal F}L = i_1 \circ \pi_1$, where
\[ i_1 : {TQ\over{\rm Ker}(T{\cal F}\!L)} \longrightarrow T^*\!Q\ ,
\]
with
\[ i_{1}\left({TQ\over{\rm Ker}(T{\cal F}\!L)}\right)=M_{1}\ .
\]
Now we can take advantage of working in $M_1\subset T^*\!Q$. Let us
project our original structures on $TQ$ into $M_1$. Consider the
vector fields $\bDelta_{\mu_1}$. \Eref{delprop} tells
us that the vector fields $\bDelta_{\mu_1}$ are projectable to
$M_1$ and that their projections are just $\{-,\phi^{(1)}_{\mu_1}\}$.
In fact these vector fields $\{-,\phi^{(1)}_{\mu_1}\}$ are vector
fields of $T^*\!Q$, but they are tangent to $M_1$ because
$\phi^{(1)}_{\mu_1}$ are first class (among the primary constraints
defining $M_1$). Incidentally, note that the vector fields
$\{-,\phi^{(1)}_{\mu'_1}\}$ associated with the second class primary
constraints in $T^*\!Q$ are not tangent to $M_1$.
Formulation in $M_1$ of the dynamics corresponding to equation
\eref{x} uses the presymplectic form $\bomega_1$ defined by
$\bomega_1 = i^*_1 \bomega$, where $\bomega$ is the
canonical form in phase space, and the Hamiltonian
$H_1$ defined by $H_1 = i^*_1 H_{\rm c}$, with
$H_{\rm c}$ such that ${\cal F}\!L^*(H_{\rm c})= E_{\rm L}$. The
dynamic equation in $M_1$ will be:
\begin{equation}
i_{\mathbf{X}_1} \bomega_1 = \mathbf{d}H_1\ .
\label{x1}
\end{equation}
The null vectors for $\bomega_1$ are $\{-,\phi^{(1)}_{\mu_1}\}$
(more specifically, their restriction to $M_1$). (This result
is general: The kernel of the pullback of the symplectic
form to a constraint surface in $T^{*}\!Q$ is locally
spanned by the vectors associated, through the Poisson
Bracket, with the first class constraints among the constraints which
define the surface.) To project the dynamics of equation
\eref{x1} to the quotient of $M_1$ by the orbits defined by
$\{-,\phi^{(1)}_{\mu_1}\}$:
\begin{equation}
{\cal P}_{1} := {M_{1} \over (\{-,\phi^{(1)}_{\mu_{1}}\})} \ ,
\label{P.1}
\end{equation}
we need the projectability of $H_1$ to this quotient manifold. To
check this requirement it is better to work in $T^*\!Q$. Then
projectability of $H_1$ to ${\cal P}_{1}$ is equivalent to requiring
that $\{\phi^{(1)}_{\mu_1},H_{\rm c}\}|{}_{{}_{M_1}}=0$.
Here lies the obstruction we met in the previous section, for it is
possible that $\{\phi^{(1)}_{\mu_1},H_{\rm c}\}|{}_{{}_{M_1}}\neq 0$
for some constraints $\phi^{(1)}_{\mu_1}$. Let us assume that this is
the case. As we did before, we define
\[ \phi^{(2)}_{\mu_1}:=\{\phi^{(1)}_{\mu_1},H_{\rm c}\} \ .
\]
These constraints may not be independent; some of them may vanish
on $M_1$, and some previously first class constraints may become
second class. Those that do not vanish are secondary constraints and
allow us to define the new surface $M_2 \subset M_1$ (we define the
map $i_2: M_2 \longrightarrow M_1$) by $\phi^{(2)}_{\mu_1} = 0$.
We can now form the projection of $H_2:= i_2^*H_1$ to
$M_2/(\{-,\phi^{(1)}_{\mu_1}\})$, but the projection of $\bomega_2:=
i_2^*\bomega_1$ can still be degenerate in this quotient space, since
$\bomega_2$ may have acquired new null vectors (and may have lost some
of the old ones). In fact, once all constraints are expressed in
effective form, ${\rm Ker}(\bomega_2)$ is generated under the Poisson
Bracket associated with $\bomega$ by the subset of effective
constraints that are first class with respect to the whole set of
constraints defining $M_2$. If there is a piece in this kernel that
was not present in ${\rm Ker}(\bomega_1)$, then new conditions for the
projectability of $H_2$ will appear.
The dynamic equation in $M_2$ is
\begin{equation}
i_{\mathbf{X}_2} \bomega_2 = \mathbf{d}H_2 \ .
\label{x2}
\end{equation}
It is still convenient to work with structures defined in $T^*\!Q$.
Suppose that $\phi^{(2)}_{\mu_2}$ is any secondary, first class,
effective constraint in $M_2$; therefore $\{-,\phi^{(2)}_{\mu_2}\}
\in {\rm Ker}(\bomega_2)$ but $\{-,\phi^{(2)}_{\mu_2}\}
\notin {\rm Ker}(\bomega_1)$. The new projectability condition for
$H_2$ induced by $\phi^{(2)}_{\mu_2}$ is
\[ \{\phi^{(2)}_{\mu_2} ,H_{\rm c}\}|{}_{{}_{M_2}} = 0\ .
\]
This means that we might find new constraints if this
condition is not satisfied. A new surface $M_3$
will appear, and a new kernel for a new $\bomega_3$ should be
quotiented out, and so on. We will not go further because we are just
reproducing Dirac's algorithm in phase space (Dirac 1950, 1964,
Batlle \etal 1986, Gotay \etal 1978). We do have a shift of
language, however: What in Dirac's standard algorithm is regarded as a
condition for the Hamiltonian vector field to be tangent to the
constraint surface is here regarded as a projectability condition for
the Hamiltonian to a quotient space.
To summarize: The constraint surface $M_{1}$ is defined by the
primary constraints $\phi^{(1)}_{\mu}$, a subset of which are the
first class constraints $\phi^{(1)}_{\mu_{1}}$. These first class
constraints are used in the formation of the quotient space
\[ {\cal P}_{1} = {M_{1}\over\{-,\phi^{(1)}_{\mu_{1}}\}} \ .
\]
The projectability condition for $H_{1}$ (the pullback of $H_{\rm c}$
to $M_1$) to ${\cal P}_{1}$ may be expressed as the condition
$\{H_{\rm c},\phi^{(1)}_{\mu_{1}}\}|_{M_{1}}=0.$ If this condition
holds, we have found the final physical space. If it doesn't, there
are new, secondary constraints $\phi^{(2)}_{\mu_{1}}$, and these
constraints along with the initial set of primary constraints
$\phi^{(1)}_{\mu}$ are used to define a constraint surface $M_{2}$.
Among the set of constraints used to define $M_{2}$ are first class
constraints, including a subset of the first class constraints
associated with $M_{1}$, which we denote $\phi^{(1)}_{\mu_{2}}$, and a
subset of the secondary constraints, which we denote
$\phi^{(2)}_{\mu_{2}}$. These first class constraints are used in the
formulation of the quotient space
\[ {\cal P}_{2} := {M_{2}\over(\{-,\phi^{(1)}_{\mu_{2}}\},
\{-,\phi^{(2)}_{\mu_{2}}\})} \ .
\]
Again we must require projectability of the Hamiltonian; eventually,
the final phase space is
\begin{equation}
{\cal P}_f := {M_f \over
(\{-,\phi^{(1)}_{\mu_{f}}\},
\{-,\phi^{(2)}_{\mu_{f}}\},
\dots,
\{-,\phi^{(k)}_{\mu_{f}}\}) } \ ,
\label{phys.space}
\end{equation}
where $\phi^{(n)}_{\mu_{f}}$ are the final first class $n$-ary
constraints, all of which are taken in effective form. ${\cal P}_f$
is endowed with a symplectic form which is the projection of the form
$\bomega_f$ in $M_f$, which is the final constraint surface.
The dimension of ${\cal P}_f$ is $2N-M-P_f$,
where $N$ is the dimension of the initial configuration space, $M$ is
the total number of constraints, and $P_f$ is the number of final
first class constraints. Observe that we end up with the standard
counting of degrees of freedom for constrained dynamical systems:
First class constraints eliminate two degrees of freedom each, whereas
second class constraints eliminate only one each. The final result is
an even number because the number of second class constraints is even.
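A minimal sketch of this counting, applied to the figures of the example in section \ref{sec.example} ($N=3$, $M=2$ effective constraints, $P_f=2$):

```python
def reduced_phase_space_dim(N, M, P_f):
    """dim P_f = 2N - M - P_f: each second class constraint removes one
    dimension; each first class constraint removes two (it is counted
    once in M and once again in P_f)."""
    return 2*N - M - P_f

# Example section: N = 3, effective constraints p_z = 0 and p_y = 0,
# both first class, so M = 2 and P_f = 2
print(reduced_phase_space_dim(3, 2, 2))   # 2
```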
In order to use the technique we've presented, the constraints are
presumed to be effective (for example, see equation \eref{gamma}) ---
if ineffective constraints occur, they can always be made effective
for use with this technique; in that sense, the technique is actually
geometrical. One might ask whether such modification of ineffective
constraints can cause problems. It turns out that if ineffective
constraints occur, then their presence may modify the gauge fixing
procedure used in conjunction with the original Dirac method in such a
way that the counting of degrees of freedom differs from the one
presented above. In the next section we discuss a simple example that
shows the difference between Dirac's original treatment, supplemented
by gauge fixing, and the quotienting method we've outlined here, which
corresponds to Dirac's extended method.
Dirac's extended method, which is equivalent to the one we've
presented here, produces a final phase space which is always even
dimensional. Dirac's original procedure, supplemented by gauge
fixing, has the superiority of being equivalent to the Lagrangian
variational principle. Therefore, in spite of the fact that this
latter method may result in a system with an odd number of degrees of
freedom (as in the example in the following section), it is to be
preferred for classical models.
\section{Example}
\label{sec.example}
Consider the Lagrangian
\begin{equation}
L = {1\over2}{\dot x}^2 + {1\over2z}{\dot y}^2\ ,
\label{lag.example}
\end{equation}
where $z\neq 0$. The Noether gauge
transformations are
\[ \delta x=0\ ,\ \delta y= {\epsilon{\dot y}\over z}\ ,\
\delta z={\dot \epsilon}\ ,
\]
where $\epsilon$ is an arbitrary function.
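One can verify directly that this transformation is a Noether symmetry, i.e. that $\delta L$ is a total time derivative; the boundary term $F=\epsilon{\dot y}^2/(2z^2)$ in the SymPy sketch below is our own guess:

```python
import sympy as sp

t = sp.symbols('t')
y, z, eps = [sp.Function(s)(t) for s in ('y', 'z', 'epsilon')]
yd = sp.diff(y, t)

# Gauge variation: delta x = 0, delta y = eps*yd/z, delta z = eps'
dy, dz = eps*yd/z, sp.diff(eps, t)

# delta L = (dL/dydot) * d(delta y)/dt + (dL/dz) * delta z,
# with dL/dydot = yd/z and dL/dz = -yd**2/(2*z**2)
deltaL = (yd/z)*sp.diff(dy, t) + (-yd**2/(2*z**2))*dz

# Candidate boundary term F
F = eps*yd**2/(2*z**2)
assert sp.simplify(deltaL - sp.diff(F, t)) == 0
```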
First, we analyze this system from a Lagrangian point of view. The
equations of motion are
\begin{equation}
{\ddot x}=0\ ,\ {\dot y}=0\ .
\end{equation}
The $z$ variable is completely arbitrary and is pure gauge. These
equations define a system with three degrees of freedom in tangent
space, parameterized by $x(0),{\dot x}(0),y(0)$. Notice that the
gauge transformation $\delta y$ vanishes on shell, so $y$ is a weakly
gauge invariant quantity.
Let us now analyze this system using Dirac's method. The Dirac
Hamiltonian is
\begin{equation}
H_D = {1\over2} p_x^2 +{1\over2} z p_y^2
- \lambda p_z \ ,
\end{equation}
where $\lambda$ is the Lagrange multiplier for the primary constraint
$p_z=0$. The stabilization of $p_z=0$ gives the ineffective
constraint $p_y^2=0$, and the algorithm stops here. The gauge
generator (Batlle \etal 1989, Pons \etal 1997) is
\begin{equation}
G = \dot\epsilon p_z + {\epsilon\over2} p_y^2\ ,
\end{equation}
with $\epsilon$ an arbitrary function of time.
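The stabilization computation can be checked symbolically with a hand-rolled canonical Poisson bracket (a sketch of our own, not part of the original presentation):

```python
import sympy as sp

x, y, z, px, py, pz, lam = sp.symbols('x y z p_x p_y p_z lamda')
qs, ps = [x, y, z], [px, py, pz]

def pbracket(f, g):
    """Canonical Poisson bracket {f, g} in the coordinates (q, p)."""
    return sum(sp.diff(f, q)*sp.diff(g, p) - sp.diff(f, p)*sp.diff(g, q)
               for q, p in zip(qs, ps))

H_D = px**2/2 + z*py**2/2 - lam*pz

print(pbracket(pz, H_D))      # -p_y**2/2: the ineffective constraint
print(pbracket(py**2, H_D))   # 0: the algorithm stops here
```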
The gauge fixing procedure (Pons and Shepley 1995) has in general
two steps. The first is to fix the dynamics, and the second is to
eliminate redundant initial conditions. Here, to fix the dynamics we
can introduce the general gauge-fixing condition $z-f(t)=0$ for $f$
arbitrary.
Stability of this condition under the gauge transformations sets
$\dot\epsilon = 0$. Since the coefficient of $\epsilon$ in $G$ is
ineffective, it does not change the dynamical trajectories, and so the
gauge fixing is complete. Notice that this violates the standard
lore, for we have two first class constraints, $p_z=0$ and $p_y=0$,
but only one gauge fixing condition. This totals three constraints
that reduce the
original six degrees of freedom to three: $x(0),p_x(0),y(0)$, the same
as in the Lagrangian picture.
Instead, if we apply the method of quotienting out the kernel of the
presymplectic form, we get as a final reduced phase space
\[ {\cal P}_f = {M_2 \over{(\{-,p_z\},\{-,p_y\})}} \ ,
\]
where $M_2$ is the surface in phase space defined by $p_z =0,p_y=0$.
We have $\bomega_2=\mathbf{d}x\wedge\mathbf{d}p_x$ and
$H_2={1\over2}p_x^2$. The dimension of ${\cal P}_f$ is 2.
This result, which is different from that using Dirac's method,
matches the one obtained with the extended Dirac Hamiltonian, where
all final first class constraints (in effective form) are added with
Lagrange multipliers to the canonical Hamiltonian. Dirac's conjecture
was that the original Dirac theory and the extended one were
equivalent. We conclude that when Dirac's conjecture holds, the
method of quotienting out the kernel is equivalent to Dirac's, whereas
if Dirac's conjecture fails, it is equivalent to the extended Dirac
formalism.
\section{Conclusions}
\label{sec.conclusions}
In summary, we have the following.
1) We have obtained a local basis for
${\cal K} = {\rm Ker}(\bomega_{\rm L})$ in configuration-velocity
space for any gauge theory. This is new and allows for trivial
verifications of the properties of $\cal K$ given in the literature.
To get these results it has been particularly useful to rely on
Hamiltonian methods.
2) We have obtained as the final reduced phase space the quotient of
the final Dirac's constraint surface in canonical formalism by the
integral surface generated by the final first class constraints in
effective form. We find the constraint surface ($M_{f}$ in equation
\eref{phys.space}) through a projectability requirement on the
Lagrangian energy (or equivalently, on the Hamiltonian)
rather than through imposing tangency conditions on
the Hamiltonian flows. Let us emphasize this point: We do not talk of
stabilization of constraints but rather projectability of structures
which are required to formulate the dynamics in a reduced physical
phase space.
3) We have compared our results with Dirac's procedure. An agreement
exists in all the cases when no ineffective Hamiltonian constraints
appear in the formalism. If there are ineffective constraints whose
effectivization is first class, then our results disagree with
Dirac's, and it turns out that the quotienting algorithm agrees with
the extended Dirac formalism. When there are disagreements, the
origin is in the structure of the gauge generators. Sometimes the
gauge generators contain pieces that are ineffective constraints, and
they, contrary to the usual case, do not call for any gauge fixing.
Essentially, the variables that are canonically conjugate to these
first class ineffective constraints are weakly (on shell) gauge
invariant. The quotienting reduction method, as well as Dirac's
extended formulation, eliminates these variables and yields a phase
space whose variables are strictly (on and off shell) gauge invariant.
This is the difference with Dirac's original method, supplemented with
gauge fixing, which is able to retain the weakly gauge invariant
quantities. For this reason we feel that this latter technique is
superior to the quotienting algorithm in these circumstances --- at
least for classical models.
4) We have produced a simple example that illustrates the failure of
Dirac's conjecture in the presence of ineffective constraints. This
example also shows that, in Dirac's analysis, it is possible to have
Hamiltonian formulations with an odd number of physical degrees of
freedom. We must remark that in Dirac's approach (supplemented with
gauge fixing) it is not always true that every first class constraint
eliminates two degrees of freedom: This does not happen if there are
first class constraints that appear in the stabilization algorithm in
ineffective form.
5) It is worth mentioning that other reduction techniques,
specifically the Faddeev and Jackiw method, may also fail to reproduce
Dirac's theory (Garc\'\i{}a and Pons 1998) if the formalism contains
ineffective constraints.
6) Of course, one should not forget quantum mechanics. The canonical
approach to quantum mechanics involves a (nonsingular) symplectic
form (Isham 1984) and therefore requires that
phase space be even-dimensional. This argument would tend to favor
the quotienting algorithm. However, it may be that other approaches
to quantum mechanics, possibly the path integration approach, do not
need such a requirement. And in any case, it is not strictly
necessary that a model which is acceptable as a classical model be
quantizable. It is for these reasons that we say that an approach to
Hamiltonian dynamics which results in a phase-space picture equivalent
to the tangent space picture --- the original Dirac method
supplemented with gauge fixing --- is favored for classical models.
\ack
We are pleased to thank C\'ecile DeWitt-Morette for her advice.
JMP and DCS would like to thank the Center for Relativity of The
University of Texas at Austin for its hospitality. JMP acknowledges
support by CIRIT and by CICYT contracts AEN95-0590 and GRQ 93-1047
and wishes to thank the Comissionat per a Universitats i Recerca
de la Generalitat de Catalunya for a grant. DCS acknowledges
support by National Science Foundation Grant PHY94-13063.
\pagebreak
\References
\item[]
Abraham R and Marsden J E 1978
{\it Foundations of Mechanics} 2nd edn
(Reading MA: Benjamin-Cummings)
\item[]
Batlle C, Gomis J, Gr\`acia X and Pons J M 1989
Noether's theorem and gauge transformations: Application to the
bosonic string and $C\!P^{n-1}_{2}$
\JMP {\bf 30} 1345-50
\item[]
Batlle C, Gomis J, Pons J M and Rom\'{a}n-Roy N 1986
Equivalence between the Lagrangian and Hamiltonian formalism
for constrained systems
\JMP {\bf 27} 2953-62
\item[]
Bergmann P G 1949
Non-linear field theories
\PR {\bf 75} 680-5
\item[]
Bergmann P G and Goldberg I 1955
Dirac bracket transformations in phase space
\PR {\bf 98} 531-8
\item[]
Cantrijn F, Cari\~{n}ena J F, Crampin M and Ibort L A 1986
Reduction of degenerate Lagrangian systems
{\it J.\ Geom.\ Phys.} {\bf 3} 353-400
\item[]
Cari\~{n}ena J F 1990
Theory of singular Lagrangians
{\it Fortschr.\ Phys.} {\bf 38} 641-79
and references therein
\item[]
Cari\~{n}ena J F, L\'{o}pez C and Rom\'{a}n-Roy N 1988
Origin of the Lagrangian constraints and their relation with the
Hamiltonian formulation
\JMP {\bf 29} 1143-9
\item[]
Dirac P A M 1950
Generalized Hamiltonian dynamics
{\it Can.\ J.\ Math.} {\bf 2} 129-48
\item[]
\dash 1964
{\it Lectures on Quantum Mechanics}
(New York: Yeshiva University Press)
\item[]
Faddeev L and Jackiw R 1988
Hamiltonian reduction of unconstrained and constrained systems
\PRL {\bf 60} 1692-4
\item[]
Garc\'\i{}a J A and Pons J M 1997
Equivalence of Faddeev-Jackiw and Dirac approaches for gauge
theories
{\it Int.\ J.\ Mod.\ Phys.} A {\bf12} 451-64 hep-th/9610067
\item[]
\dash 1998
Faddeev-Jackiw approach to gauge theories and ineffective
constraints
{\it Int.\ J.\ Mod.\ Phys.} A to be published
\item[]
Gotay M 1982
On coisotropic imbeddings of presymplectic manifolds
{\it Proc.\ Am.\ Math.\ Soc.} {\bf 84} 111-4
\item[]
Gotay M J and Nester J M 1979
Presymplectic Lagrangian systems I: the constraint algorithm and
the equivalence theorem
{\it Ann.\ Inst.\ H.\ Poincar\'e} A {\bf 30} 129-42
\item[]
\dash 1980
Presymplectic Lagrangian systems II: the second-order equation
problem
{\it Ann.\ Inst.\ H.\ Poincar\'e} A {\bf 32} 1-13
\item[]
Gotay M J, Nester J M and Hinds G 1978
Presymplectic manifolds and the Dirac-Bergmann theory of
constraints
\JMP {\bf 19} 2388-99
\item[]
Gotay M and Sniatycki J 1981
On the quantization of presymplectic dynamical systems via
coisotropic imbeddings
{\it Commun.\ Math.\ Phys.} {\bf 82} 377-89
\item[]
Gr\`acia X and Pons J M 1989
On an evolution operator connecting Lagrangian and Hamiltonian
formalisms
{\it Lett.\ Math.\ Phys.} {\bf 17} 175-80
\item[]
Ibort L A, Landi G, Mar\'{\i}n-Solano J and Marmo G 1993
On the inverse problem of Lagrangian supermechanics
{\it Int.\ J.\ Mod.\ Phys.} A {\bf 8} 3565-76
\item[]
Ibort L A and Mar\'{\i}n-Solano J 1992
A geometric classification of Lagrangian functions and the
reduction of evolution space
\JPA {\bf 25} 3353-67
\item[]
Isham C J 1984
Topological and Global Aspects of Quantum Theory
{\it Relativit\'e, Groupes, et Topologie II}
ed DeWitt B S and Stora R
(Amsterdam: North-Holland) pp~1059-290
\item[]
Jackiw R 1995
(Constrained) quantization without tears
{\it Proc. 2nd Workshop on Constraints Theory and
Quantization Methods (Montepulciano, 1993)}
(Singapore: World Scientific) pp~163-75 hep-th/9306075
\item[]
Lee J and Wald R M 1990
Local symmetries and constraints
\JMP {\bf 31} 725-43
\item[]
Pons J M 1988
New relations between Hamiltonian and Lagrangian constraints
\JPA {\bf21} 2705-15
\item[]
Pons J M, Salisbury D C and Shepley L C 1997
Gauge transformations in the Lagrangian and Hamiltonian
formalisms of generally covariant systems
\PR D {\bf55} 658-68 gr-qc/9612037
\item[]
Pons J M and Shepley L C 1995
Evolutionary laws, initial conditions and gauge fixing in
constrained systems
\CQG {\bf 12} 1771-90 gr-qc/9508052
\item[]
Sniatycki J 1974
Dirac brackets in geometric dynamics
{\it Ann.\ Inst.\ H.\ Poincar\'e} A {\bf 20} 365-72
\endrefs
\end{document}
\section{Introduction}
The Ly$\alpha$\ forest
(Lynds 1971, Sargent {\rm et~al. } 1980, see Rauch 1998 for a review)
arises naturally in cosmological structure formation scenarios
where gravitational instability acts on small initial density perturbations.
In hydrodynamic simulations
of such models
(Cen {\rm et~al. } 1994, Zhang {\rm et~al. } 1995, Hernquist {\rm et~al. } 1996,
Wadsley \& Bond 1996, Theuns {\rm et~al. } 1998, see also the analytical modelling
of e.g., Bi 1993, Reisenegger and Miralda-Escud\'{e} 1995),
most of the absorption seen in high redshift
QSO spectra is generated by residual neutral hydrogen in
a continuous fluctuating photoionized
intergalactic medium.
In such a picture, absorbing structures have a large physical
extent. Observational support for this has come from
comparison of the Ly$\alpha$\ forest in adjacent QSO lines of sight (Bechtold {\rm et~al. }
1994, Dinshaw {\rm et~al. } 1994, 1995, Crotts \& Fang 1998).
For matter in this phase, it is predicted and found in simulations that the
underlying mass density field at a particular point
can be related to the optical depth for Ly$\alpha$\ absorption
(see e.g., Croft {\rm et~al. } 1997)
and hence a directly observable quantity, the transmitted flux in the QSO
spectrum.
Much work has been devoted to studying
the statistical properties of the mass density field and the generic
predictions of the gravitational instability picture.
With the
Ly$\alpha$\ forest as a probe of the density,
we avoid many of the uncertainties associated with the use of the
galaxy distribution to test theories.
In principle, it should be possible, by combining our theoretical knowledge
of gravitational clustering with observations of Ly$\alpha$\ absorption, to
test the Gaussianity of the initial density field, the picture
of Ly$\alpha$\ formation, and the gravitational instability scenario itself.
In this paper, we will concentrate on one-point statistics, namely the
one point probability distribution function (PDF)
of the transmitted flux, and its moments.
We will use as a tool the spherical collapse or shear-free
model for the
evolution of density perturbations which Fosalba \& Gazta\~{n}aga
(1998a hereafter FG98, 1998b) have shown to be a good
approximation to the growth of clustering in the
weakly non-linear regime
and which we find
works well in the density regime appropriate to the study of the
Ly$\alpha$\ forest. Two point statistics, which probe the scale dependence of
clustering will be examined in an accompanying paper (
Gazta\~{n}aga \& Croft 1999, Paper II).
From observations of the Ly$\alpha$\ forest,
we can measure the PDF
of the flux and its moments.
The high resolution spectra of the forest taken by the Keck
telescope (Hu {\rm et~al. } 1995, Lu {\rm et~al. } 1996,
Rauch {\rm et~al. } 1997,
Kirkman \& Tytler 1997,
Kim {\rm et~al. } 1997)
allow us to resolve structure in the flux
distribution, and make high precision, shot noise-free
measurements of these flux statistics.
Here we will use
the statistical properties of the matter distribution,
$\rho(x)$, to predict these
observable quantities.
There are a number of studies which predict the evolution of the clustering
of density fluctuations, and in particular of the PDF.
The Zel'dovich Approximation (ZA) was used by Kofman et al. (1994).
Althought the ZA reproduces important aspects of non-linear
dynamics, it only results in
a poor approximation to the PDF and its moments.
This can be quantified by noticing, for example, that the hierarchical
skewness $S_3=\overline{\xi}_3/\overline{\xi}_2^2$
in the ZA is $S_3=4$ (at leading order in $\overline{\xi}$) instead
of the Perturbation Theory (PT) result
$S_3=34/7$ (see e.g., Peebles 1980).
One way to improve on this is to use the PT cumulants
to derive the PDF from the Edgeworth
expansion (Juszkiewicz {\rm et~al. } 1995, Bernardeau \& Kofman 1995).
In this case the PDF is predicted to an
accuracy given by order of the cumulants involved.
Protogeros \& Scherrer (1997)
introduced the use of a
local Lagrangian mapping (that relates the initial
and evolved fluctuation) as a generic way to
predict the PDF. In this case, the PDF is obtained simply by
applying a change of
variables (the mapping) to the PDF of the initial conditions.
The best of these two approaches is obtained when the Lagrangian mapping
is taken to be that of spherical collapse (FG98), which
recovers the PT cumulants to arbitrary order in the weakly
non-linear regime. There is yet another possibility,
which involves performing a perturbative expansion and
directly relating the moments of the flux to the moments of the mass
(along the lines proposed in a different context by Fry \& Gazta\~{n}aga 1993).
This approach does not use the density PDF,
and could incorporate more exact calculations
for the (non-linear) density moments.
Our plan for this paper is as follows.
In Section 2 we outline the physical basis for the relation we adopt
between Ly$\alpha$\ optical depth and the mass distribution.
In Section 3 we describe our model for following the evolution
of the PDF of the density and flux, using non-linear mapping
relations presented in appendix A1. The cumulants of the flux distribution
predicted by fully non-linear dynamics
are described in Section 4, together with
the predictions of perturbation theory. The modelling
of the effects of redshift distortions and thermal broadening
is also described. In Section 5, we compare our analytical results to those
measured from simulated spectra, generated using N-body simulations.
In Section 6, we discuss the effects of non-Gaussian initial conditions,
the redshift evolution of the one-point flux statistics, and the bias between
flux and mass fluctuations. We also compare to other work on
the statistics of the Ly$\alpha$\ forest flux. Our summary and conclusions
form Section 7.
\section{Lyman-alpha absorption and its relation to the mass
distribution}
As mentioned in Section 1, the model we use to relate
Ly$\alpha$\ absorption to the distribution of mass is motivated by
the results of numerical simulations which solve the full equations of
hydrodynamics and gravity, some including star formation in high density
regions. It was found in these simulations
(e.g., Hernquist {\rm et~al. } 1996) that most of the volume of the
Universe at high redshift ($z \simgt2$, see Dav\'{e} {\rm et~al. } 1999
for the situation at later times)
is filled with a warm ($10^{4}$ K), continuous, gaseous ionized medium.
Fluctuations in this intergalactic medium
(IGM) tend to have overdensities within a
factor of 10 of the cosmic mean and resemble
morphologically the filaments, walls and voids seen on larger scales
in the galaxy distribution at lower redshifts.
The dominant physical processes responsible for the state of this
IGM and the Ly$\alpha$\ absorption produced by it were anticipated by
semi-analytic modelling of the Ly$\alpha$\ forest
(e.g., McGill 1990, Bi, B\"{o}rner \& Chu 1992). For completeness,
we will summarize these processes below.
\subsection{The Fluctuating Gunn-Peterson Approximation}
The physical state of most of the volume of the baryonic
IGM is governed by
the photoionization heating
of the UV radiation background, and the adiabatic cooling caused by
the expansion of the Universe. The competition between these two processes
drives gas elements towards a tight relation between temperature and
density, so that
\begin{equation}
T=T_{o}\rho_{b}^{\alpha}({\bf x}),
\label{rhot}
\end{equation}
where $\rho_{b}({\bf x})$ is the density of baryonic gas
in units of the cosmic mean.
This relation holds well in simulations for $\rho_{b} \simlt 10$
(see e.g., Katz, Weinberg \& Hernquist 1996).
Hui \& Gnedin (1997) have explored the relation semi-analytically
by considering the evolution of individual gas elements in the Zel'dovich
Approximation. They find that the value of the parameters in equation
(\ref{rhot}) depend on the history of reionization and
the spectral shape of the radiation background,
and should lie in the narrow range
$4000\;{\rm K} \simlt T_0 \simlt 15,000\;{\rm K}$ and $0.3 \simlt \alpha \simlt 0.6$.
The optical depth for Ly$\alpha$\ absorption, $\tau$ is proportional to
the density of neutral hydrogen (Gunn \& Peterson 1965). In our case, this
is equal to the gas density $\rho_{b}$ multiplied by a recombination rate
which is proportional to $\rho_{b}T^{-0.7}$. By using equation
(\ref{rhot}), we find that the optical depth is a power law function of the
local density:
\begin{equation}
\tau(x)=A\rho_{b}(x)^{\beta},
\label{tau}
\end{equation}
where $x$ is a distance along one axis, taken to be the line of sight
towards the QSO (we are working in
real-space for the moment). Because this result is simply a generalisation
of Gunn-Peterson absorption for a non-uniform medium, it has been dubbed
the Fluctuating Gunn-Peterson Approximation (FGPA, see Rauch {\rm et~al. } 1997,
Croft {\rm et~al. } 1998a, Weinberg {\rm et~al. } 1998a). The FGPA amplitude, $A$, is dependent
on cosmology and the state of the gas so that (e.g., Croft {\rm et~al. } 1999),
\begin{eqnarray}
A & = & 0.835
\left(\frac{1+z}{4}\right)^6
\left(\frac{\Omega_b h^2}{0.02}\right)^2
\left(\frac{T_0}{10^{4}\;{\rm K}}\right)^{-0.7} \;\times \nonumber \\
& & \left(\frac{h}{0.65}\right)^{-1}
\left(\frac{H(z)/H_0}{4.46}\right)^{-1}
\left(\frac{\Gamma}{10^{-12}\;{\rm s}^{-1}}\right)^{-1}\; .
\label{afacs}
\end{eqnarray}
Here $\Gamma$ is the
photoionization rate, $h=H_{0}/100 \kms\;{\rm Mpc}^{-1}$, and
$\Omega_{b}$ is the ratio of the baryon density to the critical density.
The FGPA slope is $\beta = 2-0.7 \alpha \simeq 1.6$.
The FGPA has been tested in simulations (Croft {\rm et~al. } 1997,
Weinberg 1999), and the predicted tight
correlation found to hold well.
The analysis in this paper will involve using
equation (\ref{tau}) to relate the optical depth to the
underlying real-space mass density. We will make predictions
for the observable quantity, transmitted flux in a QSO spectrum,
which we label $\phi$:
\begin{eqnarray}
\phi(x) &=& e^{-A \rho_{b}(x)^\beta}.
\label{flux}
\end{eqnarray}
Equation (\ref{flux}) can be thought of as ``local biasing relation''
between the flux and mass distributions.
It can be seen that in this relation, the only spatially varying quantity
is $\rho_{b}(x)$ (ignoring global redshift evolution
and assuming a smooth ionizing background). Given that the
physical processes included in the derivation of the FGPA relation
are the dominant ones, then the clustering properties of the Ly$\alpha$\ forest
should be determined mainly by the statistics of $\rho_{b}$.
The emphasis in this paper is therefore on applying our knowledge of the
behaviour of density perturbations to the Ly$\alpha$\ forest. We will use
analytical results for the non-linear evolution of
density perturbations
in an effort to understand the origin of the values
of Ly$\alpha$\ forest observables. The ultimate aim is that
with this understanding, measurements made from observational
data can be used to directly
test both the gravitational instability hypothesis,
and the picture of the Ly$\alpha$\ forest outlined above,
as well as throwing light on the nature of the primordial density
fluctuations.
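To make the scalings above concrete, the short Python sketch below evaluates the amplitude of equation (\ref{afacs}) and the flux of equation (\ref{flux}); the function names are ours, and the default arguments simply reproduce the fiducial values quoted above.

```python
import numpy as np

def fgpa_amplitude(z=3.0, omega_b_h2=0.02, T0=1.0e4, h=0.65,
                   H_ratio=4.46, Gamma=1.0e-12):
    """FGPA amplitude A; the fiducial defaults give A = 0.835."""
    return (0.835 * ((1.0 + z) / 4.0)**6
            * (omega_b_h2 / 0.02)**2
            * (T0 / 1.0e4)**(-0.7)
            * (h / 0.65)**(-1)
            * (H_ratio / 4.46)**(-1)
            * (Gamma / 1.0e-12)**(-1))

def fgpa_flux(rho_b, A, beta=1.6):
    """Transmitted flux phi = exp(-A rho_b^beta) for a baryon overdensity field."""
    return np.exp(-A * np.asarray(rho_b)**beta)

A = fgpa_amplitude()                          # 0.835 for the fiducial parameters
phi = fgpa_flux(np.array([0.1, 1.0, 3.0]), A)
```

For the fiducial parameters, a mean-density element ($\rho_b=1$) transmits a flux $e^{-0.835}\simeq 0.43$.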
An alternative to the approach we adopt here would be
to use the local relation Eq[\ref{flux}] to directly
reconstruct the density field, rather than to predict its cumulants
or the cumulants of the flux.
This reconstructed field could then be used
to estimate the statistical properties of the density (e.g., cumulants)
in a straightforward way.
This is not however simple to do in practice because
of the saturation of flux in high density regions.
Although large changes in high density regions
have little effect on the statistics of the flux (i.e. the cumulants),
they will totally change the statistics of the density.
Any reconstruction technique will therefore have to deal with this missing
information somehow. One approach for dealing with this
problem has been presented by Nusser \& Haehnelt (1998).
In the present paper, we make use of the important
fact that the power-law
and exponential weighting of the density in the FGPA relation
results in a flux distribution whose statistical
properties are dominated by the small fluctuations, i.e. the linear or weakly
non-linear regime.
\subsection{Additional complications}
There are a number of assumptions concerning the relationship
between flux and mass which we should discuss before
proceeding. First, the above equations apply to the density of gas,
$\rho_{b}$ rather than the total density of matter, $\rho$, which
will be dominated by a dark matter component in the models we are considering.
At the relatively low densities of interest here, the distribution of gas in
simulations does however trace the dark matter well.
Pressure forces on gas elements tend to be small compared to the
gravitational forces, and non-hydrodynamical N-body simulations can be used
to produce very similar spectra to the simulations which include these pressure
effects (Weinberg {\rm et~al. } 1999). Simulations do have finite resolution
limitations, though, and clustering
in a dissipationless dark matter distribution with
power extending to small scales cannot be followed with infinite resolution.
The N-body only calculations so far used (e.g., Croft {\rm et~al. } 1998a)
have a resolution comparable to the small scale smoothing produced by pressure
effects. Hydrodynamical simulations at high resolution (e.g., Bryan
{\rm et~al. } 1998) can be used to study this smoothing. In the case of
analytic work, one can first consider the linear regime.
In this case, the power spectrum
of fluctuations in the gas density, $P_{g}(k)$ is a smoothed version
of the dark matter power spectrum, $P_{\bf DM}(k)$, so that
\begin{equation}
P_{g}(k)=\frac{P_{\bf DM}(k)}{[1+(k/k_{j})^{2}]^{2}}
\end{equation}
where $k_{j}$ is the Jeans wavenumber (see e.g.,
Peebles 1993). In tests of this result,
Gnedin \& Hui (1998) have shown that after reionization, the effective
smoothing length is generally smaller, and modelling with a different
(Gaussian) filter
tends to give better results when compared with simulations.
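The linear-theory smoothing can be sketched in a few lines of Python; the power-law spectrum and Jeans wavenumber below are arbitrary illustrative choices, not fits to any model considered here.

```python
import numpy as np

def jeans_filtered_power(P_dm, k, k_j):
    """Suppress the dark-matter power spectrum below the Jeans scale."""
    return P_dm / (1.0 + (k / k_j)**2)**2

k = np.logspace(-2, 1, 100)    # wavenumbers, arbitrary units
P_dm = k**(-1.5)               # assumed power-law spectrum, for illustration
P_gas = jeans_filtered_power(P_dm, k, k_j=1.0)
```

On scales much larger than the Jeans scale ($k\ll k_j$) the gas traces the dark matter, while power is strongly suppressed for $k\gg k_j$.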
The situation in the non-linear regime will be more complicated.
The Jeans
length scales as $(\rho_{b})^{-0.5}$, but
due to the temperature density relation
of equation \ref{rhot}, denser regions also tend to have higher
temperatures, more thermal pressure, and more smoothing,
so that the overall density dependence of the Jeans length should
be weak. Gnedin \& Hui (1998) show that filtering the initial
conditions of a dissipationless simulation
with a single scale gives reasonable results compared to
the full hydrodynamic case (although worse than their ``Hydro-PM''
technique, which involves adding a pressure term to the dissipationless
simulation calculations).
In our case, the analytic approximation for
gravitational collapse which we use allows for filtering
the evolved density with a top-hat
filter in real-space (see the next sections and Appendix A1.2).
It may be possible to vary the smoothing length as a function of
density, but for reasons of simplicity we use a constant smoothing radius
for now. Another possibility for the future might be
self-consistent modelling of the
hydrodynamic effects when following the evolution of
density perturbations. This has been done numerically in 1D simulations
of spherical collapse by Haiman {\rm et~al. } (1996).
Second, the FGPA itself will break down in regions of
high density, because of shock heating of gas, collisional ionization,
star formation, and other processes.
We can quantify this by appealing to the results of hydrodynamic
simulations. As stated above, the relation has been directly tested by
Croft {\rm et~al. } (1997), who find that it works well at high redshifts, $z \simgt 2$,
on a point by point basis, for $\rho_{b} \simlt 10$.
When we consider statistics that we might want to measure from the flux
distribution, the situation is even better. For example, we can see using the
numbers given for $A$ above at $z=3$ and equation (\ref{flux}),
that optical depth will saturate ($\phi \simlt 0.05$)
for $\rho \simgt 3$. The physical processes occurring in regions
with $\rho \simgt3$
are therefore not likely to directly affect what we can measure.
Of course, there will be indirect effects, for example, supernova
winds may inhomogeneously heat the IGM out into lower density regions.
Also, the reionization of HeII, which is expected to occur around
$z \sim 3$, may cause inhomogeneous heating if it is patchy enough
(Miralda-Escud\'{e} and Rees 1994).
Although we expect the volume occupied by regions which do follow the FGPA
to be overwhelming in the high-z Universe, the statistical properties
of the absorption predicted by analytical gravitational instability theory
should be useful in testing the validity of this assumption.
They should also help us decide if there is any appreciable
contribution to clustering from spatial variations in the photoionization
rate $\Gamma$, due to the inhomogeneity in the UV background. Any such
variations are expected to be small in amplitude and to occur
only on scales larger than we can probe directly at present (see e.g.,
Zuo 1992, Fardal \& Shull 1993, Croft {\rm et~al. } 1999).
Third, we have so far only dealt with the density field in real-space,
whereas measurements from QSO spectra are made in redshift-space. Both peculiar
velocities and thermal broadening of absorption features should affect
the statistics of $\phi$ to some degree. We will include both these effects in
our predictions.
\section{The probability distribution}
We would like to make predictions for the one-point PDF
of the flux $\phi$ and its moments.
The one-point PDF of a given field $\phi$ is defined so that
the probability of finding, at a random position $x$, a value $\phi(x)$
in the infinitesimal interval $\phi$ to $\phi+d\phi$, is $P(\phi)d\phi$.
To make these predictions we will first derive the corresponding
probabilities $P(\rho)$, for the local matter overdensity $\rho=1+\delta$,
where $\rho$ is in units of the mean density $\rho(x)=n(x)/\langle n\rangle$.
We will make indiscriminate use of either $\delta(x)$ or $\rho(x)$ as
variables when describing density fluctuations. The second
step is the assumption of a local relation with the form $\phi=f(\rho)$,
(motivated by equation [\ref{flux}]).
The PDF of the flux
will then simply be obtained by performing a change of variables
from the PDF of the density.
We start by assuming a (Gaussian) form for the PDF of the initial
conditions, and then follow its
evolution. As we will see, in our approach
it is not necessary to assume Gaussian initial conditions,
and this
procedure can be
extended to some other non-Gaussian models.
We will do this with a model starting from $\chi^{2}$ initial
conditions in Section 6.1.
One important point to note about our predictions is that we are not creating
artificial spectra but instead using an analytical model to evolve the
density PDF and then predict the PDF of the flux directly.
Our predictions for the
density distribution will depend on only two parameters, equivalent to the
slope and amplitude of the linear correlation function on
the smoothing scale, $\gamma$, and $\sigma^{2}_{L}$
(see Sections 3, 4 and Appendix A1). This will allow us to cover
a wide range of possiblities, and make predictions that are as generic
as possible.
In this section we will also test the effects of varying the two parameters
in the FGPA relation, $A$ and $\beta$, which (equation [\ref{afacs}])
contain information about
the cosmic baryon density, ionizing background and reionization history of
the Universe.
\subsection{The PDF of the initial conditions}
In the limit of early times, we assume a nearly homogeneous distribution with
very small fluctuations (or seeds), with given statistical properties.
We will concentrate on the case where the statistics of the initial density
field are Gaussian, which corresponds to a
broad class of models for the initial conditions.
The one-point Gaussian probability distribution of
an initial field $\delta$ is given by:
\begin{equation}
P_{IC}(\delta)~ = ~{1\over{\sqrt{2\pi \sigma^2_0}}}~\exp{\left(-{1\over{2}}
\left[{\delta\over{\sigma_0}}\right]^2\right)}
\label{p1gic}
\end{equation}
As the overdensity must be positive, $\rho>0$, we have that $\delta >-1$,
and a Gaussian PDF only makes physical sense
when the initial variance is small: $\sigma_0 \rightarrow 0$.
\subsection{The evolved mass PDF}
Because of gravitational growth,
the evolution of $\delta$
will change the PDF from its initial form.
For small fluctuations linear theory provides a simple
way of predicting the time evolution
of $\delta(t,x)= D(t) \delta_0(x)$, where $D(t)$ is the growth factor
(equal to the scale factor $D=a$ for $\Omega=1$),
and $\delta_0(x)$ is the initial field. We will denote
this linear prediction by $\delta_L$. For Gaussian initial conditions
the linear PDF is
also Gaussian with a variance $\sigma^2_L$, given by scaling the initial
variance $\sigma^2_0$ by $D^2$, so that $\sigma^2_L= D^2 \sigma^2_0$.
As mentioned in the introduction,
there are a number of studies which predict the evolution of the PDF
beyond linear theory.
Here we will consider a generic class of
local mappings along the lines introduced by Protogeros \& Scherrer (1997).
The idea is for us to relate the non-linear fluctuation $\delta(q)$
(in Lagrangian space) with the corresponding
linear fluctuation $\delta_L(q) \equiv D \delta_0(q)$
using a universal (local) function. To simplify notation we choose
to express this mapping as a relation between the
non-linear overdensity $\rho=\delta+1$
and the linear fluctuation $\delta_L$, so that
\begin{equation}
\rho(q) = {\cal G}[\delta_L(q)].
\label{eq:mapping}
\end{equation}
One such mapping is the spherical collapse model (SC)
or shear-free approximation.
For Gaussian IC, the SC approximation happens to
give the exact statistical properties of the density
(cumulants of arbitrary order)
at leading order in perturbation theory
(as found by Bernardeau 1992 in the context of
the cumulant generating function),
and provides a very good approximation to higher orders
(see FG98). Physically,
this mapping corresponds to taking
the limit where shear is neglected. In this case the
equations for the growth of $\delta$, in Lagrangian space,
are identical to those given by spherical collapse. So, in the
perturbative regime, the SC is the best mapping possible, given the
local assumption made in equation (\ref{eq:mapping}). The local
transformation naturally occurs in Lagrangian space
$q$ (comoving with the initial fluid element).
The important point to notice
here is that although the local mapping is not the exact
solution to the evolution of $\delta$ (which is in general
non-local), it does give the correct clustering
properties in the weakly non-linear regime.
In the Appendix we give some specific examples for
the transformation ${\cal G}$.
The one-point PDF induced by the above transformation,
in terms of the initial one-point PDF $P_{IC}$, is
\begin{equation}
P_L(\rho) =
P_{IC}(\delta_L) \left|{d\delta_L\over{d\rho}}\right|,
\end{equation}
where $\delta_L={\cal G}^{-1}[\rho]$. As mentioned before,
the above expression corresponds to the probability distribution
of the evolved field in Lagrangian space, $q$.
To relate Lagrangian and Eulerian probabilities we
use the law of mass conservation: $d\delta(q)= \rho~ d\delta(x)$, where
$\rho(x)=1+\delta(x)$ is the overdensity in Eulerian coordinates.
We therefore have
\begin{equation}
P(\rho) = {1\over{N}}
{P_{IC}(\delta_L)\over{\rho}} \left|{d\delta_L\over{d\rho}}\right|,
\label{nlpdf}
\end{equation}
where $N$ is a normalization constant.
We will show some of these predictions in Section \ref{sec:sim} (e.g.,
Fig. \ref{pdf2}).
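Equation (\ref{nlpdf}) is straightforward to evaluate numerically once ${\cal G}$ is specified. The sketch below assumes, purely for illustration, a local mapping of the form $\rho=(1-\delta_L/\alpha)^{-\alpha}$ (of the type discussed by Protogeros \& Scherrer 1997); the values of $\alpha$ and $\sigma_L$ are arbitrary, not those adopted in Appendix A1.

```python
import numpy as np

def density_pdf(rho, sigma_L, alpha=1.5):
    """Eulerian PDF of the evolved overdensity, assuming the local
    mapping rho = (1 - delta_L/alpha)^(-alpha).  The grid `rho`
    must be uniformly spaced."""
    delta_L = alpha * (1.0 - rho**(-1.0 / alpha))   # inverse mapping G^{-1}[rho]
    jac = rho**(-1.0 / alpha - 1.0)                 # |d delta_L / d rho|
    p_ic = (np.exp(-0.5 * (delta_L / sigma_L)**2)
            / np.sqrt(2.0 * np.pi * sigma_L**2))    # Gaussian initial PDF
    p = p_ic / rho * jac                            # Lagrangian -> Eulerian
    return p / (p.sum() * (rho[1] - rho[0]))        # normalization constant N

rho = np.linspace(1e-3, 20.0, 20000)
p = density_pdf(rho, sigma_L=0.5)
```

The $1/\rho$ factor converts the Lagrangian probability to an Eulerian one, and the final division implements the normalization constant $N$ on the truncated grid.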
\subsection{The PDF of the flux}
\label{sec:bias}
We next assume a local transformation which relates the
underlying smoothed overdensity to some observable
quantity $\phi$:
\begin{equation}
\phi = f(\rho)
\label{bk}
\end{equation}
This quantity can further be related to the linear density field, so that
\begin{equation}
\phi = f[{\cal G}(\delta_L)].
\end{equation}
The PDF of $\phi$ will then be related to that of the
density by a simple change of variable:
\begin{equation}
P(\phi) = P(\rho) \left|{d\rho\over{d\phi}}\right| =
{1\over{N}} {P_{IC}(\delta_L)\over{\rho}}
\left|{d\delta_L\over{d\phi}}\right|,
\label{fluxpdf}
\end{equation}
where $\delta_L={\cal G}^{-1}[f^{-1}[\phi]]$
and $\rho=f^{-1}[\phi]$. Thus, given the transformations
$f$ and ${\cal G}$, the above equations provide us with analytical (or
maybe numerical) expressions for the PDF of $\phi$.
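As a numerical sketch of this change of variables, the fragment below takes the FGPA form $\phi=\exp(-A\rho^\beta)$ for $f$ and, as an assumption, the GZA form $\rho=(1-\delta_L/\alpha)^{-\alpha}$ with $\alpha=21/13$ for ${\cal G}$; the parameter values are illustrative only:

```python
import numpy as np

ALPHA = 21.0 / 13.0      # assumed GZA index for the mapping G
A, BETA = 0.6, 1.6       # illustrative FGPA parameters

def flux_pdf(delta_L, sigma_L, A=A, beta=BETA, alpha=ALPHA):
    """Return (phi, P(phi)) on a grid parametrized by delta_L, i.e.
    P(phi) = P_IC(delta_L)/(N rho) |d delta_L / d phi|."""
    rho = (1.0 - delta_L / alpha) ** (-alpha)
    phi = np.exp(-A * rho ** beta)
    p_ic = np.exp(-0.5 * (delta_L / sigma_L) ** 2) / (np.sqrt(2.0 * np.pi) * sigma_L)
    # chain rule: d phi/d delta_L = (d phi/d rho) (d rho/d delta_L)
    dphi = -A * beta * rho ** (beta - 1.0) * phi * rho ** ((alpha + 1.0) / alpha)
    norm = np.trapz(p_ic / rho, delta_L)          # the constant N
    return phi, p_ic / (norm * rho * np.abs(dphi))

delta_L = np.linspace(-5.0, 1.45, 400_000)
phi, p = flux_pdf(delta_L, sigma_L=1.0)
```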
In the present work, we concentrate on the cumulants
(see Section \ref{sec:pdf} for a definition) of the flux, rather
than the PDF itself. The reason for this is that,
given the local assumption, the cumulants are more accurately determined
(see Section \ref{sec:pdf} for more details). Nevertheless, we will
see in Section \ref{sec:sim} that the above prediction gives a good
qualitative description of the PDF (see e.g.,
Fig. \ref{fpdf}).
\subsection{Redshift-space distortions}
The smoothed flux and its corresponding optical depth $\tau=-\ln{\phi}$
have been assumed
to be local functions of the {\it smoothed} non-linear
density $\rho$. The optical depth $\tau$
at a given real-space position along
the line of sight $r$ will lie at a redshift-space
position $s$ in a QSO spectrum:
\begin{equation}
s = r + v_r/H,
\end{equation}
where $v_r$ is the component of the smoothed peculiar velocity
along the line of sight at $r$. Note that the
redshift distortion is of the smoothed field, where the smoothing,
due to finite gas pressure, occurs in real-space.
The redshift mapping will conserve optical depth,
$\tau_s ds = \tau dr$, so that we have:
\begin{equation}
\tau_s = \, \tau \, \left|{dr\over{ds}}\right| = \, \tau \, \left| 1+ {1\over{H}} {dv_r\over{dr}} \right|^{-1}.
\end{equation}
In general, the relation between $dv_r/dr$ and $\rho$ will be complex.
However, in the SC model, spherical symmetry leads to
a great simplification:
\begin{equation}
{dv_r\over{dr}} = {1\over{3}} \nabla \cdot v \equiv \, {H\over{3}} \, \theta ,
\end{equation}
as, by symmetry, derivatives are the same
in all directions (this idea has also been used
by FG98 and by Scherrer \& Gazta\~{n}aga 1998).
We can now again use the local mapping to relate
velocity divergence to the linear field:
$\theta={\cal G}_v[\delta_L]$, as in equation (\ref{theta}).
Thus we have that the redshift optical depth is
given by a different mapping:
\begin{equation}
\tau_s(\rho) = \tau(\rho) \, \left|1+ {1\over{3}} \, \theta(\rho)\right|^{-1}.
\label{tauz}
\end{equation}
The redshift-space flux is simply:
\begin{equation}
\phi_s = \exp[-\tau_s(\rho)],
\label{phiz}
\end{equation}
and its PDF can be computed with a simple change of variables:
\begin{equation}
P(\phi_s) = {1\over{N}}
{P_{IC}(\delta_L)\over{\rho}}
\left|{d\delta_L\over{d\phi_s}}\right|.
\label{fluxpdfz}
\end{equation}
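A minimal sketch of equations (\ref{tauz}) and (\ref{phiz}) in code: since ${\cal G}_v$ is given by equation (\ref{theta}), which we do not reproduce here, we close the system with the leading-order linear-theory approximation $\theta \simeq -\delta_L$ (valid for $\Omega=1$; an assumption made purely for illustration):

```python
import numpy as np

A, BETA = 0.6, 1.6   # illustrative FGPA parameters

def tau_real(rho, A=A, beta=BETA):
    """Real-space FGPA optical depth, tau = A rho^beta."""
    return A * rho ** beta

def tau_redshift(rho, delta_L):
    """Equation (tauz), with the hedged closure theta ~ -delta_L."""
    theta = -delta_L
    return tau_real(rho) * np.abs(1.0 + theta / 3.0) ** -1

def flux_redshift(rho, delta_L):
    """Equation (phiz): phi_s = exp(-tau_s)."""
    return np.exp(-tau_redshift(rho, delta_L))
```

With this closure, infall onto an overdensity ($\theta<0$) shrinks the redshift-space interval and enhances the observed optical depth, as expected.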
\section{The cumulants}
\subsection{Definitions}
Consider a generic field, $\phi$, which could be either
the measured flux in a 1D spectrum, $\phi=\phi(\rho)$, or the mass density in 3D
space $\phi=\rho$. The $J$th-order (one-point) moments of this field are
defined by:
\begin{equation}
m_{J} = \langle\phi^J\rangle = \int P(\phi) ~ \phi^J ~ d\phi.
\label{mij}
\end{equation}
Given the above relations we can choose to calculate the moments
by integrating over $\delta_L$:
\begin{equation}
m_J = \int d\delta_L \, {P_{IC}[\delta_L]\over{\rho(\delta_L)}} \,
\phi^J(\rho(\delta_L)),
\end{equation}
or over the non-linear overdensity $\rho$:
\begin{equation}
m_J = \int d\rho \, {P_{IC}[\delta_L(\rho)]\over{\rho}} \,
\left|{d\delta_L\over{d\rho}}\right| \, \phi^J(\rho).
\end{equation}
Here $\phi$ can refer either to real-space fields or fields
which have been distorted into redshift-space
(e.g., equation [\ref{phiz}]).
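The $\delta_L$ form of the integral is convenient on a grid. A sketch, again assuming the GZA mapping with $\alpha=21/13$ and the FGPA flux (illustrative parameter values), and imposing the normalization $m_0=1$ explicitly:

```python
import numpy as np

ALPHA = 21.0 / 13.0     # assumed GZA index
A, BETA = 0.6, 1.6      # illustrative FGPA parameters

def flux_moment(J, sigma_L, n=200_000):
    """m_J = int d delta_L P_IC(delta_L)/rho(delta_L) phi^J(rho(delta_L)),
    normalized so that m_0 = 1."""
    dL = np.linspace(-6.0 * sigma_L, ALPHA * (1.0 - 1e-3), n)
    rho = (1.0 - dL / ALPHA) ** (-ALPHA)
    phi = np.exp(-A * rho ** BETA)
    w = np.exp(-0.5 * (dL / sigma_L) ** 2) / rho   # P_IC / rho, up to a constant
    return np.trapz(w * phi ** J, dL) / np.trapz(w, dL)
```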
The $J$th order {\it reduced} one-point moments, or
cumulants $k_J$, of the field $\phi$ (the subscript ``c'' denotes
connected) are defined by:
\begin{equation}
k_{J} \equiv \langle\phi^J \rangle_c = \left. {{\partial^J{ \log M(t)}}
\over{\partial t^J}}\right|_{t \rightarrow 0},
\label{kij}
\end{equation}
where $M(t)=\langle\exp(\phi t)\rangle$
is the generating function of the
(unreduced) moments:
\begin{equation}
m_{J} \equiv \langle\phi^J \rangle = \left. {{\partial^J{M(t)}}
\over{\partial t^J}}\right|_{t \rightarrow 0}.
\end{equation}
The first reduced moments are:
\begin{eqnarray}
k_{1} &=& m_{1} \\
k_{2} &=& m_{2} - m_{1}^2 \nonumber \\
k_{3} &=& m_{3} -3 m_{1} m_{2}+ 2 m_{1}^3 \nonumber \\
k_{4} &=& m_{4}- 6 m_{1}^4 +12 m_{1}^2m_{2} - 3 m_{2}^2 -4 m_{1}m_{3} \nonumber
\end{eqnarray}
and so on. Note that even when we normalise the flux so that the
mean is zero ($m_1=0$), the cumulants are different
from the central moments in that the lower order
moments have been subtracted from them, so that $k_{4} = m_{4} -3 m_{2}^2$
for $m_{1}=0$.
It is interesting to define the following one-point hierarchical
constants:
\begin{equation}
S_J = {k_{J}\over{k_{2}^{J-1}}} ~~~~~~~~ J>2
\label{sj}
\end{equation}
These quantities turn out to be roughly constant under
gravitational evolution from Gaussian initial conditions
(see e.g., Gazta\~{n}aga \& Baugh 1995 and references therein).
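These relations are easy to mechanize. The sketch below converts unreduced moments into cumulants and hierarchical amplitudes; for Gaussian input moments it returns $k_3=k_4=0$, as it must:

```python
import numpy as np

def cumulants(m1, m2, m3, m4):
    """k_1..k_4 from the unreduced moments, per the relations above."""
    k1 = m1
    k2 = m2 - m1**2
    k3 = m3 - 3.0*m1*m2 + 2.0*m1**3
    k4 = m4 - 4.0*m1*m3 - 3.0*m2**2 + 12.0*m1**2*m2 - 6.0*m1**4
    return k1, k2, k3, k4

def hierarchical(k2, k3, k4):
    """S_3 = k_3/k_2^2 and S_4 = k_4/k_2^3 (equation sj)."""
    return k3 / k2**2, k4 / k2**3
```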
\subsection{Fully non-linear predictions}
We now use the one-point flux PDF obtained from equation (\ref{fluxpdf})
to predict the one-point
moments of the flux.
As mentioned in Section 3,
to make the predictions we need
the value of the linear variance $\sigma^2_L$, and
the local relation equation (\ref{eq:mapping}), which only
depends on the smoothing slope $\gamma$
(see equation \ref{slopedef} for the definition of $\gamma$).
For non-linear mapping
relations, we try each of the two cases
introduced in the Appendix. In the following figures we will
only plot results for the Spherical Collapse (SC) mapping because
they coincide perfectly with the results for the Generalized
Zel'dovich Approximation (GZA) model in equation (\ref{GZA})
with $\alpha=21/13$.
We are interested primarily in the evolution of the density PDF
and the weighting which the FGPA relation gives to
the clustering properties of the density field.
In order to separate the effects of redshift distortions from those
of density evolution, we will present results in real-space
first.
\subsubsection{The mean flux}
Figure \ref{meanf} shows the mean flux
$\langle\phi\rangle$ ($=m_{1}$ in equation [\ref{mij}])
as a function of $\sigma^2_L$ for several values of
$A$, $\beta$ (the parameters in the FGPA relation) and $\gamma$.
For small $\sigma_L$ all predictions tend to
$\langle\phi\rangle= \exp(-A)$, which corresponds to the flux
at the mean overdensity: $\phi \rightarrow \exp(-A)$
as $\rho \rightarrow 1$. We will see in the next subsection
that this is the leading order PT prediction.
For larger $\sigma_L$ the mean flux becomes larger, as
expected, but it is flatter as a function
of $\sigma^2_L$ when there is less smoothing,
i.e., when $\gamma$ is less negative. Less smoothing of the density
will correspond to larger non-linearities, at least
in PT (see e.g., FG98). This seems to indicate that
the effect of non-linearities is to {\it reduce}
the mean flux, competing with
linear growth, which {\it increases} the mean flux.
\begin{figure}
\centering
\centerline{\epsfysize=8.truecm
\epsfbox{meanf.ps}}
\caption[junk]{Mean flux $\langle\phi\rangle$ as a function of the linear
variance $\sigma^2_L$ for different values of $A$
and $\beta$. The dotted, short-dashed and long-dashed
lines show the predictions for $\gamma=0,-1$ and $-2$
respectively.
}
\label{meanf}
\end{figure}
\subsubsection{The variance of the flux}
We define the variance using the normalized flux:
\begin{equation}
\sigma^2_\phi \,=\, \langle\left({\phi-\langle\phi\rangle\over{\langle\phi\rangle}}\right)^2\rangle_c.
\label{eq:varf}
\end{equation}
The overall normalization by $\langle\phi\rangle$ is a convention,
in analogy to what is done for density fluctuations.
Figure \ref{varf} shows the predicted variance
$\sigma^2_\phi$ as a function of $\sigma^2_L$
for several values of $A$, $\beta$ and $\gamma$.
For small $\sigma^2_L$ these results reproduce
the linear relation: $\sigma^2_\phi= b_1^2 \,\sigma^2_L $
(see equation [\ref{eq:ptflux}]). Deviation from this power-law relation
(of index 1) occurs as $\sigma^{2}_{L}$ is
increased, and occurs sooner for larger values of $A$ and
$\beta$. For $\gamma =0$, when $\sigma^2_L$ reaches $\sim 1$,
$\sigma^2_\phi$ seems to reach a maximum
and then decreases again like a power-law for large $\sigma^2_L$.
We can again see that the predictions become flatter as a function
of $\sigma^2_L$ when there is less smoothing,
i.e., for less negative $\gamma$.
\begin{figure}
\centering
\centerline{\epsfysize=8.truecm
\epsfbox{varf.ps}}
\caption[junk]{
The variance of the
flux $\sigma_\phi^2$ as a function of $\sigma^{2}_{L}$.
The meaning of the line types is the same as in Figure \ref{meanf}.
}
\label{varf}
\end{figure}
\subsubsection{The skewness of the flux}
In a similar way, we define the (normalized hierarchical)
skewness of the flux as:
\begin{equation}
S_3(\phi) \,=\, {\langle\left({\phi/{\langle\phi\rangle}}-1\right)^3\rangle_c\over{\sigma_\phi^4}}.
\label{eq:s3f}
\end{equation}
Figure \ref{s3f} shows the predicted skewness, as a function
of $\sigma^2_L$ for several values of
$A$, $\beta$ and $\gamma$.
Because flux decreases with density in the FGPA relation
(more density, less flux), the skewness of the flux tends to be negative
for most values of the parameters, even though the density distribution
is positively skewed.
\begin{figure}
\centering
\centerline{\epsfysize=8.truecm
\epsfbox{s3f.ps}}
\caption[junk]{
The hierarchical skewness of the
flux, $S_3(\phi)$, as a function of $\sigma^{2}_{L}$.
The meaning of the line types is the same as in Figure \ref{meanf}.
}
\label{s3f}
\end{figure}
For small $\sigma^2_L$ the results tend to a constant
as expected in leading order PT
(see equation [\ref{eq:ptflux}]).
We will examine the PT relations in more detail in Section 4.3.
For the moment, we note that
there again seems to be less variation in $S_3({\phi})$ as a function
of $\sigma^2_L$ for cases with less smoothing
(less negative $\gamma$).
\subsubsection{The kurtosis of the flux}
The (normalized hierarchical)
kurtosis of the flux is defined in a similar fashion:
\begin{equation}
S_4(\phi) \,=\, {\langle\left({\phi/{\langle\phi\rangle}}-1\right)^4\rangle_c\over{\sigma_\phi^6}}.
\end{equation}
Figure \ref{s4f} shows the predicted kurtosis, as a function
of $\sigma^2_L$ for several values of
$\beta$ and $\gamma$. For clarity we only show one value of $A$.
\begin{figure}
\centering
\centerline{\epsfysize=8.truecm
\epsfbox{s4f.ps}}
\caption[junk]{Same as Figure \ref{varf} for
the hierarchical kurtosis. For clarity only $A=0.6$ is shown.}
\label{s4f}
\end{figure}
For small $\sigma^2_L$ these results tend to the constant value
predicted by leading order PT
(see equation [\ref{eq:ptflux}]).
Being a fourth moment, $S_4(\phi)$ is extremely sensitive to deviations from
Gaussianity which occur as the density field evolves and
$\sigma^2_L$ increases. This sensitivity seems to be larger for lower
values of $\beta$, presumably because when $\beta$ is high, high density parts
of the PDF which might contribute heavily to the kurtosis of the
density field have little weight in the statistics of $\phi$.
\subsection{Perturbative predictions}
An alternative to using the PDF is to calculate the cumulants directly
from the perturbative expansion along the lines suggested
(in the context of galaxy biasing) by Fry \& Gazta\~naga (1993).
That is, we take:
\begin{eqnarray}
\phi &=& e^{-A \rho^\beta}= b_1 \sum_{k=0} \, {c_k\over{k!}} \, \delta^k \\
b_1 &=& - A \, \beta \, e^{-A}\nonumber \\
c_0 &=& -{1\over{A\beta}} \nonumber \\
c_1 &=& 1 \nonumber \\
c_2 &=& -1 +\beta -A\beta \nonumber \\
c_3 &=& 2-3 \beta+\beta^2 + 3 A\beta - 3 A\beta^2 +A^2\beta^2 \nonumber
\end{eqnarray}
and so on.
From this expansion one can simply estimate the moments
by taking mean values to the powers of $\phi$.
The leading order terms in $\sigma^2_L$ are:
\begin{eqnarray}
\langle\phi\rangle &=& e^{-A} \\
\sigma_\phi^2&=& b_1^2 \sigma^2_L \nonumber \\
S_3(\phi) &=& {S_3 + 3 c_2\over{b_1}} \nonumber \\
S_4(\phi) &=& {S_4 + 12 S_3 c_2 +12 c_2^2+4 c_3\over{b_1^2}},
\label{eq:ptflux}
\end{eqnarray}
where $S_3$ and $S_4$ are the leading order
(hierarchical) skewness and kurtosis for the density field.
For Gaussian initial conditions they are:
$S_3=34/7+\gamma$ and $S_4=60712/1323+62/3\gamma
+7/3\gamma^2$ (both in the SC model and in PT).
For non-Gaussian initial conditions, one would have to
add the initial contribution, e.g., $S_3= S_3^0+34/7+\gamma$.
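Since $b_1 c_k$ is just the $k$-th derivative of $f(\rho)=e^{-A\rho^\beta}$ at $\rho=1$, the coefficients above can be checked mechanically against finite differences; a sketch (function names are ours):

```python
import numpy as np

def pt_coeffs(A, beta):
    """b_1, c_2, c_3 of the expansion above."""
    b1 = -A * beta * np.exp(-A)
    c2 = -1.0 + beta - A * beta
    c3 = 2.0 - 3.0*beta + beta**2 + 3.0*A*beta - 3.0*A*beta**2 + A**2*beta**2
    return b1, c2, c3

def s3_flux(A, beta, gamma):
    """Leading-order S_3(phi) = (S_3 + 3 c_2)/b_1, with S_3 = 34/7 + gamma."""
    b1, c2, _ = pt_coeffs(A, beta)
    return (34.0/7.0 + gamma + 3.0*c2) / b1
```

Because $b_1<0$, a positively skewed density generically yields a negatively skewed flux, as seen in Figure \ref{s3f}.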
\begin{figure}
\centering
\centerline{\epsfysize=8.truecm
\epsfbox{ptflux.ps}}
\caption[junk]{
Perturbative predictions for the one-point flux moments compared to the
fully non-linear predictions.
Mean flux $\langle\phi\rangle$ (top), variance $\sigma^2_\phi$
(middle panel) and skewness $S_3(\phi)$ (bottom)
are shown as a function of the linear
variance $\sigma^2_L$ for $A=0.6$
and $\beta=1.0$. The dotted and long-dashed
lines show the non-linear predictions
(Section 4.2) for $\gamma=0$ and $\gamma=-2$ smoothing
respectively. The straight continuous lines show the
corresponding leading order perturbative prediction (Section 4.3),
valid for $\sigma_L \rightarrow 0$.
The solid curve in the top panel is the perturbative prediction
for $\langle\phi\rangle$
including the effect of a higher order (loop) correction (see
equation [\ref{meanpt}]).
}
\label{ptflux}
\end{figure}
These predictions are shown as straight continuous lines
in Figure \ref{ptflux}, where they are compared to the
full (non-perturbative) calculation in the SC model for two
values of the smoothing slope, $\gamma=0$ (dotted line)
and $\gamma=-2$ (long-dashed line).
We can see that the expressions only converge on the
correct result asymptotically as $\sigma_L \rightarrow 0$.
The relative performance depends on $\gamma$, with steeper slopes giving
better results.
It is easy to calculate higher order (loop) corrections
(see e.g., FG98). For example:
\begin{equation}
\langle\phi\rangle = e^{-A} \left[1+{A\beta\over{2}}(1-\beta+A\beta)\, \sigma^2_L +
{\cal O}(\sigma^4_L) \right].
\label{meanpt}
\end{equation}
This prediction for the mean flux is shown as a curved continuous line
in the top panel of Figure \ref{ptflux}, and seems to work up to scales where
$\sigma_L \simeq 1$.
We find however that the agreement becomes worse for larger values
of $A$ and $\beta$. A similar tendency is found for other moments.
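Equation (\ref{meanpt}) can be verified by directly averaging $\phi=\exp[-A(1+\delta)^\beta]$ over a Gaussian $\delta$ of variance $\sigma^2_L$, which is all the one-loop result assumes at this order; a sketch:

```python
import numpy as np

def mean_flux_numeric(A, beta, sigma_L):
    """<phi> for Gaussian delta of rms sigma_L, by direct integration."""
    d = np.linspace(-8.0*sigma_L, 8.0*sigma_L, 100_001)
    d = d[d > -1.0 + 1e-6]               # (1+delta)^beta needs 1+delta > 0
    w = np.exp(-0.5 * (d / sigma_L)**2)
    w /= np.trapz(w, d)
    return np.trapz(w * np.exp(-A * (1.0 + d)**beta), d)

def mean_flux_loop(A, beta, sigma_L):
    """Equation (meanpt): leading order plus the one-loop correction."""
    return np.exp(-A) * (1.0 + 0.5*A*beta*(1.0 - beta + A*beta)*sigma_L**2)
```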
We have seen that even when high-order corrections are included,
this perturbative approach only works well for small values
of $\sigma_L$, $A$ and $\beta$. It is likely to have only a limited
applicability when we consider the situation in the observed Universe,
where typical values of $\sigma_L \geq 1$ are expected (at
least for redshifts $z \simlt 4$).
Given that we have the possibility of implementing the SC model (or
the GZA model) to arbitrary order, one could ask why we bother
with a perturbative approach.
The first obvious reason is that it gives us
compact analytical predictions, simple formulae which are
functions of the input variables ($A$, $\beta$, $\gamma$
and $\sigma^2_L$). A second reason is that by using this
approach, it may be possible to incorporate the exact PT solutions.
As mentioned before, PT
only differs from the SC model through the shear contributions, and
although these are typically small (as will be shown) they might still
be relevant for higher precision comparisons. It is
nevertheless not clear that one could obtain higher
accuracy at $\sigma_L \simgt 1$,
given the limitations of a perturbative approach
(i.e., convergence of the series).
\subsection{Predictions in redshift-space}
\begin{figure}
\centering
\centerline{\epsfysize=8.truecm
\epsfbox{zflux.ps}}
\caption[junk]{
The effect of redshift-space distortions on the one-point moments.
Mean flux $\langle\phi\rangle$ (top), variance $\sigma^2_\phi$
and skewness $S_3(\phi)$ (bottom) are plotted as a function of the linear
variance $\sigma^2_L$ for $\gamma=-1$, $A=0.8$
and $\beta=1.6$. The short-dashed and continuous
lines show the predictions in real and redshift-space
(peculiar velocities only) respectively.
The long-dashed lines correspond to predictions in redshift-space with an
additional velocity dispersion component on small scales (added in
order to match simulation results - see text).
The dot-dashed lines show the predictions in redshift-space with thermal
broadening as well as peculiar velocities (and the extra small-scale
dispersion).
In modelling the
thermal broadening component to the redshift distortion
we have assumed that the temperature depends on the
density, as predicted by equation (\ref{rhot})
(see Section 4.4).
The points show results from
simulated spectra (set [a])
described in Section 5.
The triangles, circles and squares represent spectra in
real space, redshift-space with no thermal broadening,
and redshift-space with thermal-broadening, respectively.
}
\label{zflux}
\end{figure}
We now turn to the more observationally realistic case
where redshift distortions are included.
Fig. \ref{zflux} shows how the (fully non-linear)
predictions change when given in redshift-space.
We use the formalism of Section 3.4 (e.g., equation [\ref{phiz}]),
which allows one to estimate the effects of peculiar velocity
distortions. The redshift distortions caused by thermal broadening can
be treated in a similar way, and we will also do this below.
For clarity we only show a single value of $A$, $\beta$
and $\gamma$, but similar effects are found if we use
other values.
We have also plotted some simulation points on Fig. \ref{zflux}.
The simulations will be described fully in the next section.
For the moment, it is only necessary to mention here that, for purposes
of comparison, the simulation
density PDFs can be described by the two parameters,
$\gamma$ and $\sigma^{2}_{L}$
(evaluated from the power spectrum of initial
fluctuations used to set up the simulation).
The transformation from density into
flux in the simulations has also been carried out using the FGPA relation. We
plot points both with and without
including redshift distortions from peculiar
motions and thermal broadening.
If we concentrate on the mean flux first, and ignore thermal
broadening, we can see that
$\langle\phi\rangle$ in redshift-space (continuous line in the
top panel) seems to converge to the same value as the real-space mean flux
(short-dashed line) for small $\sigma_L$. For $\sigma_L \simgt 1$
the mean flux is larger in redshift-space.
The variance in the flux, $\sigma_\phi^2$, defined
by equation (\ref{eq:varf}) is shown in the middle panel of Fig.
\ref{zflux}. We can see that this quantity is larger
in redshift-space than in real-space for $\sigma_L \simlt 1$, a trend that is
reversed for $\sigma_L \simgt 1$. The former is presumably due to the
same ``squashing effect'' that is seen in studies of
the density field, where infall of matter into high
density regions enhances clustering (Kaiser 1987). The latter
effect can be attributed to a relative decrease in the level of
redshift-space clustering caused by high velocity dispersion along
the line of sight.
The skewness, defined
by equation (\ref{eq:s3f}) is shown in the bottom panel of Fig.
\ref{zflux}. For small $\sigma_L$, the redshift-space
(continuous line) $S_3(\phi)$ seems to match the real-space value
(dashed line). For $\sigma_L \simgt 1$,
$S_3(\phi)$ is smaller in redshift-space.
The simulation points in real space
(triangles) agree with the real-space predictions.
In redshift-space (open circles), although the
sign of the change caused by the peculiar velocity distortions is correct,
the predictions do not agree in detail. Our interpretation of this
is that the SC model does not predict enough random non-linear
velocity dispersion. We have therefore added a
velocity dispersion by hand, by adding in a velocity
divergence term, $\theta_{\rm disp}$
to equation (\ref{phiz}):
\begin{equation}
\tau_s(\rho) = \tau(\rho) \, \left|1+ {1\over{3}} \, [\theta(\rho)+
\theta_{\rm disp}(\sigma^{2}_L)] \right|^{-1}.
\label{tauztb}
\end{equation}
In order that the asymptotic behaviour of $\theta_{\rm disp}$ be satisfied
(non-linear dispersion $\rightarrow 0$ as $\sigma^2_L \rightarrow 0$),
we have adopted the functional form $\theta_{\rm disp}=C\sigma^2_L$.
Predictions from the SC model including the effect of
this term are shown in Fig. \ref{zflux} as a long-dashed line.
We have adjusted the constant $C$ so that the predictions go through the
simulation point in redshift-space.
Next, we include thermal broadening in our predictions.
For the moderate optical depths
we are concerned with here, the relevant
Voigt profile can be well approximated by
a Gaussian velocity dispersion. The width
of this Gaussian profile is
$\sigma_T \simeq \sigma_{T0} (T/T_0)^{1/2}$, where
$\sigma_{T0} \simeq 13/\sqrt{2} {\rm \;km\;s^{-1}}$ for $T_0 \simeq 10^4\;$K.
From equation (\ref{rhot}) we have $T \propto \rho^{0.6}$, so that
$\sigma_T \simeq \sigma_{T0} \rho^{0.3}$. We can think of
thermal broadening as resulting in the addition of
a thermal velocity component, $\theta_T$, to the
divergence field $\theta$ in equation (\ref{tauz}).
We will model this thermal component in a similar way to the extra
non-linear dispersion term ($\theta_{\rm disp}$)
defined above, which can be thought of
as a ``turbulent'' broadening term.
We simply model the additional thermal dispersion
using its rms value, so that,
\begin{equation}
\theta_T(\rho) \simeq 3 \, {\sigma_T\over{H\Delta}}
\simeq 3 \, {\sigma_{T0}\over{H\Delta}} \, \rho^{0.3},
\end{equation}
where $\Delta$ is the distance in the QSO spectrum
which corresponds to the scale of Jeans smoothing.
This density-dependent term then enters the RHS of equation (\ref{tauztb}),
alongside $\theta_{\rm disp}$.
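In code, equation (\ref{tauztb}) with the thermal term added reads as follows (a sketch; $H\Delta$ is treated as a single velocity-scale parameter, and the constant $C$, as described above, has to be fitted):

```python
import numpy as np

def theta_T(rho, sigma_T0, H_Delta):
    """Thermal 'divergence', theta_T = 3 sigma_T/(H Delta), with
    sigma_T = sigma_T0 rho^0.3; H_Delta = H * Delta (velocity units)."""
    return 3.0 * sigma_T0 * rho**0.3 / H_Delta

def tau_redshift(rho, theta, sigma2_L, C, sigma_T0, H_Delta, A=0.6, beta=1.6):
    """Equation (tauztb) plus the thermal term: the FGPA optical depth
    divided by |1 + (theta + theta_disp + theta_T)/3|, theta_disp = C sigma2_L."""
    tau = A * rho**beta
    theta_disp = C * sigma2_L
    theta_tot = theta + theta_disp + theta_T(rho, sigma_T0, H_Delta)
    return tau / np.abs(1.0 + theta_tot / 3.0)
```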
The dot-dashed line in Fig. \ref{zflux}
shows the effect of thermal broadening using this prescription.
We have used the value $\sigma_{T0}$
appropriate to the simulation (see Section 5.2 for details),
whose thermally broadened results are plotted as a solid square.
As can be seen, thermal broadening results in more flux being
absorbed, and yields a lower value of $\sigma_\phi^2$.
This is as we should expect, given that the distribution of optical
depth has effectively been smoothed out by the addition of a dispersion.
It is evident from these results that
the one-point moments depend fairly
strongly on the details of redshift distortion modelling, which are likely
to be poorly constrained a priori in our approximate treatment.
Fortunately, much of the uncertainty can be removed
by setting the mean flux, $\langle\phi\rangle$, to be equal to some
(observed) value.
We will show later (Section 5.5) that by doing this, the moments can be made
insensitive to the inclusion of redshift distortions.
\section{Comparison to simulations}
\label{sec:sim}
As the emphasis of this paper is on
the role of gravitational evolution of the density
field, we now test the analytic techniques
we have employed against numerical simulations of gravitational clustering.
The N-body only simulations which we use do not allow
us to perform tests of the validity of the
model we have assumed for relating the mass distribution
to an optical depth distribution
(the FGPA, Section 2).
Any difference between the statistical properties
of the Ly$\alpha$\ forest we predict and those observed in nature could therefore
stem from a misapplication of the FGPA, rather than
from problems with the underlying density field.
Tests performed in other contexts show that this is unlikely,
as a dissipationless
approach to simulating the Ly$\alpha$\ forest
can perform well (see e.g. Croft {\rm et~al. } 1998a,
Weinberg {\rm et~al. } 1999) in comparison with the full hydrodynamic case.
Approximate methods should nevertheless be tested on a case by case basis,
and we reserve comparisons with hydrodynamic simulations for future work.
\subsection{Simulations}
The simulations we use have all been run with a P$^{3}$M N-body code
(Efstathiou \& Eastwood 1981, Efstathiou {\rm et~al. } 1985).
The softening length of the PP interaction was made large (1 mesh cell)
for simulation sets (a)-(c), because high spatial resolution is not needed.
We have not attempted
to simulate any particular favoured cosmological models or even to make sure
that cases are likely to be compatible with
expectations for the nature of
the density field at high redshift. We are more interested in spanning a wide
range of values of $\sigma^{2}_{L}$, and $\gamma$, the parameters which
determine
the nature of the evolved density field in the SC model.
To this end, we use outputs from three different sets of simulations.
The initial conditions for all runs were Gaussian random fields
with CDM-type power spectra of the form specified by Efstathiou, Bond \& White
(1992), with a shape parameter $\Gamma$, so that
\begin{equation}
P(k) \propto \frac{k}{[1+(ak+(bk)^{3/2}+(ck)^{2})^{\nu}]^{2/\nu}}
\end{equation}
where $\nu=1.13$, $a=6.4/\Gamma \;h^{-1}{\rm Mpc}$, $b=3.0/\Gamma \;h^{-1}{\rm Mpc}$,
$c=1.7/\Gamma \;h^{-1}{\rm Mpc}$. There are
five realizations with different random
phases in each set of simulations, which are described below.
(a) A set with a box-size $40 \;h^{-1}{\rm Mpc}$ and shape parameter $\Gamma=0.5$.
The model was run to $z=3$ with an Einstein-de-Sitter cosmology.
At that redshift $\sigma^{2}_{L}=2.0$ at the smoothing radius
(see below), which was $0.31 \;h^{-1}{\rm Mpc}$ (comoving). The linear slope
at the smoothing scale was $\gamma=-0.8$.
These simulations were run with $200^{3}$ particles and $256^3$ cells.
(b) A set with box size $22.22 \;h^{-1}{\rm Mpc}$, $128^{3}$ particles,
$\Gamma=0.5$, and an Einstein-de-Sitter cosmology.
Simulated spectra were made from
this set assuming that $z=3$, but several different outputs were used with
varying amplitudes of mass fluctuations.
The smoothing radius was $0.31 \;h^{-1}{\rm Mpc}$ (comoving),
on which scale the linear slope was $\gamma=-0.8$. The value of
$\sigma^{2}_{L}$ on this scale ranged from $0.02$ to $7.5$ for the different
outputs.
(c) A set the same as (b), except with
$\Gamma=10$.
The smoothing radius was again $0.31 \;h^{-1}{\rm Mpc}$ (comoving),
on which scale the linear slope was $\gamma=-1.8$. The value of
$\sigma^{2}_{L}$ on this scale ranged from $0.03$ to $10$ for the different
outputs.
(d) A set with box size $20 \;h^{-1}{\rm Mpc}$, $126^{3}$ particles and
$\Gamma=3.8$. This set
was originally used as ensemble G in Baugh, Gazta\~{n}aga
\& Efstathiou 1995, but the box size was taken to be larger, and
hence $\Gamma$ was lower. The smoothing scale is $0.37 \;h^{-1}{\rm Mpc}$, where
$\gamma\simeq -1.5$, and $\sigma^{2}_L=0.25$ (at $z=3$).
The cosmology assumed has $\Omega=0.2$ and $\Omega_{\Lambda}=0.8$
at $z=0$.
We have also run a single simulation with the same parameters as those in set
(a), except with a box of size $11.11 \;h^{-1}{\rm Mpc}$, and $128^{3}$ particles,
so that the mass resolution is increased by a factor of $\sim 12$.
\subsection{Simulated spectra}
To make simulated spectra from the N-body outputs, we use the following
procedure (see also Hui, Gnedin \& Zhang 1997, Croft {\rm et~al. } 1998a).
The particle density and momentum
distribution is assigned to a ($256^{3}$) grid using a TSC (triangular
shaped clouds, Hockney \& Eastwood 1981) scheme.
The resulting fields are smoothed in Fourier space with a filter, which
in our case is a top-hat with radius given in Section 5.1 above.
We also use a very narrow Gaussian filter (with $\sigma$ equal to 0.1 times
the top-hat radius)
in order to ensure that the density is non-zero everywhere.
The velocity fields are computed by dividing the momentum by the density
everywhere. Spectra are selected as lines-of-sight through the box, parallel
to one of the axes (we select the axis randomly for each line-of sight).
These one-dimensional density fields are then converted to an optical depth
distribution using equation (\ref{tau}). We also
compute the temperature distribution of the gas
using equation (\ref{rhot}),
with $\alpha=0.6$ and $T_{0}=10^{4}$K. The optical depths are then
converted from real-space to redshift-space by convolution with the
line-of-sight velocity field and with a Gaussian filter with the appropriate
thermal broadening width.
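The real-to-redshift-space step of this procedure can be sketched as a pixel-wise redistribution that conserves total optical depth (a toy 1D version; the function name and the simple per-pixel Gaussian redistribution are ours, and periodic boundaries are ignored):

```python
import numpy as np

def redshift_space_tau(tau_real, v_r, H, dx, sigma_th):
    """Shift each pixel's optical depth by v_r/H and smear it with a
    Gaussian of width sigma_th (velocity units), conserving the total."""
    n = len(tau_real)
    x = np.arange(n) * dx
    s = x + v_r / H                      # redshift-space positions
    tau_s = np.zeros(n)
    for i in range(n):
        # Gaussian redistribution of pixel i's optical depth
        w = np.exp(-0.5 * ((x - s[i]) * H / sigma_th) ** 2)
        tau_s += tau_real[i] * w / w.sum()
    return tau_s
```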
We estimate the one-point statistics of the flux in the spectra
without any additional smoothing using counts-in-cells
and the estimators of Section 4.2.
We extract 5000 spectra from each simulation realization
and estimate statistical errors from the scatter in results
between the 5 realizations. We find that the resulting
error bars are typically much smaller
than the symbol size and so we do not plot them in the figures.
Small systematic errors will arise because of the additional smoothing
involved in using a mass assignment scheme, and also because
of shot noise from particle discreteness.
\subsection{The PDF of density and flux}
\label{sec:pdf}
\begin{figure}
\centering
\centerline{\epsfysize=8.truecm
\epsfbox{pdf2.ps}}
\caption[junk]{
The one-point PDF of the density field
$\rho$ measured from N-body simulations (continuous
line, set [a], see Section 5.1).
The simulation density field was
smoothed with a top-hat cell on a scale with linear
variance $\sigma_L^2 \simeq 2$ and slope $\gamma=-1$.
The prediction of two approximations to PT
are also shown: the Spherical Collapse model (short-dashed line)
and the GZA in $\alpha=21/13$ dimensions (long-dashed line).
}
\label{pdf2}
\end{figure}
We first compare the PDF of the density in the simulations with
the analytical predictions, both in real space.
The one-point density PDF estimated from simulation lines-of-sight
(using simulation set [a])
is shown
in Fig. \ref{pdf2} as a continuous line. The analytical
predictions are shown as a short-dashed (SC) and long-dashed (GZA) line.
In evaluating the predictions we have used $\gamma\simeq -1$ and
$\sigma_L^2 \simeq 2$ which correspond to the appropriate linear theory
values for simulation set (a).
Note that the spherical collapse model is only an approximation to
PT, so we do not expect to recover the PDF exactly, even
close to $\delta \simeq 0$.
To carry out the exact recovery we would have to include non-local (tidal)
effects. The mean tidal effects vary in proportion to the
linear variance and the leading contribution (when variance goes to zero)
is only local (but non-linear). Tidal effects seem to
distort the PDF, turning some $\delta=0$ fluctuations into either voids
$\delta \simeq -\sigma$ or overdensities $\delta \simeq \sigma$,
so that the real peak in the PDF
is slightly lower than SC or GZA.
This distortion is significant, given that the statistical errors
on the simulation result are small.
The overall shape is however similar,
as we can see from Fig. \ref{pdf2}. The lower order moments of the
density are
also fairly similar: tidal effects tend to cancel out.
When the variance is small (and the PDF tends to a
Gaussian), the PDF is specified uniquely by its moments (e.g.,
with an Edgeworth
expansion). In this limit tidal effects vanish and both the PDF and
the moments are given exactly by the SC model.
Thus, for the density distribution, the statistics of
the moments are dominated by the contribution of local dynamics to the PDF
and tidal effects are subdominant (they tend to
cancel out when taking the mean).
It is plausible that a similar sort of cancellation will happen for the
statistics of the flux.
In the next sections, we will therefore
concentrate on predictions for the moments of the flux.
\begin{figure}
\centering
\centerline{\epsfysize=8.truecm
\epsfbox{fpdf2.ps}}
\caption[junk]{PDF of the flux evaluated using the same density distribution
plotted in Fig. \ref{pdf2}. Three different values of
the FGPA parameters $A$ and $\beta$ are used as shown in each panel.
The one-point PDF in the simulations is shown as continuous lines and is
compared with the predictions of the two PT approximations,
the Spherical Collapse model (short-dashed line),
and the GZA in $\alpha=21/13$ dimensions (long-dashed line).
All results are in real-space.
}
\label{fpdf}
\end{figure}
Different density regimes will be prominent if we consider
the PDF of the flux, as it is a transformed version of the density PDF.
In Fig. \ref{fpdf}
we apply the FGPA relation (equation 4) in real-space to the density PDF from
Fig. \ref{pdf2}.
The resulting flux distributions are rather flat, with the high density
tail being confined to a small region of flux space near $\phi=0$.
Varying the parameters $A$ and $\beta$ produces the most notable differences in
the fraction of spectra which show little absorption. We can see that
the lower $\beta$ is,
the less likely it is that any pixels will be seen with
values near the unabsorbed QSO flux. The specific case
with $\beta=1.0$ is not realistic, though, as $\beta$ is expected to
be $\simgt1.6$ for all reasonable reionization histories
(Hui \& Gnedin 1997).
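This dependence on $A$ and $\beta$ is easy to explore directly. The sketch below applies the FGPA in its standard form, $\tau = A(1+\delta)^{\beta}$ and $\phi = e^{-\tau}$, to an illustrative lognormal density field (a stand-in for the simulated density PDF, not the actual simulation data):

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative density field rho = 1 + delta: lognormal with unit mean
# and a one-point variance comparable to the sigma_L^2 = 2 case.
sigma2 = 2.0
g = rng.normal(0.0, np.sqrt(np.log(1.0 + sigma2)), size=200_000)
rho = np.exp(g - np.var(g) / 2.0)

def fgpa_flux(rho, A, beta):
    """Transmitted flux from the FGPA: tau = A * rho**beta, phi = exp(-tau)."""
    return np.exp(-A * rho**beta)

for A, beta in [(0.8, 1.6), (0.8, 1.0), (0.3, 1.6)]:
    phi = fgpa_flux(rho, A, beta)
    # Fraction of pixels near the unabsorbed QSO flux level:
    unabsorbed = np.mean(phi > 0.95)
    print(f"A={A}, beta={beta}: <phi>={phi.mean():.3f}, "
          f"P(phi>0.95)={unabsorbed:.3f}")
```

Lowering $\beta$ from 1.6 to 1.0 raises the optical depth in underdense pixels ($\rho<1$), so the fraction of nearly unabsorbed pixels drops, in line with the trend described above.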
In Fig. \ref{fpdfz} we show the effect of redshift distortions on the flux
PDF. We have used the prescription of Section 4.4 for making the
predictions, including thermal broadening.
The effect of the distortions is to evacuate the low density regions,
as we might expect. The effect is very noticeable, as the peak of the
PDF is raised substantially.
Note that to make this plot, we have not added an additional
small scale velocity dispersion, as we did previously
to obtain
a closer match to the one-point moments
in simulations (Fig. \ref{zflux}).
We can see that the
agreement of the predictions and simulations for the PDF
in Figure \ref{fpdfz} is still good, showing that our redshift modelling
is at least a good qualitative approximation
to the underlying physical processes.
\subsection{Cumulants of the flux}
\begin{figure}
\centering
\centerline{\epsfysize=8.truecm
\epsfbox{fpdf2z.ps}}
\caption[junk]{
PDF of the flux evaluated from the same density distribution
used in Fig. \ref{pdf2} (simulation set [a]).
The one-point PDF in the simulations is
shown as a solid line. Real-space results are plotted in the
bottom panel, and results in redshift-space with thermal broadening
in the top panel.
These results are compared with the appropriate analytical predictions,
using the Spherical Collapse model (short-dashed line)
and the GZA in $\alpha=21/13$ dimensions (long-dashed line).
}
\label{fpdfz}
\end{figure}
\begin{figure}
\centering
\centerline{\epsfysize=8.truecm
\epsfbox{testac80.ps}}
\caption[junk]{
The first 4 moments of the flux in simulation set (a)
(squares) and set (d) (triangles) compared to the appropriate
analytical predictions (lines),
as a function of $A$. The top panel
shows the mean flux $\langle\phi\rangle$ (closed symbols)
and the variance $\sigma_\phi^2$ (open symbols).
The bottom panel shows the hierarchical skewness $S_3(\phi)$
(closed symbols) and the kurtosis $S_4(\phi)$ (open symbols).
All results are in real-space.
}
\label{testac80}
\end{figure}
In Fig. \ref{testac80} we compare predictions and simulation
results (for simulation sets [a] and [d]) for the mean
flux $\langle\phi\rangle$, the variance $\sigma_\phi^2$, skewness
$S_3(\phi)$ and kurtosis $S_4(\phi)$. In all cases we have
used $\beta=1.6$, and we show the moments as a function
of the value of $A$.
All results in this figure have been evaluated in real-space.
Squares correspond to
simulation set (a), in which the one-point linear variance
of the density field is $\sigma_L^2=2$
(the non-linear value is $\sigma^2 \simeq 4$)
and the linear slope on the smoothing scale is
$\gamma=-1$. Triangles correspond
to set (d), for which $\sigma_L^2=0.25$ and $\gamma=-1.5$.
We can see that the predictions (from the SC model)
are in good overall agreement with the simulations. It is
encouraging that the agreement
is not noticeably worse for set (a), which has a much higher amplitude
of mass fluctuations on the relevant scale.
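For concreteness, these one-point statistics can be estimated from flux pixels as in the sketch below, assuming the standard hierarchical definitions $\delta_\phi=\phi/\langle\phi\rangle-1$, $\sigma_\phi^2=\langle\delta_\phi^2\rangle$, $S_3(\phi)=\langle\delta_\phi^3\rangle/\sigma_\phi^4$ and $S_4(\phi)=(\langle\delta_\phi^4\rangle-3\sigma_\phi^4)/\sigma_\phi^6$ (the Gaussian pixel values are purely hypothetical test data):

```python
import numpy as np

def flux_moments(phi):
    """One-point flux statistics: mean, variance of delta = phi/<phi> - 1,
    and the hierarchical skewness S3 and kurtosis S4."""
    mean = phi.mean()
    d = phi / mean - 1.0
    var = np.mean(d**2)
    s3 = np.mean(d**3) / var**2
    s4 = (np.mean(d**4) - 3.0 * var**2) / var**3
    return mean, var, s3, s4

# Hypothetical near-Gaussian flux pixels: S3 and S4 should be close to zero.
rng = np.random.default_rng(1)
phi = np.clip(rng.normal(0.7, 0.05, 100_000), 0.0, 1.0)
mean, var, s3, s4 = flux_moments(phi)
print(f"<phi>={mean:.3f}  var={var:.4f}  S3={s3:.2f}  S4={s4:.2f}")
```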
We can also compare the predictions of the SC model with simulations
for different values of the amplitude of mass fluctuations. In Figures
\ref{vssig1} and \ref{vssig2} we have done this, by plotting the flux moments
for several different simulation output times. We can see that for most of
the range, the predictions work well, for both sets of simulations with
different linear slopes ($\gamma$).
At very low amplitudes $\sigma^{2}_{L}$, the simulations will
still be dominated by the effect of the initial particle grid, so that
the differences we see for the lowest amplitude output are not surprising.
For high values of $\sigma^{2}_{L}$, the SC predictions start to break
down, with $S_3$ and $S_4$ being most affected. This shows that we must be
careful when interpreting results which appear to indicate high values
of $\sigma^{2}_{L}$.
Ultimately, the real test of how far we should trust our
analytic methods should come from comparisons with
hydrodynamic simulations, and in particular
those that have been run at resolutions high enough to resolve the Jeans scale.
In such simulations (e.g., Bryan et al 1998, Theuns et al 1998) it is found
that quantities such as the mean effective optical depth are indeed sensitive
to resolution. In the SC model computations,
although the smoothing scale does not enter directly as a parameter,
the quantities which do enter, $\gamma$ and $\sigma^{2}_{L}$ are dependent
on it. A smaller Jean's scale will yield higher values of $\sigma^{2}_{L}$,
and as we can see from the N-body only tests of Figs. \ref{vssig1}
and \ref{vssig2}, the accuracy of our approximations can vary widely.
The first three moments of the flux are recovered within $\sim 10-20\%$, for
$\sigma^{2}_{L}\simeq 4$ and below (for which the non-linear variance of
the density field is $\sigma^{2} \sim 10$). The kurtosis of the flux has a much
larger error, although as we will see later some non-Gaussian models
have such a different $S_{4}$, that it should still be detectable.
A more direct test involves consideration of results
from the higher resolution version of the simulations from set (a).
This simulation has a higher mass resolution by a factor of $\sim 12$.
When we use the same filter size as for set (a), we find that the flux
moments do not change.
If we decrease the filter size by $12^{\frac{1}{3}}$ to $0.13 \;h^{-1}{\rm Mpc}$,
(at which scale $\sigma^{2}_{L}=3.8$), the SC predictions for the mean flux
and variance are still accurate to better than $10\%$,
but the prediction for $S_{3}(\phi)$ is
$-2.3$, when the simulation value is $-3.0$, and the SC $S_{4}(\phi)$
is $-1.0$ when the simulation gives $+2.4$.
\begin{figure}
\centering
\centerline{\epsfysize=8.truecm
\epsfbox{vssig1.ps}}
\caption[junk]{
The mean flux (top panel), variance (middle), $S_3$ (triangles), and
$S_4$ (squares) for simulation set (b), as a function of $\sigma^{2}_{L}$.
We have used $A=0.8$ and $\beta=1.6$ to generate the spectra.
The corresponding predictions of the spherical collapse model are shown as
lines. All results are in real space.
}
\label{vssig1}
\end{figure}
\begin{figure}
\centering
\centerline{\epsfysize=8.truecm
\epsfbox{vssig2.ps}}
\caption[junk]{
The mean flux (top panel), variance (middle), $S_3$ (triangles), and
$S_4$ (squares) for simulation set (c), as a function of $\sigma^{2}_{L}$.
We have used $A=0.8$ and $\beta=1.6$ to generate the spectra.
The corresponding predictions of the spherical collapse model are shown as
lines. All results are in real space.
}
\label{vssig2}
\end{figure}
One way of looking at these statistics is as
constraints on the unknown parameters
which describe the density field, $\gamma$ and $\sigma^2_{L}$.
If we return to Fig. \ref{testac80} we
note that the results for $\sigma_\phi^2$ are quite
similar for both sets of simulations. This can
also be seen in Fig. \ref{varf}, if we look
at the results for different
values of $\beta$ and $A$.
The mean flux or the skewness
seem to respond more sensitively to $\sigma_L$.
We can also see this by examining Figures \ref{meanf} and \ref{s3f},
where for small $\sigma_L$ the skewness of the flux
is a much better indicator of $\gamma$ than
the mean flux. This then changes at $\sigma_L^2 \simeq 2$
where there is a degeneracy in $S_3(\phi)$ for different
values of $\gamma$, while $\langle\phi\rangle$ seems to give different
predictions.
It is interesting that, although one model has
a larger amplitude of mass fluctuations, the variance in the flux
is not systematically higher or lower in one model than the other;
instead, which model gives the higher variance depends on $A$.
We also note that the kurtosis becomes quite large
for small values of $A$.
Although the accuracy of our predictions
for $S_{4}$ decreases as $\sigma_L^2$ becomes large (see above), when
$\sigma_L^2$ is moderate, the
rapid increase in $S_{4}$ seen for small values of $A$ is reproduced,
even for values as large as $S_{4}\simeq100$.
\subsection{Comparisons with the mean flux held fixed}
We have seen in Fig. \ref{zflux} that if we choose a specific value of $A$,
the flux statistics can change fairly drastically if we add or remove the
effects of peculiar velocities or thermal broadening.
When working with observational data, the value of $A$ is at best known
from estimates of the individual parameters in equation (\ref{afacs}),
$\Omega_{b}$, $\Gamma$ and $H(z)$, to within a factor of $\sim 2$. It would
therefore be useful to fix $A$ directly using Ly$\alpha$\ observations.
In principle, when working within the formalism we have adopted in this paper,
there are four unknown quantities: $\gamma$, $\sigma_{L}$, $A$ and $\beta$.
It has already been found in Croft {\rm et~al. } (1998a) that a convenient
way of effectively determining the correct value of $A$
to use in numerical simulations is to choose the value which yields
the observed mean flux $\langle\phi\rangle$. We carry out the
same procedure here,
so that when evaluating our predictions we make sure that
$\langle\phi\rangle$
is a fixed value. In Fig. \ref{fixm} we show results for three different
values of $\langle\phi\rangle$, 0.6, 0.7, and 0.8, which are in the range
measured from observations at $z=2-4$. In the top panel, we show the value of
$A$ required for each value of $\langle\phi\rangle$, as a function of
$\sigma^{2}_{L}$. All results are in redshift-space, except for
$\langle\phi\rangle=0.7$, which we also show in real-space. We can see that once the mean
flux is held fixed, the one-point moments of the flux change very little
with the addition of redshift distortions.
In particular we find that the normalized hierarchical
moments, $S_{3}(\phi)$ and
$S_{4}(\phi)$ are practically identical in real and redshift-space.
\begin{figure}
\centering
\centerline{\epsfysize=8.truecm
\epsfbox{fixmflux.ps}}
\caption[junk]{
Variation of the SC predictions (variance, middle panel and
skewness, bottom panel) with $\sigma_L^{2}$
for a fixed mean flux.
In the top panel we show the value of $A$ needed to give
mean flux $\langle\phi\rangle=0.6$ (dotted line), $0.7$ (short-dashed line), and
$0.8$ (long-dashed line).
All results are in redshift-space (with thermal broadening) except for
the continuous line which represents
results for
$\langle\phi\rangle=0.7$ in real-space.
}
\label{fixm}
\end{figure}
In Fig. \ref{fix07g} we concentrate on $\langle\phi\rangle=0.7$ and show the
variation with $\gamma$, again in redshift-space. We can see that when
the fluctuations in the underlying density field are large,
the variance and skewness tend to asymptotic values. As expected, more
negative values of $\gamma$ will produce spikier,
more saturated absorption features and so give a larger variance and skewness.
To see the maximum values that this tendency
will produce, we can consider the fact that $\phi$
can only lie between 0 and 1. There is therefore a limit to the level
of clustering, given a specific value of the $\langle\phi\rangle$.
This will occur if a spectrum contains only pixels
which have either $\phi=1$ or $\phi=0$ (either no absorption
or saturated). If a fraction $f$ of the spectrum has $\phi=1$,
and the rest, a fraction $(1-f)$, has $\phi=0$, then $\langle\phi\rangle=f$.
Using the definition of $\sigma^{2}_\phi$ and
$S_{3}(\phi)$ in equations (\ref{eq:varf}) and (\ref{eq:s3f}),
we find that the variance in this case is
\begin{equation}
\sigma^{2}_\phi=\frac{1}{f}-1=\frac{1}{\langle\phi\rangle}-1,
\end{equation}
and the normalized hierarchical skewness is given by
\begin{equation}
S_{3}(\phi)=
\langle\phi\rangle\left(\frac{1}{\langle\phi\rangle}-1\right)
+\frac{(\langle\phi\rangle-1)}{([1/\langle\phi\rangle]-1)^{2}}.
\end{equation}
For $\langle\phi\rangle=0.7$,
as in Fig. \ref{fix07g}, the maximum possible $\sigma^{2}_{\phi}=0.43$
and $S_{3}(\phi)=-1.33$.
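These limiting values can be checked numerically against a synthetic two-valued spectrum, as in this quick sketch:

```python
import numpy as np

def binary_limits(mean_flux):
    """Maximum variance and skewness for a spectrum containing only
    phi = 1 (unabsorbed) and phi = 0 (saturated) pixels."""
    f = mean_flux
    var = 1.0 / f - 1.0
    s3 = f * (1.0 / f - 1.0) + (f - 1.0) / (1.0 / f - 1.0) ** 2
    return var, s3

# Monte Carlo check: a fraction f of unabsorbed pixels, the rest saturated.
f = 0.7
phi = np.zeros(10_000)
phi[: int(f * 10_000)] = 1.0
d = phi / phi.mean() - 1.0
var_mc = np.mean(d**2)
s3_mc = np.mean(d**3) / var_mc**2

var, s3 = binary_limits(f)
print(round(var, 2), round(s3, 2))   # 0.43 -1.33
```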
\begin{figure}
\centering
\centerline{\epsfysize=8.truecm
\epsfbox{fixmflux07.ps}}
\caption[junk]{
Variation of the SC prediction
for the flux moments, with a fixed mean
flux, as in Fig. \ref{fixm}, except this time for
three different values of the
linear slope $\gamma$.
The dotted, short-dashed and long-dashed
lines show the predictions for $\gamma=0,-1$ and $-2$
respectively. All results are in
redshift-space, and $\langle\phi\rangle=0.7$ for all curves.
}
\label{fix07g}
\end{figure}
\bigskip
\section{Discussion}
The methods we have introduced in this paper are primarily
meant to be used as tools in the study of structure formation.
The predictive techniques for the
one-point statistics should be most useful
when combined with information on clustering as a function of scale
(two-point statistics), which we explore in a separate paper.
As far as applying our formalism to observations is concerned,
the one-point statistics we have discussed should in principle
require data which resolves the structure in the forest, at least
at the level of the Jeans scale, or the thermal broadening width.
This would include Keck HIRES data (e.g., Kirkman \& Tytler 1997)
or other data with a spectral resolution better than $\sim10-15 {\rm \;km\;s^{-1}}$.
Use of lower resolution data effectively involves smoothing along the
line-of-sight, and we leave the treatment of this anisotropic
smoothing window to future work.
At the simplest level, one could use our predictions for the one-point moments
to compare directly to observations. For example, given a measurement
of the mean flux (say $\langle\phi\rangle=0.7$), and a value for
slope $\gamma$, one can use Fig. \ref{fix07g} to infer the
predictions of gravitational instability for $S_{3}(\phi)$ and
$S_{4}(\phi)$, and then check them against the observed values.
This sort of test, while being suitable for checking
whether predictions are generally compatible with Gaussian initial conditions,
and gravitational instability, is unlikely to be useful for discriminating
between popular Gaussian models, which have similar one point flux
PDFs. For example, there is little difference
in the behaviour of the flux moments for
two models with different values of $\gamma$ shown in Figs. \ref{vssig1}
and \ref{vssig2} (see also the small differences between $\Omega=1$ CDM and
$\Omega=0.4$ CDM in Rauch {\rm et~al. } 1997).
We therefore
advertise our one-point analytic predictions as being more suitable
for making wide searches of parameter space in order to ascertain the
broad statistical trends expected in gravitational instability models
(for example, see below for the evolution of the moments with redshift).
Direct comparisons with observational data will be more fruitful
when carried out with two point statistics. We
will explore these, and how the analytic methods we
have developed here can be applied to them, in a future paper.
For the moment, several obvious applications of our
techniques for making one-point predictions
suggest themselves, and we discuss these now.
We also discuss the accuracy of the density evolution predictions, and compare
the present work to that of others.
\subsection{Non-Gaussian initial conditions}
There is a large parameter space of non-Gaussian PDFs to choose
from (see Fosalba, Gazta\~{n}aga \& Elizalde 1998).
Here we will take a conservative approach and choose a model
with {\it mild} non-Gaussianities, with hierarchical correlations,
i.e., constant $S_J$, so that the cumulants scale as $k_J \simeq k_2^{J-1}$.
These tend to the Gaussian result as $k_2 \rightarrow 0$ more
quickly than
the dimensional scaling $k_J \simeq k_2^{J/2}$. As an illustrative example, we
will show results for a PDF based on the well known chi-squared
distribution
(see e.g., Fosalba, Gazta\~{n}aga \& Elizalde 1998 for details), also
known as Pearson's Type III (PT3) PDF or Gamma PDF:
\begin{equation}
P(\rho) = {\rho^{1/\sigma^2-1} \over \Gamma(1/\sigma^2)
(\sigma^2)^{1/\sigma^2}}
\exp{\left(-{\rho\over{\sigma^2}}\right)},
\end{equation}
for $\rho\equiv 1+\delta \geq 0$. This PDF
has $S_3=2$ and $S_4=6$ (in general $S_J=[J-1]!$). The number of
degrees of freedom, $N$, of the corresponding discrete chi-square
distribution would
be $N=2/\sigma^2$.
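The values $S_3=2$ and $S_4=6$ follow directly from the cumulants of the Gamma distribution, $k_J=(J-1)!\,a\,\theta^J$ for shape $a=1/\sigma^2$ and scale $\theta=\sigma^2$ (a standard result, sketched here as a quick check):

```python
from math import factorial

def gamma_cumulant(J, a, theta):
    """J-th cumulant of a Gamma(shape=a, scale=theta) distribution."""
    return factorial(J - 1) * a * theta**J

sigma2 = 0.5                      # any variance gives the same S_J here
a, theta = 1.0 / sigma2, sigma2   # PT3 parameters: mean = a*theta = 1
k2 = gamma_cumulant(2, a, theta)
S3 = gamma_cumulant(3, a, theta) / k2**2
S4 = gamma_cumulant(4, a, theta) / k2**3
print(k2, S3, S4)   # sigma2, 2.0, 6.0
```

With the unit-mean parametrization $a\theta=1$ used here, both $k_3/k_2^2$ and $k_4/k_2^3$ are independent of $\sigma^2$, so the PT3 model is hierarchical by construction.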
It is not difficult to build a non-Gaussian
distribution with arbitrary values of $S_3$ or $S_4$, but to keep the
discussion simple we will just concentrate on the PT3 model. This model is not
only mildly non-Gaussian, in the sense of being hierarchical,
but also has moderate
values of $S_J$ (e.g., for comparison gravity produces $S_3=34/7$ from
Gaussian initial conditions and unsmoothed fluctuations).
A possible motivation for introducing the PT3 model is
the isocurvature CDM cosmogony presented recently by Peebles (1998),
which has as initial conditions a one-point unsmoothed PDF given by
a chi-square distribution with $N=1$ (i.e. a PT3 with $\sigma^2=2$). Smoothing
would introduce higher levels of non-Gaussianities through the two-point
function, so this is a conservative approach.
A more generic motivation follows from the arguments presented
in Fosalba, Gazta\~{n}aga \& Elizalde (1998), who find that this distribution
plays a central role in non-Gaussian distributions that arise from
combinations of Gaussian variables.
\begin{figure}
\centering
\centerline{\epsfysize=8.truecm
\epsfbox{ngflux.ps}}
\caption[junk]{
A comparison of Gaussian and non-Gaussian models.
Mean flux $\langle\phi\rangle$ and
variance $\sigma^2_\phi$ (bottom)
are plotted as a function of the linear
variance $\sigma^2_L$ for $\gamma=-1$, $A=0.8$
and $\beta=1.6$. The short-dashed and long-dashed
lines show the SC predictions for a Gaussian model
and a non-Gaussian PT3 model respectively.
Symbols correspond to the Gaussian simulations, with the squares
representing set (a) (which has $\gamma\simeq-1$)
and the triangles set (d) (which has $\gamma\simeq-1.5$).
The continuous lines are the
perturbative predictions of Section 4.3.
All results are in real-space.}
\label{ngflux}
\end{figure}
In Figures \ref{ngflux}-\ref{ngs34} we compare the Gaussian and
non-Gaussian PT3
predictions. As can be seen, even in this mildly non-Gaussian model the
predictions are quite different and can be clearly distinguished from
the Gaussian simulations (symbols).
The SC predictions for Gaussian initial conditions
have been evaluated for $\gamma=-1$.
The squares correspond to simulation set (a), which
also have $\gamma\simeq-1$. The triangles are from set (d) and
have $\gamma=-1.5$. This value of $\gamma$ is different from
that used for the SC Gaussian predictions, although
the simulation point for $\sigma^{2}_{\phi}$
is still closer to them than the non-Gaussian predictions.
\begin{figure}
\centering
\centerline{\epsfysize=8.truecm
\epsfbox{ngs34.ps}}
\caption[junk]{
A comparison of Gaussian and non-Gaussian models.
We plot the skewness $S_3$
and kurtosis $S_4$.
The symbols and line types are the same as in Fig. \ref{ngflux}.
The continuous lines are the
perturbative predictions, which are different for the Gaussian
(thin line) and non-Gaussian (thick line) models.}
\label{ngs34}
\end{figure}
The leading order perturbative predictions, shown as
continuous lines in the Figures, are the same for $\langle\phi\rangle$ and
$\sigma^2_\phi$, because the non-Gaussian model is hierarchical.
However for $S_3$ and $S_4$ the predictions are different.
It is easy to show that to leading order the (hierarchical) non-Gaussian
model predictions are given by equation (\ref{eq:ptflux}) with:
\begin{eqnarray}
S_3 &=& S_3(IC) + S_3(G) \nonumber \\
S_4 &=& S_4(IC) + S_4(G) + 4\, S_3(G)\, S_3(IC)
\label{eq:ptfluxng}
\end{eqnarray}
where $S_3(IC)$ and
$S_4(IC)$ are the initial conditions [$S_J(IC)=(J-1)!$ for PT3] and
$S_3(G)$ and
$S_4(G)$ are the leading order gravitational values:
$S_3(G)=34/7+\gamma$ and $S_4(G)=60712/1323+(62/3)\gamma +(7/3)\gamma^2$.
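Evaluating these combination rules for $\gamma=-1$ (the value used for the Gaussian SC predictions in the figures) is straightforward; a minimal sketch:

```python
def pt_skew_kurt(gamma, S3_IC=0.0, S4_IC=0.0):
    """Leading-order S3 and S4 for hierarchical initial conditions
    combined with the gravitational contributions quoted in the text."""
    S3_G = 34.0 / 7.0 + gamma
    S4_G = 60712.0 / 1323.0 + (62.0 / 3.0) * gamma + (7.0 / 3.0) * gamma**2
    S3 = S3_IC + S3_G
    S4 = S4_IC + S4_G + 4.0 * S3_G * S3_IC
    return S3, S4

gamma = -1.0
print("Gaussian:", pt_skew_kurt(gamma))
print("PT3:     ", pt_skew_kurt(gamma, S3_IC=2.0, S4_IC=6.0))
```

For $\gamma=-1$ the PT3 skewness exceeds the Gaussian one by exactly $S_3(IC)=2$, while the kurtosis is boosted both by $S_4(IC)$ and by the cross term $4 S_3(G) S_3(IC)$.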
As can be seen in Fig. \ref{ngs34} the perturbative values are only
reached asymptotically as $\sigma_L \rightarrow 0$.
The skewness $S_3$ is significantly lower in the PT3 model and
the kurtosis $S_4$ is much larger, changing from $S_4 \simeq -2$ in the
Gaussian model
to $S_4 \simeq 20$ in the non-Gaussian one.
\begin{figure}
\centering
\centerline{\epsfysize=8.truecm
\epsfbox{fixmflux07ng.ps}}
\caption[junk]{
Comparison between non-Gaussian and Gaussian predictions when $\langle\phi\rangle$ is
held fixed.
We plot the optical depth amplitude $A$ (top), variance $\sigma^2_\phi$
and skewness $S_3(\phi)$ (bottom) as a function of the linear
variance $\sigma^2_L$ for a mean flux $\langle\phi\rangle=0.7$, $\gamma=-1$,
and $\beta=1.6$. The short-dashed and continuous
lines show the predictions of the Gaussian and non-Gaussian PT3 models.
These results are in redshift-space.}
\label{fixmflux07ng}
\end{figure}
Fig. \ref{fixmflux07ng} shows a comparison between
the Gaussian and PT3 models, for a fixed mean flux.
We noted previously (e.g., Fig \ref{fixm}) the useful fact that
there is little difference between the
predictions with or without redshift distortions
when the mean flux is fixed in this way.
Fortunately, as we can see here, this insensitivity does not extend to the
initial conditions: the results change considerably if we assume
non-Gaussian initial conditions.
\subsection{Redshift evolution}
We can see from equation (\ref{afacs})
that $A$ will change strongly with redshift,
and that there will be a dependence on
cosmology through the variation in
$H(z)$. We can investigate how the one-point flux statistics
will vary using our formalism. We compute the evolution of $H(z)$ and
$\sigma^{2}_{L}$ and show the results for $\Omega_{0}=1$ and three different
values of $\gamma$ in Fig.
\ref{zevolg}. We can see that both $\sigma^2_{\phi}$ and
$S_{3}(\phi)$ become much smaller as the mean absorption falls.
In these figures we are assuming that $\gamma$ is not changing as a function
of redshift, or equivalently that the smoothing scale is
fixed in comoving $\;h^{-1}{\rm Mpc}$.
In Fig. \ref{zevolc} we plot the results for three different
background cosmologies, two flat models, one with $\Omega=1$
and one with $\Omega_{0}=0.3$ and $\Omega_{\Lambda}=0.7$, as well
as an open model with $\Omega_{0}=0.3$.
All models have the same $\gamma$, $\sigma_{L}$ and the same mean flux
($\langle\phi\rangle=0.66$) at $z=3$.
We can see that there is virtually
no visible dependence on cosmology in the plot. The main
source of variation in $A$ that we
have not included is the evolution of the photoionization
rate $\Gamma$, which
will change as the population of sources for the
ionizing background changes. From
Fig. \ref{zevolc} it is evident that inferences about the evolution
of the UV background should be fairly insensitive to the assumed cosmology.
We can see why this is so by looking at Fig. \ref{checkz}, where we
plot the evolution of $A$ and $\sigma^{2}_{L}$ separately.
If both quantities are fixed in the middle of the $z$ range as we have done,
then the changes over the range of validity of the FGPA ($z\simgt2$) are
small.
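The cosmology dependence examined here enters through the Friedmann expression for $H(z)$; the sketch below evaluates it for the three models plotted, normalized at $z=3$ in the same spirit as fixing $A$ and $\sigma^{2}_{L}$ there:

```python
from math import sqrt

def hubble(z, omega_m, omega_l, h0=100.0):
    """H(z) in h km/s/Mpc from the Friedmann equation (matter + Lambda)."""
    omega_k = 1.0 - omega_m - omega_l
    return h0 * sqrt(omega_m * (1 + z) ** 3 + omega_k * (1 + z) ** 2 + omega_l)

models = {"Omega=1": (1.0, 0.0), "open": (0.3, 0.0), "flat Lambda": (0.3, 0.7)}
for name, (om, ol) in models.items():
    # A evolves through 1/H(z); ratios to the z=3 value are nearly
    # model-independent over the redshift range of interest.
    ratios = [hubble(z, om, ol) / hubble(3.0, om, ol) for z in (2.0, 3.0, 4.0)]
    print(name, [round(r, 3) for r in ratios])
```

At these redshifts all three models are close to matter-dominated, which is why the normalized $H(z)$ curves, and hence the flux statistics, barely distinguish them.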
\begin{figure}
\centering
\centerline{\epsfysize=8.truecm
\epsfbox{fixa12z.ps}}
\caption[junk]{
The variation of the one-point statistics with redshift,
for three different values of $\gamma$, 0 (dotted line), -1 (short-dashed
line), and -2 (long-dashed line).
The parameter $A=1.2$ at $z=3$ in all cases, and $\sigma^{2}_{L}=2$.
Results are
for $\Omega=1$ and are
in redshift-space with
thermal broadening.
}
\label{zevolg}
\end{figure}
\begin{figure}
\centering
\centerline{\epsfysize=8.truecm
\epsfbox{fixa12z1.ps}}
\caption[junk]{
The variation of the one-point statistics with redshift,
for three different cosmologies.
We have plotted results for $\Omega=1$ (solid line),
$\Omega_{0}=0.3, \Omega_{\Lambda}=0$ (short-dashed line),
and $\Omega_{0}=0.3, \Omega_{\Lambda}=0.7$ (long-dashed line).
In all cases, $\sigma^{2}_{L}=2$ and $\langle\phi\rangle=0.66$ at $z=3$.
Results are in redshift-space with thermal broadening.
}
\label{zevolc}
\end{figure}
\begin{figure}
\centering
\centerline{\epsfysize=8.truecm
\epsfbox{checkz.ps}}
\caption[junk]{
Variation with redshift of $A$ and $\sigma^{2}_{L}$.
Results have been plotted for three different cosmologies,
$\Omega=1$ (solid line),
$\Omega_{0}=0.3, \Omega_{\Lambda}=0$ (short-dashed line),
and $\Omega_{0}=0.3, \Omega_{\Lambda}=0.7$ (long-dashed line).
}
\label{checkz}
\end{figure}
\subsection{The bias between flux and mass fluctuations}
In analogy with galaxy bias, we can define the bias of the flux
with respect to mass fluctuations as
\begin{equation}
b=\sqrt{\frac{\sigma^{2}_{\phi}}{\sigma^{2}_{\rho}}}.
\label{bias}
\end{equation}
Unlike the case with galaxy bias, we can easily predict this quantity
analytically using our formalism. We can choose between
two sorts of bias: that of the flux with respect to the linear $\rho$,
or with respect to the non-linear $\rho$. In Fig. \ref{biasc} we have plotted both of these
as a function of the variance in the flux, $\sigma_{\phi}$. As we are dealing
with one-point statistics in this paper, we do not discuss the scale
dependence of bias. However we will do so in Paper II in this series
(Gazta\~{n}aga \& Croft 1999).
We should point out though that
in Croft {\rm et~al. } (1998a), it was found that
the shape of a two-point clustering statistic of the flux (in that case
the power spectrum) follows well that of the linear mass.
The bias between the two was found in that paper by
using a procedure which involved running numerical simulations set
up with the power spectrum shape measured
from observations and comparing the clustering level in simulated
spectra with the observed clustering.
In this paper, we can find the bias level in a simpler fashion.
We note that for
small values of the fluctuation amplitude, $b$ tends towards the values
predicted by perturbation theory (Section 4.3). For larger values, such
as those likely to be encountered in observations, a fully
non-linear treatment,
such as the one presented here, is needed.
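As a concrete sketch, the bias of equation (\ref{bias}) can be estimated from any density field once the flux mapping is specified; here we use an illustrative lognormal field and the standard FGPA form $\phi=\exp[-A(1+\delta)^{\beta}]$ (all parameter values are stand-ins, not the paper's calibrated quantities):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical non-linear density field: lognormal with <rho> = 1.
sigma_nl2 = 4.0
g = rng.normal(0.0, np.sqrt(np.log(1.0 + sigma_nl2)), 200_000)
rho = np.exp(g - np.var(g) / 2.0)

# FGPA flux and fluctuations about the respective means.
A, beta = 0.8, 1.6
phi = np.exp(-A * rho**beta)
d_rho = rho - 1.0
d_phi = phi / phi.mean() - 1.0

# Flux bias with respect to the non-linear mass.
b = np.sqrt(np.mean(d_phi**2) / np.mean(d_rho**2))
print(round(b, 3))
```

With saturated absorption capping the flux at $\phi=0$, the flux fluctuations are milder than the mass fluctuations, so $b<1$ in this example.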
\begin{figure}
\centering
\centerline{\epsfysize=8.truecm
\epsfbox{fixmbias.ps}}
\caption[junk]{
The bias between flux and mass fluctuations. We show the bias
(see equation [\ref{bias}]) between the flux
and the linear mass ($\sigma_{l}$) as a dotted line and the bias
between the flux and the non-linear mass ($\sigma_{nl}$) as a solid line.
Both these quantities are plotted as a function of
the observable $\sigma_{\phi}^{2}$. The statistics of $\phi$ have
been computed in redshift space with thermal broadening,
for $\langle\phi\rangle=0.7$, $\sigma^{2}_{L}=2$ and $\gamma=-1$.
}
\label{biasc}
\end{figure}
\subsection{Accuracy of the approximations for density evolution}
We have seen (e.g., Fig. \ref{pdf2}) that the predictions for the PDF
have the right qualitative features
(as a function of $\gamma$ and $\sigma_L$) but do not reproduce it
in all its details, even around
$\delta=0$, because the SC is just a local approximation and does not
include shear. The results of FG98
indicate that the statistics of the weakly non-linear
density moments are dominated by the local dynamical
contribution to the evolution of the PDF,
and shear forces are subdominant (they tend to
cancel out when taking the mean).
We find here that a similar cancellation occurs when considering
the PDF of the flux, $\phi$,
even when $\sigma_L \simgt 1$.
Regarding the predictions for the 1-point moments of
the flux, we have checked that the
Spherical Collapse (SC) model yields
almost identical results to the Generalized
Zel'dovich Approximation (GZA) of equation (\ref{GZA}),
with $\alpha=21/13$. This is true both in real and redshift-space,
and also holds for the prediction of the velocity divergence $\theta$.
This is an interesting result because
although the SC model is better motivated from the theoretical
point of view, the GZA model is much simpler to implement.
In particular the GZA model provides us with analytical
expressions for the PDF (i.e., equation [\ref{GZApdf}] and Fig. \ref{fpdf}),
which can be used in practice to make the predictions.
As shown in Fosalba \& Gazta\~naga (1998b)
the SC approach to modelling non-linearities
does not work as well for $\theta$ as for $\rho$. In particular,
it was found that the next to leading order (or loop)
non-linear corrections are not as accurate, indicating that
tidal effects are more important for $\theta$.
This could partially explain why the redshift distortion modelling
(see Fig. \ref{zflux}) requires the addition of an extra velocity
dispersion in order to match the results of simulations.
\subsection{Comparison to other work}
The first attempts to constrain cosmology using the Ly$\alpha$\ forest
focussed on comparisons between
simulated data generated with
specific cosmological models
and observational data, using traditional line statistics. When
the simulated and observed spectra are decomposed into a superposition of
Voigt-profile lines, the distribution of column densities of these lines and
the distribution of their widths (``b-parameters'') can be reasonably well
reproduced by gravitational instability models (e.g., Dav\'{e} {\rm et~al. } 1997,
although see Bryan {\rm et~al. } 1998).
In the context of these models, the line parameters do
not have a direct physical meaning, as these statistics were intended to
describe discrete thermally broadened lines. It is possible to
use these traditional statistics to characterise the amount of small scale
power in the underlying density field, for example (see e.g.,
Hui, Gnedin \& Zhang 1997, Gnedin 1998). However, statistics which are more
attuned to the continuous nature of the flux distribution and the underlying
density field have advantages: as well as promising to be more sensitive
discriminants, continuous flux statistics can be designed to be less affected
by noise and choice of technique than profile fitting.
As the modern view of the Ly$\alpha$\ forest is essentially an outgrowth
of structure formation theory, it makes sense
to borrow statistical analysis techniques used in the study of the galaxy
distribution. Unlike the galaxy distribution, however, the Ly$\alpha$\ forest
offers a truly continuous distribution of flux, with no shot noise (albeit in
1 dimension), and a well motivated
theoretical relationship between the observed flux and the underlying mass.
So far, analysis of spectra using such
continuous flux statistics has mainly involved
specific cosmological models, and direct comparison of simulations
with observations.
The mean flux, $\langle\phi\rangle$ is the most obvious
flux statistic to calculate. Its measurement from observations
has been carried out by several authors (e.g., Press, Rybicki \&
Schneider 1993, Zuo \& Lu 1993), and there is an extensive
discussion in the literature about
what is usually quoted as the mean flux decrement,
$D_{A}=1-\langle\phi\rangle$, or the mean effective optical depth,
$\overline{\tau}_{\rm eff}=-\ln\langle\phi\rangle$.
The probability
distribution of the flux has been investigated by Miralda-Escud\'{e} {\rm et~al. } (1996),
Croft {\rm et~al. } (1996), Cen (1997), Rauch {\rm et~al. } (1997), Zhang {\rm et~al. } (1998),
and Weinberg {\rm et~al. } (1998). Other statistics such as
the two point correlation function of the flux have been introduced
(Zuo \& Bond 1994),
the power spectrum of the flux (Croft {\rm et~al. } 1998a, Hui 1999),
the two point pdf of the flux (Miralda-Escud\'{e} {\rm et~al. } 1997),
and the number of
times a spectrum crosses a threshold per unit length (Miralda-Escud\'{e}
{\rm et~al. } 1996,
Croft {\rm et~al. } 1996, Weinberg {\rm et~al. } 1998).
Methods have been developed to reconstruct
properties of the underlying mass distribution, such as the matter power
spectrum (Croft {\rm et~al. } 1998a, Hui 1999),
using our theoretical assumptions
for the relation between mass and flux. A technique for carrying out a direct
inversion from the flux to the mass distribution has been described by Nusser
and Haehnelt (1998).
In the present paper, we emphasise the use of statistics which have
been used extensively in the study of galaxy clustering, in particular the
higher order moments (e.g., Gazta\~{n}aga \& Frieman 1994).
These statistics, and their behaviour
when used to quantify the evolution of density perturbations in the
quasi-linear regime have been the subject of much attention. It would seem
that extending their use to the study of the Ly$\alpha$\ forest may offer us a
good chance to combine our knowledge of gravitational instability with that of
the IGM and in doing so make progress in both disciplines.
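For reference, the one-point hierarchical amplitudes $S_p$ used in galaxy clustering can be estimated from any smoothed field in a few lines. A minimal sketch (the lognormal test field and sample size are illustrative assumptions; for a lognormal field $S_3 = 3 + \sigma^2$ exactly, which provides a quick check):

```python
import numpy as np

def hierarchical_moments(delta):
    """Connected moments of a density contrast field and the hierarchical
    amplitudes S_p = <delta^p>_c / <delta^2>^(p-1)."""
    delta = np.asarray(delta) - np.mean(delta)     # enforce <delta> = 0
    var = np.mean(delta**2)
    m3 = np.mean(delta**3)                         # connected = raw for p = 3
    m4 = np.mean(delta**4) - 3.0 * var**2          # connected fourth moment
    return {"sigma2": var, "S3": m3 / var**2, "S4": m4 / var**3}

# Weakly non-linear lognormal test field: S3 should come out near 3 + sigma2.
rng = np.random.default_rng(1)
g = 0.3 * rng.normal(size=200_000)
delta = np.exp(g - 0.5 * 0.3**2) - 1.0
print(hierarchical_moments(delta))
```

The same estimator applies unchanged to flux fluctuations $\delta_\phi = \phi/\langle\phi\rangle - 1$, which is what makes these statistics easy to transplant from the galaxy-clustering literature.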
On the predictions side, many pieces of
analytic work have been carried out which incorporate the dominant
physical processes involved in producing high-redshift Ly$\alpha$\ absorption
(processes summarized in Section 2).
The most important differences among these studies have been
in the schemes used to follow the evolution
of density perturbations. These have included linear theory (Bi 1993,
Bi, Ge \& Fang 1995),
the lognormal approximation (Gnedin \& Hui 1996, Bi \& Davidsen 1997),
and the Zel'dovich Approximation (McGill 1990,
Reisennegger \& Miralda-Escud\'{e} 1995, Hui, Gnedin \& Zhang 1997).
Unlike these approximations, the SC model used in this paper
is able to reproduce exactly the perturbation theory results for
clustering. This accuracy makes it useful for calculating
high-order statistics of the flux, in our search for the signatures
of gravitational instability. It is important to realize that
we have not used the SC model to make simulated QSO Ly$\alpha$\ spectra,
but that we have used its predictions for the properties
of the mass to predict the
statistics of the Ly$\alpha$\ flux. With such an approach
(similar to that taken by Reisennegger \& Miralda-Escud\'{e} 1995)
we can quickly and easily vary parameters in order
to explore for example the dependence of a particular statistic on redshift
(Section 6.2). Some tasks which previously required numerical simulations,
such as finding the bias between density and
flux fluctuations (e.g., Croft {\rm et~al. } 1998ab), can be
carried out analytically.
The fully non-linear analysis we have described in this paper will allow
one to carry out many analyses of clustering where the precise
relationship between the statistics of the mass and the flux is
important. This includes attempts to constrain the cosmic geometry
from the clustering measured between adjacent QSO lines of sight (e.g.,
McDonald \& Miralda-Escud\'e 1999, Hui, Stebbins \& Burles 1999).
In such situations, a non-linear theory of redshift distortions in the
Ly$\alpha$\ forest and of the bias between the fluctuations in the
observed flux and the mass, such as we have presented in this paper
should be very useful.
\section{Summary and conclusions}
We have presented a fully non-linear analytical treatment of the
one-point clustering properties of the high-$z$ Ly$\alpha$\ forest in the gravitational
instability scenario.
The formalism we have presented should prove to be a
useful tool for studying the forest, and has immediate application to
the calculation of two-point statistics (see Paper II).
The two main ingredients we have used
are the Spherical Collapse model (SC) or
shear-free approximation for the evolution of density
perturbations, and the Fluctuating Gunn-Peterson Approximation
for the relation between density and Ly$\alpha$\ optical depth.
The predictions for the one-point clustering of the mass made using the
SC model depend only on two
parameters, $\sigma^{2}_{L}$ and $\gamma$.
These are, respectively, the linear variance of density
fluctuations, and the local slope of the linear correlation function.
In the FGPA, the relation between the mass distribution
and Ly$\alpha$\ forest optical depth
is largely governed by one parameter, $A$, which
can be set by appealing to observational
measurements of the mean flux.
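Operationally, fixing $A$ from the observed mean flux is a one-dimensional root find, since the mean flux decreases monotonically with $A$. A minimal sketch of this normalisation step (the toy lognormal density field, the slope $\beta = 1.6$, and the target mean flux of 0.7 are illustrative assumptions, not values advocated in this paper):

```python
import numpy as np

BETA = 1.6  # assumed FGPA slope; set by the IGM equation of state

def fgpa_flux(delta, A, beta=BETA):
    """Fluctuating Gunn-Peterson Approximation: tau = A (1 + delta)^beta."""
    return np.exp(-A * (1.0 + delta) ** beta)

def calibrate_A(delta, target_mean_flux, lo=1e-4, hi=1e3, tol=1e-10):
    """Bisect for the amplitude A that reproduces the observed mean flux.
    The mean flux is monotonically decreasing in A, so bisection is safe."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if fgpa_flux(delta, mid).mean() > target_mean_flux:
            lo = mid                    # too little absorption: increase A
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

rng = np.random.default_rng(2)
delta = np.exp(rng.normal(0.0, 1.0, size=100_000) - 0.5) - 1.0  # toy density field
A = calibrate_A(delta, target_mean_flux=0.7)
print(A, fgpa_flux(delta, A).mean())    # mean flux ~ 0.7 by construction
```

Because $A$ is pinned to the mean flux, the remaining freedom in the flux statistics comes only from the density field parameters, which is what makes this normalisation convenient.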
The predictions of the SC model for the density are
typically quite non-linear ($\sigma^{2} \sim 2$ or more).
While these predictions are not expected to be accurate for the
high density tail of the distribution,
the weighting of the FGPA relation means that the
statistics of the flux are governed by the (quasi-linear)
density regime where the SC {\it is} accurate.
The Ly$\alpha$\ forest is therefore well suited to study using
such an approximate analytical technique.
We note that the analytical predictions can be used in tests
of the picture of Ly$\alpha$\ forest formation and
the applicability of the FGPA.
With the extra information afforded by the two-point statistics and
considering the evolution of clustering as a function of redshift,
it will be possible to look for the signatures of any deviation from the
theoretical picture. Consistency tests for the gravitational instability
scenario include checking the evolution of the moments as
a function of redshift, the scaling of the hierarchical moments,
and their dependence on scale.
We plan to use our predictive techniques in future work
to extract information from observations, using both one-point and two-point
statistics. In the present paper, we have concentrated on developing
an analytical framework for studying the clustering of the
transmitted flux
in the forest region of QSO spectra.
Some of the results of our present exploration
of one-point statistics include the following:
\noindent $\bullet$ Using our formalism we are able to
estimate the bias between mass and flux fluctuations
without resorting to simulations.\\
$\bullet$ We can make predictions for the clustering properties of the Ly$\alpha$\
forest flux in both Gaussian and non-Gaussian models. We find large
differences between the two in an example case.\\
$\bullet$ In the limit of small fluctuations, our non-linear analytical
treatment converges to the same
results as those from Perturbation Theory calculations.\\
$\bullet$ For larger fluctuations, where Perturbation Theory is
no longer valid, we find our treatment to give accurate results
compared to statistics evaluated from
N-body simulated spectra. These predictions are most accurate
when the linear variance of the density field is $\sim 4$
and below. For values above this, the qualitative
behaviour of the high order moments is reproduced.\\
$\bullet$ We can follow the evolution of the one-point statistics
of the flux as
a function of redshift. We find that the difference between predictions
for different cosmologies is small, so that comparison with
observations should be useful in constraining the evolution
of the ionizing background intensity.\\
$\bullet$ If we normalise our predictions so that the mean flux
is held fixed (for example to the observed value), we find that the
statistics of the flux are relatively insensitive to the effects of
redshift distortions induced by peculiar velocities or
thermal broadening. This is most valid for the higher order normalised
hierarchical moments.\\
The high-$z$ Ly$\alpha$\ forest is amenable to study using analytical treatments,
a fact which gives it great value as a probe of structure formation.
Application of analytical theory to measurements
from the many observational datasets currently available promises
to reveal much, both about the validity of our theoretical assumptions, and
about cosmology.
\vspace{1cm}
{\bf Acknowledgments}
We thank George Efstathiou for the use of his P$^{3}$M N-Body code.
We also thank Jasjeet Bagla and Pablo Fosalba
for useful discussions, and the anonymous referee for suggestions
which improved the paper.
EG acknowledges support from
CSIC, DGICYT (Spain), project
PB93-0035, and CIRIT, grant GR94-8001 and
1996BEAI300192.
RACC acknowledges support from NASA Astrophysical Theory Grant NAG5-3820
and CESCA for support during a visit to the IEEC.
\section{References}
\def\par \hangindent=.7cm \hangafter=1 \noindent {\par \hangindent=.7cm \hangafter=1 \noindent}
\def ApJ { ApJ }
\defA \& A {A \& A }
\def ApJS { ApJS }
\defAJ{AJ}
\def ApJS { ApJS }
\def MNRAS { MNRAS }
\def Ap. J. Let. { Ap. J. Let. }
\par \hangindent=.7cm \hangafter=1 \noindent Baugh, C.M., Gazta\~{n}aga, E., \& Efstathiou, G.,
1995, MNRAS , 274, 1049
\par \hangindent=.7cm \hangafter=1 \noindent Bi, H.G., Boerner, G., Chu, Y., 1992, A \& A , 266, 1
\par \hangindent=.7cm \hangafter=1 \noindent Bi, H.G. 1993, ApJ , 405, 479
\par \hangindent=.7cm \hangafter=1 \noindent Bi, H., Ge, J., \& Fang, L.-Z. 1995, ApJ , 452, 90
\par \hangindent=.7cm \hangafter=1 \noindent Bi, H.G. \& Davidsen, A. 1997, ApJ , 479, 523
\par \hangindent=.7cm \hangafter=1 \noindent Bechtold, J., Crotts, A. P. S., Duncan, R. C., \& Fang, Y. 1994,
ApJ , 437, L83
\par \hangindent=.7cm \hangafter=1 \noindent Bernardeau, F., 1992, ApJ, 392, 1
\par \hangindent=.7cm \hangafter=1 \noindent Bernardeau, F., 1994, A\&A, 291, 697
\par \hangindent=.7cm \hangafter=1 \noindent Bernardeau, F., \& Kofman, L., 1995, ApJ , 443, 479
\par \hangindent=.7cm \hangafter=1 \noindent Bryan, G.L., Machacek, M., Anninos, P., Norman, M.L., 1998,
ApJ , 517, 13
\par \hangindent=.7cm \hangafter=1 \noindent Cen, R., 1997, ApJ , 479, L85
\par \hangindent=.7cm \hangafter=1 \noindent Cen, R., Miralda-Escud\'e, J., Ostriker, J.P., \& Rauch, M. 1994,
ApJ , 437, L9
\par \hangindent=.7cm \hangafter=1 \noindent Croft, R.A.C., \& Efstathiou, G., 1994, MNRAS , 267, 390
\par \hangindent=.7cm \hangafter=1 \noindent Croft, R.A.C., Weinberg, D.H., Hernquist, L. \& Katz, N. 1996,
In : ``Proceedings of the 18th Texas Symposium on Relativistic
Astrophysics and Cosmology'', eds. Olinto, A., Frieman, J.,
\& Schramm, D.N.
\par \hangindent=.7cm \hangafter=1 \noindent Croft, R.A.C., Weinberg, D.H., Katz, N., \& Hernquist, L. 1997,
ApJ , 488, 532
\par \hangindent=.7cm \hangafter=1 \noindent Croft, R.A.C., Weinberg, D.H., Katz, N., \& Hernquist, L. 1998a,
ApJ , 495, 44
\par \hangindent=.7cm \hangafter=1 \noindent Croft, R.A.C., Weinberg, D.H., Pettini, M.,
Hernquist, L., \& Katz, N. 1999, ApJ 520, 1
\par \hangindent=.7cm \hangafter=1 \noindent Crotts, A.P.S., \& Fang, Y., 1998, ApJ , 497, 67
\par \hangindent=.7cm \hangafter=1 \noindent Dav\'e, R., Hernquist, L., Weinberg, D. H., \& Katz, N. 1997,
ApJ , 477, 21
\par \hangindent=.7cm \hangafter=1 \noindent Dav\'e, R., Hernquist, L., Katz, N., \& Weinberg, D. H., 1999,
ApJ , 511, 521
\par \hangindent=.7cm \hangafter=1 \noindent Dinshaw, N., Impey, C. D., Foltz, C. B., Weymann, R. J., \&
Chaffee, F. H. 1994, ApJ , 437, L87
\par \hangindent=.7cm \hangafter=1 \noindent Dinshaw, N., Foltz, C. B., Impey, C. D., Weymann, R. J., \&
Morris, S. L. 1995, Nature, 373, 223
\par \hangindent=.7cm \hangafter=1 \noindent Efstathiou, G., \& Eastwood, J.W., 1981, MNRAS , 194, 503
\par \hangindent=.7cm \hangafter=1 \noindent Efstathiou, G., Bond, J.R., \& White, S.D.M., 1992, MNRAS ,
258, 1p
\par \hangindent=.7cm \hangafter=1 \noindent Efstathiou, G., Davis, M., White, S.D.M., \& Frenk, C.S.,
1985, ApJS , 57, 241
\par \hangindent=.7cm \hangafter=1 \noindent Fardal, M., \& Shull, M., 1993, ApJ , 415, 524
\par \hangindent=.7cm \hangafter=1 \noindent Fosalba, P., \& Gazta\~naga, E., 1998a, MNRAS , 301, 503
(FG98)
\par \hangindent=.7cm \hangafter=1 \noindent Fosalba, P., \& Gazta\~naga, E., 1998b, MNRAS , 301, 535
\par \hangindent=.7cm \hangafter=1 \noindent Fry, J.N., \& Gazta\~naga, E., 1993, ApJ , 413, 447
\par \hangindent=.7cm \hangafter=1 \noindent Gazta\~naga, E., \& Baugh, C.M., 1995, MNRAS, 273, L5
\par \hangindent=.7cm \hangafter=1 \noindent Gazta\~{n}aga, E., \& Croft, R.A.C., 1999, {\it in preparation}
(Paper II)
\par \hangindent=.7cm \hangafter=1 \noindent Gazta\~naga, E., \& Fosalba, P., 1998, MNRAS , 301, 524
astro-ph/9712095
\par \hangindent=.7cm \hangafter=1 \noindent Gazta\~naga, E., \& Frieman, J.A., 1994, MNRAS, 425, 392
\par \hangindent=.7cm \hangafter=1 \noindent Gnedin, N. Y. \& Hui, L. 1996, ApJ , 472, L73
\par \hangindent=.7cm \hangafter=1 \noindent Gnedin, N. Y., \& Hui, L. 1998, MNRAS , 296, 44
\par \hangindent=.7cm \hangafter=1 \noindent Gunn, J.E., \& Peterson, B.A. 1965, ApJ , 142, 1633
\par \hangindent=.7cm \hangafter=1 \noindent Haiman, Z., Thoul, A., \& Loeb, A., 1996, ApJ , 464, 523
\par \hangindent=.7cm \hangafter=1 \noindent Hernquist L., Katz N., Weinberg D.H., \& Miralda-Escud\'e J.
1996, ApJ , 457, L5
\par \hangindent=.7cm \hangafter=1 \noindent Hockney,R.W., \& Eastwood, J.W., 1988, Computer Simulation
Using Particles, Bristol: Hilger
\par \hangindent=.7cm \hangafter=1 \noindent Hu, E.M., Kim, T-S., Cowie, L.L., Songaila, A.,\& Rauch, M.,
1995, AJ, 110, 1526
\par \hangindent=.7cm \hangafter=1 \noindent Hui, L., 1999, ApJ , 516, 519
\par \hangindent=.7cm \hangafter=1 \noindent Hui, L., \& Gnedin, N. 1997, MNRAS , 292, 27
\par \hangindent=.7cm \hangafter=1 \noindent Hui, L., Gnedin, N., \& Zhang, Y. 1997, ApJ , 486, 599
\par \hangindent=.7cm \hangafter=1 \noindent Hui, L., Stebbins, A., \& Burles, S., 1999, ApJ ,
511, L5
\par \hangindent=.7cm \hangafter=1 \noindent Juszkiewicz, R., Weinberg, D.H., Amsterdamski, P., Chodorowski, M.,
\& Bouchet, F., 1995, ApJ, 442, 39
\par \hangindent=.7cm \hangafter=1 \noindent Kaiser, N., 1987, MNRAS, 227, 1
\par \hangindent=.7cm \hangafter=1 \noindent Katz, N., Weinberg D.H., \& Hernquist, L. 1996, ApJS , 105, 19
\par \hangindent=.7cm \hangafter=1 \noindent Kim, T-S, Hu, E.M., Cowie, L.L., Songaila, A., 1997, AJ, 114,1
\par \hangindent=.7cm \hangafter=1 \noindent Kirkman, D., \& Tytler, D., 1997, ApJ , 484, 672
\par \hangindent=.7cm \hangafter=1 \noindent Kofman, L., Bertschinger, E., Gelb, J.M., Nusser, A., \& Dekel, A.,
1994, ApJ, 420, 44
\par \hangindent=.7cm \hangafter=1 \noindent Lu, L., Sargent, W.L.W., Womble, D., \& Takada-Hidai, M.,
1996, ApJ , 472, 509
\par \hangindent=.7cm \hangafter=1 \noindent Lynds, C. R., 1971, ApJ, 164, L73
\par \hangindent=.7cm \hangafter=1 \noindent McDonald, P. \& Miralda-Escud\'e, J. 1999, ApJ , 518, 24
\par \hangindent=.7cm \hangafter=1 \noindent McGill, C. 1990, MNRAS , 242, 544
\par \hangindent=.7cm \hangafter=1 \noindent Miralda-Escud\'e J., Cen R., Ostriker J.P., \& Rauch M. 1996,
ApJ , 471, 582
\par \hangindent=.7cm \hangafter=1 \noindent Miralda-Escud\'e J., Rauch, M., Sargent, W., Barlow, T.A.,
Weinberg, D.H., Hernquist, L., Katz, N., Cen, R., \& Ostriker, J.P., 1997,
in: ``Proceedings of 13th
IAP Colloquium: Structure and Evolution of the IGM from QSO
Absorption Line Systems'', eds. Petitjean,P., \& Charlot, S.,
\par \hangindent=.7cm \hangafter=1 \noindent Miralda-Escud\'e J., \& Rees, M. J. 1994, MNRAS , 266, 343
\par \hangindent=.7cm \hangafter=1 \noindent Munshi, D., Sahni, V., \& Starobinsky, A.A., 1994, ApJ, 436, 517
\par \hangindent=.7cm \hangafter=1 \noindent Nusser, A., \& Haehnelt, M., 1999, MNRAS , 300, 1027
\par \hangindent=.7cm \hangafter=1 \noindent Peebles, P.J.E., 1980, The Large-Scale Structure of
the Universe. Princeton Univ. Press, Princeton
\par \hangindent=.7cm \hangafter=1 \noindent Peebles, P.J.E., 1993, Principles of Physical Cosmology,
Princeton Univ, Press, Princeton
\par \hangindent=.7cm \hangafter=1 \noindent Peebles, P.J.E., 1999, ApJ , 510, 531
\par \hangindent=.7cm \hangafter=1 \noindent Press W.H., Rybicki G.B., Schneider D.P., 1993, ApJ , 414, 64
\par \hangindent=.7cm \hangafter=1 \noindent Protogeros, Z.A.M., \& Scherrer, R.J. 1997, MNRAS, 284, 425
\par \hangindent=.7cm \hangafter=1 \noindent Rauch, M., Miralda-Escud\'e, J., Sargent, W. L. W., Barlow, T. A.,
Weinberg, D. H., Hernquist, L., Katz, N., Cen, R., \& Ostriker, J. P.
1997, ApJ , 489, 7
\par \hangindent=.7cm \hangafter=1 \noindent Rauch, M., 1998, ARAA, in press
\par \hangindent=.7cm \hangafter=1 \noindent Reisenegger, A. \& Miralda-Escud\'{e}, J. 1995, ApJ , 449, 476
\par \hangindent=.7cm \hangafter=1 \noindent Sargent, W.L.W., Young, P.J., Boksenberg, A., Tytler, D. 1980,
ApJS , 42, 41
\par \hangindent=.7cm \hangafter=1 \noindent Scherrer, R., \& Gazta\~naga, E., 1998, in preparation
\par \hangindent=.7cm \hangafter=1 \noindent Theuns, T., Leonard, A., Efstathiou, G., Pearce, F.R., Thomas, P.A.,
1998, MNRAS , 301, 478
\par \hangindent=.7cm \hangafter=1 \noindent Wadsley, J. W. \& Bond, J.R. 1996,
in ``Computational Astrophysics'',
Proceedings of the 12th Kingston Conference,
eds. Clarke, D., West, M., PASP, astro-ph 9612148
\par \hangindent=.7cm \hangafter=1 \noindent Weinberg, D.H., Hernquist, L., Katz, N., Croft, R., \&
Miralda-Escud\'{e}, J., 1998a, In : Proc. of the 13th IAP Colloquium,
Structure and Evolution of the IGM from QSO
Absorption Line Systems, eds. Petitjean, P., \& Charlot, S.,
Nouvelles Fronti\`eres, Paris, astro-ph/9709303
\par \hangindent=.7cm \hangafter=1 \noindent Weinberg {\rm et~al. }, 1998b, In:
Proceedings of the MPA/ESO Conference
"Evolution of Large Scale Structure: From Recombination to Garching",
astro-ph/9810142
\par \hangindent=.7cm \hangafter=1 \noindent Zhang Y., Anninos P., \& Norman M.L. 1995, ApJ , 453, L57
\par \hangindent=.7cm \hangafter=1 \noindent Zhang Y., Meiksin, A., Anninos P., \& Norman M.L. 1998, ApJ , 495, 63
\par \hangindent=.7cm \hangafter=1 \noindent Zuo, L. 1992, MNRAS , 258, 36
\par \hangindent=.7cm \hangafter=1 \noindent Zuo, L., \& Bond, J.R., 1994, ApJ , 423, 73
\par \hangindent=.7cm \hangafter=1 \noindent Zuo, L., \& Lu, L., 1993, ApJ , 418, 601
\section{Introduction}
In the standard model (SM) the cancellation of the chiral anomalies
takes place only when we consider the contributions of leptons and
quarks, indicating a deeper relation between them. Therefore, it is
rather natural to consider extensions of the SM that treat quarks and
leptons on the same footing and consequently introduce new bosons,
called leptoquarks, that mediate quark-lepton transitions. The class
of theories exhibiting these particles includes composite models
\cite{comp,af}, grand unified theories \cite{gut}, technicolor models
\cite{tec}, and superstring-inspired models \cite{e6}. Since leptoquarks
couple to a lepton and a quark, they are color triplets under
$SU(3)_C$, carry simultaneously lepton and baryon number, have
fractional electric charge, and can be of scalar or vector nature.
From the experimental point of view, leptoquarks possess the striking
signature of a peak in the invariant mass of a charged lepton with a
jet, which makes their search much simpler, without the need for
intricate analyses of several final state topologies. Certainly, the
experimental observation of leptoquarks is an undeniable signal of
physics beyond the SM, so there have been a large number of direct
searches for them in $e^+e^-$ \cite{LEP}, $e^\pm p$ \cite{HERA}, and
$p \bar{p}$ \cite{PP} colliders. Up to now all of these searches led
to negative results, which bound the mass of vector leptoquarks to be
larger than 245--340 (230--325) GeV, depending on the leptoquark
coupling to gluons, for branching ratio into $e^\pm$-jet equal to 1
(0.5) \cite{donote}.
The direct search for leptoquarks with masses above a few hundred GeV
can be carried out only in the next generation of $pp$ \cite{fut:pp},
$ep$ \cite{buch,fut:ep}, $e^+e^-$ \cite{fut:ee}, $e^-e^-$
\cite{fut:elel}, $e\gamma$ \cite{fut:eg}, and $\gamma\gamma$
\cite{fut:gg} colliders. In this work, we extend our previous analyses
of the LHC potentiality to discover scalar leptoquarks to vector ones
\cite{nos}. We study the pair production of first generation
leptoquarks that lead to a final state topology containing two jets
plus a pair $e^+e^-$. We analyze this signal for vector leptoquarks
and use the results for the SM backgrounds obtained in Ref.\
\cite{nos}, where careful studies of all possible top production,
QCD and electroweak
backgrounds for this topology were performed using the event generator
PYTHIA \cite{pyt}. We restrict ourselves to first generation
leptoquarks that couple to pairs $e^\pm u$ and $e^\pm d$ with the
leptoquark interactions described by the most general effective
Lagrangian invariant under $SU(3)_C \otimes SU(2)_L \otimes U(1)_Y$
\cite{buch}.
In this work, we study the pair production of vector leptoquarks via
quark--antiquark annihilation and gluon--gluon fusion, {\em i.e.}
\begin{eqnarray}
q + \bar{q}~ &&\rightarrow \Phi_{\text{lq}} + \bar{\Phi}_{\text{lq}} \; ,
\label{eq:qq}
\\
g + g~ &&\rightarrow \Phi_{\text{lq}} + \bar{\Phi}_{\text{lq}} \; ,
\label{eq:gg}
\end{eqnarray}
where we denote the vector leptoquarks by $\Phi_{\text{lq}}$. These processes
give rise to $e^+e^-$ pairs with large transverse momenta accompanied by jets.
Using the cuts devised in Ref.\ \cite{nos} to reduce the backgrounds and
enhance the signals, we show that the LHC will be able to discover first
generation vector leptoquarks with masses smaller than 1.5--2.3 TeV, depending
on their couplings and on the integrated luminosity (10 or 100 fb$^{-1}$).
Here, we perform our analyses using a specially created event generator for
vector leptoquarks. Moreover we consider the most general coupling of vector
leptoquarks to gluons, exhibiting our results for two distinct scenarios. In
particular we analyze the most conservative case, where the leptoquark
couplings to gluons are such that the pair production cross section is
minimal~\cite{bbk}.
While we were preparing this paper, a similar study of the production of
vector leptoquarks appeared \cite{dion}, which uses a different event
generator, distinct cuts and a less general leptoquark coupling to gluons,
which contains only the chromomagnetic anomalous coupling to gluons.
Low energy experiments give rise to strong constraints on leptoquarks, unless
their interactions are carefully chosen \cite{shanker,fcnc}. In order to evade
the bounds from proton decay, leptoquarks are required not to couple to
diquarks. To avoid the appearance of leptoquark induced FCNC, leptoquarks are
assumed to couple only to a single quark family and only one lepton
generation. Nevertheless, there still exist low-energy limits on leptoquarks.
Helicity suppressed meson decays restrict the couplings of leptoquarks to
fermions to be chiral \cite{shanker}. Moreover, residual FCNC \cite{leurer},
atomic parity violation \cite{apv}, effects of leptoquarks on the $Z$ physics
through radiative corrections \cite{jkm} and meson decay
\cite{leurer,apv,davi} constrain the first generation leptoquarks to be
heavier than $0.5$--$1.5$ TeV when the coupling constants to fermions are
equal to the electromagnetic coupling $e$. Therefore, our results indicate
that the LHC can not only confirm these indirect limits but also expand them
considerably.
The outline of this paper is as follows. In Sec.\ \ref{l:eff} we
introduce the $SU(3)_C \otimes SU(2)_L \otimes U(1)_Y$ invariant
effective Lagrangians that we analyzed. In Sec.\ \ref{mc} we describe
in detail how we have performed the signal Monte Carlo simulation.
Sec.\ \ref{bckg} contains a brief summary of the backgrounds and
kinematical cuts needed to suppress them. Our results and conclusions
are shown in Sec.\ \ref{resu}.
\section{Models for vector leptoquark interactions}
\label{l:eff}
In this work we assume that leptoquarks decay exclusively into the known
quarks and leptons. In order to avoid the low energy constraints, leptoquarks
must interact with a single generation of quarks and leptons with chiral
couplings. Furthermore, we also assume that their interactions are $SU(3)_C
\otimes SU(2)_L \otimes U(1)_Y$ gauge invariant above the electroweak symmetry
breaking scale $v$. The most general effective Lagrangian satisfying these
requirements and baryon number (B), lepton number (L), electric charge, and
color conservations is \cite{buch}~:
\begin{eqnarray}
{\cal L}^{\text{f}}_{\text{eff}}~ & & =~ {\cal L}_{F=2} ~+~ {\cal L}_{F=0}
+ \text{h.c.} \; ,
\label{e:int}
\\
{\cal L}_{F=2}~ & & =~ g_{\text{2L}}~ (V^{\mu}_{2L})^T~ \bar{d}^c_R~
\gamma_\mu~ i \tau_2~ \ell_L +
g_{\text{2R}}~ \bar{q}^c_L~ \gamma_\mu~i \tau_2~ e_R ~ V^{\mu}_{2R}
+ \tilde{g}_{\text{2L}}~ (\tilde{V}^{\mu}_{2L})^T
\bar{u}^c_R~ \gamma_\mu~ i \tau_2~ \ell_L
\; ,
\label{lag:fer}\\
{\cal L}_{F=0}~ & & =~ h_{\text{1L}}~ \bar{q}_L~ \gamma_\mu~ \ell_L~
V^{\mu}_{1L}
+ h_{\text{1R}}~ \bar{d}_R~ \gamma_{\mu}~ e_R~ V^{\mu}_{1R}
+ \tilde{h}_{\text{1R}}~ \bar{u}_R~ \gamma_{\mu}~ e_R~ \tilde{V}^{\mu}_{1R}
+ h_{\text{3L}}~ \bar{q}_L~ \vec{\tau}~ \gamma_{\mu}~ \ell_L \cdot
\vec{V}^{\mu}_{3L}
\; ,
\label{eff}
\end{eqnarray}
where $F=3B+L$, $q$ ($\ell$) stands for the left-handed quark (lepton)
doublet, and $u_R$, $d_R$, and $e_R$ are the singlet components of the
fermions. We denote the charge conjugated fermion fields by
$\psi^c=C\bar\psi^T$ and we omitted in Eqs.\ (\ref{lag:fer}) and
(\ref{eff}) the flavor indices of the leptoquark couplings to
fermions. The leptoquarks $V^{\mu}_{1R(L)}$ and $\tilde{V}^{\mu}_{1R}$
are singlets under $SU(2)_L$, while $V^{\mu}_{2R(L)}$ and
$\tilde{V}^{\mu}_{2L}$ are doublets, and $V^{\mu}_{3L}$ is a triplet.
From the above interactions we can see that for first generation leptoquarks,
the main decay modes of leptoquarks are those into pairs $e^\pm q$ and $\nu_e
q^\prime$. In this work we do not consider their decays into neutrinos,
however, we take into account properly the branching ratio into charged
leptons. In Table \ref{t:cor} we exhibit the leptoquarks that can be studied
using the final state $e^{\pm}$ plus a jet, as well as their decay products
and branching ratios. Only the leptoquarks $V^{2}_{2L}$,
$\tilde{V}^{2}_{2L}$, and $V_3^{+}$ decay exclusively into a jet and a
neutrino, and are not constrained by our analyses; see Eqs.\ (\ref{lag:fer})
and (\ref{eff}).
Leptoquarks are color triplets; therefore, it is natural to assume that they
interact with gluons. However, $SU(3)_C$ gauge invariance is not enough to
determine the interactions between gluons and vector leptoquarks, since it is
possible to introduce two anomalous couplings $\kappa_g$ and $\lambda_g$, which
are related to the anomalous magnetic and electric quadrupole moments,
respectively. We assume here that these quantities are independent in order to
work with the most general scenario. The effective Lagrangian describing the
interaction of vector leptoquarks ($\Phi$) with gluons is given by \cite{bbk}
\begin{equation}
\label{eqLAV}
{\cal L}_V^g
= -\frac{1}{2} V^{i \dagger}_{\mu \nu}
V^{\mu \nu}_i + M_\Phi^2 \Phi_{\mu}^{i \dagger} \Phi^{\mu}_i -
ig_s \left [ (1 - \kappa_g)
\Phi_{\mu}^{i \dagger}
t^a_{ij}
\Phi_{\nu}^j
{\cal G}^{\mu \nu}_a
+ \frac{\lambda_g}{M_\Phi^2} V^{i\dagger}_{\sigma \mu}
t^a_{ij}
V_{\nu}^{j \mu} {\cal G}^{\nu \sigma}_a \right ] ,
\end{equation}
where there is an implicit sum over all vector leptoquarks, $g_s$
denotes the strong coupling constant, $t^a$ are the $SU(3)_C$
generators, $M_\Phi$ is the leptoquark mass, and $\kappa_g$ and
$\lambda_g$ are the anomalous couplings, assumed to be real. The field
strength tensors of the gluon and vector leptoquark fields are
respectively
\begin{eqnarray}
{\cal G}_{\mu \nu}^a &=& \partial_{\mu} {\cal A}_{\nu}^a
- \partial_{\nu}
{\cal A}_{\mu}^a + g_s f^{abc} {\cal A}_{\mu b} {\cal A}_{\nu c},
\nonumber\\
V_{\mu \nu}^{i}
&=& D_{\mu}^{ik}
\Phi_{\nu k} - D_{\nu}^{ik} \Phi_{\mu k},
\end{eqnarray}
with the covariant derivative given by
\begin{equation}
D_{\mu}^{ij} = \partial_{\mu} \delta^{ij} - i g_s
t_a^{ij}
{\cal A}^a_{\mu} \; ,
\end{equation}
where ${\cal A}$ stands for the gluon field.
At present there are no direct bounds on the anomalous parameters $\kappa_g$
and $\lambda_g$. Here we analyze two scenarios: in the first, called minimal
cross section couplings, we minimize the production cross section as a
function of these parameters for a given vector leptoquark mass. In the
second case, which we name Yang--Mills couplings, we consider that the vector
leptoquarks are gauge bosons of an extended gauge group which corresponds to
$\kappa_g=\lambda_g=0$.
\section{Signal Simulation and Rates}
\label{mc}
Although the processes for the production of scalar leptoquarks are
incorporated in PYTHIA, the vector leptoquark production is absent. In order
to study the pair production of vector leptoquarks via the processes
(\ref{eq:qq}) and (\ref{eq:gg}) we have created a Monte Carlo generator for
these reactions, adding new external user processes to the
PYTHIA~5.7/JETSET~7.4 package \cite{pyt}. We have included in our simulation
two cases of anomalous vector leptoquark couplings to gluons, as well as their
decays into fermions.
In our analyses, we assume that the pair production of leptoquarks is due
entirely to strong interactions, {\em i.e.}, we neglect the contributions from
$t$-channel lepton exchange via the leptoquark couplings to fermions
\cite{bbk}. This hypothesis is reasonable since the fermionic couplings $g$
and $h$ are constrained to be rather small by low-energy experiments for
leptoquark masses of the order of a TeV.
The analytical expressions for the scattering amplitudes were taken from the
{\bf \sf LQPAIR} package \cite{lqpair}, which was created using the {\sf
CompHEP} package \cite{comphep}. The
integration over the phase space was done using {\bf \sf BASES} \cite{bases}
while we used {\bf \sf SPRING} for the simulation \cite{bases}. An interface
between these programs and PYTHIA was specially written.
In our calculations we employed the parton distribution functions CTEQ3L
\cite{cteq2l}, where the scale $Q^2$ was taken to be the leptoquark mass
squared. Furthermore, the effects of final state radiation, hadronization and
string jet fragmentation (by means of JETSET~7.4) have also been taken into
account.
The cross sections for the production of vector leptoquark pairs are presented
in Fig.\ \ref{f:pair1} for Yang--Mills and minimal couplings. The numerical
values of the total cross sections are shown in Table \ref{t:pair1} along with
the values of couplings $\kappa_g$ and $\lambda_g$ that lead to the minimum
total cross section. As we can see from this figure, the gluon-gluon fusion
mechanism (dashed line) dominates the production of leptoquark pairs for the
leptoquark masses relevant for this work at the LHC center-of-mass energy.
Moreover, quark--antiquark annihilation is less important in the minimal
coupling scenario.
Pairs of leptoquarks decaying into $e^\pm$ and a $u$ or $d$ quark produce a
pair $e^+ e^-$ and two jets as signature. In our analyses we kept track of
the $e^\pm$ (jet) carrying the largest transverse momentum, that we denoted by
$e_1$ ($j_1$), and the $e^\pm$ (jet) with the second largest $p_T$, that we
called $e_2$ ($j_2$). Furthermore, we mimicked the experimental resolution of
the hadronic calorimeter by smearing the final state quark energies according
to
\begin{eqnarray*}
\left.\frac{\delta E}{E}\right|_{had} &=& \frac{0.5}{\sqrt{E}} \; .
\end{eqnarray*}
The reconstruction of jets was done using the subroutine LUCELL of PYTHIA.
The minimum $E_T$ threshold for a cell to be considered as a jet initiator was
chosen to be 2 GeV, while we required the minimum summed $E_T$ for a collection
of cells to be accepted as a jet to be 7 GeV inside a cone $\Delta R =\sqrt{
\Delta \eta^2 + \Delta \phi^2} =0.7$. The calorimeter was divided into $50
\times 30$ cells in $\eta \times \phi$, with these variables in the ranges
$-5<\eta<5$ and $0<\phi<2\pi$.
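The resolution function above amounts to a Gaussian smearing applied energy by energy, with absolute width $\sigma_E = 0.5\sqrt{E}$. A minimal sketch (the 400 GeV parton sample is an illustrative assumption, and negative smeared energies are clipped to zero):

```python
import numpy as np

rng = np.random.default_rng(3)

def smear_hadronic(E):
    """Gaussian smearing with delta E / E = 0.5 / sqrt(E)  (E in GeV),
    i.e. an absolute resolution sigma_E = 0.5 * sqrt(E)."""
    sigma = 0.5 * np.sqrt(E)
    return np.maximum(rng.normal(E, sigma), 0.0)   # clip unphysical values

E = np.full(100_000, 400.0)                # illustrative 400 GeV partons
E_smeared = smear_hadronic(E)
print(E_smeared.mean(), E_smeared.std())   # ~400 GeV with sigma ~ 10 GeV (2.5%)
```

The relative resolution improves as $1/\sqrt{E}$, so for the TeV-scale jets relevant here the smearing is at the few-percent level.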
\section{Background Processes and Kinematical Cuts}
\label{bckg}
Within the scope of the SM, there are many sources of backgrounds leading to
jets accompanied by an $e^+e^-$ pair, which we classify into three classes
\cite{nos}: QCD processes, electroweak interactions, and top quark production.
The reactions included in the QCD class depend exclusively on the strong
interaction, and the main source of hard $e^\pm$ in this case is the
semileptonic decay of hadrons containing $c$ or $b$ quarks. The electroweak
class contains the Drell--Yan production of quark pairs and the single and
pair production of electroweak gauge bosons. Due to the large gluon--gluon
luminosity at the LHC, the production of top quark pairs is also important
because of its large cross section. These backgrounds have been fully
analyzed by us in Ref.\ \cite{nos} and we direct the reader to this reference
for further information.
In order to enhance the signal and reduce the SM backgrounds we have devised a
number of kinematical cuts in Ref.\ \cite{nos} that we briefly present:
\begin{itemize}
\item [(C1)] We require that the leading jets and $e^\pm$ are in the
pseudorapidity interval $|\eta| < 3$;
\item [(C2)] The leading leptons ($e_1$ and $e_2$) should have $p_T > 200$
GeV;
\item [(C3)] We reject events where the invariant mass of the pair $e^+e^-$
($M_{e_1 e_2}$) is smaller than 190 GeV. This cut reduces the backgrounds
coming from $Z$ decays into a pair $e^+ e^-$;
\item [(C4)] In order to further reduce the $t\bar{t}$ and remaining off-shell
  $Z$ backgrounds, we required that {\em all} the invariant masses $M_{e_i
  j_k}$ be larger than 200 GeV, since pairs $e_i j_k$ coming from an
  on-shell top decay have invariant masses smaller than $m_{\text{top}}$. The
  present experiments are able to search for leptoquarks with masses smaller
  than 200 GeV; therefore, this cut does not introduce any bias on the
  leptoquark search.
\end{itemize}
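The cuts C1--C4 can be summarized as a simple event filter. The sketch below (Python) assumes a hypothetical event record holding the pseudorapidities and transverse momenta of $e_1$, $e_2$, $j_1$, $j_2$, the $e^+e^-$ invariant mass, and the four $e_i j_k$ invariant masses; it is an illustration, not the analysis code of Ref.~\cite{nos}:

```python
def passes_cuts(event):
    # C1: leading jets and leptons inside |eta| < 3.
    if any(abs(event['eta'][o]) >= 3.0 for o in ('e1', 'e2', 'j1', 'j2')):
        return False
    # C2: both leading leptons with pT > 200 GeV.
    if event['pt']['e1'] <= 200.0 or event['pt']['e2'] <= 200.0:
        return False
    # C3: e+e- invariant mass above 190 GeV (rejects Z -> e+e-).
    if event['m_ee'] < 190.0:
        return False
    # C4: all four e_i j_k invariant masses above 200 GeV
    # (pairs from on-shell top decays lie below m_top).
    if any(m <= 200.0 for m in event['m_ej']):
        return False
    return True
```

A signal-like event with TeV-scale invariant masses passes all four cuts, while an event with $M_{e_1 e_2} \approx m_Z$ is rejected by C3.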
The above cuts reduce to a negligible level all the SM backgrounds \cite{nos}.
In principle we could also require the $e^\pm$ to be isolated from hadronic
activity in order to reduce the QCD backgrounds. Nevertheless, we verified
that our results do not change when we introduce typical isolation cuts in
addition to any of the above cuts. Since the leptoquark searches at the LHC
are free of backgrounds after these cuts \cite{nos}, the LHC will be able to
exclude with 95\% C.L. the regions of parameter space where the number of
expected signal events is larger than 3 for a given integrated luminosity.
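The 3-event criterion follows from Poisson statistics for a background-free counting experiment: if $\mu$ signal events are expected and none are observed, the point is excluded at 95\% C.L. when $P(0\,|\,\mu) = e^{-\mu} < 0.05$, i.e. $\mu > -\ln 0.05 \approx 3$. A one-line check (Python):

```python
import math

def poisson_p0(mu):
    # Probability of observing zero events when mu are expected.
    return math.exp(-mu)

# Smallest expected signal excluded at 95% C.L. with zero events observed.
mu_95 = -math.log(0.05)   # ~3.0
```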
\section{Results and Conclusions}
\label{resu}
In order to assess the effect of the cuts C1--C4 we exhibit in Fig.\
\ref{fig1} the $p_T$ distribution of the two most energetic leptons and jets
originating from the decay of a vector leptoquark of 1 TeV for minimal cross
section and Yang--Mills couplings to gluons. As we can see from this figure,
the $p_T$ distributions are peaked at $M_\Phi/2$ (=~500 GeV), and also exhibit
a large fraction of very hard jets and leptons. The presence of this peak
indicates that the two hardest jets and leptons usually originate from the
decay of the leptoquark pair. However, we still have to determine which
lepton and jet come from the decay of the same leptoquark. Moreover,
we exhibit in Fig.\ \ref{fig2}a the $e^+ e^-$ invariant mass distribution
associated with 1 TeV vector leptoquark events. Clearly the bulk of the $e^+e^-$
pairs are produced at high invariant masses, and consequently the impact of
the cut C3 on the signal is small. Fig.\ \ref{fig2}b shows the invariant mass
distribution for the four possible $e_i j_k$ pairs combined in the 1 TeV
vector leptoquark case; the cut C4 does not significantly affect the signal
either.
In our analyses of vector leptoquark pair production we applied the cuts
C1--C4 and also required the events to have two $e^\pm$--jet pairs with
invariant masses in the range $M_\Phi \pm \Delta M$, with $\Delta M$ given
in Table \ref{bins}. The pair production cross section after cuts is shown in
Fig.\ \ref{fig:xsec} for minimal cross section and Yang--Mills couplings.
For fixed values of $M_\Phi$, $\kappa_g$, and $\lambda_g$, the attainable
bounds at the LHC on vector leptoquarks depend upon their branching ratio
($\beta$) into a charged lepton and a jet, which is 0.5 or 1 for the
leptoquarks listed in Table \ref{t:cor}.
We exhibit in Table \ref{t:lim:pair} the 95\% C.L. limits on the leptoquark
masses that can be obtained from their pair production at the LHC for two
different integrated luminosities. In the worst scenario, {\em i.e.} minimal
cross section couplings, the LHC will be able to place absolute bounds on
vector leptoquark masses smaller than 1.5 (1.6) TeV for $\beta = 0.5$ (1) and
an integrated luminosity of 10 fb$^{-1}$. With a larger luminosity of 100
fb$^{-1}$ this bound increases to 1.8 (1.9) TeV. Moreover, the limits are 300
GeV more stringent in the case of Yang--Mills coupling to gluons. At this
point it is interesting to compare our results with the ones in Ref.\
\cite{dion}. Requiring a $5\sigma$ signal as well as a minimum of 5 events,
as in Ref.\ \cite{dion}, we obtain that the LHC will be able to rule out
vector leptoquarks with masses smaller than 2.0 (2.1) TeV for $\beta = 0.5$
(1), Yang--Mills couplings, and an integrated luminosity of 100 fb$^{-1}$.
Therefore, our cuts are more efficient than the ones proposed in Ref.\
\cite{dion}, which lead to a bound of 1.55 TeV under the same conditions.
In brief, the discovery of vector leptoquarks is without any doubt a striking
signal of new physics beyond the standard model. The LHC will be of tremendous
help in the quest for new physics since, as we have shown, it will be able to
discover vector leptoquarks with masses smaller than 1.8--2.3 TeV, depending
on their couplings to fermions and gluons, through their pair production for
an integrated luminosity of 100 fb$^{-1}$.
\acknowledgments
This work was partially supported by Conselho Nacional de Desenvolvimento
Cient\'\i fico e Tecnol\'ogico (CNPq), Funda\c{c}\~ao de Amparo \`a Pesquisa
do Estado de S\~ao Paulo (FAPESP), and by Programa de Apoio a N\'ucleos de
Excel\^encia (PRONEX).
\section{Introduction}
\subsection{Logic Programming and Program Verification}
The logic programming paradigm in its original form
(see \citeasnoun{Kow74}) is based on a computational
interpretation of a subset of first-order logic that
consists of Horn clauses.
The proof theory and semantics for this subset has been
well understood for some time already (see, e.g. \citeasnoun{Llo87}).
However, practice quickly showed that this subset is too
limited for programming purposes, so
it was extended in a number of ways, notably by allowing
negation. This led to a long and still inconclusive quest for
extending the appropriate soundness and completeness results
to logic programs that allow negation
(see, e.g. \citeasnoun{AB94}).
To complicate matters further, Prolog extends logic programming
with negation by several features that are very operational
in nature.
Constraint logic programming (see, e.g. \citeasnoun{jaffar-constraint-87})
overcomes some of Prolog's deficiencies, notably its clumsy handling
of arithmetic, by extending the computing process
from the (implicit) domain of terms to arbitrary structures.
Logic programming and constraint logic programming are two
instances of declarative programming. According to
declarative programming a program has
a dual reading as a formula in a logic with a simple semantics.
One of the important advantages of
declarative programming is that, thanks to the semantic
interpretation,
programs are easier to understand, modify and verify.
In fact, the dual reading of a declarative program as a formula allows
us to reason about its correctness by restricting our attention to a
logical analysis of the corresponding formula. For each logical
formalism such an analysis essentially boils down to the question
whether the formula corresponding to the program is in an appropriate
sense equivalent to the specification.\footnote{
This can be made precise in the following way. Let $\vec{x}$ be the
free variables of the specification $\phi_s$, and $\vec{y}$ some auxiliary
variables used in the program $\phi_p$. Now correctness of the program
with respect to the specification can be expressed by the sentence
$\mbox{$\forall$}\vec{x}~((\mbox{$\exists$}\vec{y}~\phi_p(\vec{x},\vec{y})) \mbox{$\:\rightarrow\:$} \phi_s(\vec{x}))$,
to be valid under the fixed interpretation. This sentence ensures
that all solutions found by the program indeed satisfy the specification.
Note that, under this definition, a program corresponding to a false
formula is vacuously ``correct'', because there are no solutions found.
Therefore the stronger notion of correctness and completeness obtained
by requiring also the converse implication above, and loosely phrased as
``equivalence in an appropriate sense'', is the more adequate one.}
However, in our opinion, we do not have at our disposal
{\em simple\/} and {\em intuitive\/} methods that could be
used to verify in a rigorous way realistic ``pure'' Prolog programs
(i.e. those that are also logic programs) or
constraint logic programs.
We believe that one of the reasons for this state of affairs is
recursion, on which both logic programming and constraint logic
programming rely.
In fact, recursion is often less natural than iteration,
which is a more basic concept.
Further,
recursion in combination with negation can naturally lead to programs
that are not easily amenable to a formal analysis.
Finally, recursion always introduces a possibility of divergence
which explains why the study of termination is such an important
topic in the case of logic programming (see, e.g., \citeasnoun{DD94}).
\subsection{First-order Logic as a Computing Mechanism}
Obviously, without recursion logic programming and constraint
logic programming are hopelessly inexpressive. However, as we
show in this paper, it is still possible to construct a
simple and realistic
approach to declarative programming that draws on the ideas of these
two formalisms and in which recursion is absent.
This is done by providing a constructive interpretation of
satisfiability of first-order formulas w.r.t.\ a fixed but
arbitrary interpretation. Iteration is realized by means
of bounded quantification that is guaranteed to terminate.
More precisely, assuming a first-order language $L$, we introduce an
effective, though incomplete, computation mechanism that approximates
the satisfiability test in the following sense. Given an
interpretation $I$ for $L$ and a formula $\phi(\bar{x})$ of $L$,
and assuming that no abnormal termination in an error arises, this mechanism
computes a witness $\bar{a}$ (that is, a vector of elements of the
domain of $I$ such that $\phi(\bar{a})$ holds in $I$) if
$\phi(\bar{x})$ is satisfiable in $I$, and otherwise it reports a
failure.
The possibility of abnormal termination in an error is unavoidable
because effectiveness cannot be reconciled with the fact that for many
first-order languages and interpretations, for example the language of
Peano arithmetic and its standard interpretation, the set of true
closed formulas is highly undecidable. As we wish to use this
computation mechanism for executing formulas as programs, we spend
considerable effort here on investigating ways of limiting the
occurrence of errors.
From the technical point of view our approach,
called {\em formulas as programs\/}, is obtained by
isolating a number of concepts and ideas present (often implicitly)
in the logic programming and constraint logic programming
framework, and reusing them in a simple and self-contained way.
In fact, the proposed computation mechanism and a rigorous account
of its formal properties rely only on the
basics of first-order logic. This contrasts
with the expositions of logic programming and constraint
logic programming which require
introduction of several concepts and auxiliary results
(see for the latter e.g. \citeasnoun{JMMS98}).
\subsection{Computing Mechanism}
\label{ssec:informal}
Let us explain now the proposed computation mechanism by
means of an example.
Consider the formula
\begin{equation}
(x = 2 \mbox{$\ \vee\ $} x = 3)\mbox{$\ \wedge\ $} (y = x + 1 \mbox{$\ \vee\ $} 2 = y) \mbox{$\ \wedge\ $} (2*x = 3*y)
\label{eq:no-prolog}
\end{equation}
interpreted over the standard structure of natural numbers.
Is it satisfiable? The answer is ``yes'': indeed, it suffices
to assign 3 to $x$ and 2 to $y$.
In fact, we can compute this valuation systematically by initially
assigning 2 to $x$ and first trying the assignment of the value of
$x+1$, so 3, to $y$. As
for this choice of value for $y$ the equality $2*x = 3*y$ does not
hold, we are led to the second possibility, assignment of 2 to $y$.
With this choice $2*x = 3*y$ does not hold either. So we need to
assign 3 to $x$ and, eventually, 2 to $y$.
The above informal argument can be extended to a systematic
procedure that attempts to find a satisfying valuation
for a large class of formulas.
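The systematic left-to-right search sketched above can be mimicked by a brute-force program. The following Python sketch (ours, purely for illustration) enumerates the disjuncts of formula (\ref{eq:no-prolog}) in order:

```python
def solve():
    # (x = 2 or x = 3) and (y = x+1 or y = 2) and (2*x == 3*y)
    for x in (2, 3):              # first disjunction
        for y in (x + 1, 2):      # second disjunction
            if 2 * x == 3 * y:    # final test
                return x, y
    return None                   # failure: no satisfying valuation

solution = solve()   # (3, 2): the valuation found in the text
```

The three failing attempts ($x=2$ with $y=3$, $x=2$ with $y=2$, and $x=3$ with $y=4$) correspond exactly to the backtracking steps described informally above.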
\subsection{Plan and Rationale of the Paper}
\label{ssec:rationale}
This paper is organized as follows.
In Section~\ref{sec:compu} we provide a
formal account of the proposed computation mechanism.
In Section~\ref{sec:souncom}
we show that this approach is both
correct (sound) and, in the absence of errors, complete.
In the Appendix, Subsections~\ref{ssec:libneg} and~\ref{ssec:libimp},
we investigate ways of limiting the occurrence
of errors for the case of negation and implication.
For programming purposes first-order logic has limited expressiveness,
so we extend it in Section~\ref{sec:extensions} by a number of
features that are useful for programming. This involves sorts (i.e.,
types), use of arrays and bounded quantifiers. The resulting fragment
is surprisingly expressive and the underlying computation mechanism
allows us to interpret many formulas as highly non-trivial programs.
As already mentioned above, the formulas as programs approach to
computing discussed here is
inspired by logic programming and constraint logic programming
but differs from them in a number of ways.
For example, formula (\ref{eq:no-prolog}) cannot be interpreted as a
logic programming query or run as a Prolog query. The reason is that
the equality symbol in logic programming and Prolog stands for ``is
unifiable with'' and the term $2*x$ does not unify with $3*y$. In the
case of Prolog a possible remedy is to replace in (\ref{eq:no-prolog})
specific occurrences of the equality symbol by Prolog's arithmetic
equality ``\verb|=:=|'' or by the Prolog evaluator operator {\tt is}. The
correct Prolog query that corresponds to formula (\ref{eq:no-prolog})
is then
{\tt (X = 2 ; X = 3), (Y is X+1 ; 2 = Y), 2*X =:= 3*Y.}
\noindent
(Recall that ``\verb|;|'' stands in Prolog for disjunction and
``\verb|,|'' for
conjunction.) This is clearly much less readable than
(\ref{eq:no-prolog}) as three different kinds of equality-like
relations are used here.
A more detailed comparison with (constraint) logic programming and
Prolog requires knowledge of the details of our approach and is
postponed to Section~\ref{sec:related}. In principle, the formulas
as programs approach is a variant of constraint logic programming
in which both recursion and constraint handling procedures are absent,
but the full first-order syntax is used.
We also compare in Section~\ref{sec:related} our
formulas as programs approach with the formulas as types approach,
also called the Curry-Howard-De Bruijn interpretation.
The formulas as programs
approach to programming has been realized in the programming
language \mbox{\sf Alma-0}{} \citeasnoun{ABPS98a} that extends imperative
programming by features that support declarative programming.
This shows that this approach, in contrast to logic programming
and constraint logic programming, can easily be
combined with imperative
programming. So the introduced restrictions, such
as lack of a constraint store, can be beneficial in practice.
In Section~\ref{sec:alma0} we summarize the main features of
\mbox{\sf Alma-0}{}.
The work reported here can be used to provide logical underpinnings
for a fragment of \mbox{\sf Alma-0}{} that does not include destructive
assignment or recursive procedures,
and to reason about programs written in this fragment.
We substantiate the latter claim by presenting in
Section~\ref{sec:squares} the correctness proof of a purely declarative
\mbox{\sf Alma-0}{} solution to the well-known non-trivial combinatorial problem
of partitioning a rectangle into a given set of squares.
In conclusion, we provided here
a realistic framework for declarative programming
based on first-order logic and the traditional
Tarskian semantics, which can be combined in a straightforward
way with imperative programming.
\section{Computation Mechanism}
\label{sec:compu}
Consider an arbitrary first-order language with equality
and an interpretation for it.
We assume in particular a domain of discourse,
and a fixed signature with a
corresponding interpretation of its elements in the domain.
\begin{definition}[valuation, assignment]\label{def:valuation}
A \emph{valuation} is a finite mapping from variables to domain elements.
Valuations will be denoted as single-valued sets of pairs
$x/d$, where $x$ is a variable and $d$ a domain element.
We use $\alpha,\alpha',\beta,\ldots$ for arbitrary valuations and
call $\alpha'$ an \emph{extension} of $\alpha$ when $\alpha\subseteq\alpha'$,
that is, every assignment to a variable by $\alpha$ also occurs in $\alpha'$.
Further, $\varepsilon$ denotes the empty valuation.
Let $\alpha$ be a valuation.
A term $t$ is $\alpha$-\emph{closed} if all variables of $t$ get a value
in $\alpha$. In that case $t^\alpha$ denotes the \emph{evaluation}
of $t$ under $\alpha$ in the domain.
More generally, for any expression $E$ the result of the replacement
of each $\alpha$-closed term $t$ by $t^\alpha$ is denoted by $E^\alpha$.
An $\alpha$-assignment is an equation $s = t$ one side of which,
say $s$, is a variable that is not $\alpha$-closed and the other
side, $t$, is an $\alpha$-closed term.\hfill{$\boxtimes$}
\end{definition}
In our setting, the only way to assign values to variables
will be by evaluating an $\alpha$-assignment as above.
Given such an $\alpha$-assignment, say $x = t$, we evaluate it
by assigning to $x$ the value $t^\alpha$.
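The evaluation of an $\alpha$-assignment can be made concrete by representing valuations as finite mappings and terms as nested tuples (this encoding is an assumption made purely for illustration):

```python
def evaluate(term, alpha):
    # Terms are ('var', x), ('const', c) or ('+', t1, t2);
    # evaluation requires the term to be alpha-closed.
    tag = term[0]
    if tag == 'var':
        return alpha[term[1]]   # KeyError if the term is not alpha-closed
    if tag == 'const':
        return term[1]
    return evaluate(term[1], alpha) + evaluate(term[2], alpha)

def eval_assignment(x, t, alpha):
    # Evaluate the alpha-assignment x = t by extending alpha with x/t^alpha.
    return {**alpha, x: evaluate(t, alpha)}

# Starting from {x/2}, the assignment y = x + 1 extends it with y/3.
alpha1 = eval_assignment('y', ('+', ('var', 'x'), ('const', 1)), {'x': 2})
```

Note that the result is an extension of the input valuation, as required by the definition.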
\begin{definition}[formulas]\label{def:formulas}
In order to accommodate the definition of the operational semantics,
the set of formulas has an inductive definition which may look a bit
peculiar. First, universal quantification is absent since we
have no operational interpretation for it.
Second, every formula is taken to be
a conjunction, with every conjunct (if any) either
an atomic formula (in short: an
{\em atom\/}), or a disjunction, conjunction or implication of formulas,
a negation of a formula or an existentially quantified formula.
The latter two unary constructors are assumed to bind more strongly
than the previous binary ones.
The atoms include equations of the form $s=t$, with $s$ and $t$ terms.
For maximal clarity we
give here an inductive definition of the set of formulas.
In the operational semantics all conjunctions are taken to
be right associative.
\begin{enumerate}
\item The empty conjunction $\Box$ is a formula.
\item If $\psi$ is a formula and $A$ is an atom, then $A\wedge\psi$
is a formula.
\item If $\psi,\phi_1,\phi_2$ are formulas,
then $(\phi_1\vee\phi_2)\wedge\psi$ is a formula.
\item If $\psi,\phi_1,\phi_2$ are formulas,
then $(\phi_1\wedge\phi_2)\wedge\psi$ is a formula.
\item If $\psi,\phi_1,\phi_2$ are formulas,
then $(\phi_1 \mbox{$\:\rightarrow\:$} \phi_2)\wedge\psi$ is a formula.
\item If $\phi,\psi$ are formulas,
then $\neg\phi \wedge\psi$ is a formula.
\item If $\phi,\psi$ are formulas,
then $\exists x\;\phi \wedge\psi$ is a formula.
\hfill{$\boxtimes$}
\end{enumerate}
\end{definition}
\begin{definition}[operational semantics]\label{def:opersem}
The operational semantics of a formula will be defined in terms
of a tree $\B{\phi}_{\alpha}$ depending on the formula $\phi$
and the (initial) valuation $\alpha$.
The root of $\B{\phi}_{\alpha}$ is labelled with the pair $\phi,\alpha$.
All internal nodes of the tree $\B{\phi}_{\alpha}$
are labelled with pairs consisting of a formula and a valuation.
The leaves of the tree $\B{\phi}_{\alpha}$
are labelled with either
\begin{itemize}
\item $\mathit{error}$ (representing the occurrence
of an error in this branch of the computation), or
\item $\mathit{fail}$ (representing
logical failure of the computation), or
\item a valuation (representing
logical success of the computation and yielding values for the
free variables of the formula that make the formula true).~$\boxtimes$
\end{itemize}
\end{definition}
It will be shown that valuations labelling success leaves are
always extensions of the initial valuation.
For a fixed formula, the operational semantics can be viewed as a function
relating the initial valuation to the valuations labelling success leaves.
We can now define the computation tree $\B{\phi}_{\alpha}$.
The reader may consult first
Fig.~\ref{fig:ctree} to see such a tree for formula
(\ref{eq:no-prolog}) and the empty valuation $\varepsilon$.
\begin{figure}[htbp]
\begin{center}
\input{aptbezem1a.pstex_t}
\caption{The computation tree for formula (1) and valuation $\varepsilon$.
\label{fig:ctree}}
\end{center}
\end{figure}
\begin{definition}[computation tree]\label{def:tree}
The (computation) tree $\B{\phi}_{\alpha}$ is defined by
lexicographic induction on the pairs consisting of
the \emph{size} of the formula $\phi$, and of the
\emph{size} of the formula $\phi_1$ for which
$\phi$ is of the form $\phi_1 \mbox{$\ \wedge\ $} \psi$,
following the
structure given by Definition~\ref{def:formulas}.
\begin{enumerate}
\item\label{os:empty}
For the empty conjunction we define $\B{\Box}_\alpha$
to be the tree with the root that has
a success leaf $\alpha$ as its son:
\begin{center}
\input{aptbezem2.pstex_t}
\end{center}
\item\label{os:atom}
If $\psi$ is a formula and $A$ is an atom,
then we distinguish four cases depending on the form of $A$.
In all four cases $\B{A\wedge\psi}_\alpha$ is a tree
with a root of degree one.
\begin{itemize}
\item Atom $A$ is $\alpha$-closed and true.
Then the root of $\B{A\wedge\psi}_\alpha$ has $\B{\psi}_\alpha$ as its subtree:
\begin{center}
\input{aptbezem3.pstex_t}
\end{center}
\item Atom $A$ is $\alpha$-closed and false.
Then the root of $\B{A\wedge\psi}_\alpha$ has the failure leaf $\mathit{fail}$
as its son:
\begin{center}
\input{aptbezem4.pstex_t}
\end{center}
\item Atom $A$ is not $\alpha$-closed, but is not an $\alpha$-assignment.
Then the root of $\B{A\wedge\psi}_\alpha$ has the $\mathit{error}$ leaf as its son:
\begin{center}
\input{aptbezem5.pstex_t}
\end{center}
\item Atom $A$ is an $\alpha$-assignment $s = t$. Then either
$s$ or $t$ is a variable which is not $\alpha$-closed,
say $s \equiv x$ with $x$ not $\alpha$-closed and $t$
$\alpha$-closed. Then the root of $\B{A\wedge\psi}_\alpha$ has
$\B{\psi}_{\alpha'}$ as its subtree, where $\alpha'$ extends
$\alpha$ with the pair $x/t^\alpha$:
\begin{center}
\input{aptbezem6.pstex_t}
\end{center}
The symmetrical case is analogous.
\end{itemize}
\item\label{os:disj}
If $\psi,\phi_1,\phi_2$ are formulas,
then we put $\B{(\phi_1\vee\phi_2)\wedge\psi}_\alpha$ to be
the tree with a root of degree two and with left and right subtrees
$\B{\phi_1\wedge\psi}_\alpha$ and
$\B{\phi_2\wedge\psi}_\alpha$, respectively:
\begin{center}
\input{aptbezem7.pstex_t}
\end{center}
Observe that $\phi_1 \wedge\psi$ and $\phi_2 \wedge\psi$ are smaller
formulas than $(\phi_1\vee\phi_2) \wedge\psi$ in the adopted
lexicographic ordering.
\item\label{os:conj}
If $\psi,\phi_1,\phi_2$ are formulas,
then we put $\B{(\phi_1\wedge\phi_2)\wedge\psi}_\alpha$ to be
the tree with a root of degree one and the tree
$\B{\phi_1\wedge (\phi_2 \wedge\psi)}_\alpha$
as its subtree:
\begin{center}
\input{aptbezem8.pstex_t}
\end{center}
This substantiates the association of conjunctions to the right as
mentioned in Definition~\ref{def:formulas}. Note that, again, the
definition refers to lexicographically smaller formulas.
\item\label{os:impl}
If $\psi,\phi_1,\phi_2$ are formulas,
then we put $\B{(\phi_1\mbox{$\:\rightarrow\:$}\phi_2)\wedge\psi}_\alpha$ to be
a tree with a root of degree one. We distinguish three cases.
\begin{itemize}
\item Formula $\phi_1$ is $\alpha$-closed and $\B{\phi_1}_\alpha$
contains only failure leaves.
Then the root of\\$\B{(\phi_1 \mbox{$\:\rightarrow\:$}\phi_2) \wedge\psi}_\alpha$ has
$\B{\psi}_\alpha$ as its subtree:
\begin{center}
\input{aptbezem14.pstex_t}
\end{center}
\item Formula $\phi_1$ is $\alpha$-closed and
$\B{\phi_1}_\alpha$ contains at least one success leaf.
Then the root of $\B{(\phi_1 \mbox{$\:\rightarrow\:$}\phi_2) \wedge\psi}_\alpha$ has
$\B{\phi_2 \wedge\psi}_\alpha$ as its subtree:
\begin{center}
\input{aptbezem15.pstex_t}
\end{center}
\item
In all other cases the root of $\B{(\phi_1 \mbox{$\:\rightarrow\:$} \phi_2) \wedge\psi}_\alpha$ has
the error leaf $\mathit{error}$ as its son:
\begin{center}
\input{aptbezem16.pstex_t}
\end{center}
\end{itemize}
The above definition relies on the logical equivalence of
$\phi_1 \mbox{$\:\rightarrow\:$}\phi_2$ and $\neg\phi_1 \vee \phi_2$,
but avoids unnecessary branching in the computation tree
that would be introduced by the disjunction.
In the Appendix, Subsection~\ref{ssec:libneg}, we explain how in the
first case the condition that $\phi_1$ is $\alpha$-closed can be relaxed.
\item\label{os:negation}
If $\phi,\psi$ are formulas, then to define
$\B{\neg\phi \wedge\psi}_\alpha$ we distinguish
three cases with respect to $\phi$.
In all of them $\B{\neg\phi \wedge\psi}_\alpha$ is a tree
with a root of degree one.
\begin{itemize}
\item Formula $\phi$ is $\alpha$-closed and $\B{\phi}_\alpha$
contains only failure leaves.
Then the root of $\B{ \neg\phi \wedge\psi}_\alpha$ has
$\B{\psi}_\alpha$ as its subtree:
\begin{center}
\input{aptbezem9.pstex_t}
\end{center}
\item Formula $\phi$ is $\alpha$-closed and $\B{\phi}_\alpha$
contains at least one success leaf.
Then the root of $\B{\neg\phi \wedge\psi}_\alpha$ has
the failure leaf $\mathit{fail}$ as its son:
\begin{center}
\input{aptbezem10.pstex_t}
\end{center}
\item
In all other cases the root of $\B{\neg\phi \wedge\psi}_\alpha$ has
the error leaf $\mathit{error}$ as its son:
\begin{center}
\input{aptbezem11.pstex_t}
\end{center}
There are basically two classes of formulas $\phi$ in this contingency:
those that are not $\alpha$-closed and those for which
$\B{\phi}_\alpha$ contains no success leaf and
at least one error leaf. In Subsection~\ref{ssec:libneg}
we give some examples
of formulas in the first class and show how in some special
cases their negation can still be evaluated in a sound way.
\end{itemize}
\item\label{os:exquant} The case of $\exists x\;\phi \wedge\psi$
requires the usual care with bound variables to avoid name clashes.
Let $\alpha$ be a valuation. First, we require
that the variable $x$ does not occur in the domain of $\alpha$.
Second, we require that the variable $x$ does not occur in $\psi$.
Both requirements are summarized by phrasing that $x$ is \emph{fresh}
with respect to $\alpha$ and $\psi$. They can be met by
appropriately renaming the bound variable $x$.
With $x$ fresh as above we define $\B{\exists x\;\phi \wedge\psi}_\alpha$
to be the tree with a root of degree one and
$\B{\phi\wedge\psi}_\alpha$ as its subtree:
\begin{center}
\input{aptbezem12.pstex_t}
\end{center}
Thus the operational semantics of $\exists x\;\phi \wedge\psi$ is,
apart from the root of degree one, identical to that of $\phi\wedge\psi$.
This should not come as a surprise, as $\exists x\;\phi \wedge\psi$
is logically equivalent to $\exists x\;(\phi\wedge\psi)$ when $x$
does not occur in $\psi$.
Observe that success leaves of $\B{\phi\wedge\psi}_\alpha$,
and hence of $\B{\exists x\;\phi \wedge\psi}_\alpha$,
may or may not contain an assignment for $x$. For example,
$\exists x\;x=3 \wedge\psi$ yields an assignment for $x$,
but $\exists x\;3=3 \wedge\psi$ does not. In any case the assignment
for $x$ is not relevant for the formula as a whole,
as the bound variable $x$ is assumed to be fresh.
In an alternative approach, the possible assignment for $x$ could be deleted.
\hfill{$\boxtimes$}
\end{enumerate}
\end{definition}
To apply the above computation mechanism to arbitrary first-order
formulas we first replace all occurrences of a universal quantifier
$\mbox{$\forall$}$ by $\neg \mbox{$\exists$} \neg$ and rename the bound variables so that no
variable appears in a formula both bound and free.
Further, to minimize the possibility of generating errors it is useful
to delete occurrences of double negations, that is, to replace every
subformula of the form $\neg \neg \psi$ by $\psi$.
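The fragment of the computation mechanism covering atoms, conjunction and disjunction can be sketched as an executable function returning the leaves of the computation tree. This Python sketch is ours and simplifies Definition~\ref{def:tree} to binary conjunction over integer terms, but it exhibits all three kinds of leaves:

```python
def leaves(phi, alpha):
    # phi is ('=', s, t), ('<', s, t), ('and', f1, f2) or ('or', f1, f2);
    # terms are integers or variable names; alpha is a valuation (dict).
    def closed(t):
        return isinstance(t, int) or t in alpha
    def val(t):
        return t if isinstance(t, int) else alpha[t]

    op = phi[0]
    if op == 'and':                       # sequential conjunction
        out = []
        for leaf in leaves(phi[1], alpha):
            if leaf[0] == 'success':
                out.extend(leaves(phi[2], leaf[1]))   # extended valuation
            else:
                out.append(leaf)
        return out
    if op == 'or':                        # both branches are explored
        return leaves(phi[1], alpha) + leaves(phi[2], alpha)
    s, t = phi[1], phi[2]
    if op == '=':
        if closed(s) and closed(t):
            return [('success', alpha)] if val(s) == val(t) else [('fail',)]
        if closed(t):                     # alpha-assignment: s is a variable
            return [('success', {**alpha, s: val(t)})]
        if closed(s):                     # symmetric alpha-assignment
            return [('success', {**alpha, t: val(s)})]
        return [('error',)]
    if op == '<':                         # non-equational atom
        if closed(s) and closed(t):
            return [('success', alpha)] if val(s) < val(t) else [('fail',)]
        return [('error',)]               # atom not alpha-closed
    raise ValueError('unknown constructor: %r' % (op,))

# x=0 and x<1 succeeds, while the reversed conjunction hits an error leaf.
ok  = leaves(('and', ('=', 'x', 0), ('<', 'x', 1)), {})
err = leaves(('and', ('<', 'x', 1), ('=', 'x', 0)), {})
```

The two sample queries illustrate the order sensitivity of sequential conjunction: evaluating the comparison before the variable has received a value yields an error leaf rather than a failure.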
\section{Soundness and Completeness}
\label{sec:souncom}
The computation mechanism defined in the previous section
attempts to find a valuation that makes the original
formula true if this formula is satisfiable, and otherwise it
reports a failure. The lexicographic ordering used in
Definition~\ref{def:opersem} guarantees that for any formula
the computation tree is finite.
In this section we prove correctness
and completeness of this mechanism.
We start with an easy lemma which is helpful to keep track of valuations,
followed by a definition.
\begin{lemma}\label{lem:extval} For every formula $\phi$ and valuation $\alpha$,
$\B{\phi}_\alpha$ contains only valuations extending $\alpha$
with pairs $x/d$, where $x$ occurs free in $\phi$ or
appears existentially quantified in $\phi$.
Moreover, if $\phi$ is $\alpha$-closed then $\B{\phi}_\alpha$
contains only valuations extending $\alpha$ with variables
that appear existentially quantified in $\phi$.
\end{lemma}
\Proof
By induction on the lexicographic ordering of formulas
as given in Definition~\ref{def:tree}.\hfill{$\boxtimes$}
\begin{definition}[status of computation tree]\label{def:treestat}
A computation tree is
\begin{itemize}
\item {\em successful\/} if it contains a success leaf,
\item {\em failed\/} if it contains only failure leaves,
\item {\em determined\/} if it is either successful or failed,
that is, it either contains a success leaf or contains only
failure leaves.
\hfill{$\boxtimes$}
\end{itemize}
\end{definition}
Note that according to this definition a successful tree can contain
the $\mathit{error}$ leaves. This means that the $\mathit{error}$ leaves differ from
Prolog's run-time errors. In fact, in a top-down implementation of
the proposed computation mechanism the depth-first search traversal of
a computation tree should {\em not} abort but rather backtrack upon
encounter of such a leaf and continue, if possible, in a search for a
successful leaf.
We can now state the desired correctness result.
\begin{theorem}[Soundness]\label{thm:soundn}
Let $\phi$ be a formula and $\alpha$ a valuation.
\begin{enumerate}\renewcommand{\theenumi}{\roman{enumi}}
\item
If $\B{\phi}_\alpha$ contains a success leaf labelled with $\alpha'$,
then $\alpha'$ extends $\alpha$ and $\mbox{$\forall$}(\phi^{\alpha'})$ is true.
(In particular $\mbox{$\exists$}(\phi^{\alpha})$ is true in this case.)
\item If $\B{\phi}_\alpha$ is failed, then
$\mbox{$\exists$}(\phi^{\alpha})$ is false.
\end{enumerate}
\end{theorem}
\Proof
See Appendix, Subsection~\ref{ssec:soundn}.
\hfill{$\boxtimes$}
\vspace{5 mm}
The computation mechanism defined in Section~\ref{sec:compu} is
obviously incomplete due to the possibility of errors. The following
result states that, in the absence of errors, this mechanism is
complete.
\begin{theorem}[Restricted Completeness] \label{thm:completeness}
Let $\phi$ be a formula and $\alpha$ a valuation such that
$\B{\phi}_\alpha$ is determined.
\begin{enumerate}\renewcommand{\theenumi}{\roman{enumi}}
\item Suppose that $\mbox{$\exists$}(\phi^{\alpha})$ is true.
Then the tree $\B{\phi}_\alpha$ is successful.
\item Suppose that $\mbox{$\exists$}(\phi^{\alpha})$ is false.
Then the tree $\B{\phi}_\alpha$ is failed.
\end{enumerate}
\end{theorem}
\Proof
See Appendix, Subsection~\ref{ssec:completeness}.
\hfill{$\boxtimes$}
\vspace{5 mm}
Admittedly, this result is very weak in the sense that any computation
mechanism that satisfies the above soundness theorem
also satisfies the restricted completeness theorem.
It is useful to point out that the computation mechanism of
Section~\ref{sec:compu} used in the above theorems is by no means a simple
counterpart of the provability relation of first-order logic.
For the sake of further discussion let us say that
two formulas $\phi$ and
$\psi$ are {\em equivalent\/}
if
\begin{itemize}
\item
the computation
tree $\B{\phi}_\varepsilon$
is successful iff the computation
tree $\B{\psi}_\varepsilon$ is successful and in that case
both computation trees have the same set of successful leaves,
\item $\B{\phi}_\varepsilon$ is failed iff $\B{\psi}_\varepsilon$ is failed.
\end{itemize}
Then
$\phi \mbox{$\ \wedge\ $} \psi$ is not equivalent to $\psi \mbox{$\ \wedge\ $} \phi$
(consider ${x=0}\wedge{x<1}$ and ${x<1}\wedge{x=0}$)
and $\neg (\phi \mbox{$\ \wedge\ $} \psi)$ is not equivalent to
$\neg \phi \mbox{$\ \vee\ $} \neg \psi$
(consider $\neg({x=0}\wedge{x=1})$ and $\neg({x=0})\vee\neg({x=1})$).
In contrast,
$\phi \mbox{$\ \vee\ $} \psi$ {\em is\/} equivalent to $\psi \mbox{$\ \vee\ $} \phi$.
We can summarize this treatment of the connectives by saying that we
use a sequential conjunction and a parallel disjunction. The above
notion of equivalence deviates from the usual one, for example
de Morgan's Law is not valid.
A complete axiomatization of the equivalence relation induced by the
computation mechanism of Section~\ref{sec:compu} is an interesting
research topic.
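The order-sensitivity of conjunction can be made concrete with a small interpreter fragment (a sketch of ours, not the actual mechanism; atoms are evaluated left to right, and an atom over an unassigned variable yields an error rather than a value):

```python
# Sketch: evaluate a conjunction of atoms left to right under a valuation.
# "x = 0" acts as an alpha-assignment when x is unknown; "x < 1" needs x
# to be known, otherwise it yields "error" -- so the order of the
# conjuncts matters, as for x=0 /\ x<1 versus x<1 /\ x=0.

def eval_conj(atoms, valuation):
    val = dict(valuation)
    for var, op, const in atoms:
        if op == "=":
            if var in val:
                if val[var] != const:
                    return "fail", None
            else:
                val[var] = const        # alpha-assignment
        elif op == "<":
            if var not in val:
                return "error", None    # atom is not alpha-closed
            if not val[var] < const:
                return "fail", None
    return "success", val

print(eval_conj([("x", "=", 0), ("x", "<", 1)], {}))  # ('success', {'x': 0})
print(eval_conj([("x", "<", 1), ("x", "=", 0)], {}))  # ('error', None)
```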
\section{Extensions}
\label{sec:extensions}
The language defined up to now is clearly too limited as a formalism
for programming. Therefore we discuss a number of
extensions of it that are convenient for programming purposes.
These are: non-recursive procedures, sorts (i.e., types),
arrays and bounded quantification.
\subsection{Non-recursive Procedures}
\label{subsec: nonrec}
We consider here non-recursive procedures. These can easily be
introduced in our framework using the well-known
{\em extension by definition\/} mechanism
(see, e.g., \citeasnoun[pages 57-58]{Sho67}).
More specifically,
consider a first-order formula $\psi$ with the free variables
$x_1, \mbox{$\ldots$}, x_n$. Let $p$ be a {\em new\/} $n$-ary relation symbol.
Consider now the formula
\[
p(x_1, \mbox{$\ldots$}, x_n) \mbox{$\:\leftrightarrow\:$} \psi
\]
that we call the {\em definition\/} of $p$.
Suppose that, by iterating the above procedure, we have a collection $P$
of definitions of relation symbols. We assume furthermore that
the fixed but arbitrary interpretation has been extended with
interpretations of the new relation symbols in such a way
that all definitions in $P$ become true. There is only one
such extension for every initial interpretation.
Let $\phi$ be a formula in the extended first-order language,
that is, with atoms $p(t_1, \mbox{$\ldots$}, t_n)$ from $P$ included.
We extend the computation mechanism $\B{\phi}_\alpha$
of Section~\ref{sec:compu}, by adding at the beginning of
Clause~\ref{os:atom} in Definition~\ref{def:tree} the
following item for handling atoms $p(t_1, \mbox{$\ldots$}, t_n)$ from $P$.
\begin{itemize}
\item Atom $A$ is of the form $p(t_1, \mbox{$\ldots$}, t_n)$, where
$p$ is a defined relation symbol with the definition
\[
p(x_1, \mbox{$\ldots$}, x_n) \mbox{$\:\leftrightarrow\:$} \psi_p.
\]
Then the root of $\B{A\wedge\psi}_\alpha$ has $\B{\psi_{p}\C{x_1/t_1, \mbox{$\ldots$},
x_n/t_n} \mbox{$\ \wedge\ $} \psi}_\alpha$ as its subtree:
\begin{center}
\input{aptbezem26.pstex_t}
\end{center}
Here $\psi_{p}\C{x_1/t_1, \mbox{$\ldots$}, x_n/t_n}$ stands for the result of
substituting in $\psi_{p}$ the free occurrences of the variables $x_1,
\mbox{$\ldots$}, x_n$ by $t_1, \mbox{$\ldots$}, t_n$, respectively.
\end{itemize}
The proof of the termination of this extension of the computation mechanism
introduced in Section~\ref{sec:compu} relies on a refinement of the
lexicographic ordering used in Definition~\ref{def:tree},
taking into account the new atoms.
The above way of handling defined relation symbols obviously
corresponds to the usual treatment of procedure calls in
programming languages.
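The body-replacement step can be mimicked by a simple textual unfolding (a sketch of ours; the dictionary of definitions, the tuple representation of formulas, and the substitution function are illustrative only, and variable capture is deliberately ignored):

```python
# Sketch of "extension by definition": an atom p(t1,...,tn) is replaced by
# the body of p's definition with x1,...,xn substituted by t1,...,tn.
# Formulas are nested tuples; variables are strings; numbers are constants.

def substitute(formula, subst):
    if isinstance(formula, str):
        return subst.get(formula, formula)
    if isinstance(formula, tuple):
        return tuple(substitute(part, subst) for part in formula)
    return formula  # constants (numbers) are left unchanged

definitions = {
    # even(x) <-> exists y (x = 2*y), written as a nested tuple
    "even": (("x",), ("exists", "y", ("=", "x", ("*", 2, "y")))),
}

def unfold(atom):
    pred, args = atom[0], atom[1:]
    params, body = definitions[pred]
    return substitute(body, dict(zip(params, args)))

print(unfold(("even", "z")))  # ('exists', 'y', ('=', 'z', ('*', 2, 'y')))
```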
The soundness and completeness results can easily be extended
to the case of declared relation symbols. In this version
truth and falsity refer to the extended interpretation.
So much for \emph{non-recursive} procedures.
\subsection{Sorts}
In this subsection we introduce sorts (i.e., types).
The extension of one-sorted to many-sorted first-order logic is
standard. It requires a refinement of the notion of signature:
arities are no longer just numbers, but have to specify the sorts of
the arguments of the function and predicate symbols, as well as the sorts
of the function values. Terms and atoms are
well-formed if the sorts of the arguments comply with the signature.
In quantifying a variable, its sort should be made explicit (or should
at least be clear from the context).
Interpretations for many-sorted first-order languages are obtained
by assigning to each sort a non-empty domain and by assigning to each
function symbol and each predicate symbol respectively an
appropriate function and relation on these sorts.
Sorts can be used to model
various basic data types occurring in programming practice:
integers, booleans, characters, but also compound data types
such as arrays.
\subsection{Arrays}
Arrays can be modelled as vectors or matrices,
using projection functions that are given a \emph{standard interpretation}.
Given a sort for the indices (typically, a segment of integers or
a product of segments) and a sort for the elements of the array,
we add a sort for arrays of the corresponding type to the signature.
We also add to the language {\em array variables\/},
or {\em arrays\/} for short,
to be interpreted as arrays in the standard interpretation.
We use the letters $a,b,c$ to denote arrays and
to distinguish arrays from objects of other sorts.
We write $a[t_1, \mbox{$\ldots$}, t_n]$ to denote the projection of the array
$a$ on the index $[t_1, \mbox{$\ldots$}, t_n]$,
akin to the use of subscripted variables in programming languages.
The standard interpretation of each projection function maps
a given array and a given index to the correct element.
Thus subscripted variables are simply terms.
These terms are handled by means
of an extension of the computation mechanism of Section~\ref{sec:compu}.
A typical example of the use of such a term is
the formula $a[0,0]=1$, which should be matched with the formula $x=1$
in the sense that the evaluation of each equality can result in an
assignment of the value 1 to a variable, either $a[0,0]$ or $x$. So
we view $a[0,0]$ as a variable and not as a compound term.
To this end we extend a number of notions introduced in
the previous section.
\begin{definition}
An {\em array valuation\/} is a finite mapping whose elements
are of the form $a[d_1, \mbox{$\ldots$}, d_n]/d$, where $a$ is an $n$-ary array
symbol and $d_1, \mbox{$\ldots$}, d_n, d$ are domain elements.
An {\em extended valuation\/} is a finite mapping that is a union
of a valuation and an array valuation.
\hfill{$\boxtimes$}
\end{definition}
The idea is that an element $a[d_1, \mbox{$\ldots$}, d_n]/d$ of an array
valuation assigns the value $d$ to the (interpretation of)
array $a$ applied to the arguments $d_1, \mbox{$\ldots$}, d_n$.
Then, if the terms $t_1, \mbox{$\ldots$}, t_n$ evaluate to the domain
elements $d_1, \mbox{$\ldots$}, d_n$ respectively,
the term $a[t_1, \mbox{$\ldots$}, t_n]$ evaluates to $d$.
This simple inductive clause yields an extension of the notion
of evaluation $t^{\alpha}$, where $\alpha$ is an extended valuation,
to terms $t$ in the presence of arrays.
The notions of an $\alpha$-closed term and an $\alpha$-assignment
are now somewhat more complicated to define.
\begin{definition}
Consider an extended valuation $\alpha$.
\begin{itemize}
\item A variable $x$ is {\em $\alpha$-closed\/} if
for some $d$ the pair $x/d$ is an element of $\alpha$.
\item A term $f(t_1, \mbox{$\ldots$}, t_n)$, with $f$ a function symbol,
is {\em $\alpha$-closed\/} if each term $t_i$ is $\alpha$-closed.
\item A term $a[t_1, \mbox{$\ldots$}, t_n]$ is {\em $\alpha$-closed\/} if
each term $t_i$ is $\alpha$-closed and evaluates to a domain element $d_i$
such that for some $d$ the pair $a[d_1, \mbox{$\ldots$}, d_n]/d$ is an element of
$\alpha$.
\end{itemize}
An equation $s = t$ is an {\em $\alpha$-assignment\/} if either
\begin{itemize}
\item one side of it, say $s$, is a variable that is not $\alpha$-closed
and the other, $t$, is an $\alpha$-closed term, or
\item one side of it, say $s$, is of the form $a[t_1, \mbox{$\ldots$}, t_n]$, where
each $t_i$ is $\alpha$-closed but $a[t_1, \mbox{$\ldots$}, t_n]$ is not $\alpha$-closed,
and the other, $t$, is an $\alpha$-closed term.\hfill{$\boxtimes$}
\end{itemize}
\end{definition}
The idea is that an array $a$ can be assigned a value at a selected position by
evaluating an $\alpha$-assignment $a[t_1, \mbox{$\ldots$}, t_n] = t$. Assuming the
terms $t_1, \mbox{$\ldots$}, t_n, t$ are $\alpha$-closed and evaluate respectively to
$d_1, \mbox{$\ldots$}, d_n, d$, the evaluation of $a[t_1, \mbox{$\ldots$}, t_n] = t$
results in assigning the value $d$ to the array $a$ at the
position $d_1, \mbox{$\ldots$}, d_n$.
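Both the evaluation of subscripted terms and their use as assignable positions can be sketched as follows (our own illustration; an array valuation is represented as a dictionary keyed by the array name and the index tuple):

```python
# Sketch: an array valuation maps (array_name, index_tuple) to a value.
# Evaluating a[t1,...,tn] = t when the index terms and t are closed but
# the array position is not yet assigned gives it a value, in analogy to
# the alpha-assignment x = t; a closed position acts as a test instead.

def eval_array_assignment(name, index_terms, rhs, valuation, arrays):
    # index terms and rhs must be closed under the ordinary valuation
    if any(t not in valuation for t in index_terms) or rhs not in valuation:
        return "error"
    index = tuple(valuation[t] for t in index_terms)
    value = valuation[rhs]
    if (name, index) in arrays:                      # position already set
        return "success" if arrays[(name, index)] == value else "fail"
    arrays[(name, index)] = value                    # alpha-assignment
    return "success"

arrays = {}
valuation = {"i": 0, "j": 0, "t": 1}
print(eval_array_assignment("a", ["i", "j"], "t", valuation, arrays))
print(arrays)  # {('a', (0, 0)): 1}
```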
With this extension of the notions of valuation and $\alpha$-assignment
we can now apply the computation mechanism of Section~\ref{sec:compu}
to first-order formulas with arrays. The corresponding extensions
of the soundness and completeness theorems of
Section~\ref{sec:souncom} remain valid.
\subsection{Bounded quantification}
In this subsection we show how to extend the language with a form of
bounded quantification that essentially amounts to the
generalized conjunction
and disjunction. We treat bounded quantification with
respect to the integer numbers, but the approach can easily
be generalized to data types with the same discrete and ordered
structure as the integers.
\begin{definition}[bounded quantification]\label{def:bquant}
Let $\alpha$ be a valuation and let $\phi(x)$ be a formula
with $x$ not occurring in the domain of $\alpha$.
Furthermore, let $s,t$ be terms of integer type.
We assume the set of formulas to be extended in such a way that
also $\mbox{$\exists$} x\in[s..t]\;\phi(x)$ and $\mbox{$\forall$} x\in[s..t]\;\phi(x)$
are formulas. The computation trees of these formulas have a
root of degree one and depend on $s$ and $t$
in the following way.
\begin{itemize}
\item If $s$ or $t$ is not $\alpha$-closed, then the roots of both
$\B{\mbox{$\exists$} x\in[s..t]\;\phi(x)}_\alpha$ and
$\B{\mbox{$\forall$} x\in[s..t]\;\phi(x)}_\alpha$ have the error leaf $\mathit{error}$
as their son.
\item If $s$ and $t$ are $\alpha$-closed and $s^\alpha > t^\alpha$,
then the root of $\B{\mbox{$\exists$} x\in[s..t]\;\phi(x)}_\alpha$ has
the failure leaf $\mathit{fail}$ as its son and the root of
$\B{\mbox{$\forall$} x\in[s..t]\;\phi(x)}_\alpha$ has a success leaf $\alpha$
as its son.
\item If $s$ and $t$ are $\alpha$-closed and $s^\alpha \leq t^\alpha$,
then
\begin{itemize}
\item[-] the root of $\B{\mbox{$\exists$} x\in[s..t]\;\phi(x)}_\alpha$ has
$\B{\phi(x)\vee\mbox{$\exists$} y\in[s{+}1..t]\;\phi(y)}_{\alpha\cup\{x/{s^\alpha}\}}$
as its subtree,
\item[-] the root of $\B{\mbox{$\forall$} x\in[s..t]\;\phi(x)}_\alpha$ has
$\B{\phi(x)\wedge\mbox{$\forall$} y\in[s{+}1..t]\;\phi(y)}_{\alpha\cup\{x/{s^\alpha}\}}$
as its subtree.
\end{itemize}
In both cases $y$ should be a fresh variable with respect to $\alpha,\phi(x)$
in order to avoid name clashes.
\end{itemize}
The soundness and completeness results can easily be extended
to include bounded quantification.
\hfill{$\boxtimes$}
\end{definition}
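Operationally, the repeated unfolding of a bounded quantifier amounts to an iteration over the segment $[s..t]$, which terminates because the segment shrinks at every step. This can be sketched as follows (our own illustration, with a concrete Python predicate in place of $\phi$):

```python
# Sketch: bounded quantification over an integer segment [s..t] unfolds to
# an iterated disjunction (exists) or conjunction (forall); an empty
# segment (s > t) makes "exists" fail and "forall" succeed.

def bounded_exists(s, t, phi):
    if s > t:
        return False, None            # failure leaf on the empty segment
    for x in range(s, t + 1):         # phi(x) OR exists y in [s+1..t] phi(y)
        if phi(x):
            return True, x            # witness found
    return False, None

def bounded_forall(s, t, phi):
    if s > t:
        return True                   # success leaf on the empty segment
    return all(phi(x) for x in range(s, t + 1))

print(bounded_exists(1, 10, lambda x: x * x == 9))    # (True, 3)
print(bounded_forall(1, 10, lambda x: x * x <= 100))  # True
print(bounded_forall(5, 4, lambda x: False))          # True (empty range)
```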
\section{Relation to Other Approaches}
\label{sec:related}
The work discussed here is related in many interesting ways to a number of
seminal papers on logic, logic programming and constraint logic programming.
\subsection{Definition of Truth compared to Formulas as Programs}
First, it is instructive to compare our approach to the inductive
definition of truth given in \citeasnoun{Tar33}. This definition can
be interpreted as an algorithm that, given a first-order language $L$,
takes as input an interpretation $I$ of $L$ and a formula $\phi$ of
$L$, and yields as output the answer to the question whether the
universal closure of $\phi$ is true in $I$. This algorithm is not
effective because of the way quantifiers are dealt with. This is unavoidable
since truth is undecidable for many languages and interpretations, for instance
Peano arithmetic and its standard model.
In the formulas as programs approach the initial problem is modified
in that one asks for a constructive answer to the question whether a formula
is satisfiable in an interpretation. The algorithm proposed here is
effective at the cost of occasionally terminating abnormally in an error.
\subsection{Relation to Logic Programming}
Some forty years later, in his seminal paper \citeasnoun{Kow74}
proposed to use first-order logic as a computation formalism. This led
to logic programming. However, in spite of the paper's title, only a
subset of first-order logic is used in his proposal, namely the one
consisting of Horn clauses. This restriction was essential since
what is now called SLD-resolution was used as the computation mechanism.
In the discussion we first concentrate on syntactic matters and then
focus on the computation mechanism.
The restriction of logic programs and goals to Horn clauses was
gradually lifted in \citeasnoun{Cla78}, by allowing negative literals
in the goals and in clause bodies, in \citeasnoun{LT84}, by allowing
arbitrary first-order formulas as goals and clause bodies, and in
\citeasnoun{LMR92} by allowing disjunctions in the clause heads. In
each case the computation mechanism of SLD-resolution was suitably
extended, either by introducing the negation as failure rule, or by
means of transformation rules, or by generalizing so-called linear
resolution.
From the syntactic point of view
our approach is related to that of
\citeasnoun{LT84}. Appropriate transformation rules are used there
to get rid of quantifiers, disjunctions and the applications of
negation to non-atomic formulas. So these features of first-order
logic are interpreted in an indirect way. It is useful to point out
that the approach of \citeasnoun{LT84} was implemented in the
programming language G\"{o}del of \citeasnoun{HL94}.
Further, it should be noted that bounded quantifiers and arrays
were also studied in logic programming. In particular, they are used
in the specification language Spill of \citeasnoun{KM97} that allows
us to write executable, typed specifications in the logic programming
style. Other related references are \citeasnoun{Vor92},
\citeasnoun{BB93} and \citeasnoun{Apt96}.
So from the syntactic point of view our approach does not seem to
differ from logic programming in an essential way. The difference
becomes more apparent when we analyze in more detail the underlying computation
mechanism.
To this end it is useful to recall that in logic programming the
computing process takes place implicitly over the free algebra of all
terms and the values are assigned to variables by means of
unification. The first aspect can be modelled in the formulas as
programs approach by choosing a {\em term interpretation\/}, so an
interpretation the domain $D$ of which consists of all terms and such
that each $n$-ary function symbol $f$ is mapped to a function $f_D$
that assigns to elements (so terms) $t_1, \mbox{$\ldots$}, t_n$ of $D$ the term
$f(t_1, \mbox{$\ldots$}, t_n)$. With this choice our use of $\alpha$-assignment
boils down to an instance of matching which in turn is a special case
of unification.
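The contrast can be made explicit (a sketch of ours with a hypothetical term representation): matching binds variables of one side only, against a fully evaluated term, which is precisely the special case of unification that the $\alpha$-assignment realizes over a term interpretation.

```python
# Sketch: terms are strings (uppercase strings are variables, lowercase
# strings are constants) or tuples ("f", arg1, ...). Matching binds
# variables of the pattern only, against a ground term -- the special
# case of unification corresponding to alpha-assignment.

def match(pattern, ground, bindings):
    if isinstance(pattern, str) and pattern.isupper():   # variable
        if pattern in bindings:
            return bindings if bindings[pattern] == ground else None
        return {**bindings, pattern: ground}
    if isinstance(pattern, str):                         # constant
        return bindings if pattern == ground else None
    if not isinstance(ground, tuple) or len(pattern) != len(ground):
        return None
    for p, g in zip(pattern, ground):
        bindings = match(p, g, bindings)
        if bindings is None:
            return None
    return bindings

print(match(("f", "X", "a"), ("f", "b", "a"), {}))  # {'X': 'b'}
print(match(("f", "X", "X"), ("f", "a", "b"), {}))  # None
```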
Unification in logic programming can be more clearly related to
equality by means of the so-called homogenization process the purpose
of which is to remove non-variable terms from the clauses heads. For
instance,
{\tt append(x1,ys,z1) <- x1=[x|xs], z1=[x|zs], append(xs,ys,zs)}
\noindent
is a homogenized form of the more compact clause
{\tt append([x|xs],ys,[x|zs]) <- append(xs,ys,zs)}.
\noindent
To interpret the equality in the right way the single clause
{\tt x = x <-}
\noindent
should then be added. This enforces the ``is unifiable with''
interpretation of equality. So the homogenization process reveals
that logic programming relies on a more general interpretation of equality
than the formulas as programs approach. It allows one to avoid generation
of errors for all equality atoms.
In conclusion, from the computational point of view,
the logic programming approach is at the same
time a restriction of the formulas as programs approach to the term
interpretations and a generalization of this approach in which all equality
atoms can be safely evaluated.
\subsection{Relation to Pure Prolog}
By pure Prolog we mean here a subset of Prolog formed by the programs and goals
that are Horn clauses.
Programming in Prolog and in its pure subset relies heavily on
lists and recursion. As a result termination is one of the crucial
issues. This led to an extensive study of methods that allow us to
prove termination of logic and Prolog programs (see \citeasnoun{DD94}
for a survey of various approaches).
In contrast, our approach to programming
is based on arrays and iteration that is realized by means of bounded
quantification. These constructs are guaranteed to terminate. In
fact, it is striking how far one can go in programming in this style
without using recursion. If the reader is not convinced by the
example given in Section~\ref{sec:squares} below, he/she is invited to
consult other examples in \citeasnoun{Vor92}
and \citeasnoun{ABPS98a}.
In the formulas as programs approach the absence of recursion makes it
possible to analyze queries without explicit presence of procedures,
by systematically replacing procedures by their bodies. This allows
us to represent each program as a single query and then rely on the
well-understood Tarskian semantics of first-order logic.
In the standard logic programming setting very few interesting
programs can be represented in this way. In fact, as soon as recursion
is used, a query has to be studied in the context of a program that
defines the recursive procedures. As
soon as negation is also present, a plethora of different semantics
arises --- see e.g. \citeasnoun{AB94}.
Finally, in the presence of recursion
it is difficult to account for Prolog's selection rule
in purely semantic terms.
\subsection{Relation to Pure Prolog with Arithmetic}
By pure Prolog with arithmetic we mean here an extension of pure
Prolog by features that support arithmetic, that is, Prolog's arithmetic
relations, such as ``=:='', and the evaluation operator {\tt is}.
These features allow us to compute in the presence of arithmetic but
in a clumsy way as witnessed by the example of
formula (\ref{eq:no-prolog}) of Subsection~\ref{ssec:informal}
and its elaborated representation in Prolog
in Subsection~\ref{ssec:rationale}.
Additionally, a possibility of abnormal termination in an error arises.
Indeed, both arithmetic relations and the {\tt is} operator
introduce a possibility of run-time errors,
a phenomenon absent in pure Prolog. For instance, the query
{\tt X is Y} yields an error and so does {\tt X =:= Y}.
In contrast, in the formulas as programs approach arithmetic
can be simply modelled by adding the sorts of integers and of reals.
The $\alpha$-assignment then deals correctly with arithmetic
expressions because it relies on automatic evaluation of terms.
This yields a simpler and more uniform approach to arithmetic in which no
new special relation symbols are needed.
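For instance, an equality whose right-hand side is an arithmetic expression over known variables is handled by evaluating the expression first, so the same equality works both as a test and as an assignment. A sketch (ours; the tuple representation of arithmetic terms is illustrative):

```python
# Sketch: evaluate arithmetic terms under a valuation, then treat x = t as
# a test (x known) or an alpha-assignment (x unknown). No special operator
# such as Prolog's "is" is needed.

def eval_term(term, valuation):
    if isinstance(term, int):
        return term
    if isinstance(term, str):
        return valuation.get(term)           # None if not alpha-closed
    op, left, right = term
    l, r = eval_term(left, valuation), eval_term(right, valuation)
    if l is None or r is None:
        return None
    return l + r if op == "+" else l * r

def equality(x, term, valuation):
    value = eval_term(term, valuation)
    if value is None:
        return "error"                       # right-hand side not closed
    if x in valuation:
        return "success" if valuation[x] == value else "fail"
    valuation[x] = value                     # alpha-assignment
    return "success"

v = {"y": 3}
print(equality("x", ("+", "y", 1), v))  # success  (x := 4)
print(v["x"])                           # 4
print(equality("x", ("*", 2, 2), v))    # success  (test: 4 == 4)
print(equality("z", ("+", "w", 1), v))  # error    (w unknown)
```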
\subsection{Relation to Constraint Logic Programming}
\label{subsec:clp}
The abovementioned deficiencies of pure Prolog with arithmetic have
been overcome in constraint logic programming, an approach to
computing that generalizes logic programming. In what follows we
concentrate on a specific approach, the generic scheme CLP(X) of
\citeasnoun{jaffar-constraint-87} that generalizes pure Prolog by
allowing constraints. In this scheme atoms are divided into those
defined by means of clauses and those interpreted in a direct way. The
latter ones are called constraints.
In CLP(X), as in our case, the computation is carried out over an
arbitrary interpretation. At each step (instead of the
unification test of logic programming and its application if
it succeeds) satisfiability of the so far encountered constraints is
tested. A computation is successful if the last query
consists of constraints only.
There are two differences between the formulas as programs approach
and the CLP(X) scheme. The first one has to do with the fact that in
our approach full first-order logic is allowed, while in the latter --- as
in logic programming and pure Prolog --- Horn clauses are used.
The second one concerns the way values are assigned.
In our case the only way to assign values to variables is
by means of an $\alpha$-assignment, while in the CLP(X) scheme
satisfiability of constraints guides the computation
and output is identified with a set of constraints (that still have to
be solved or normalized).
The CLP(X) approach to computing has been realized in a number of
constraint logic programming languages, notably in the CLP(${\cal R}$)
system of \citeasnoun{jaffar-clpr} that is an instance of the CLP(X)
scheme with a two-sorted structure that consists of reals and terms.
In this system formula (\ref{eq:no-prolog}) of Subsection
\ref{ssec:informal} can be directly run as a query.
Once negation is added to the CLP(X) scheme (it is in fact present in
CLP(${\cal R}$)), the extension of the CLP(X) syntax to full
first-order logic could be achieved by using the approach
\citeasnoun{LT84} or by extending the computation mechanism along the
lines of Section~\ref{sec:compu}.
So, ignoring the use of the first-order logic syntax in the formulas
as programs approach and the absence of (recursive) procedures that
could be added to it, the main difference between this approach and the
CLP(X) scheme has to do with the fact that in the former only very
limited constraints are admitted, namely ground atoms and
$\alpha$-assignments. In fact, these are the only constraints that can
be resolved directly.
So from this point of view the formulas as programs approach is less
general than constraint logic programming, as embodied in the CLP(X)
scheme. However, this more limited approach does not rely on the
satisfiability procedure for constraints (i.e., selected atomic formulas),
or any of its approximations used in specific implementations.
In fact, the formulas as programs approach
attempts to clarify how far the constraint logic programming approach can
be used without any reliance on external procedures that deal with
constraint solving or satisfiability.
\subsection{Formulas as Programs versus Formulas as Types}
\def{\mathit left}{{\mathit left}}
\def{\mathit right}{{\mathit right}}
\def{\mathit ex}{{\mathit ex}}
In the so-called \emph{formulas as types} approach, also called
the Curry-Howard-De Bruijn interpretation (see e.g. \citeasnoun{TD}),
(constructive) proofs of
a formula are terms whose type is the formula in question.
The type corresponding to a formula
can thus be viewed as the (possibly empty) set of all proofs
of the formula. Here `proof' refers to an operational notion of proof,
in which
\begin{itemize}
\item a proof of $\phi \vee \psi$ is either ${\mathit left}(p)$ with $p$
a proof of $\phi$, or ${\mathit right}(p)$ with $p$ a proof of $\psi$;
\item a proof of $\phi \wedge \psi$ is a pair $\langle p,q\rangle$
consisting of a proof $p$ of $\phi$ and a proof $q$ of $\psi$;
\item a proof of an implication $\phi \mbox{$\:\rightarrow\:$} \psi$
is a function that maps proofs of $\phi$ to proofs of $\psi$;
\item a proof of $\forall x~\phi(x)$ is a function that maps
domain elements $d$ to proofs of $\phi(d)$;
\item a proof of $\exists x~\phi(x)$ is of the form ${\mathit ex}(d,p)$
with domain element $d$ a witness for the existential statement,
and $p$ a proof of $\phi(d)$.
\end{itemize}
Such proofs can be taken as programs. For example,
a constructive proof of $\forall x~\exists y~\phi(x,y)$ is a function
that maps
$d$ to an expression of the form ${\mathit ex}(e_d,p_d)$ with
$p_d$ a proof of $\phi(d,e_d)$. After extraction of the witness $e_d$
the proof yields a program computing $e_d$ from $d$.
The main difference between formulas as types and formulas as
programs is that in the latter approach not the proofs
of the formulas, but the formulas themselves have an operational
interpretation. To illustrate this difference, consider
the computation tree of formula (\ref{eq:no-prolog})
in Fig.~\ref{fig:ctree} with its proof:
$$
{\mathit ex}(3,{\mathit ex}(2,
\langle{\mathit right}(p_{3=3}),\langle{\mathit right}(p_{2=2}),p_{2*3=3*2}\rangle\rangle
))
$$
Here $p_A$ is a proof of $A$, for each true closed atom $A$.
Observe that in the above proof the witnesses $3$ and $2$ for $x$ and
$y$, respectively, \emph{have to be given beforehand}, whereas in
our approach they are computed. In the formulas as programs
approach the proofs
are constructed in the successful branches of the computation tree and
the computation is guided by the search for such a proof. Apart from
differences in syntax, the reader will recognize the above proof in
the successful branch of Figure~\ref{fig:ctree}.
Given the undecidability of first-order logic, there is a price to be
paid for the formulas as programs approach. It consists of the possibility of abnormal
termination in an error.
\section{\mbox{\sf Alma-0}{}}
\label{sec:alma0}
We hope to have convinced the reader that the formulas as programs
approach, though closely related to logic programming, differs from
it in a number of crucial aspects.
This approach to programming has been realized in the implemented
programming language \mbox{\sf Alma-0}{} \cite{ABPS98a}.
A similar approach to programming has been taken in the 2LP language
of \citeasnoun{MT95b}. 2LP (which stands for ``logic programming and linear
programming") uses C syntax and has been designed for constraint
programming in the area of optimization.
\mbox{\sf Alma-0}{} is an
extension of a subset of Modula-2 that includes nine new features
inspired by the logic programming paradigm. We briefly recall those
that are used in the sequel and refer to
\citeasnoun{ABPS98a} for a detailed presentation.
\begin{itemize}
\item Boolean expressions can be used as statements and vice versa.
A boolean expression that is used as a statement and evaluates to
{\tt FALSE} is identified with a {\em failure}.
\item {\em Choice points} can be created by the non-deterministic
statements {\tt ORELSE} and {\tt SOME}. The former is a dual of the
statement composition and the latter is a dual of the {\tt FOR}
statement. Upon failure the control returns to the most recent
choice point, possibly within a procedure body, and the computation
resumes with the next branch in the state in which the previous branch
was entered.
\item The notion of {\em initialized} variable is introduced and the
equality test is generalized to an assignment statement in case one
side is an uninitialized variable and the other side an
expression with known value.
\item A new parameter passing mechanism, {\em call by mixed
form}, denoted by the keyword {\tt MIX}, is introduced for
variables of simple type. It works as
follows: If the actual parameter is a variable, then it is passed by
variable. If the actual parameter is an expression that is not a
variable, its value is computed and assigned to a new variable $v$
(generated by the compiler): it is $v$ that is then passed by
variable. So in this case the call by mixed form boils down to call
by value. Using this parameter mechanism we can pass both expressions with
known values and uninitialized variables as actual parameters. This
makes it possible to use a single procedure both for testing and
computing.
\end{itemize}
For efficiency reasons the \mbox{\sf Alma-0}{} implementation does not
faithfully realize the computation mechanism of
Section~\ref{sec:compu} as far as the errors are concerned. First, the
evaluation of an atom that is not $\alpha$-closed and is not an
$\alpha$-assignment yields a run-time error. On the other hand, in the
other two cases in which the evaluation ends with an $\mathit{error}$ leaf,
namely within the statements {\tt NOT S} and {\tt IF S THEN T END}, the
computation process of \mbox{\sf Alma-0}{} simply proceeds.
The rationale for this decision is that the use of insufficiently
instantiated atoms in \mbox{\sf Alma-0}{} programs is to be discouraged
whereas the catching of the other two cases of errors would be
computationally prohibitive.
In this respect the implementation of \mbox{\sf Alma-0}{} follows
the same compromise as the implementations of Prolog.
We now associate with each first-order formula $\phi$ an
\mbox{\sf Alma-0}{} statement ${\cal T}(\phi)$. This is done by induction
on the structure of the formula $\phi$. The translation process is
given in Table~\ref{table: table1}.
\begin{table}[htdp]
\begin{center}
\begin{tabular}{|l|l|} \hline
Formula & \mbox{\sf Alma-0}{} construct \\ \hline
$A$ (atom) & $A$ \\
$\phi_1 \mbox{$\ \vee\ $} \phi_2$ & {\tt EITHER} ${\cal T}(\phi_1)$ {\tt ORELSE} ${\cal T}(\phi_2)$ {\tt END}\\
$\phi_1 \mbox{$\ \wedge\ $} \phi_2$ & ${\cal T}(\phi_1); {\cal T}(\phi_2)$ \\
$\phi \mbox{$\:\rightarrow\:$} \psi$ & {\tt IF} ${\cal T}(\phi)$ {\tt THEN} ${\cal T}(\psi)$ {\tt END} \\
$\neg \phi$ & {\tt NOT} ${\cal T}(\phi)$ \\
$\mbox{$\exists$} x \phi(x, \bar{y})$ & {\tt p}$(\bar{y})$, where the procedure {\tt p} is defined by \\
& {\tt PROCEDURE p(MIX $\bar{y}: \bar{\tt T});$} \\
& {\tt VAR} {\em x}~:~{\tt T;} \\
& {\tt BEGIN} \\
& \quad ${\cal T}(\phi(x, \bar{y}))$ \\
& {\tt END;} \\
& where {\tt T} is the type (sort) of the variable $x$ and \\
& $\bar{\tt T}$ is the sequence of types of the variables in $\bar{y}$. \\
$\mbox{$\exists$} x \in [s..t] \phi$ & {\tt SOME} $x := s$ {\tt TO} $t$ {\tt DO} $ {\cal T}(\phi)$ {\tt END} \\
$\mbox{$\forall$} x \in [s..t] \phi$ & {\tt FOR} $x := s$ {\tt TO} $t$ {\tt DO} $ {\cal T}(\phi)$ {\tt END} \\
\hline
\end{tabular}
\caption{Translation of formulas into \mbox{\sf Alma-0}{} statements.
\label{table: table1}}
\end{center}
\end{table}
This translation allows us to use in the sequel \mbox{\sf Alma-0}{} syntax
to present specific formulas.
\section{Example: Partitioning a Rectangle into Squares}
\label{sec:squares}
To illustrate the \mbox{\sf Alma-0}{} programming style and the
use of formulas as programs approach for program verification,
we consider now the following variant of a problem from
\citeasnoun[pages 46-60]{Hon70}.
\\\\
{\it Squares in the rectangle.\/}
Partition an integer sized $nx\times ny$ rectangle into given squares
$S_1,\dots,S_m$ of integer sizes $s_1,\dots,s_m$.
\\\\
We develop a solution that, in contrast to the one given in
\citeasnoun{ABPS98a}, is purely declarative.
To solve this problem we use a backtracking algorithm that fills in all the
cells of the rectangle one by one, starting with the left upper cell
and proceeding downward in the leftmost column, then the next column,
and so on. The algorithm checks for each cell whether it is already
covered by some square used to cover a previous cell.
Given the order in which the cells are visited,
it suffices to inspect the left neighbour cell
and the upper neighbour cell
(if these neighbours exist). This is done by the test
\begin{equation}
\label{eq:test}
\begin{array}{l}
{\tt ((1 < i)~AND~(i < RightEdge[i-1,j]))~OR~}\\
{\tt ((1 < j)~AND~(j < LowerEdge[i, j-1]))}.
\end{array}
\end{equation}
Here \verb|[i,j]| is the index of the cell in question,
and \verb|RightEdge[i-1,j]| is the right edge of the square covering
the left neighbour (\verb|[i-1,j]|, provided \verb|i > 1|),
and \verb|LowerEdge[i, j-1]| is the lower edge of the square covering
the upper neighbour (\verb|[i,j-1]|, provided \verb|j > 1|).
The cell under consideration is already covered if and only if
the test succeeds.
If it is not covered, then the algorithm
looks for a square not yet used, which is placed with its top-left corner
at \verb|[i,j]| provided the square fits within the rectangle.
The algorithm backtracks when none of the available
squares can cover the cell under consideration
without sticking out of the rectangle. See Figure~\ref{fig:square}.
\begin{figure}[htpb]
{\scriptsize
$$\xymatrix@R=0.7em@C1.18em{
&1&&&&nx&&
&1&&&&nx&\\
1&4\ar@{-}[r]\ar@{-}[d]&4\ar@{-}[r]&4\ar@{-}[r]&\ar@{-}[ddd]\ar@{.}[rr]&&
\ar@{.}[dddddd]&
&4\ar@{-}[r]\ar@{-}[d]&4\ar@{-}[r]&4\ar@{-}[r]&\ar@{-}[ddd]\ar@{.}[rr]&&
\ar@{.}[dddddd]&1\\
&4\ar@{-}[d]&4&4&&&&
&4\ar@{-}[d]&4&4&&&\\
&4\ar@{-}[d]&4&4&&&&
&4\ar@{-}[d]&{\framebox{4}}&4&&&\\
&{\framebox{2}} \ar@{-}[r]\ar@{-}[d]&{\ast}\ar@{-}[rr]\ar@{-}[d]&&&&&
&5\ar@{-}[r]\ar@{-}[d]&{\ast}\ar@{-}[rr]\ar@{-}[d]&&&&\\
&3\ar@{-}[r]\ar@{-}[d]&3\ar@{-}[r]&\ar@{-}[dd]&&&&
&7\ar@{-}[d]\ar@{-}[r]&7\ar@{-}[r]&\ar@{-}[dd]&&&\\
ny&3\ar@{-}[d]&3&&&&&
&7\ar@{-}[d]&7&&&&&ny\\
ny{+}1&\ar@{-}[rr]&&\ar@{.}[rrr]&&&&
&\ar@{-}[rr]&&\ar@{.}[rrr]&&&&ny{+}1\\
}$$}
\caption{Example of values of {\tt RightEdge} (left diagram) and
{\tt LowerEdge} (right diagram), respectively. Entry $\ast$ is indexed by
{\tt [2,4]}. It is not covered already since neither
$2<{\tt RightEdge[1,4]}=2$ nor $4<{\tt LowerEdge[2,3]}=4$.\label{fig:square}}
\end{figure}
In test (\ref{eq:test}) we used the {\tt AND} and {\tt OR} connectives
instead of the ``;'' and {\tt ORELSE} constructs for the
following reason. In case all variables occurring in a test are instantiated,
some optimizations are in order. For example, it is not necessary
to backtrack within the test, disjunctions do not have to create
choice points, and so on. The use of {\tt AND} and {\tt OR}
enables the compiler to apply these optimizations.
Backtracking is implemented by a {\tt SOME} statement that checks for each
square whether it can be put to cover a given cell. The solution is returned
via two arrays {\tt posX} and {\tt posY} such that for square $S_k$ (of size
{\tt Sizes[k]}), {\tt posX[k]} and {\tt posY[k]} are the coordinates of its
top-left corner.
The two equations {\tt posX[k] = i} and {\tt posY[k] = j} are used both to
construct the solution and to prevent using an already placed square again at a
different place.
The declaration of the variables {\tt posX} and {\tt posY} as {\tt MIX}
parameters allows us to use the program both to check a given
solution or to complete a partial solution.
{\small
\begin{verbatim}
TYPE SquaresVector = ARRAY [1..M] OF INTEGER;
PROCEDURE Squares(Sizes:SquaresVector, MIX posX, posY:SquaresVector);
VAR RightEdge,LowerEdge: ARRAY [1..NX],[1..NY] OF INTEGER;
i,i1, j,j1, k: INTEGER;
BEGIN
FOR i := 1 TO NX DO
FOR j := 1 TO NY DO
IF NOT (* cell [i,j] already covered? *)
(((1 < i) AND (i < RightEdge[i-1,j])) OR
((1 < j) AND (j < LowerEdge[i, j-1])))
THEN
SOME k := 1 TO M DO
posX[k] = i;
posY[k] = j; (* square k already used? *)
Sizes[k] + i <= NX + 1;
Sizes[k] + j <= NY + 1; (* square k fits? *)
FOR i1 := 1 TO Sizes[k] DO
FOR j1 := 1 TO Sizes[k] DO
RightEdge[i+i1-1,j+j1-1] = i+Sizes[k];
LowerEdge[i+i1-1,j+j1-1] = j+Sizes[k]
END (* complete administration *)
END
END
END
END
END
END Squares;
\end{verbatim}
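To make the operational reading concrete, here is a hypothetical Python transcription of the procedure above (an illustration, not part of the \mbox{\sf Alma-0}{} system). The single-assignment equations on {\tt RightEdge} and {\tt LowerEdge} become a {\tt bind} operation that assigns when an array cell is still unbound and merely tests for equality otherwise, and the {\tt SOME} statement becomes an explicit backtracking loop:

```python
def squares(nx, ny, sizes):
    """Backtracking search mirroring the Alma-0 Squares procedure above."""
    m = len(sizes)
    # RightEdge / LowerEdge with 1-based indexing; 0 means "not yet bound".
    right = [[0] * (ny + 1) for _ in range(nx + 1)]
    lower = [[0] * (ny + 1) for _ in range(nx + 1)]
    pos_x, pos_y = [0] * m, [0] * m          # 0 means "square k not yet placed"
    cells = [(i, j) for i in range(1, nx + 1) for j in range(1, ny + 1)]

    def bind(grid, x, y, v, trail):
        # Alma-0 equation semantics: assign when unbound, test otherwise.
        if grid[x][y] == 0:
            grid[x][y] = v
            trail.append((grid, x, y))
            return True
        return grid[x][y] == v

    def fill(c):
        if c == len(cells):
            return True
        i, j = cells[c]
        if (i > 1 and i < right[i - 1][j]) or (j > 1 and j < lower[i][j - 1]):
            return fill(c + 1)               # cell [i,j] already covered
        for k in range(m):                   # the SOME statement
            s = sizes[k]
            if pos_x[k] or s + i > nx + 1 or s + j > ny + 1:
                continue                     # square k used, or does not fit
            pos_x[k], pos_y[k] = i, j
            trail = []
            ok = all(bind(right, i + a, j + b, i + s, trail) and
                     bind(lower, i + a, j + b, j + s, trail)
                     for a in range(s) for b in range(s))
            if ok and fill(c + 1):
                return True
            for grid, x, y in trail:         # backtrack: undo the bindings
                grid[x][y] = 0
            pos_x[k] = pos_y[k] = 0
        return False

    return (pos_x, pos_y) if fill(0) else None
```

On the $3 \times 2$ rectangle with squares of sizes $2,1,1$ the search places the $2\times 2$ square at $(1,1)$ and the two unit squares in the last column; with sizes $2,2$ it correctly reports failure.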
\def{\tt {LowerEdge}}{{\tt {LowerEdge}}}
\def{\tt {RightEdge}}{{\tt {RightEdge}}}
\def{\tt {PosX}}{{\tt {PosX}}}
\def{\tt {PosY}}{{\tt {PosY}}}
\def{\tt {Sizes}}{{\tt {Sizes}}}
This program is declarative and consequently has a dual
reading as the formula
\begin{eqnarray*}
&&\mbox{$\forall$} i\in[1..nx]~\mbox{$\forall$} j\in[1..ny]\\
&&\neg(1<i<{\tt {RightEdge}}[i{-}1,j] \vee 1<j<{\tt {LowerEdge}}[i,j{-}1]) \mbox{$\:\rightarrow\:$} \\
&&\mbox{$\exists$} k\in[1..m]~\phi(i,j,k),
\end{eqnarray*}
\noindent
where $\phi(i,j,k)$ is the formula
\begin{eqnarray*}
&&{{\tt {PosX}}[k]=i}\wedge{{\tt {PosY}}[k]=j}\wedge\\
&&{{\tt {Sizes}}[k]{+}i\leq nx{+}1}\wedge{{\tt {Sizes}}[k]{+}j\leq ny{+}1}\wedge\psi(i,j,k)
\end{eqnarray*}
\noindent
and $\psi(i,j,k)$ is the formula
\begin{eqnarray*}
&&\mbox{$\forall$} i'\in[1..{\tt {Sizes}}[k]]~\mbox{$\forall$} j'\in[1..{\tt {Sizes}}[k]]\\
&&{{\tt {RightEdge}}[i{+}i'{-}1,j{+}j'{-}1]=i{+}{\tt {Sizes}}[k]}\wedge\\
&&{{\tt {LowerEdge}}[i{+}i'{-}1,j{+}j'{-}1]=j{+}{\tt {Sizes}}[k]}
\end{eqnarray*}
\noindent
This dual reading of the program entails, over the standard
interpretation, the formula
\begin{eqnarray}
&&\mbox{$\forall$} i\in[1..nx]~\mbox{$\forall$} j\in[1..ny]~\mbox{$\exists$} k\in[1..m]\nonumber\\
&&{{\tt {PosX}}[k]\leq i<{\tt {PosX}}[k]{+}{\tt {Sizes}}[k]\leq nx{+}1} \wedge\nonumber\\
&&{{\tt {PosY}}[k]\leq j<{\tt {PosY}}[k]{+}{\tt {Sizes}}[k]\leq ny{+}1}\label{prob:spec}
\end{eqnarray}
\noindent
expressing that every cell is covered by a square.
The entailment is not trivial, but can be made completely rigorous.
The proof uses arithmetic,
in particular induction on lexicographically ordered pairs $(i,j)$.
This entailment actually means that the program satisfies
its specification, that is,
if the computation is successful, then a partition is found (and can
be read off from ${\tt {PosX}}[k]$ and ${\tt {PosY}}[k]$). The latter fact relies
on the Soundness Theorem~\ref{thm:soundn}.
Conversely, assuming that the surfaces of the squares sum up exactly to
the surface of the rectangle, the specification (\ref{prob:spec}) entails
the formula corresponding to the program, with suitable values for ${\tt {RightEdge}},{\tt {LowerEdge}}$.
Furthermore, the absence of
errors can be established by lexicographic induction. This ensures that
the computation tree is always determined.
By the Completeness Theorem~\ref{thm:completeness}, one always gets an answer.
If this answer is negative, that is, if the computation tree is failed,
then by the Soundness Theorem~\ref{thm:soundn} the formula corresponding
to the program cannot be satisfied, and hence (\ref{prob:spec})
cannot be satisfied.
\section{Current and Future Work}
The work presented here can be pursued in a number of directions.
We list here the ones that seem most natural to us.
\paragraph{Recursive procedures}
The extension of the treatment of non-recursive procedures in
Subsection \ref{subsec: nonrec} to the case of recursive procedures is
far from obvious. It requires an extension of the computation
mechanism to one with possible non-terminating behaviour. This could
be done along the lines of \citeasnoun{AD94} where the
SLDNF-resolution of logic programs with negation is presented in a top
down, non-circular way.
Also, on the semantic level several choices arise, much like in the
case of logic programming, and the corresponding soundness and
completeness results that provide a match between the computation
mechanism and semantics need to be reconsidered from scratch.
\paragraph{Constraints}
As already said in Subsection \ref{subsec:clp}, the formulas as
programs approach can be seen as a special case of constraint logic
programming, though with a full first-order syntax. It is natural to
extend our approach by allowing constraints, that is, arbitrary atoms that
have no definition in the sense of Subsection \ref{subsec: nonrec}.
The addition of constraints will require, at the level of the computation
mechanism, the use of a constraint store and special built-in procedures that
approximate the satisfiability test for conjunctions of constraints.
\paragraph{Automated Verification}
The correctness proof presented in Section \ref{sec:squares} was
carried out manually. It boils down to a proof of validity of an
implication between two formulas. This proof is based on a
lexicographic ordering, so it should be possible to mechanize it.
This would lead to a fully mechanized correctness proof of the
\mbox{\sf Alma-0}{} program considered there.
\paragraph{Relation to Dynamic Predicate Logic}
In \citeasnoun{GS91} an alternative ``input-output'' semantics of
first-order logic is provided. In this semantics both the connectives
and the quantifiers obtain a different, dynamic, interpretation that
better suits their use for natural language analysis.
This semantics is highly nondeterministic due to its treatment of
existential quantifiers, and it does not take into account the
possibility of errors.
It is natural to investigate the precise connection between this
semantics and our formulas as programs approach. Our colleague
Jan van Eijck has recently undertaken this study. Also, it would be
useful to clarify to what extent our approach can be of use for
linguistic analysis, both as a computation mechanism and as a means
for capturing errors in discourse analysis.
\paragraph{Absence of abnormal termination}
Another natural line of research deals with the improvements of the
computation mechanism in the sense of limiting the occurrence of
errors while retaining soundness. In the Appendix, Subsections~\ref{ssec:libneg}
and \ref{ssec:libimp} we consider two such possibilities but several
other options arise. Also, it is useful to provide
sufficient syntactic criteria that, for a given formula,
guarantee the absence of abnormal termination. This work is naturally related
to research on the verification of \mbox{\sf Alma-0}{} programs.
\renewcommand{\thebibliography}[1]
\section*{References}%
\list{%
\arabic{enumi}.\hfill
}{%
\topsep0pt\parskip0pt\partopsep0pt
\settowidth\labelwidth{#1.}%
\leftmargin\labelwidth
\advance\leftmargin\labelsep
\usecounter{enumi}%
\itemsep.3\baselineskip
\parsep0pt
}
\def\newblock{\hskip .11em plus .33em minus .07em}%
\sloppy\clubpenalty4000\widowpenalty4000
\sfcode`\.=1000\relax}
\section*{Acknowledgements}
We would like to thank Jan van Eijck and David Scott Warren for a
number of helpful suggestions.
\bibliographystyle{../art/agsm}
\section{Introduction}
A simple extension of Standard Model (SM) is to enlarge the particle
content by adding vector quarks, whose right-handed and left-handed
components transform in the same way under the weak $SU(2)\times U(1)$
gauge group. This extension is acceptable because the anomalies
generated by the vector quarks cancel automatically and vector quarks
can be heavy naturally. Vector quarks also arise in some Grand
Unified Theories (GUTs). For example, in some superstring theories,
the ${\rm E_6}$ GUT gauge group occurs in four dimensions when we
start with ${\rm E_8
\times E_8}$ in ten dimensions. The fermions are placed in a
27-dimensional representation of ${\rm E_6}$. In such a model, for each
generation one would have new fermions, including an isosinglet charge
$-{1\over3}$ vector quark.
In this article, we discuss the $B$ meson radiative decay in the
context of a generic vector quark model and show that the experimental
data can
be used to constrain the mixing angles. In vector quark models, due
to the mixing of vector quarks with ordinary quarks, the
Kobayashi-Maskawa (KM) matrix of the charged current interaction is
not unitary. The internal-flavor-independent contributions in the $W$
exchange penguin diagrams no longer cancel among the various internal
up-type quarks. In addition, the mixing also generates non-zero tree-level
FCNC in the currents of $Z$ boson and that of Higgs boson, which in turn
gives rise to
new penguin diagrams due to neutral boson exchanges. All these
contributions will be carefully analyzed in this paper.
Leading logarithmic (LL) QCD corrections are also included
by using the effective Hamiltonian formalism.
The paper is organized as follows:
In section 2, we review the charged current interaction and the FCNC
interactions in a generic vector quark model. Through the diagonalization
of
mass matrix, the non-unitarity of KM matrix and the magnitude of the FCNC
can both be related to the mixing angles between vector and ordinary quarks.
In section 3, various contributions to $B$ meson radiative decays are
discussed in the vector quark model.
In section 4, we discuss constraints on the mixing angles from the new
data on $B$ radiative decays and from other FCNC effects.
There are many previous analyses of the same issue. We shall make
detailed comparisons at appropriate points of our discussion (mostly
in Section 3). Most vector quark models in the literature are more
complicated than the one we consider here.
\section{Vector Quark Model}
We consider the model in which the gauge structure of SM
remains while one charge $-{1\over3}$ and one charge
${2\over3}$ isosinglet vector quarks are introduced.
Denote the charge $-{1\over3}$ vector quark
as $D$ and the charge ${2\over3}$ vector quark as $U$.
Large Dirac masses of vector quarks, invariant under $SU(2)_L$,
naturally arise:
\begin{equation}
M_{\scriptscriptstyle U} ( \bar{U}_L U_R + \bar{U}_R U_L) +
M_{\scriptscriptstyle D} (\bar{D}_L D_R + \bar{D}_R D_L)
\end{equation}
All the other Dirac masses can only arise from $SU(2)_L$ symmetry
breaking effects. Assume that the weak $SU(2)$ gauge
symmetry breaking sector is an isodoublet scalar Higgs field
$\phi$, denoted as
\begin{equation}
\phi \equiv \left( \begin{array}{c}
\phi^+ \\
\phi^0
\end{array} \right)
= \left( \begin{array}{c}
\phi^+ \\
\frac{\displaystyle 1}{\displaystyle \sqrt{2}} (v+h^0)
\end{array} \right)
\end{equation}
We can express the neutral field $h^0$ in terms of real components:
\begin{equation}
h^0 = H + i \chi.
\end{equation}
The conjugate of $\phi$ is defined as
\begin{equation}
\tilde{\phi} \equiv \left( \begin{array}{c}
\phi^{0*} \\
-\phi^-
\end{array} \right)
= \left( \begin{array}{c}
\frac{\displaystyle 1}{\displaystyle \sqrt{2}}
(v+h^{0*}) \\
-\phi^-
\end{array} \right)
\end{equation}
Masses for ordinary quarks arise from gauge invariant Yukawa
couplings:
\begin{equation}
- f_d^{ij} \, \bar{\psi}^i_{\scriptscriptstyle L} d^{j}_{\scriptscriptstyle R} \phi
- f_u^{ij} \, \bar{\psi}^i_{\scriptscriptstyle L} u^{j}_{\scriptscriptstyle R} \tilde{\phi}
- f_d^{ij*} \, \phi^{\dagger} \bar{d}^{j}_{\scriptscriptstyle R} \psi^i_{\scriptscriptstyle L}
- f_u^{ij*} \, \tilde{\phi}^{\dagger} \bar{u}^{j}_{\scriptscriptstyle R} \psi^i_{\scriptscriptstyle L}
\end{equation}
In addition, gauge invariant Yukawa couplings between vector
quarks and ordinary quarks are possible, which give rise to mixing
between quarks of the same charge.
For the model we are considering, these are:
\begin{equation}
- f_d^{i4} \, \bar{\psi}^i_{\scriptscriptstyle L} D_{\scriptscriptstyle R} \phi
- f_u^{i4} \, \bar{\psi}^i_{\scriptscriptstyle L} U_{\scriptscriptstyle R} \tilde{\phi}
- f_d^{i4*} \, \phi^{\dagger} \bar{D}_{\scriptscriptstyle R} \psi^i_{\scriptscriptstyle L}
- f_u^{i4*} \, \tilde{\phi}^{\dagger} \bar{U}_{\scriptscriptstyle R} \psi^i_{\scriptscriptstyle L}
\end{equation}
In general, $U$ will mix with the up-type quarks and $D$ with down-type
quarks. It is thus convenient to put mixing quarks into a
four component column matrix:
\begin{equation}
(u_{\scriptscriptstyle L,R})_{\alpha} = \left[ \begin{array}{c}
u_{\scriptscriptstyle L,R} \\ c_{\scriptscriptstyle L,R}
\\ t_{\scriptscriptstyle L,R} \\ U_{\scriptscriptstyle L,R} \end{array} \right]_{\alpha}
\ \
(d_{\scriptscriptstyle L,R})_{\alpha} = \left[ \begin{array}{c}
d_{\scriptscriptstyle L,R} \\ s_{\scriptscriptstyle L,R} \\
b_{\scriptscriptstyle L,R} \\ D_{\scriptscriptstyle L,R} \end{array} \right]_{\alpha}
\end{equation}
where $\alpha=1,2,3,4$. All the Dirac
mass terms can then be collected into a matrix form:
\begin{equation}
\bar{d}^{\prime}_{L} {\cal M}_{d} d^{\prime}_{R} +
\bar{d}^{\prime}_{R} {\cal M}_{d}^{\dagger} d^{\prime}_{L}
\, \, \, \, {\rm and} \, \, \, \,
\bar{u}^{\prime}_{L} {\cal M}_{u} u^{\prime}_{R} +
\bar{u}^{\prime}_{R} {\cal M}_{u}^{\dagger} u^{\prime}_{L}.
\end{equation}
In this article, we use fields with prime to denote the weak
eigenstates and those without prime to denote mass eigenstates.
${\cal M}_{u,d}$ are $4 \times 4$ mass matrices.
Since all the right-handed quarks, including vector quark, are
isosinglet, we can use the right-handed chiral transformation to choose
the right handed quark basis so that $U_L,D_L$ do not have Yukawa coupling
to the ordinary right-handed quarks. In this basis,
${\cal M}_{d}$ and ${\cal M}_u$ can be written as
\begin{equation}
{\cal M}_{d} = \left(
\begin{array}{cc}
\hat{M}_{d} & \vec{J}_d \\
0 & M_{\scriptscriptstyle D}
\end{array}
\right), \ \
{\cal M}_{u} = \left(
\begin{array}{cc}
\hat{M}_{u} & \vec{J}_u \\
0 & M_{\scriptscriptstyle U}
\end{array}
\right).
\end{equation}
with
\begin{equation}
\hat{M}_{u,d} = \frac{v}{\sqrt{2}} f_{u,d} , \,\, \,\,
\vec{J}_{u,d}^i = \frac{v}{\sqrt{2}} f_{u,d}^{i4}
\end{equation}
$\hat{M}_{d,u}$ (with hats) are the standard $3
\times 3$ mass matrices for ordinary quarks. $\vec{J}_{d,u}$ is the
three component column matrix which determines the mixings between ordinary
and vector quarks.
We assume that the bare masses $M_{\scriptscriptstyle U,D}$ are much larger than $M_W$.
With $M_{\scriptscriptstyle U,D}$ factored out, ${\cal M}_{d,u}$ can be expressed in
terms of small dimensionless parameters $a,b$:
\begin{equation}
{\cal M}_{d} = M_{\scriptscriptstyle D} \left(
\begin{array}{cc}
\hat{a}_d & \vec{b}_d \\
0 & 1
\end{array}
\right), \ \
{\cal M}_{u} = M_{\scriptscriptstyle U} \left(
\begin{array}{cc}
\hat{a}_u & \vec{b}_u \\
0 & 1
\end{array}
\right).
\end{equation}
The mixing matrix $U^{u,d}_L$ of the left-handed quarks and
the corresponding one $U^{u,d}_R$ for right-handed quarks, defined as,
\begin{equation}
u'_{\scriptscriptstyle L,R}= U^{u}_{\scriptscriptstyle L,R} u_{\scriptscriptstyle L,R}, \ \
d'_{\scriptscriptstyle L,R}= U^{d}_{\scriptscriptstyle L,R} d_{\scriptscriptstyle L,R},
\end{equation}
are the matrices that diagonalize ${\cal M}_{u,d} {\cal
M}_{u,d}^{\dagger}$ and
${\cal M}^{\dagger}_{u,d} {\cal M}_{u,d}$ respectively.
Hence the mass matrices can be expressed as
\begin{equation}
{\cal M}_u = U^{u}_{\scriptscriptstyle L} m_u U^{u \dagger}_{\scriptscriptstyle R} \,\,\,\,
{\cal M}_d = U^{d}_{\scriptscriptstyle L} m_d U^{d \dagger}_{\scriptscriptstyle R}
\end{equation}
with $m_{u,d}$ the diagonalized mass matrices.
The diagonalization can be carried out order by order in perturbation
expansion with respect to small numbers $\hat{a}$ and $\vec{b}$.
For isosinglet vector quark model, the right-handed quark mixings are
significantly smaller. The reason is that
$M^{\dagger}_{d} M_{d}$ is composed of elements suppressed by two powers
of $a$ or $b$ except for the $(4,4)$ element. As a result, the mixings of
$D_R$ with $d_{R}, s_{R}, b_{R}$ are also suppressed by two powers of $a$
or $b$. On the other hand, it
can be shown that the mixings between $D_L$ and $b_{L},s_{L},d_{L}$ are only
of first order in $a$ or $b$. To get leading order results in the
perturbation, one can assume that $U_R = I$.
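As an illustration of this suppression pattern, consider a hypothetical one-generation toy model in which a single down quark mixes with $D$, so that the mass matrix is $M_{\scriptscriptstyle D}$ times $[[a,b],[0,1]]$ (this toy example is ours, not from the text). The sketch below checks numerically that the left-handed mixing angle, obtained from diagonalizing ${\cal M}{\cal M}^{\dagger}$, is of order $b$, while the right-handed one, from ${\cal M}^{\dagger}{\cal M}$, is of order $ab$:

```python
import math

def mixing_angle(h11, h12, h22):
    # rotation angle that diagonalizes the real symmetric matrix
    # [[h11, h12], [h12, h22]]
    return 0.5 * math.atan2(2.0 * h12, h22 - h11)

a, b = 0.05, 0.02          # illustrative values of the small parameters

# M M^T = M_D^2 [[a^2 + b^2, b], [b, 1]]  -> left-handed mixing, O(b)
theta_L = mixing_angle(a * a + b * b, b, 1.0)
# M^T M = M_D^2 [[a^2, a b], [a b, 1 + b^2]]  -> right-handed mixing, O(a b)
theta_R = mixing_angle(a * a, a * b, 1.0 + b * b)
```

With these values $\theta_L \approx b = 0.02$ while $\theta_R \approx ab = 10^{-3}$, i.e. the right-handed mixing is suppressed by the extra factor $a$.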
For convenience, write $U_L$ as
\begin{equation}
U_L = \left( \begin{array}{cc}
\hat{K} & \vec{R} \\
\vec{S}^T & T
\end{array}
\right) \;.
\end{equation}
where $\hat{K}$ is a $3 \times 3$ matrix and $\vec{R},\vec{S}$ are three
component column matrices.
To leading order in $a$ and $b$, $T$ is equal to $1$ and $\hat{K}$ equals the unitary
matrix that diagonalizes $\hat{a} \hat{a}^{\dagger}$. The
columns $\vec{R}$ and $\vec{S}$, characterizing the mixing, are given by
\begin{equation}
\vec{R} = \vec{b}, \ \ \vec{S} = - \hat{K} \vec{b}.
\end{equation}
Now we can write down the various electroweak interactions in terms of
mass eigenstates.
The $Z$ coupling to the left-handed mass eigenstates are given by
\begin{eqnarray}
& & {\cal L}_Z = \frac{g}{\cos \theta_W} Z_{\mu}
(J_{3}^{\mu} - \sin^2 \theta_{W} J_{\rm em}^{\mu}), \\
& & J_{3}^{\mu}=\bar{u}'_{L}
T^{\rm w}_3 \gamma^{\mu} u'_{L} +
\bar{d}'_{L}
T^{\rm w}_3 \gamma^{\mu} d'_{L}
= \frac{1}{2} \bar{u}_{L} (z^u)
\gamma^{\mu} u_{L}-
\frac{1}{2} \bar{d}_{L} (z^d)
\gamma^{\mu} d_{L}
\end{eqnarray}
The $4 \times 4$ matrices $z$ are related to the mixing matrices by
\begin{eqnarray}
(z^{u}) & = & U_L^{u \dagger} a_{\scriptscriptstyle Z} U_L^{u}
\nonumber\\
(z^{d}) & = & U_L^{d \dagger} a_{\scriptscriptstyle Z} U_L^{d}.
\label{FCNC}
\end{eqnarray}
with $a_{\scriptscriptstyle Z} \equiv {\rm Diag} (1,1,1,0)$.
Note that the matrix $z$ is not diagonal.
Flavor Changing Neutral Current (FCNC) is generated by the mixings between
ordinary and vector quarks\cite{lavoura2,branco,silverman}.
The charged current interaction is given by
\begin{eqnarray}
& & {\cal L}_W = \frac{g}{\sqrt{2}} (W_{\mu}^- J^{\mu +}
+ W_{\mu}^+ J^{\mu -}), \\
& & J^{\mu -}=\bar{u}'_{L}
a_{\scriptscriptstyle W} \gamma^{\mu} d'_{L}
= \bar{u}_{L} V
\gamma^{\mu} d_{L}
\end{eqnarray}
where $a_{\scriptscriptstyle W} \equiv {\rm Diag} (1,1,1,a)$ is composed of the
Clebsch-Gordan coefficients of
the corresponding quarks. For an isosinglet vector quark,
$a=0$. The $4 \times 4$ generalized KM matrix $V$ is given by:
\begin{equation}
V= U_L^{u \dagger} a_{\scriptscriptstyle W} U_L^d.
\end{equation}
The standard $3 \times 3$ KM matrix $V_{\rm \sss KM}$ is the
upper-left submatrix of $V$.
Neither $V$ nor $V_{\rm \sss KM}$ is unitary.
Note that the non-unitarity of $V$ is captured by two matrices
\begin{eqnarray}
(V^{\dagger} V) & = &
U_L^{d \dagger} a^2_{\scriptscriptstyle W} U_L^d \nonumber \\
(V V^{\dagger}) & = &
U_L^{u \dagger} a^2_{\scriptscriptstyle W} U_L^u.
\label{KM}
\end{eqnarray}
In the model we are considering, these two matrices are identical to
the matrices $z^{u,d}$ of the FCNC effects in Eq.~\ref{FCNC},
since $a^2_{\scriptscriptstyle W}$ is equal to $a_{\scriptscriptstyle Z}$. Indeed
\begin{equation}
V^{\dagger} V = (z^d), \, \, \, \, \, \,
V V^{\dagger} = (z^u)
\end{equation}
This intimate relation between the non-unitarity of $W$ charge current and
the FCNC of $Z$ boson is important for maintaining the gauge
invariance of their combined contributions to any physical process.
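This relation is easy to check numerically. The pure-Python sketch below (our illustration, with random mixing matrices rather than fitted ones) builds random unitary $U_L^{u,d}$, forms $V = U_L^{u\dagger} a_{\scriptscriptstyle W} U_L^d$ with $a=0$, and verifies that $V^{\dagger}V$ coincides with $z^d = U_L^{d\dagger} a_{\scriptscriptstyle W}^2 U_L^d$:

```python
import math
import random

def dagger(m):
    n = len(m)
    return [[m[j][i].conjugate() for j in range(n)] for i in range(n)]

def matmul(p, q):
    n = len(p)
    return [[sum(p[i][k] * q[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def random_unitary(n):
    # Gram-Schmidt orthonormalization of random complex vectors;
    # the resulting rows form a unitary matrix.
    rows = []
    for _ in range(n):
        v = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(n)]
        for u in rows:
            c = sum(x.conjugate() * y for x, y in zip(u, v))
            v = [y - c * x for x, y in zip(u, v)]
        norm = math.sqrt(sum(abs(x) ** 2 for x in v))
        rows.append([x / norm for x in v])
    return rows

random.seed(1)
U_u, U_d = random_unitary(4), random_unitary(4)
a_W = [[1.0 + 0j if (i == j and i < 3) else 0j for j in range(4)] for i in range(4)]

V = matmul(dagger(U_u), matmul(a_W, U_d))       # V = U_u^dag a_W U_d
VdV = matmul(dagger(V), V)                      # non-unitarity of V
z_d = matmul(dagger(U_d), matmul(a_W, U_d))     # Z-boson FCNC matrix (a_W^2 = a_W)

err = max(abs(VdV[i][j] - z_d[i][j]) for i in range(4) for j in range(4))
```

The check also shows ${\rm tr}\,(V^{\dagger}V) = 3$ rather than $4$, a direct signal of the non-unitarity of $V$.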
The off-diagonal elements of these matrices, characterizing the
non-unitarity, are closely related to the mixing of ordinary and vector
quarks.
The off-diagonal elements are of order $a^2$ or $b^2$. To calculate them,
in principle, the next-to-leading order expansion of $\hat{K}$, denoted as
$\hat{K}_2$, is needed. In fact
\begin{equation}
(V^{\dagger} V)_{ij}=(\hat{K}^d_2+\hat{K}_2^{d\dagger})_{ij} +
a^2 (\vec{b}_d)_i (\vec{b}_d)_j^*
\end{equation}
Fortunately, by the unitarity of the mixing matrix $U^d$, the
combination $\hat{K}_2^d+\hat{K}_2^{d\dagger}$ is equal to
$-(\vec{b}_d)(\vec{b}_d)^{\dagger}$.
\begin{equation}
\hat{K}_2^d+\hat{K}_2^{d\dagger} = -(\vec{b}_d)(\vec{b}_d)^{\dagger}
\end{equation}
Thus the off-diagonal elements can be simplified to
\begin{equation}
(V^{\dagger} V)_{ij}=(-1+a^2)
(\vec{b}_d)_i (\vec{b}_d)_j^*
\end{equation}
For isosinglet vector quark, $a=0$.
The Yukawa couplings between Higgs fields and quarks in weak eigenstate
can be written in a matrix form as
\begin{equation}
-\frac{v}{\sqrt{2}} \left(
\bar{\psi}'_L a_{\scriptscriptstyle Z} {\cal M}_d d'_R \phi
+\bar{d}'_R {\cal M}_d^{\dagger} a_{\scriptscriptstyle Z} \psi'_L \phi^{\dagger}
+\bar{\psi}'_L a_{\scriptscriptstyle Z} {\cal M}_u u'_R \tilde{\phi}
+\bar{u}'_R {\cal M}_u^{\dagger} a_{\scriptscriptstyle Z} \psi'_L
\tilde{\phi}^{\dagger} \right)
\end{equation}
Note that $a_{\scriptscriptstyle Z}$ is added to ensure that the left-handed
isosinglet vector quarks do not participate in the Yukawa couplings.
The Yukawa interactions of quark mass eigenstates
and unphysical charged Higgs
fields $\phi^{\pm}$ are given by
\begin{equation}
{\cal L}_{\phi^{\pm}} =
\frac{g}{\sqrt{2} M_W} \left[ \bar{u} (m_u V L -
V m_d R) d \right] \phi^{+} +
\frac{g}{\sqrt{2} M_W} \left[ \bar{d} (-m_d V^{\dagger} L +
V^{\dagger} m_u R) u \right] \phi^{-}
\end{equation}
while those of Higgs boson $H$ and unphysical neutral Higgs field $\chi$ by
\begin{eqnarray}
{\cal L}_{H} & = &
- \frac{g}{2 M_W} \left[ \bar{d} (m_d z^d L + z^d m_d R )d +
\bar{u} (m_u z^u L+z^u m_u R) u \right] H \\
{\cal L}_{\chi} & = &
- \frac{ig}{2 M_W} \left[ \bar{d} (- m_d z^d L + z^d m_d R) d +
\bar{u} (m_u z^u L- z^u m_u R ) u \right] \chi^0
\end{eqnarray}
\section{$B$ Meson Radiative Decay}
The $B \to X_s \gamma$ decay, which already arises at one loop via $W$-exchange
diagrams in SM, is known to be extremely sensitive to the structure of
fundamental interactions at the electroweak scale and serves as a good
probe of new physics beyond SM, because new interactions can generically
give rise to significant contributions at the one-loop level.
The inclusive $B \to X_s \gamma$ decay is especially interesting.
In contrast to exclusive decay modes, it is theoretically clean in the
sense
that no specific low energy hadronic model is needed to describe the
decays.
As a result of the Heavy Quark Effective Theory (HQET), the
inclusive $B$ meson decay width $\Gamma(B \to X_s \gamma)$ can be well
approximated by the corresponding $b$ quark decay width
$\Gamma(b \to s \gamma)$. The corrections to this approximation are
suppressed by $1/m_b^2$ \cite{chay} and are estimated to contribute
well below $10\%$ \cite{falk,misiak}.
This numerical limit is supposed to hold even for the recently discovered
non-perturbative contributions which are
suppressed by $1/m_c^2$ instead of $1/m_b^2$ \cite{voloshin}.
In the following, we focus on the dominant quark decay $b \to s \gamma$.
In SM, $b\rightarrow s\gamma$ arises at the one loop
level from the various $W$ mediated penguin diagrams.
The number of diagrams needed to be considered can be reduced by
choosing the non-linear ${\rm R}_{\xi}$ gauge as in \cite{deshpande}.
In this gauge, the tri-linear coupling involving photon, $W$ boson and the
unphysical Higgs field $\phi^+$ vanishes. Therefore only the four diagrams
shown in Fig.~1 contribute.
\begin{figure}[ht]
\begin{center}
\leavevmode
\epsfbox{bsg-fig-w.eps}
\caption{Charged boson mediated penguin.}
\end{center}
\end{figure}
The on-shell Feynman amplitude can be written as
\begin{equation}
i {\cal M}(b \rightarrow s \gamma) = \frac{\sqrt{2} G_F}{\pi}
{e\over 4\pi}
\sum_{i} V_{ib} V^*_{is} F_2(x_i) q^{\mu} \epsilon^{\nu}
\bar{s}\sigma_{\mu \nu}(m_b R+m_s L)b
\end{equation}
with $x_i \equiv m_i^2/M_W^2$. The sum is over the quarks $u$, $c$ and $t$.
The contributions to $F_2$ from the four diagrams are denoted as
$f^{W,\phi}_{1,2}$,
with the subscript $1$ used to denote the diagrams with
photon emitted from internal quark and $2$ those with photon
emitted from $W$ boson.
The functions $f$'s are given by
\begin{eqnarray}
f^W_1(x)&=& e_i\left[\xi_0(x)-\hbox{$3\over2$}\xi_1(x)
+\hbox{$1\over2$}\xi_2(x)
\right] \ ,\\
f^W_2(x)&=& \xi_{-1}(x)-\hbox{$5\over2$}\xi_0(x)
+2\xi_1(x)-\hbox{$1\over2$}\xi_2(x) \ ,\\
f^{\phi}_1(x)&=& \hbox{$1\over4$}e_i x \left[ \xi_1(x) + \xi_2(x)\right] \ ,\\
f^{\phi}_2(x)&=& \hbox{$1\over4$} x \left[\xi_0(x)-\xi_2(x)\right] \ .
\end{eqnarray}
Here the functions $\xi(x)$ are defined as
\begin{equation}
\xi_n(x) \equiv \int^1_0 \frac{z^{n+1}{\rm d}z}{1+(x-1)z}
=-{ {\rm ln}x+(1-x)+\cdots+{(1-x)^{n+1}\over n+1} \over (1-x)^{n+2} }
\ ,
\end{equation}
and
\begin{equation}
\xi_{-1}(x) \equiv \int^1_0 \frac{{\rm d}z}{1+(x-1)z}
=-{ {\rm ln}x\over 1-x}
\end{equation}
$F_2(x)$ is the sum of these functions and is given by
\begin{equation} \label{F2}
F_2(x)= f^W_1(x) + f^W_2(x) + f^{\phi}_1(x) +f^{\phi}_2(x) =
\frac{8x^3+5x^2-7x}{24(1-x)^3}-
\frac{x^2(2-3x)}{4(1-x)^4} \ln x + \frac{23}{36}
\end{equation}
For light quarks such as $u$ and $c$, with $x_i \to 0$, the first two
terms on the right hand side are small and can be ignored. $F_2(x_{u,c})$ is dominated by
the $x$ independent term ${23\over36}$.
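As a cross-check, the sketch below (our illustration, assuming the internal-quark charge $e_i = 2/3$ appropriate for the up-type quarks in $f^W_1$ and $f^{\phi}_1$) verifies numerically that the closed forms for $\xi_n$ reproduce their defining integrals and that the four diagram functions sum to the closed form of $F_2$:

```python
import math

def xi(n, x):
    # closed form of xi_n(x); the n = -1 case is handled separately
    if n == -1:
        return -math.log(x) / (1.0 - x)
    s = math.log(x) + sum((1.0 - x) ** (k + 1) / (k + 1) for k in range(n + 1))
    return -s / (1.0 - x) ** (n + 2)

def xi_numeric(n, x, steps=100000):
    # midpoint rule for the defining integral  int_0^1 z^(n+1) dz / (1+(x-1)z)
    h = 1.0 / steps
    return h * sum(((i + 0.5) * h) ** (n + 1) / (1.0 + (x - 1.0) * (i + 0.5) * h)
                   for i in range(steps))

def F2_from_diagrams(x, e=2.0 / 3.0):
    fW1 = e * (xi(0, x) - 1.5 * xi(1, x) + 0.5 * xi(2, x))
    fW2 = xi(-1, x) - 2.5 * xi(0, x) + 2.0 * xi(1, x) - 0.5 * xi(2, x)
    fp1 = 0.25 * e * x * (xi(1, x) + xi(2, x))
    fp2 = 0.25 * x * (xi(0, x) - xi(2, x))
    return fW1 + fW2 + fp1 + fp2

def F2_closed(x):
    return ((8.0 * x**3 + 5.0 * x**2 - 7.0 * x) / (24.0 * (1.0 - x) ** 3)
            - x**2 * (2.0 - 3.0 * x) / (4.0 * (1.0 - x) ** 4) * math.log(x)
            + 23.0 / 36.0)
```

In particular $F_2(x) \to 23/36$ as $x \to 0$, confirming the statement above for the light internal quarks.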
However, these mass-independent terms cancel among the up-type
quarks due to the unitarity of the KM matrix in SM:
\begin{equation}
\sum_{i} V_{ib} V^*_{is} = 0
\end{equation}
After the cancelation, the remaining contributions
are essentially from penguins with internal $t$ quark.
It is convenient to discuss weak decays using the effective
Hamiltonian formalism \cite{buras,grinstein}, which is crucial for
incorporating the QCD corrections to be discussed later.
In this formalism, the $W$ and $Z$ bosons are integrated out at
matching boundary $M_{W}$. Their effects are compensated by
non-renormalizable effective Hamiltonian operators.
The important dimension-six operators relevant for $b\rightarrow s \gamma$
include the following 12 operators:
\begin{equation}
H_{\rm eff}=-\frac{G_F}{\sqrt{2}} V_{ts}^*V_{tb} \sum_{i=1}^{12}
C_i(\mu) Q_i
\quad,
\end{equation}
\begin{eqnarray}
Q_1 & = & (\bar{c}_{\alpha} b_{\beta})_{V-A}
(\bar{s}_{\beta} c_{\alpha})_{V-A}
\nonumber \quad,\\
Q_2 & = & (\bar{c}_{ \alpha} b_{ \alpha})_{V-A}
(\bar{s}_{\beta} c_{ \beta})_{V-A} \nonumber \\
Q_3 & = & (\bar{s}_{\alpha} b_{\alpha})_{V-A}
\sum_q (\bar{q}_{ \beta} q_{\beta})_{V-A} \nonumber\\
Q_4 & = & (\bar{s}_{ \alpha} b_{ \beta})_{V-A}
\sum_q (\bar{q}_{ \beta} q_{ \alpha})_{V-A}\nonumber\\
Q_5 & = & (\bar{s}_{\alpha} b_{\alpha})_{V-A}
\sum_q (\bar{q}_{ \beta} q_{\beta})_{V+A}\nonumber \\
Q_6 & = & (\bar{s}_{\alpha} b_{ \beta})_{V-A}
\sum_q (\bar{q}_{ \beta} q_{ \alpha})_{V+A}\nonumber \\
Q_{7}&=& \frac{3}{2} (\bar s_{\alpha} b_{ \alpha})_{V-A}
\sum_q e_q (\bar q_{\beta} q_{ \beta} )_{V+A}
\nonumber\\
Q_{8}&=& \frac{3}{2} (\bar s_{\alpha} b_{ \beta})_{V-A}
\sum_q e_q (\bar q_{\beta} q_{\alpha})_{V+A}
\nonumber\\
Q_{9} &=& \frac{3}{2} (\bar s_{\alpha} b_{\alpha})_{V-A}
\sum_q e_q (\bar q_{ \beta} q_{ \beta} )_{V-A}
\nonumber\\
Q_{10} &=& \frac{3}{2} (\bar s_{\alpha} b_{\beta})_{V-A}
\sum_q e_q (\bar q_{\beta} q_{\alpha})_{V-A} \nonumber \\
Q_{\gamma} & = & \frac{e}{8\pi^2}
\bar{s}_{ \alpha} \sigma^{\mu \nu}
[m_b (1+\gamma_5)+m_s (1-\gamma_5)]b_{\alpha}
F_{\mu\nu} \nonumber\\
Q_{G} & = & \frac{g_{\rm s}}{8\pi^2}
\bar{s}_{ \alpha} \sigma^{\mu \nu}
[m_b (1+\gamma_5)+m_s (1-\gamma_5)] (T^A_{\alpha\beta}) b_{\beta}
G^A_{\mu\nu}
\end{eqnarray}
In the Standard Model, the electroweak penguin
operators $Q_{7},\ldots ,Q_{10}$ are not necessary for a leading order
calculation in $O(\alpha)$. However, we will show later that in the vector
quark model, FCNC effects exist as a linear combination of
$Q_7,\ldots,Q_{10}$. This effect will mix with $Q_{\gamma}$
through RG evolution.
The Wilson coefficients $C_i$ at $\mu=M_W$ are determined
by the matching
conditions when $W$, $Z$ bosons and $t$ quark are integrated out.
To the zeroth order of $\alpha_s$ and $\alpha$,
the only non-vanishing Wilson coefficients at $\mu=M_W$ for
the above set are $C_{2},C_{\gamma},C_G$. $C_2$ is given by
\begin{equation}
C_2(M_W) = -V_{cs}^{\ast }V_{cb}/V_{ts}^{\ast }V_{tb}.
\end{equation}
It is equal to one if the KM matrix is unitary and
$V_{us}^{\ast }V_{ub}$ is ignored.
$C_{\gamma}$ at the scale $M_W$
is given by the earlier penguin calculations
\begin{equation}
C_{\gamma}^{\rm SM}(M_{W})= \frac{1}{V_{ts}^{\ast }V_{tb}}
\sum_{i} V_{ib} V^*_{is} F_2(x_i) =
-\frac{1}{2} D'_0(x_{t})
\simeq -0.193 \quad\quad.
\end{equation}
The numerical value corresponds to $m_t=170$ GeV.
Here the function $D'_0$ is
defined as \cite{buras,inami}
\begin{equation}
D'_0(x) \equiv - \frac{8x^3+5x^2-7x}{12(1-x)^3}+
\frac{x^2(2-3x)}{2(1-x)^4} \ln x.
\end{equation}
$C_{\gamma}$ retains only the mass-dependent contribution from
the penguin diagram with an internal $t$ quark.
The mass-independent terms cancel, due to unitarity, among the
three internal up-type quarks. The mass-dependent contributions
from the penguin diagrams with internal $u$ and $c$ quarks
(they are small anyway) appear both
in the high energy and low energy theories and get canceled in the
matching procedure.
Similarly, in SM the $b \to s g$ transition arises from $W$
exchange penguin diagrams which induce $Q_{G}$.
Since the gluons do not couple to $W$ bosons, the gluonic $W$ boson
penguin consists of only two diagrams, which are given by $f^{W,\phi}_1$
with the charge factor $e_i$ replaced by one. With the mass-independent
contribution canceled, the Wilson coefficient $C_G$ can be written as
\begin{equation}
C_{G}^{\rm SM}(M_{W})=-\frac{1}{2} E'_0(x_{t})\simeq -0.096 \ .
\end{equation}
The function $E'_0$ is defined as \cite{inami}
\begin{equation}
E'_0(x) \equiv - \frac{x(x^2-5x-2)}{4(1-x)^3}+
\frac{3}{2} \frac{x^2}{(1-x)^4} \ln x.
\end{equation}
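A corresponding check of $E'_0$ (same assumed $M_W \simeq 80.33$ GeV) gives a negative matching value, $C_G^{\rm SM}(M_W)\simeq -0.096$, the sign that enters the combined vector-quark result later in the section.

```python
from math import log

def E0p(x):
    # E'_0(x) exactly as defined in the text
    return (-x*(x**2 - 5*x - 2) / (4*(1 - x)**3)
            + 1.5*x**2*log(x) / (1 - x)**4)

mt, MW = 170.0, 80.33        # GeV; the M_W value is an assumption
xt = (mt/MW)**2
CG_SM = -0.5*E0p(xt)         # close to -0.096 (note the sign)
```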
It is well known that short-distance QCD corrections are important for
the $b \to s \gamma$ decay and in fact enhance the decay rate by more than a
factor of two. These QCD corrections can be attributed to logarithms of
the form $\alpha_s^n(m_b) \log^m(m_b/M_W)$. The Leading Logarithmic
Approximation (LLA) resums the LL series ($m = n$).
Working to next-to-leading-log (NLL) order means that we also resum all the
terms of the form $\alpha_s(m_b) \alpha_s^n(m_b) \log^n(m_b/M_W)$.
The QCD corrections
can be incorporated simply by running the renormalization
scale from the matching scale $\mu=M_W$ down to $m_{b}$ and then calculating
the Feynman amplitude at the scale $m_b$.
The anomalous dimensions for the RG running
have been found to be scheme dependent even to LL order,
depending on how $\gamma_5$ is defined in the dimensional
regularization scheme.
It has also been noticed \cite{ciuchini} that the one-loop matrix elements of
$Q_5,Q_6$
for $b \rightarrow s \gamma$ are also regularization-scheme dependent.
The matrix elements of $Q_{5,6}$ vanish in any four-dimensional
regularization scheme and in
the 't Hooft-Veltman (HV) scheme
but are non-zero in the naive dimensional regularization (NDR)
scheme. This dependence exactly cancels the
scheme dependence in the anomalous
dimensions and renders a scheme-independent prediction.
We refer to
\cite{buras,ciuchini} for a review and details.
In the following, we choose to use the HV scheme.
\footnote{It is customary in the literature to introduce \cite{pokorski}
the scheme-independent ``effective coefficients'' for $Q_{\gamma},Q_{G}$.
These coefficients are defined so that $Q_{\gamma},Q_{G}$ are
the only operators with non-zero one-loop matrix elements for the process
$b \rightarrow s \gamma (g)$.
The ``effective
coefficients'' are hence identical to the original
Wilson coefficients
in the HV scheme.}
In the HV scheme,
only $Q_{\gamma}$ has a non-vanishing matrix
element between $b$ and $s \gamma$
to leading order in $\alpha_s(m_b)$.
Thus we only need $C_{\gamma}(m_b)$ to
calculate the LLA of $b \to s \gamma$
decay width. For $m_t=170$ GeV, $m_b=5$ GeV,
$\alpha_s^{(5)}(M_Z)=0.117$ and in the HV scheme,
$C_{\gamma}(m_b)$ is related to the non-zero Wilson coefficients at $M_W$ by
\cite{buras,pokorski,ciuchini}
\[
C_{\gamma}(m_b)=0.698\,C_{\gamma}(M_{W})+0.086\,C_{G}(M_{W})
-0.156\,C_{2}(M_{W}).
\]
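Plugging in the SM matching values gives the leading-log $C_\gamma(m_b)$; this sketch takes $C_\gamma(M_W)=-0.193$, $C_G(M_W)=-0.096$ and $C_2(M_W)=1$ as inputs.

```python
# LLA combination of Wilson coefficients at m_b,
# using the SM matching values quoted above
C7_MW, CG_MW, C2_MW = -0.193, -0.096, 1.0
C7_mb = 0.698*C7_MW + 0.086*CG_MW - 0.156*C2_MW   # about -0.299
```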
The $b \to s \gamma$ amplitude is given by
\begin{equation}
{\cal M}(b \to s \gamma) = - V_{tb} V^{*}_{ts} {G_F \over \sqrt{2}}
C_{\gamma}(m_b) \langle Q_{\gamma} \rangle_{\rm tree}
\end{equation}
To avoid the uncertainty in $m_b$, it is
customary to calculate the ratio $R$ between the radiative decay and the
dominant semileptonic decay.
The ratio $R$ is given, to LLA, by \cite{pokorski}
\begin{equation}
R \equiv \frac{\Gamma(b \rightarrow s \gamma)}
{\Gamma(b \rightarrow c e \bar{\nu}_{e})}
=\frac{1}{\left| V_{cb}\right|^{2}}\frac{6\alpha}{\pi g(z)}
\left| V_{ts}^{\ast}V_{tb}C_{\gamma}(m_b)\right|^{2}. \label{RR}
\end{equation}
Here the function $g(z)$ of $z=m_c/m_b$ is defined as
\begin{equation}
g(z)=1-8z^2+8z^6-z^8-24z^4 \ln z
\end{equation}
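For a typical ratio $z = m_c/m_b \simeq 0.29$ (an assumed value, not stated in the text), the phase-space factor evaluates to about $0.54$:

```python
from math import log

def g(z):
    # semileptonic phase-space factor as defined in the text
    return 1 - 8*z**2 + 8*z**6 - z**8 - 24*z**4*log(z)

gz = g(0.29)    # about 0.54 for an assumed m_c/m_b = 0.29
```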
In the vector quark model, deviations from the SM result
come from various sources: (1) charged-current KM matrix
non-unitarity, (2) Flavor Changing Neutral Current (FCNC) effects in
neutral-boson-mediated penguin diagrams, and (3) the $W$ penguin with an
internal heavy $U$ vector quark. Since the last one can be
incorporated quite straightforwardly, we do not elaborate on this
contribution, which is not relevant for models without the $U$
quark.
We concentrate on the first two contributions, which have been
discussed in Refs.\cite{brancobsg,handoko,barger}. Here we make a more
careful and complete analysis which supplements or corrects these earlier
analyses.
Refs.\cite{brancobsg}
calculated effects due to non-unitarity of the KM
matrix and effects due to the $Z$-mediated penguin in the Feynman
gauge; however, their analysis did not
include the FCNC contribution from the unphysical neutral Higgs boson,
which is necessary for gauge invariance. The Higgs-boson-mediated
penguins were also ignored. On the other hand,
Ref.\cite{handoko}, while taking the unphysical Higgs boson into account,
did not consider effects due to non-unitarity of the KM matrix,
which give the most important contribution.
None of the above treatments, except Ref.\cite{barger},
included QCD corrections.
Our strategy is first to integrate out the vector quark together with $W$ and
$Z$ bosons at the scale $M_W$.
As shown above, the KM matrix is not unitary in the
presence of an isosinglet vector quark.
Therefore the mass-independent contributions in Eq. (\ref{F2})
from the magnetic penguin diagrams with various up-type
quarks no longer cancel.
This contribution
is related to the short-distance part of the loop integration, i.e.\ when the
loop momenta are large so that
the quark mass appearing in the propagator can be ignored.
In the formalism of the low-energy effective Hamiltonian,
it can be shown that
these mass-independent contributions never arise if the theory is renormalized
using DRMS or similar schemes.
By dimensional analysis, it is clear that the corresponding
diagrams calculated using effective Hamiltonians are always proportional
to the square of the loop quark mass.
When we match the two calculations at the $M_{W}$ scale, the
mass-independent contributions
must be compensated by new terms in Wilson coefficients.
This is consistent with the notion that the effective field theory formalism
separates the short
distance physics encoded in Wilson coefficients from
the long distance physics parameterized by matrix elements of the effective
Hamiltonian. Such separation enables us to calculate effects of
new physics by simply calculating Wilson coefficients perturbatively
at the matching boundary. The matching results serve as
initial conditions when Wilson coefficients
run to a lower scale by renormalization group.
Since the vector quarks have been integrated out, the anomalous dimensions
are not affected by the new physics.
Following this procedure, we calculate the extra contributions to the
Wilson coefficient $C_{\gamma}$ from non-unitarity:
\begin{equation}
\frac{(V^{\dagger} V)_{23}}
{V_{ts}^{*}V_{tb}} \, \frac{23}{36}
=\frac{\delta}{V_{ts}^{*}V_{tb}} \, \frac{23}{36} \ .
\label{eq:nonu}
\end{equation}
The parameter $\delta$, one of the off-diagonal elements of the matrix
$V^{\dagger} V$, characterizes the
non-unitarity:
\begin{equation}
\delta = (V^{\dagger} V)_{23}=z_{sb}
\label{eq:delta_z}
\end{equation}
The $b \rightarrow s \gamma$ transitions also arise from FCNC $Z$-boson-
and Higgs-boson-mediated penguin diagrams, as in Fig.~2.
The FCNC contribution to $C_{\gamma}(M_W)$ can be written as
\begin{equation}
\frac{z_{sb}}{V_{tb}V^*_{ts}}(f_{s,b}^Z+f_{s,b}^{\chi}+f_{s,b}^{H})+
\frac{z_{4b}z^{\ast}_{4s}}{V_{tb}V^*_{ts}}(f_D^Z+f_D^{\chi}+f_D^{H})
\end{equation}
For the sake of gauge invariance, $f^Z$ needs to be considered together
with
$f^{\chi}$.
The $Z$-boson penguins involve internal charge $-{1\over3}$ quarks.
The contribution from an internal quark $i=b,s$, $f^Z_{s,b}$, is given by
($y_i \equiv m_i^2/M_Z^2$):
\begin{eqnarray}
f^Z_b &=& -\hbox{$1\over2$} e_d \left\{ (-\hbox{$1\over2$}-e_d \sin^2 \theta_W)
\left[\,2\xi_0(y_b)-3\xi_1(y_b)+\xi_2(y_b)\,\right] \right. \nonumber \\
& & \left.\,\,\,\,\,\,\,\, + \, e_d \sin^2 \theta_W
\left[\,4\xi_0(y_b)-4\xi_1(y_b)\,\right] \right\} \\
f^Z_s &=& -\hbox{$1\over2$} e_d
\left\{ ( -\hbox{$1\over2$}-e_d \sin^2 \theta_W)
\left[\,2\xi_0(y_s)-3\xi_1(y_s)+\xi_2(y_s)\,\right] \right. \nonumber \\
& & \left.\,\,\,\,\,\,\,\, + \, \frac{m_s}{m_b} e_d \sin^2 \theta_W
\left[\,4\xi_0(y_b)-4\xi_1(y_b)\,\right]
\right\}
\end{eqnarray}
The last term in $f^Z_s$ has a mass insertion in the
$s$ quark line. It is suppressed by $m_s/m_b$ and will be ignored.
\begin{figure}[ht]
\begin{center}
\leavevmode
\epsfbox{bsg-fig-z.eps}
\caption{Neutral boson mediated penguin diagrams.}
\end{center}
\end{figure}
The calculation is similar to that of $f^W$.
For a consistent approximation, the two variables $y_b$ and $y_s$,
which are the ratios of $m_b^2,m_s^2$ to $M_Z^2$, are also set to zero.
Hence
\begin{eqnarray}
f^Z_b+f^Z_s &\approx& -\hbox{$1\over2$} {e_d}
\left\{ (-\hbox{$1\over2$} - e_d \sin^2 \theta_W)
\left[\,4\xi_0(0)-6\xi_1(0)+2\xi_2(0)\,\right] \right. \nonumber \\
& & \left. \,\,\,\,\,\,\,\, + e_d \sin^2 \theta_W
\left[\,4\xi_0(0)-4\xi_1(0)\,\right]
\right\} \\ &=& -\frac{1}{9}-\frac{1}{27} \sin^2 \theta_W \simeq -0.12
\end{eqnarray}
The $Z$-mediated penguin diagram with internal $D$ quark can also be
calculated.
\begin{equation}
f^Z_D = {1\over4} e_d
\left[\,2\xi_0(y_D)-3\xi_1(y_D)+\xi_2(y_D)\,\right]
\rightarrow -{5\over 72 y_D} +O(\frac{1}{y_D^2}) \ .
\end{equation}
It approaches zero as $y_D \rightarrow \infty$, and thus
$f^Z_D$ is negligible in the large-$y_D$ limit.
For a gauge invariant result, the unphysical neutral Higgs $\chi$ mediated
penguin needs to be considered together with the $Z$ boson penguin.
In the non-linear Feynman gauge we have chosen, the mass of $\chi$ is
equal to $M_Z$. The calculation is very similar to the $\phi^{\pm}$ penguin.
For internal $s,b,D$ quarks, the contributions
$f^{\chi}_{s,b,D}$ are given by
\begin{eqnarray}
f^{\chi}_{i} &=& \frac{e_d}{8}
y_{i} \left[\,\xi_1(y_{i})+\xi_2(y_{i})\,\right] \nonumber \\
& = & - \frac{e_d}{8} y_i \left[(2-y_i)\ln y_i -
\frac{5}{6} y_i^3+ 4 y_i^2- \frac{13}{2} y_i+\frac{10}{3} \right]
\frac{1}{(1-y_i)^4}
\end{eqnarray}
The light-quark contributions are obviously suppressed by the
light quark masses and thus negligible.
The situation is quite different for the heavy $D$ quark:
for $y_D \to \infty$, $f^{\chi}_D \to -{5\over144} \simeq
-0.035$.
This contribution, comparable to the $Z$-mediated penguin $f^Z$ from
light quarks, has been \emph{overlooked} in previous
calculations\cite{brancobsg}.
Since the quark $D$ may not be much heavier than the $Z$ boson, we expand
$f^{\chi}_D$ in powers of $1/y_D$ and also keep the next-to-leading term.
\begin{equation}
f^{\chi}_D \approx - \frac{5}{144} + \frac{1}{36} \frac{1}{y_D} +
O(\frac{1}{y_D^2}) \ .
\end{equation}
The Higgs boson $H$ mediated penguin is similar to that of unphysical
Higgs $\chi$:
\begin{eqnarray}
f^{H}_{i} &=& - \frac{e_d}{8}
w_{i} \left[3 \xi_1(w_{i})-\xi_2(w_{i})\right] \nonumber \\
& = & - \frac{e_d}{8} w_i \left[(-2+3w_i)\ln w_i
+\frac{7}{6} w_i^3-6 w_i^2+ \frac{15}{2}w_i-\frac{8}{3} \right]
\frac{1}{(1-w_i)^4}
\end{eqnarray}
where $w_i \equiv m_i^2 / M_H^2$. Similar to the $\chi$ penguin,
$f^H_{s,b}$ can be ignored since $m_s,m_b \ll M_H$.
For $f^H_D$, we again expand in powers of $1/w_D$ and keep up to the
next-to-leading term:
\begin{equation}
f^H_D \approx + \frac{7}{144} - \frac{1}{18} \frac{1}{w_D} +
O(\frac{1}{w_D^2})
\end{equation}
The leading term is $+0.048$, again comparable to the $Z$ penguin.
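The $1/y_D$ and $1/w_D$ expansions can be checked against the closed forms quoted above; a short numerical comparison (the test point $y=w=50$ is arbitrary):

```python
from math import log

ed = -1.0/3.0   # down-type quark charge

def f_chi(y):
    # closed form for the unphysical-Higgs penguin quoted above
    return (-ed/8*y*((2 - y)*log(y) - 5*y**3/6 + 4*y**2
                     - 13*y/2 + 10.0/3) / (1 - y)**4)

def f_H(w):
    # closed form for the physical-Higgs penguin quoted above
    return (-ed/8*w*((-2 + 3*w)*log(w) + 7*w**3/6 - 6*w**2
                     + 15*w/2 - 8.0/3) / (1 - w)**4)

y = 50.0                           # arbitrary test point
chi_exp = -5.0/144 + 1/(36*y)      # quoted expansion of f^chi_D
H_exp = 7.0/144 - 1/(18*y)         # quoted expansion of f^H_D
```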
Put together, the Wilson coefficient $C_{\gamma}(M_W)$ in the vector quark model
is given by
\begin{eqnarray}
&&\quad\quad C_{\gamma}(M_W) =
C_{\gamma}^{\rm SM}(M_W) + \frac{\delta}{V_{tb}V^*_{ts}}\frac{23}{36}
+\frac{z_{sb}}{V_{tb}V^*_{ts}}(f_s^Z+f_s^{\chi}+f_s^{H}
+f_b^Z+f_b^{\chi}+f_b^{H})
\nonumber\\
&& \hskip 3in
+\frac{z_{4b}z_{4s}^*}{V_{tb}V^*_{ts}}
(f_D^Z+f_D^{\chi}+f_D^{H})
\nonumber\\
&& = C_{\gamma}^{\rm SM}(M_W)
+ \frac{z_{sb}}{V_{tb}V^*_{ts}} \, \left( \frac{23}{36} -
\frac{1}{9} - \frac{1}{27} \sin^2 \theta_W
+ {5\over 72 y_D} + \frac{5}{144} - \frac{1}{36} \frac{1}{y_D}
- \frac{7}{144} +\frac{1}{18} \frac{1}{w_D} \right)
\nonumber\\
&&
\rightarrow
-0.193 + \frac{z_{sb}}{V_{tb}V^*_{ts}} \times 0.506 \quad .
\end{eqnarray}
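The factor $0.506$ can be reproduced from the bracket in the $y_D, w_D \to \infty$ limit; the value $\sin^2\theta_W = 0.23$ used here is an assumed input.

```python
s2w = 0.23   # assumed sin^2(theta_W)
# bracket multiplying z_sb, with y_D, w_D -> infinity
coef = 23.0/36 - 1.0/9 - s2w/27 + 5.0/144 - 7.0/144   # about 0.506
```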
Here we have used $\delta=z_{sb}$ from Eq. (\ref{eq:delta_z}) and the relation
$z_{4b} z^*_{4s} = -|U_{44}|^2 z_{sb} \approx -z_{sb}$, valid to leading
order in the FCNC couplings due to the unitarity of $U_L^d$.
In the above numerical estimate we took $y_D$ and $w_D$ to infinity.
Similarly, the Wilson coefficient of the gluonic magnetic-penguin operator
$Q_{G}$ is modified by the vector quark.
In the vector quark model, the mass-independent term gives an extra
contribution ${1\over3} \delta$ if the KM matrix is
non-unitary\cite{deshpande}.
The FCNC neutral-boson-mediated gluonic magnetic penguin diagrams are
identical to those of the photonic magnetic penguin, up to a trivial
replacement of $Q_d$ by color factors, since photons and gluons do
not
couple to neutral bosons.
$C_G(M_W)$ in the vector quark model is given by
\begin{eqnarray}
&&\quad\quad C_G(M_W)
= C_G^{\rm SM}(M_W) + \frac{\delta}{V_{tb}V^*_{ts}}\frac{1}{3}
-3\frac{z_{sb}}{V_{tb}V^*_{ts}}(f_s^Z+f_s^{\chi}+f_s^{H}
+f_b^Z+f_b^{\chi}+f_b^{H})
\nonumber\\
&& \hskip 3in
-3\frac{z_{4b}z_{4s}^*}{V_{tb}V^*_{ts}}
(f_D^Z+f_D^{\chi}+f_D^{H})
\nonumber\\
&&
= C_G^{\rm SM}(M_W) + \frac{z_{sb}}{V_{tb}V^*_{ts}} \left( \frac{1}{3} +
\frac{1}{3} + \frac{1}{9} \sin^2 \theta_W
-{5\over 24 y_D}
- \frac{5}{48} + \frac{1}{12} \frac{1}{y_D}
+ \frac{7}{48} - \frac{1}{6} \frac{1}{w_D} \right)
\nonumber\\
&&
\to -0.096 + \frac{z_{sb}}{V_{tb}V^*_{ts}} \times 0.733 \quad .
\end{eqnarray}
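The analogous check for the gluonic coefficient (same assumed $\sin^2\theta_W = 0.23$):

```python
s2w = 0.23   # assumed sin^2(theta_W)
coefG = 1.0/3 + 1.0/3 + s2w/9 - 5.0/48 + 7.0/48   # about 0.733
```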
The above deviations from the SM do not include QCD evolution. We can
incorporate LL QCD corrections to these deviations in the
framework of the effective Hamiltonian.
The key point is that the deviation from the Standard Model is a short-distance
effect at the scales $M_W$ and $M_Q$. It can be separated into the
Wilson coefficients at the matching scale, as we just did. The
evolution of the Wilson coefficients, which incorporates the LL QCD
corrections, is not affected by the short-distance
physics of the vector quark model,
and all the anomalous dimensions used in the SM
calculation remain valid here. One only needs to use the corrected Wilson
coefficients at $\mu=M_W$; in so doing we resum all the terms of the
form $z_{sb} \alpha_s^n(m_b) \log^n(m_b/M_W)$.
One subtlety in the vector quark model is that the quark mixing generates
Flavor Changing Neutral Currents that couple to the $Z$ boson, which
in turn give rise to a $Z$-boson-exchange interaction. This interaction
is represented by an effective Hamiltonian which is
a linear combination of the strong penguin operator $Q_3$ and the
electroweak penguin operators $Q_{7,9}$:
\begin{eqnarray}
H_{\rm NC}&= & 2 \frac{G_F}{\sqrt{2}}
z_{sb} \left(-{1\over2}\right) (\bar s_{\alpha} b_{ \alpha})_{V-A}
\sum_q (t_3-e_q \sin^2 \theta_W) (\bar q_{\beta} q_{ \beta} )_{V \pm A} \nonumber \\
& = & -\frac{G_F}{\sqrt{2}} z_{sb} \left[ -{1\over6} Q_3 -
{2\over3}\sin^2 \theta_W Q_7 +{2\over3} (1-\sin^2 \theta_W) Q_9 \right]
\end{eqnarray}
which gives additional non-zero Wilson coefficients:
\begin{eqnarray}
C_3(M_W)&=&-\frac{z_{sb}}{V_{tb}V^*_{ts}}{1\over6} \ ,\nonumber\\
C_7(M_W)&=&-\frac{z_{sb}}{V_{tb}V^*_{ts}}{2\over3}\sin^2 \theta_W \ ,
\nonumber\\
C_9(M_W)&=&\frac{z_{sb}}{V_{tb}V^*_{ts}}{2\over3}(1-\sin^2 \theta_W) \ .
\end{eqnarray}
To LL order, the strong penguin and electroweak penguin operators can
mix among themselves and also with $Q_{\gamma}$ and $Q_{G}$.
This generates an additional
LL QCD correction to the $b \rightarrow s \gamma$ decay in
the vector quark model. The crucial Wilson coefficient $C_{\gamma}(m_b)$
obtains additional contributions:
\begin{eqnarray}
C_{\gamma}(m_b) & = & 0.698\,C_{\gamma}(M_{W})+0.086\,C_{G}(M_{W})
-0.156\,C_{2}(M_{W}) \nonumber \\
& & +0.143\,C_{3}(M_{W})+0.101\,C_{7}(M_{W})
-0.036\,C_{9}(M_{W}).
\end{eqnarray}
This FCNC LL QCD correction is about one fifth of the FCNC contribution of the
$Z$-boson-mediated penguin.
The details of the RG running calculation are given in the appendix.
The correction to ratio $R$ in the vector quark model, including its
LL QCD corrections, is given by
\[
\Delta R=\frac{6\alpha}{\pi g(z) \, |V_{cb}|^2 }\times 0.307 \times {\rm Re}
\left[ V_{ts}^{\ast}V_{tb} \, z_{sb} \right] =
0.23 \, {\rm Re} \, z_{sb}
\]
to leading order in $\delta$.
In this result, the difference between $V_{ts}^{*} V_{tb}$ and
$- V_{cs}^{*} V_{cb}$, i.e.
\begin{equation}
V_{cs}^{*} V_{cb} = z_{sb} -V_{ts}^{*} V_{tb},
\end{equation}
has been taken into account.
We use the value $|V_{ts}^{*} V_{tb}|^2/|V_{cb}|^2=0.95$.
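The factor $0.307$ can plausibly be reconstructed by linearizing $|C_\gamma(m_b)|^2$ in $z_{sb}$, including the shift of $C_2$ from the non-unitary relation above; this is a hedged reconstruction, with $\sin^2\theta_W = 0.23$ assumed.

```python
s2w = 0.23   # assumed sin^2(theta_W)
# SM value of C_gamma(m_b) from the LLA combination
C7sm_mb = 0.698*(-0.193) + 0.086*(-0.096) - 0.156*1.0
# coefficient of z_sb/(V_tb V_ts*) in C_gamma(m_b)
d_mb = (0.698*0.506 + 0.086*0.733      # photonic and gluonic matching shifts
        + 0.156                        # C_2 shift via V_cs* V_cb = z_sb - V_ts* V_tb
        + 0.143*(-1.0/6)               # C_3(M_W)
        + 0.101*(-2.0/3*s2w)           # C_7(M_W)
        - 0.036*(2.0/3*(1 - s2w)))     # C_9(M_W)
coef = 2*abs(C7sm_mb)*d_mb             # linearized |C_gamma|^2 shift, about 0.307
```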
\section{Constraints}
The inclusive $B \to X_s \gamma$ branching ratio has been measured by CLEO,
with the result
\cite{cleo,cleo1}
\begin{equation}
{\cal B} (B \to X_s \gamma )_{\rm EXP}=(3.15\pm 0.54)\times 10^{-4}
\end{equation}
This branching ratio could be used to constrain the mixing in
the vector quark model.
We calculate the vector quark model deviation to leading logarithmic
order, i.e., all the terms of the form
$z_{sb} \alpha_s^n(m_b) \log^n(m_b/M_W)$.
The Standard Model prediction to leading logarithmic order is \cite{reina}
\begin{equation}
{\cal B}(B \to X_s \gamma )_{\rm LO}=(2.93 \pm 0.67)\times 10^{-4}
\end{equation}
The difference between the
experimental data and the Standard Model LO prediction is
\begin{equation}
{\cal B}(B \to X_s \gamma )_{\rm EXP}-
{\cal B}(B \to X_s \gamma )_{\rm LO}=(0.22 \pm 0.86)\times 10^{-4}.
\end{equation}
This gives a range of possible vector quark model deviations and hence
a constraint on $z_{sb}$ (with the input ${\rm B}(B \rightarrow X_c e
\bar{\nu})=0.105$):
\begin{equation}
-0.0027< z_{sb} < 0.0045
\end{equation}
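The quoted range follows from $\Delta{\cal B} \approx {\rm B}(B\to X_c e\bar\nu)\,\Delta R = 0.105 \times 0.23\,{\rm Re}\,z_{sb}$; this is a reconstruction of the apparent arithmetic, not a statement from the text.

```python
slope = 0.105*0.23                 # B(b -> c e nu) times dR/d(Re z_sb)
dB, err = 0.22e-4, 0.86e-4         # EXP minus LO, from above
z_lo = (dB - err)/slope            # about -0.0027
z_hi = (dB + err)/slope            # about  0.0045
```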
The SM prediction up to next-to-leading order has been
calculated in Ref.\cite{misiak}, with the result
\begin{equation}
{\cal B}(B \to X_s \gamma )_{\rm NLO}=(3.28\pm 0.33)\times 10^{-4}
\end{equation}
Ref.\cite{pott} later performed a new analysis, which discards all
corrections beyond NLO by expanding formulas like Eq.(\ref{RR}) in powers of
$\alpha_s$, and reported a slightly higher result:
\begin{equation}
{\cal B}(B \to X_s \gamma )_{\rm NLO}=(3.60\pm 0.33)\times 10^{-4}
\end{equation}
Here we also use these new
next-to-leading order SM calculations and the leading-order
vector quark model correction to
constrain $z_{sb}$.
To be consistent in the estimate of the theoretical errors, however,
a full next-to-leading order calculation of
the vector quark model matching correction would be required.
The difference between the
experimental data and the Standard Model NLO prediction,
with the errors added in quadrature, is
\begin{eqnarray}
{\cal B}(B \to X_s \gamma )_{\rm EXP}-
{\cal B}(B \to X_s \gamma )_{\rm NLO}&=&(-0.13 \pm 0.63)\times 10^{-4}
\,\,\,\,\, {\rm \cite{misiak}} \nonumber \\
& &(-0.45 \pm 0.63) \times 10^{-4}
\,\,\,\,\, {\rm \cite{pott}}
\end{eqnarray}
It gives a constraint
on $z_{sb}$:
\begin{eqnarray}
-0.0032< & z_{sb} <& 0.0021 \,\,\,\,\, {\rm \cite{misiak}} \nonumber \\
-0.0045< & z_{sb} <& 0.0007 \,\,\,\,\, {\rm \cite{pott}}
\end{eqnarray}
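The same arithmetic (again a reconstruction, assuming $\Delta{\cal B} \approx 0.105 \times 0.23\,{\rm Re}\,z_{sb}$) reproduces the NLO-based ranges:

```python
slope = 0.105*0.23                 # B(b -> c e nu) times dR/d(Re z_sb)
err = 0.63e-4
bounds = {}
for tag, dB in [("misiak", -0.13e-4), ("pott", -0.45e-4)]:
    bounds[tag] = ((dB - err)/slope, (dB + err)/slope)
```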
The previously strongest bound on $z_{sb}$ comes from the
$Z$-mediated FCNC effect in the mode $B\rightarrow X\mu^{+}\mu^{-}$
\cite{branco}: \begin{equation}
-0.0012< z_{sb} <0.0012 \label{FCNCB}
\end{equation}
Our new bound is as strong as that from the FCNC mode.
It shows that even though the vector quarks contribute to the
radiative decay rate only through one loop, as in the SM, the data can still
put a strong bound on them.
On the other hand, in models like Ref.~\cite{cck},
operators of different chiralities such as
\begin{eqnarray}
{O'}_{\gamma} = \frac{e}{8 \pi^2}
\bar{s}_{\scriptscriptstyle \alpha} \sigma^{\mu \nu}
[m_b (1-\gamma_5)+m_s (1+\gamma_5)]
b_{\scriptscriptstyle \alpha} F_{\mu\nu} \ , \nonumber\\
{O'}_{G} = \frac{g_{\rm s}}{8 \pi^2}
\bar{s}_{\scriptscriptstyle \alpha} \sigma^{\mu \nu}
[m_b (1-\gamma_5)+m_s (1+\gamma_5)]
(T_{\scriptscriptstyle \alpha\beta}^A) b_{\scriptscriptstyle \beta}
G^A_{\mu\nu} \ ,
\end{eqnarray}
occur via the new interaction.
Our study can be extended to these models as well. However, the new
amplitude for $b\to s \gamma$ belongs to a different helicity
configuration in the final state, and it
will not interfere with the SM contribution. Consequently,
the constraint obtained from $b\to s\gamma$ in these models is
less stringent than that from $B\to X\mu^+\mu^-$.
In the upcoming years, much more precise measurements
are expected from the upgraded CLEO detector, as well as from the
$B$-factories presently under construction at SLAC and KEK. The new
experimental result will certainly give us clearer evidence whether the
vector quark model is viable.
DC wishes to thank T. Morozumi and E. Ma for discussions. WYK is
supported by a grant from DOE of USA. DC and CHC are supported by grants
from NSC of ROC.
CHC's support by NSC is under contract No.NSC 88-2112-M-003-017.
\section*{Appendix}
\def{ \overline{\overline{f}} }{{ \overline{\overline{f}} }}
The RG equations for the Wilson coefficients
$\mbox{\boldmath $C_r$} \equiv (C_1(\mu), \ldots, C_{10}(\mu))$ and $C_{\gamma}$, $C_G$
can be written as \cite{ciuchini}
\begin{eqnarray}
\frac{d}{d \ln \mu} \mbox{\boldmath $C_r$}(\mu)& = &\frac{\alpha_s}{4\pi} \hat{\gamma}_r^{\rm T} \mbox{\boldmath $C_r$}(\mu)
\nonumber\\
\frac{d}{d \ln \mu} C_{\gamma}(\mu)& = &\frac{\alpha_s}{4\pi}
( \mbox{\boldmath $\beta$}_{\gamma} \cdot \mbox{\boldmath $C_r$}(\mu)+\gamma_{\gamma\gamma} C_{\gamma}(\mu)
+\gamma_{G \gamma} C_{G}(\mu) ) \nonumber \\
\frac{d}{d \ln \mu} C_{G}(\mu)& = &\frac{\alpha_s}{4\pi}
( \mbox{\boldmath $\beta$}_{G} \cdot \mbox{\boldmath $C_r$}(\mu)
+\gamma_{G G} C_{G}(\mu) )
\end{eqnarray}
The 10 by 10 submatrix $\hat{\gamma}_{r}$ can be found in \cite{ciuchini}.
The anomalous dimension matrix entries $\mbox{\boldmath $\beta$}_{\gamma,G}^{7-10}$
are extracted from the results for $\mbox{\boldmath $\beta$}_{\gamma,G}^{3-6}$ in
Ref.\ \cite{ciuchini}. In the HV scheme, $\mbox{\boldmath $\beta$}_{\gamma,G}$ are given by:
\begin{equation}
\mbox{\boldmath $\beta$}_{\gamma} = \left(
\begin{array}{c}
0 \\
-\frac{4}{27} {{N^2-1\over 2N}} - \frac{2 e_u}{e_d} {{N^2-1\over 2N}} \\
-{116\over27} {{N^2-1\over 2N}} \\
-{4 f\over27}{{N^2-1\over 2N}} - {2 {\bar f}}{{N^2-1\over 2N}} \\
{8\over 3} {{N^2-1\over 2N}} \\
-{4 f\over27}{{N^2-1\over 2N}} + {2 {\bar f}}{{N^2-1\over 2N}} \\
4 e_d {{N^2-1\over 2N}} \\
-{2 \bar f\over9}e_d{{N^2-1\over 2N}} + 3 { \overline{\overline{f}} } {{N^2-1\over 2N}} \\
-{58\over9} e_d {{N^2-1\over 2N}} \\
-{2 \bar f\over9}e_d {{N^2-1\over 2N}} - 3 { \overline{\overline{f}} } {{N^2-1\over 2N}}
\end{array}
\right),
\,\,\,\,\,\,\,\,\,
\mbox{\boldmath $\beta$}_{G} = \left(
\begin{array}{c}
3 \\
\frac{11N}{9} - \frac{29}{9N} \\
{22N\over9}-{58\over 9N}+3f \\
6 + {11N f\over 9} - {29f\over9N} \\
-2N +{4\over N} - 3f \\
-4 - {16Nf\over 9} +{25 f \over 9N} \\
-3N e_d+{6\over N} e_d- {9\bar f \over 2} e_d \\
-6 e_d- {8N {\bar f} \over 3}e_d +{25 {\bar f} \over 6N}e_d \\
{11N\over3}e_d-{29\over 3N}e_d+{9\bar f\over 2}e_d \\
9 e_d + {11N {\bar f}\over 6} e_d- {29{\bar f}\over6N} e_d
\end{array}
\right)
\end{equation}
Here $u$ and $d$ are the numbers of active up-type and down-type quarks,
respectively, and $f$ is the total number of active quark flavors.
Between the scales $m_b$ and $M_W$, $u=2$, $d=3$, $f=5$,
${\bar f} \equiv (e_d d + e_u u)/e_d=-1$, and
${ \overline{\overline{f}} } \equiv (e_d^2 d + e_u^2 u)/e_d=-11/3$.
For $SU(3)$ color, $N=3$.
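The constants $\bar f$ and $\bar{\bar f}$ can be verified with exact rational arithmetic:

```python
from fractions import Fraction as F

ed, eu = F(-1, 3), F(2, 3)          # down- and up-type quark charges
u, d = 2, 3                         # active flavors between m_b and M_W
fbar = (ed*d + eu*u)/ed             # equals -1
fbarbar = (ed**2*d + eu**2*u)/ed    # equals -11/3
```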
\newpage
\section{Introduction}
Indirect methods to search for extra-solar planets do not measure
emission from the planet itself, but instead seek to discover and
quantify the tell-tale effects that the planet would have on the
position (astrometry) and motion (radial velocity) of its parent star,
or on the apparent brightness of its parent star (occultation)
or random background sources (gravitational microlensing).
All of these indirect signals have a characteristic temporal behavior
that aids in the discrimination between planetary effects and other
astrophysical causes.
The variability can be due to the changing position of the planet with
respect to the parent star (astrometry, radial velocity, occultation),
or the changing position of the complete planetary system with respect
to background stars (microlensing).
The time-variable photometric signals that can be measured using
occultation and microlensing techniques are the focus of this small series
of lectures.
An occultation is the temporary dimming of the apparent brightness of
a parent star that occurs when a planet transits the stellar disk;
this can occur only when the orbital plane is nearly perpendicular
to the plane of the sky.
Because the planet is considerably cooler than its parent star,
its surface brightness at optical and infrared wavelengths is less,
causing a dip in the stellar light curve whenever the planet
(partially) eclipses the star.
Since the fractional change in brightness is proportional to the fraction of
the stellar surface subtended by the planetary disk,
photometric measurements directly yield a measure of the planet's size.
For small terrestrial planets, the effect is simply to occult a fraction
of the stellar light; the atmospheres of larger gaseous planets
may also cause absorption features that can be measured during transit with
high resolution, very high S/N spectroscopic monitoring.
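Since the dip depth scales as $(R_p/R_*)^2$, a sketch for a Sun-like star (nominal radii assumed) shows why jovian transits ($\sim$1\%) are within ground-based reach while terrestrial ones ($\sim$$10^{-4}$) are much harder:

```python
R_sun = 6.96e5                     # km, nominal solar radius
R_jup, R_earth = 7.15e4, 6.37e3    # km, nominal planetary radii

depth_jup = (R_jup/R_sun)**2       # about 1e-2
depth_earth = (R_earth/R_sun)**2   # about 8e-5
```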
The duration of a transit is a function of the size of the stellar disk
and the size and inclination of the planetary orbit.
Together with an accurate stellar typing of the parent star, measurement
of the transit duration and period provides an estimate for the
radius and inclination of the planet's orbital plane.
Since large planets
in tight orbits will create the most significant and frequent
occultations, these are the easiest to detect.
If hundreds of stars can be monitored with significantly better
than 1\% photometry, the transit method can be applied from the
ground to place statistics on Jupiter-mass planets in tight orbits.
Space-based missions, which could search for transits continuously and with
higher photometric precision, may be capable of detecting
Earth-mass planets in Earth-like environments via the occultation method.
Moons or multiple planets may also be detectable, not through their eclipsing
effect, but by the periodic change they induce in the timing of
successive transits of the primary occulting body.
Microlensing occurs when a foreground compact object ({e.g.,}\ a star, perhaps
with orbiting planets) moves between an observer and a luminous
background source ({e.g.,}\ another star). The gravitational field of
the foreground lens alters the path of the light from the background source,
creating multiple images with a combined brightness larger than that of
the unlensed background source.
For stellar or planetary mass lenses, the separation
of these images is too small to be resolved, but the combined brightness
of the images changes with time in a predictable manner as the lensing
system moves across the sky with respect to the background source.
Hundreds of microlensing events have been detected in the Galaxy,
a large fraction of which are due to (unseen) stellar lenses.
In binary lenses with favorable geometric configurations, the
lensing effect of the two lenses combines in a
non-linear way to create detectable and rapid variations
in the light curve of the background source star. Modeling of these
features yields estimates for the mass ratio and
normalized projected orbital radius for the binary lens;
in general, smaller-mass companions produce weaker and shorter deviations.
Frequent, high-precision
photometric monitoring of microlensing events
can thus be used to discover and characterize
extreme mass-ratio binaries ({i.e.,}\ planetary systems).
With current ground-based technology,
microlensing is particularly suited to the
detection of Jupiter-mass planets in Jupiter-like environments.
Planets smaller than Neptune will resolve the brightest background sources
(giants), diluting the planetary signal.
For planets above this mass, the planetary detection
efficiency of microlensing is a weak function of the planet's mass
and includes a rather broad range in orbital radii, making it
one of the best techniques for a statistical study of the frequency
and nature of planetary systems in the Galaxy.
Microlensing can discover planetary systems at distances of the
Galactic center and is the only technique that is capable of
detecting unseen planets around {\it unseen parent stars\/}!
These lectures begin with a discussion of the physical basis of
occultation and microlensing, emphasizing their strengths and weaknesses
as well as the selection effects and challenges
presented by sources of confusion for the planetary signal.
The techniques are then placed in the larger
context of extra-solar planet detection. Speculative comments about possibilities in the next decade cap the
lectures.
\section{Principles of Planet Detection via Occultations}
Due to their small sizes and low effective temperatures, planets
are difficult to detect directly. Compared to stars, their
luminosities are reduced by the square of the ratio of their radii
(factors of $\sim$$10^{-2} - 10^{-6}$ in
the Solar System) and the
fourth power of the ratio of their effective temperatures
(factors of $\sim$$10^{-4} - 10^{-9}$ in the Solar System).
Such planets may be detected indirectly however if they chance
to transit (as viewed by the observer) the face of their parent star
and are large enough to occult a sufficient fraction of
the star's flux. This method of detecting planets around
other stars was discussed as early as mid-century (Struve 1952),
but received serious attention only after the detailed
quantification of its possibilities by Rosenblatt (1971) and
Borucki and Summers (1984).
Such occultation effects have been observed for many years in
the photometry of binary star systems whose orbital planes
lie close enough to edge-on as viewed from Earth that the disk of
each partner occults the other at some point during the orbit, creating
two dips in the combined light curve of the system.
The depth of the observed occultation depends on the
relative size and temperatures of the stars.
For planetary systems, only the dip caused by the occultation of the brighter
parent star by the transit of the smaller, cooler planet will be detectable.
The detection rate for a given planetary system will depend on
several factors: the geometric probability that a transit will occur,
the frequency and duration of the observations compared to the
frequency and duration of the transit, and the sensitivity of the
photometric measurements compared to the fractional deviation in
the apparent magnitude of the parent star due to the planetary
occultation. We consider each of these in turn.
\hglue 1cm
\epsfxsize=9cm\epsffile{atransit.ps}
\vglue -3.55cm
{\small Fig.~1 --- Geometry of a transit
event of inclination $i$ and orbital radius $a$
as seen from the side (top) and observer's vantage point (bottom) at a moment
when the planet lies a projected distance $d(t)$ from the stellar center.}
\vskip 0.4cm
Unless stated otherwise in special cases below,
we will assume for the purposes of
discussion that planetary orbits are circular and
that the surface brightness, mass, and radius of the planet
are small compared to that of the parent star.
We will also assume that the orbital radius is much larger than
the size of the parent star itself.
\subsection{Geometric Probability of a Transit}
Consider a planet of radius $R_p$ orbiting a star of radius $R_*$
and mass $M_*$ at an orbital radius $a$. A transit of the stellar
disk will be seen by an external observer only if the orbital
plane is sufficiently inclined with respect to the sky plane (Fig.~1).
In particular, the inclination $i$ must satisfy
\begin{equation}
a \, \cos{i} \leq R_* + R_p~~.
\end{equation}
Since $\cos{i}$ is simply the projection of the
normal vector (of the orbital plane) onto the sky plane, it is
equally likely to take on any random value between 0 and 1.
Thus, for an ensemble of planetary systems with arbitrary
orientation with respect to the observer, the probability
that the inclination satisfies the geometric criterion for a transit
is:
\begin{equation}
{\rm Geometric \, \, Transit \, \, Prob} =
\frac{ \int _{_0} ^{(R_* + R_p)/a} \, d(\cos{i}) }
{ \int _{_0} ^{^1} \, d(\cos{i}) }
= \frac{R_* + R_p}{a} \approx \frac{R_*}{a}
\end{equation}
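The scaling in Eq.~2 can be sketched numerically; the solar radius in AU below is a rough assumed value, not taken from the text:

```python
# Sketch, not from the paper: geometric transit probability ~ (R_* + R_p)/a
# for randomly oriented circular orbits, using rough Solar System values.
R_SUN_AU = 0.00465  # approximate solar radius in AU (assumed)

def transit_probability(a_au, r_star_au=R_SUN_AU, r_planet_au=0.0):
    """P(transit) = (R_* + R_p)/a, since cos(i) is uniform on [0, 1]."""
    return (r_star_au + r_planet_au) / a_au

p_earth = transit_probability(1.0)    # ~0.5% for an Earth analog
p_jupiter = transit_probability(5.2)  # ~0.09% for a Jupiter analog
```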
Geometrically speaking, the occultation method favors those
planets with small orbital radii in systems with large parent stars.
As can be seen in Fig.~2, for planetary systems like the Solar System
this probability is small: $\mathrel{\raise.3ex\hbox{$<$}\mkern-14mu\lower0.6ex\hbox{$\sim$}} 1\%$ for
inner terrestrial planets and about a factor of 10 smaller for
jovian gas giants. This means that unless a method can be
found to pre-select stars with ecliptic planes oriented perpendicular
to the plane of the sky, thousands of random stars must
be monitored in order to detect statistically meaningful numbers of
planetary transits due to solar systems like our own.
\hglue 1cm
\epsfxsize=10cm\epsffile{probtransit.ps}
\vglue -0.5cm
{\small Fig.~2 ---
Probability of transits by Solar System objects
as seen by a random external observer.}
\vskip 0.4cm
\subsubsection{Inclination Pre-selection}
Under the assumption that the orbital angular momentum vector of a planetary
system and the rotational angular momentum vector of the parent star
share a common origin and thus a common direction, single stars can
be pre-selected for transit monitoring programs on the basis of a
measurement of their rotational spin. In this way, one may hope
to increase the chances of viewing the planetary orbits edge-on.
Through spectroscopy, the line-of-sight component of the
rotational velocity $v_{*, \, los}$ of a star's atmosphere can be measured.
The period $P_{*, \, rot}$ of the rotation
can be estimated by measuring the periodic photometric signals
caused by sunspots, and the radius $R_*$ of the star can be determined
through spectral typing and stellar models. An estimate for the
inclination of the stellar rotation plane to the plane of the sky
can then be made:
\begin{equation}
\sin{i_{*, \, rot}} =\frac{v_{*, \, los} \, \, P_{*, \, rot}}{2\pi \, R_*} ~~~,
\end{equation}
\noindent
and only those stars with high rotational inclinations selected to
be monitored for transits.
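A minimal sketch of the estimate in Eq.~3, with Sun-like input values assumed purely for illustration:

```python
# Sketch of Eq. 3: estimate sin(i_rot) from the line-of-sight rotation
# speed, the photometric rotation period, and the stellar radius.
# The Sun-like numbers below are assumed, not taken from the text.
import math

def sin_i_rot(v_los_km_s, p_rot_days, r_star_km):
    return v_los_km_s * p_rot_days * 86400.0 / (2.0 * math.pi * r_star_km)

# equator-on Sun: v_los ~ 2 km/s, P_rot ~ 25.4 d, R_* ~ 6.96e5 km
s = min(1.0, sin_i_rot(2.0, 25.4, 6.96e5))
i_deg = math.degrees(math.asin(s))  # close to 90 degrees (edge-on)
```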
How much are the probabilities increased by such pre-selection?
Fig.~3 shows the probability
of the planetary orbital inclination being larger (more edge-on) than
a particular value in the range 85$^{\circ}$\ $< i <$ 89.5$^{\circ}$,
if the parent star is pre-selected to have a rotational plane
with inclination $i_{*, \, rot} \ge i_{\rm select}$.
\hglue 1cm
\epsfxsize=9cm\epsffile{preselectinc.ps}
\vglue -1.5cm{\small Fig.~3 --- Increase of geometric transit probability
through pre-selection of the inclination angle to be larger than
$i_{\rm select}$, for example through measurement of the rotational
spin of the parent.}
\vskip 0.4cm
In order to produce a detectable transit, most planets will require
an orbital inclination $\mathrel{\raise.3ex\hbox{$<$}\mkern-14mu\lower0.6ex\hbox{$\sim$}} 1$$^{\circ}$\ from edge-on.
If planetary systems could be pre-selected to have $i > 85$$^{\circ}$,
the geometric transit probability would be increased by a factor of $\sim$10.
Unfortunately, measurement uncertainties in the quantities
required to determine $\sin{i_{*, \, rot}}$ are likely to remove much
of the advantage that pre-selection would otherwise afford.
Since $\delta(\cos{i}) = - \tan{i} \, \delta(\sin{i})$,
even small errors in $\sin{i_{*, \, rot}}$ translate into large
uncertainties in $\cos{i_{*, \, rot}}$ and thus the probability
that a transit will occur.
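A numerical illustration of this error blow-up (the inclination and uncertainty below are assumed values):

```python
# Illustration with assumed numbers: near edge-on orientations, a small
# error in sin(i) propagates into a large error in cos(i), since
# d(cos i) = -tan(i) d(sin i).
import math

i = math.radians(85.0)
d_sin = 0.01                       # assumed absolute uncertainty in sin(i)
d_cos = abs(math.tan(i)) * d_sin   # propagated uncertainty in cos(i)
# cos(85 deg) ~ 0.087 but d_cos ~ 0.11: the uncertainty exceeds the value
# itself, so the transit criterion a*cos(i) <= R_* + R_p is untestable
```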
Furthermore, an accurate measurement of
$\cos{i_{*, \, rot}}$ does not ensure that $\cos{i}$ for the
planetary orbital plane is known. The planets in our
own Solar System are misaligned by about 7$^{\circ}$\ with the Sun's rotational
plane, a result that is similar to that found for binaries orbiting
solar-type stars (Hale 1994). It is thus reasonable
to assume that an accurate measurement of $i_{*, \, rot}$ will constrain
the planetary orbital plane only to within $\sim$10$^{\circ}$.
To enhance probabilities, current ground-based attempts
to detect transits have taken a different tack by
concentrating on known eclipsing binary star systems in which the orbital
plane of the binary is known to be close to edge-on.
Assuming that any other companions will have similarly aligned angular momentum
vectors, it is hoped that such systems will have larger than
random chances of producing a transit event.
The precession of the orbital plane likely to be present in such systems
may actually bring the planet across the face of the star more
often than in single star systems (Schneider 1994).
On the other hand, the evolution
and dynamics of single and double star systems are so
different that the formation and frequency of their planetary companions
are likely to be quite different as well. In particular, it may be difficult
for planets in some binary systems to maintain long-lived circular
orbits and thus, perhaps, to become the birth place
of life of the sort that has evolved on Earth.
Given the uncertainties involved, inclination pre-selection
in single stars is unlikely to increase geometric transit
probabilities by factors larger than 3 -- 5.
Ambitious ground-based and space-based initiatives, however,
may monitor so many stars that pre-selection is not necessary.
\subsection{Transit Duration}
The duration and frequency of the expected transits will determine the
observational strategy of an occultation program. The frequency is
simply equal to one over the orbital period $P = \sqrt{4 \pi^2 a^3 / G M_*}$.
If two or more transits for a given system can be measured and
confirmed to be due to the same planet, the period $P$ and
orbital radius $a$ are determined.
In principle, the ratio of the transit duration to the total
duration can then be used to determine the inclination of the orbital
plane, if the stellar radius is known.
The duration of the transit will be equal to the fraction
of the orbital period $P$ during which the
projected distance $d$ between the centers of the star and
planet is less than the sum of their radii $R_* + R_p$.
Referring to Fig.~4 we have
\begin{equation}
{\rm Duration} \equiv t_T = \frac{P}{\pi} \arcsin
{\left( \frac{\sqrt{(R_* + R_p)^2 - a^2 \cos^2{i}}}{a} \right)} ~~~,
\end{equation}
\noindent
which for $a \gg R_* \gg R_p$ becomes
\begin{equation}
t_T = \frac{P}{\pi} \sqrt{\left(\frac{R_*}{a}\right)^2 - \cos^2{i}} \ \
\leq \ \ \frac{P \, R_*}{\pi \, a} ~~~.
\end{equation}
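Eq.~5 is easy to evaluate; a short sketch in the limit $a \gg R_* \gg R_p$ (Earth-analog numbers assumed for the check):

```python
# Sketch of Eq. 5: transit duration from period, R_*/a, and inclination,
# valid in the limit a >> R_* >> R_p (assumed here).
import math

def transit_duration_days(p_days, r_star_over_a, inc_deg):
    cosi = math.cos(math.radians(inc_deg))
    arg = r_star_over_a ** 2 - cosi ** 2
    if arg <= 0.0:
        return 0.0  # geometry rules out a transit at this inclination
    return (p_days / math.pi) * math.sqrt(arg)

# Earth analog, edge-on: P = 365.25 d, R_sun / 1 AU ~ 0.00465 (assumed)
t_earth = transit_duration_days(365.25, 0.00465, 90.0)  # ~0.54 d (~13 h)
```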
\noindent
Note that because the definition of a transit requires
that $a\cos{i} \leq (R_* + R_p)$,
the quantity under the square root in Eq.~4 does not become negative.
\vskip -2.25cm
\hglue -1cm
\epsfxsize=13.5cm\epsffile{durgeometry.ps}
\vglue -6.25cm{\small Fig.~4 --- Transit duration is set by
fraction of total orbit (left) for which a portion of the planet eclipses the stellar disk (right).}
\vskip 0.4cm
Fig.~5 shows the maximum
transit duration and period for planets in the Solar System.
In order to confirm a planetary detection with one or more additional
transits after the discovery of the first eclipse, a 5-year
experiment can be sensitive to planets orbiting solar-type stars only
if their orbital radius is equal to or smaller than that of Mars.
Such planets will have transit durations of less than one day, requiring
rapid and continuous sampling to ensure high detection probabilities.
The actual transit duration depends sensitively
on the inclination of the planetary
orbit with respect to the observer, as shown in Fig.~6.
The transit time of Earth as seen by an external observer changes from
0.5 days to zero (no transit) if the observer's viewing angle is more
than 0.3$^{\circ}$\ from optimal. Since the orbital planes of any two of the
inner terrestrial planets in the
\newpage
\vglue -1.15cm
\hglue -0.5cm
\epsfxsize=12cm\epsffile{maxduration.ps}
\vskip -4.6cm {\small Fig.~5 --- Edge-on transit durations and
periods for Solar System planets.}
\vskip 0.4cm
\noindent
Solar System are misaligned
by 1.5$^{\circ}$\ or more, if other planetary
systems are like our own, a given observer would expect to see transits
from only one of the inner planets.
This would decrease the detection probabilities for
planetary systems, but would also decrease the probability of
incorrectly attributing transits from different planets to
successive transits of one (mythical) shorter period object.
\vglue -0.5cm
\hglue 1.5cm
\epsfxsize=8cm\epsffile{innerdurations.ps}
\vglue -0.3cm{\small Fig.~6 --- ``Inner planet'' transit durations
for different inclinations ($R_* = R_\odot$).}
\vskip 0.4cm
If the parent star can be typed spectroscopically, stellar models
can provide an estimate for the stellar radius $R_*$ in the
waveband in which the photometric partial eclipse was
measured. (It is important to match wavebands
since limb-darkening can make the star look larger at redder
wavelengths which are more sensitive to the cooler
outer atmosphere of the star.) The temporal resolution of a single
transit then places a lower limit on the orbital radius $a$ of the
planet, but a full determination of $a$ requires
knowledge of the period from multiple transit
timings which remove the degeneracy due to the otherwise unknown
orbital inclination. In principle, if the limb darkening of
the parent star is sufficiently well-understood, measurements in
multiple wavebands can allow an estimate for the inclination, and
thus for $a$ from a single transit; this is discussed more
fully in \S2.3.1.
\subsection{Amplitude and Shape of the Photometric Signature}
Planets with orbital radii of 2~AU or less orbiting stars even
as close as 10~parsec will subtend angles $\mathrel{\raise.3ex\hbox{$<$}\mkern-14mu\lower0.6ex\hbox{$\sim$}} 50$
microarcseconds; any reflected or thermal radiation that
they might emit thus will be confused by normal photometric
techniques with that from the parent star. Only
exceedingly large and close companions of high albedo would be capable of
creating a significant modulated signal throughout their orbit as the viewer
sees a different fraction of the starlit side; we will not consider
such planets here. All other planets will alter the total observed
flux only during an actual transit of the
stellar face, during which the amplitude and shape of the photometric dip
will be determined by the fraction of the stellar light that is occulted
as a function of time.
\vglue -0.5cm
\hglue 2.5cm
\epsfxsize=8cm\epsffile{dipgeometry.ps}
\vglue -0.5cm{\small Fig.~7 --- The area eclipsed by a planet
as it crosses the stellar limb determines the wing shape of the resulting
photometric dip.}
\vskip 0.4cm
The maximum fractional change in the observed flux is given by:
\begin{equation}
{\rm Maximum~~} \frac{\left| \delta {\cal F_\lambda} \right|}{{\cal F_\lambda}} =
\frac{\pi {\cal F}_{\lambda, *} \, R_p^2}
{\pi {\cal F}_{\lambda, *} \, R_*^2 + \pi {\cal F}_{\lambda, p} \, R_p^2}
\approx \left(\frac{R_p}{R_*}\right)^2 \equiv \rho^2
\end{equation}
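Plugging rough (assumed) planetary and stellar radii into Eq.~6 reproduces the familiar depths:

```python
# Rough check of Eq. 6 with assumed radii in km: the maximum fractional
# dip is approximately (R_p/R_*)^2 for a dark planet.
R_SUN, R_JUP, R_EARTH = 6.957e5, 7.149e4, 6.371e3  # assumed values, km

depth_jup = (R_JUP / R_SUN) ** 2      # ~1.1e-2, i.e. about 1%
depth_earth = (R_EARTH / R_SUN) ** 2  # ~8.4e-5, i.e. about 0.01%
```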
\noindent
The shape of the transit dip will depend on the inclination angle,
the ratio of the planet to stellar size, and the degree of limb-darkening
in the observational band.
Begin by considering a star of uniform
brightness (no limb-darkening) transited by a small planet.
The stellar limb will then describe a nearly straight
chord across the planet at any time, and integration over
planet-centered axial coordinates (see Fig.~7) yields an eclipsing area
during ingress and egress of:
\begin{equation}
{\cal A_E}
\approx \int_x^{R_p} r_p \, dr_p \,
\int^{+ \arccos{(x/r_p)}}_{- \arccos{(x/r_p)}}
\, d\phi_p \, \, = 2 \, \int^{R_p}_x r_p \, \arccos{\left(\frac{x}{r_p}\right)} \, dr_p ~~,
\end{equation}
\noindent
where $x \equiv d - R_*$, $d$ is the projected star-planet separation
and $x$ is constrained to lie in the region $ - R_p < x < R_p$.
The last integral can be done analytically to yield,
\begin{equation}
{\cal A_E}
\approx R_p^2 \, \arccos{(x/R_p)} - R_p x \, \sqrt{1 - \frac{x^2}{R_p^2}} ~~~.
\end{equation}
For larger planets, and to facilitate the introduction of
limb-darkened sources, it is more useful to
integrate over stellar-centered axial coordinates; the Law of Cosines can
then be used to show that
\begin{equation}
{\cal A_E}(t)
= 2 \int^{^{{\rm min}(R_*, \, d(t) + R_p)}}_{_{{\rm max}(0, \, d(t) - R_p)}} r_* \,
\arccos{[\Theta(t)]} \, dr_*
\end{equation}
\begin{equation}
{\rm where ~~~}
\Theta(t) \equiv
\frac{d^2(t) + r_*^2 - R_p^2}{2 r_* d(t)} ~~~~ {\rm for~}r_* > R_p - d(t)
{\rm ,~and~}\pi{\rm ~otherwise.}
\end{equation}
The light curve resulting from the occultation of a uniform brightness
source by a planet of arbitrary size, orbital radius and orbital inclination
can now be constructed by substituting into Eq.~9 the
time dependence of the projected planet-star separation,
$d(t) =$ $a \, \sqrt{\sin^2{\omega t} + \cos^2{i} \cos^2{\omega t}}$,
where $\omega \equiv 2\pi/P$. The {\it differential\/} light curve is then
given by:
\begin{equation}
\frac{{\cal F} (t)}{{\cal F}_{0}} = 1 \, - \,
\frac{{\cal A_E}(t)}{\pi \, R_*^2}
\end{equation}
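The construction in Eqs.~9--11 can be sketched numerically for a uniform-brightness disk; a simple midpoint rule is assumed adequate for the integral:

```python
# Numeric sketch of Eqs. 9-11 (uniform stellar disk, midpoint rule):
# eclipsed area as an integral over star-centered radial coordinates.
import math

def eclipsed_area(d, r_star, r_p, n=2000):
    if d >= r_star + r_p:
        return 0.0                               # disks do not overlap
    if d < 1e-12:
        return math.pi * min(r_p, r_star) ** 2   # planet dead center
    lo, hi = max(0.0, d - r_p), min(r_star, d + r_p)
    step = (hi - lo) / n
    area = 0.0
    for k in range(n):
        r = lo + (k + 0.5) * step
        if r <= r_p - d:          # stellar annulus fully behind the planet
            half_angle = math.pi
        else:
            c = (d * d + r * r - r_p * r_p) / (2.0 * r * d)
            half_angle = math.acos(max(-1.0, min(1.0, c)))
        area += 2.0 * r * half_angle * step
    return area

def relative_flux(d, r_star, r_p):
    return 1.0 - eclipsed_area(d, r_star, r_p) / (math.pi * r_star ** 2)

# mid-transit dip for R_p/R_* = 0.1 is (R_p/R_*)^2 = 1%
dip = 1.0 - relative_flux(0.0, 1.0, 0.1)
```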
\noindent
For spherical stars and planets, the light curve will be symmetric
and have a minimum at the closest projected approach of planet
to star center, where the fractional decrease in
the total brightness will be less than or equal to $(R_p/R_*)^2$.
For Jupiter-sized planets orbiting solar-type stars, this is a
signal of $\sim$1\%;
for Earth-sized planets the fractional
change is $\mathrel{\raise.3ex\hbox{$<$}\mkern-14mu\lower0.6ex\hbox{$\sim$}}$ 0.01\% (Fig.~8).
\vglue -4.3cm
\hglue 0.2cm
\epsfxsize=11cm\epsffile{eclipseinc.ps}
\vglue -0.6cm{\small Fig.~8 --- {\bf Left:} Photometric light curves for
Earth-sized and Jupiter-sized planets orbiting a solar-type star at 1 AU.
{\bf Right:} A Jupiter-sized planet orbiting a solar-type star at an
orbital radius of 0.05 AU ({e.g.,}\ 51 Peg) with
inclinations ranging from 85$^{\circ}$\ to 90$^{\circ}$.
The parent star is assumed here to have
constant surface brightness. Note change in time scale between two panels.}
\vskip 0.4cm
\noindent
If proper care is taken, photometry of bright, uncrowded
stars can be performed to $\sim$0.1\% precision from the ground
(Henry {\it et~al.}\ 1997),
so that ground-based transit searches can in principle be sensitive
to Jupiter-sized planets at $\mathrel{\raise.3ex\hbox{$<$}\mkern-14mu\lower0.6ex\hbox{$\sim$}}$1 AU --- planets perhaps similar
to those being found by the radial velocity technique
({e.g.,}\ Mayor \& Queloz 1995, Butler \& Marcy 1996).
Transit detections of terrestrial planets like those in our own
Solar System must await space observations in order to achieve the
required photometric precision.
\subsubsection{Effects of Limb Darkening}
Because observations at different wavelengths probe material at different
depths in stellar atmospheres, a stellar disk is differentially limb-darkened:
the radial surface brightness profile $B_\lambda (r_*)$ of a star is
wavelength dependent. In redder bands, which probe the cooler
outer regions of the star, the stellar disk will appear larger and
less limb-darkened. Limb darkening is important to transit techniques
for two reasons: it changes the shape of the photometric signal
and it does so in a wavelength-dependent way.
Since a given planet can produce dips of varying strength depending on the
inclination $i$ of the orbit, the inclination must be known in order to
estimate the planet's radius $R_p$ accurately.
In principle, if the parent star has been typed so that its mass and stellar
radius $R_*$ are known, Kepler's Law together with Eq.~5 will
yield $i$ once the transit time $t_T$ and period $P$ have been measured.
Ignoring the effects of limb darkening, however,
will result in an underestimate of $t_T$, and thus an underestimate
for the inclination $i$ as well.
In order to produce the required amplitude at minimum, the size of
the planet $R_p$ will then be overestimated.
Furthermore, the sloping shape of the limb-darkened profile might be
attributed to a smaller inclination $i$, reinforcing the misinterpretation.
This difficulty will be removed if the limb darkening can be properly
modeled. In addition, transit monitoring
in more than one waveband could confirm the occultation hypothesis
by measuring the characteristic color signature associated with
limb darkening.
In principle this signature can be used to determine the
orbital inclination from a single transit, in which case Eq.~5 can be
inverted to solve for the period $P$ without waiting for a second
transit.
How strong is the effect of limb darkening? To incorporate its effect,
the integral in Eq.~9 used to determine the eclipsing area must
be weighted by the surface brightness as a function of stellar radius,
yielding the differential light curve:
\begin{equation}
\frac{{\cal F_\lambda} (t)}{{\cal F_\lambda}_{, \, 0}} = 1 \, - \,
\frac{\int^{^{{\rm min}(R_*, \, d(t) + R_p)}}_{_{{\rm max}(0, \, d(t) - R_p)}} r_* \, B_{\lambda}(r_*) \,
\arccos{[\Theta(t)]} \, dr_*}{\pi \int^{^{R_*}}_{_0} r_* \, B_{\lambda}(r_*)
\, dr_*}
\end{equation}
A commonly-used functional form for the surface brightness profile
is $B_{\lambda}(\mu) = [1 - c_\lambda (1-\mu)]$, where
$\mu \equiv \cos{\gamma}$ and $\gamma$ is the angle between the
normal to the stellar surface and the line-of-sight. In terms of the
projected radius $r_*$ from the stellar center this can be written as
$B_{\lambda}(r_*) = [1 - c_\lambda (1 - \sqrt{1 - (r_*/R_*)^2})]$.
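A minimal sketch of this linear limb-darkening law; the coefficient $c \sim 0.6$ below is an assumed, roughly visual-band solar value:

```python
# Sketch of the linear limb-darkening law quoted above,
# B(r) = 1 - c*(1 - sqrt(1 - (r/R_*)^2)); c ~ 0.6 is an assumed,
# roughly V-band solar coefficient.
import math

def surface_brightness(r, r_star, c):
    mu = math.sqrt(max(0.0, 1.0 - (r / r_star) ** 2))
    return 1.0 - c * (1.0 - mu)

b_center = surface_brightness(0.0, 1.0, 0.6)  # 1.0 at disk center
b_limb = surface_brightness(1.0, 1.0, 0.6)    # 1 - c = 0.4 at the limb
```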
Using this form and constants $c_\lambda$ appropriate for the Sun,
light curves and color curves are shown in Fig.~9 for a Jupiter-sized
planet orbiting 1~AU from a solar-type star at inclinations of 90$^{\circ}$\
and 89.8$^{\circ}$.
As expected, the bluer band shows more limb darkening,
which rounds the sharp edges of the occultation profile making it
qualitatively degenerate with a larger planet at somewhat smaller
inclination. The color curves for different inclinations, however,
are qualitatively different and can thus be used to break this
degeneracy. During ingress and egress the color curve becomes bluer
as the differentially redder limb is occulted; at maximum occultation
the color curve is redder than the unocculted star for transits
with inclination $i = 90$$^{\circ}$\ since the relative blue central
regions are then occulted. For smaller inclinations, the planet
grazes the limb blocking preferentially red light only, and the
color curve stays blue through the event. Since the size of the
color signal is $\sim$10\% of the deviation in total flux, excellent
photometry is required to measure this effect and use it to estimate
the orbital inclination; even for jovian giants it remains at
or just beyond the current limits of photometric precision.
\vglue -0.3cm
\hglue 0cm
\epsfxsize=11cm\epsffile{limbdark.ps}
\vglue -0.4cm{\small Fig.~9 --- {\bf Left:} Light curves for a
planet with $R_p = 11 R_ {\oplus} $ orbiting a solar-type star with
orbital inclinations of 90$^{\circ}$\ (top) and 89.8$^{\circ}$\ (bottom) normalized
to the total (unocculted) flux in the indicated band. Black
shows a uniformly bright stellar disk; blue and red indicate observations
in the R and K bands respectively. {\bf Right:} Color curves indicating
the flux ratios at any given time between R (blue) and K-band (red)
limb-darkened curves and a uniformly bright target star, and the
observed limb-darkened $R/K$ flux ratio (black).}
\vskip 0.4cm
\subsection{Observational Rewards and Challenges}
In sum, what can be learned by observing a planetary object transiting
the face of its parent star? The amplitude of the photometric signal
places a lower limit on the ratio of the planetary radius to stellar
radius $\rho \equiv R_p/R_*$, while the duration of the event
places a lower limit on the orbital period $P$ and thus on
the orbital radius $a$ as well. If the inclination $i$ is known,
these lower limits become measurements. In principle $i$ could
be determined by fitting the wings of the transit profile
in different wavebands using the known limb-darkening of the star,
but in practice this will probably prove too difficult. Instead,
multiple transits will be required to time the transits and
thus measure the period $P$ of the planet, from which the inclination can
be determined from the known transit duration (Eq.~5). This makes the
transit method most appropriate for large planets orbiting their
parent stars at (relatively) small radii $a$. The primary
challenge then reduces to performing the very precise photometry
required on a large enough sample of stars to place meaningful
statistics on the numbers of planets at small $a$.
What limits the photometric accuracy and clear detection of
a transit signal? The dwarf stars that have suitably small
stellar radii $R_*$ must have apparent magnitudes bright enough
({i.e.,}\ be close enough) that enough photons can be captured in
short exposures so that a sub-day transit event can be well-sampled.
This will limit the depth of the sample to only nearby stars.
Fields with very high stellar densities
(like globular clusters or the Galactic Center) or very wide fields
that can capture hundreds of candidate stars simultaneously will
be required in order to maintain the required temporal sampling on a large
enough sample. Regions of high stellar density, however, will be hampered by
the additional challenges associated with precision photometry in
confused fields.
The use of reference constant stars in the field can reduce the effects
of varying extinction to produce the best current photometry in uncrowded
fields, precise to the $\sim$0.1\% level.
Ultimately, scintillation, the rapidly-varying turbulent refocusing of rays
passing through the atmosphere, limits Earth-bound photometry
to 0.01\%. Detection of Earth-mass transits is thus probably
restricted to space-borne missions, although in special circumstances,
periodicity analyses may be used to search for very short-period
Earth-sized transits from the ground ({e.g.,} Henry {\it et~al.}\ 1997).
For larger, jovian gas-giants, the signal can be measured from
the ground, but must be distinguished from intrinsic effects that
could be confused with transits. Late-type dwarf stars often undergo
pulsations that cause
their brightness to vary on the order of a few hours, but
due to their cyclic nature these pulsations should
be distinguished easily from all but very short period transits
corresponding to $a \mathrel{\raise.3ex\hbox{$<$}\mkern-14mu\lower0.6ex\hbox{$\sim$}} 0.02$~AU or so.
Solar flares produce an excess of flux at the
$\mathrel{\raise.3ex\hbox{$<$}\mkern-14mu\lower0.6ex\hbox{$\sim$}} 0.001$\% level, and thus would not confuse a typical transit signal.
Later-type dwarfs tend to have more surface activity, however, and
thus produce flares that contain a larger fraction of the star's
total flux. Since the flares are generally blue,
the primary problem will be confusion with the chromatic
signal expected from limb-darkening effects during a transit.
More troublesome will be
separating transits from irregular stellar variability due to star spots.
Star spots are cool regions on the stellar
surface that remain for a few rotations
before disappearing. They could mimic a transit event and thus are
probably the most important non-instrumental source of noise.
Although the power spectrum of the Solar flux does show variations on
day and sub-day time scales, most of the power during periods of
sunspot maximum occurs at the approximate 1-month time scale of
the Sun's rotation. Even during sunspot maximum, variations on
day and sub-day scales are at or below the $0.001\%$ level
(Borucki, Scargle \& Hudson 1985).
Star spots on solar-type stars will therefore not be confused with
the transit signal of a gas giant, but spots might be a source of
additional noise for terrestrial-sized planets of small
orbital radius ($a \mathrel{\raise.3ex\hbox{$<$}\mkern-14mu\lower0.6ex\hbox{$\sim$}} 0.3$AU) and for parent stars that are
significantly more spotted than the Sun.
\subsubsection{Pushing the Limits: Rings, Moons and Multiple Planets}
If the parent star can be well-characterized, the transit method
involves quite simple physical principles that can perhaps be
exploited further to learn more about planetary systems.
For example, if a system is discovered to contain large transiting
inner planets, it can be assumed to have a favorable inclination
angle that would make it a good target for more sensitive searches for
smaller radius or larger $a$ planets in the same system.
If the inner giants are large enough, differential spectroscopy
with a very large telescope before and during transits could reveal
additional spectral lines that could be attributed to absorption of
stellar light by the atmosphere of the giant (presumably
gaseous) planet (see Laurent \& Schneider, these proceedings).
A large occulting ring inclined to the observer's
line-of-sight would create a transit profile of a different shape
than that of a planet (Schneider 1997),
though the signal could be confused with
limb-darkening effects and would likely be important only for
outer gas giants where icy rings can form more easily.
Finally, variations in the ingress timing of inner planets
can be used to search for cyclic
variations that could betray the presence of moons (Schneider 1997)
or --- in principle ---
massive (inner or outer) planets that are nearly coplanar but
too misaligned to cause a detectable
transit themselves. Transit timing shifts would be caused
by the slight orbital motion of the planet around the planet-moon
barycenter or that of the star around the system barycenter.
(The latter is unobservable for a single-planet system
since the star's motion is always phase-locked with the planet.)
\newpage
\section{Principles of Planet Detection via Microlensing}
Microlensing occurs when a foreground compact object ({e.g.,}\ a star)
moves between an observer and a luminous background object
({e.g.,}\ another star). The gravitational field of the foreground lens
alters the path of the light from the background source, bending
it more severely the closer the ray passes to the lens. This results
in an image as seen by a distant observer that is altered both in
position and shape from that of the unlensed source. Indeed since light
from either side of a lens can now be bent to reach the observer,
multiple images are possible (Fig.~10). Since the total flux reaching
the observer from these two images is larger than that from the
unlensed source alone, the lens (and any planets that may encircle it)
betrays its presence not through its own luminous emission, but
by its gravitational magnification of the flux of background objects.
Einstein (1936) recognized microlensing in principle, but thought
that it was undetectable in practice.
\vglue -0.5cm
\hglue -1.5cm
\epsfxsize=9.75cm\epsffile{microgeometryside.ps}
\vglue -10.5cm
\hglue 4.75cm
\epsfxsize=9cm\epsffile{microgeometrysky.ps}
\vglue -2.5cm{\small Fig.~10 --- {\bf Left:} A compact lens (L)
located a distance $D_L$ nearly along the line-of-sight to a background
source (S) located at a distance $D_S$ will bend incoming light rays
by differing amounts $\alpha$ to create two images
($I_1$ and $I_2$) on either side of the line-of-sight.
{\bf Right:} An observer $O$ does not see the microlensed
source at its true angular sky position $\theta_S$,
but rather two images at positions $\theta_1$ and $\theta_2$.}
\vskip 0.4cm
Ray tracing, together with the use of general relativity to relate
the bending angle $\alpha$ with the lens mass distribution,
produces a mapping from the source positions ($\xi$, $\eta$) to the
image positions $(x, y)$ for a given mass distribution.
For ``point'' masses, the angle $\alpha$ is just given by the mass of the lens
$M$ and the distance of closest approach $r$ as:
\begin{equation}
\alpha = \frac{4 \, G \, M}{c^2 \, r} = \frac{2R_S}{r}~~,
\end{equation}
as long as $r$ is well outside the Schwarzschild radius $R_S$ of the lens.
Simple geometry alone then requires
\begin{equation}
{\theta_S} \, D_S = { r} \, \frac{D_S}{D_L} - (D_S - D_L) \, {\alpha(r)} ~~,
\end{equation}
which can be rewritten to yield the lens equation
\begin{equation}
{\theta_S} = {\theta} - \frac{D_{LS}}{D_S} \, {\alpha(r)} ~~,
\end{equation}
giving the (angular) vector image positions $\bf \theta$ for a source
at the angular position $\theta_S$ as measured from the observer-lens
line-of-sight. $D_S$ and $D_L$ are the source and lens distances
from the observer, respectively, and $D_{LS} \equiv D_S - D_L$.
For convenience, the characteristic angular size scale is defined as
\begin{equation}
{\theta_E} \equiv \sqrt{\frac{2 R_S D_{LS}}{D_L \, D_S}}
= \sqrt{\frac{4 G M D_{LS}}{c^2 \, D_L \, D_S}} ~~~.
\end{equation}
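Plugging representative Galactic numbers into Eq.~16 (lens mass and distances assumed here) gives the milliarcsecond scale quoted below:

```python
# Order-of-magnitude check of Eq. 16 with assumed Bulge-like values:
# a 0.3 M_sun lens at 4 kpc, source at 8 kpc, gives theta_E ~ 0.5 mas.
import math

G = 6.674e-11          # m^3 kg^-1 s^-2
C = 2.998e8            # m/s
M_SUN = 1.989e30       # kg
KPC = 3.086e19         # m

def theta_e_mas(m_kg, d_l_m, d_s_m):
    d_ls = d_s_m - d_l_m
    theta = math.sqrt(4.0 * G * m_kg * d_ls / (C ** 2 * d_l_m * d_s_m))
    return theta * 206265.0 * 1e3   # radians -> milliarcseconds

t_e = theta_e_mas(0.3 * M_SUN, 4.0 * KPC, 8.0 * KPC)  # ~0.55 mas
```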
Since $r = D_L \, \theta$, Eq.~15 can now be rewritten to yield
a quadratic equation in $\theta$
\begin{equation}
\theta^2 - \theta_S \, \theta \ - \theta_E^2 = 0 ~~~,
\end{equation}
with two solutions $\theta_{1, \, 2} =
\frac{1}{2} \left( \theta_S \pm \sqrt{4 \theta_E^2 + \theta_S^2} \, \right) $
giving the positions of images $I_1$ and $I_2$.
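Working in units of $\theta_E$ (assumed below, so that $\theta_E = 1$), the two roots of Eq.~17 can be checked directly against the lens equation:

```python
# Check of Eq. 17 in units of theta_E: theta^2 - theta_S*theta - 1 = 0,
# whose roots must satisfy the lens equation theta_S = theta - 1/theta.
import math

def image_positions(theta_s):
    root = math.sqrt(theta_s ** 2 + 4.0)
    return 0.5 * (theta_s + root), 0.5 * (theta_s - root)

t1, t2 = image_positions(1.0)   # one image outside, one inside the ring
lens_eq_ok = all(abs((t - 1.0 / t) - 1.0) < 1e-12 for t in (t1, t2))
```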
When the source lies directly behind the lens as seen from the observer,
$\theta_S = 0$ and the two images merge into a ring of radius $\theta_E$,
the so-called ``Einstein ring.'' For all other source positions, one image
will lie inside $\theta_E$ and one outside. The flux observed from
each image is the integral of the image surface brightness
over the solid angle subtended by the (distorted) image. Since the specific
intensity of each ray is unchanged in the bending process, so is the
surface brightness. The magnification $A_{1, 2}$ for each image
is then just the ratio of the image area to the source area,
and is found formally by evaluating at the image positions the
determinant of the Jacobian mapping $J$ that describes the lensing
coordinate transformation from image to source plane:
\begin{equation}
A_{1, \, 2} = \left. \frac{1}{\left| \det J \right|} \right|_{\, \theta = \theta_{1, \, 2}}
= \left| \frac{\partial \, {\theta_S}}{\partial \, {\theta}} \right|^{-1}_{\, \theta = \theta_{1, \, 2}}~~~,
\end{equation}
where $\theta_S$ and $\theta$ are (angular) position vectors for the
source and image, respectively.
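For the point-mass lens this determinant can be evaluated in closed form at the
two image positions of Eq.~17; in terms of $u \equiv \theta_S/\theta_E$ the
standard result (quoted here for reference) is
\[
A_{1, \, 2} = \frac{1}{2} \left[ \frac{u^2 + 2}{u \, \sqrt{u^2 + 4}} \pm 1 \right] ~~~,
\]
so that the combined magnification $A_1 + A_2$ depends only on $u$.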
What is most important for detection of extra-solar planets around lenses
is not the position of the images but their magnification.
For stellar lenses and typical source and lens distances
within the Milky Way, the typical image separation ($\mathrel{\raise.3ex\hbox{$>$}\mkern-14mu\lower0.6ex\hbox{$\sim$}} 2\theta_E$)
is $\sim$1~milliarcsecond, too small
to be resolved with current optical telescopes. The observer
sees one image with a combined
magnification $A \equiv A_1 + A_2$ that can be quite large.
In order to distinguish intrinsically
bright background sources from fainter ones that appear bright due to
microlensing, the observer relies on the characteristic
brightening and dimming that occurs as motions within the Galaxy
sweep the source (nearly) behind the lens-observer line-of-sight.
The unresolved images also sweep across the sky (Fig.~11); their combined
brightness reaches its maximum when the source has its closest
projected distance to the lens.
\vglue -1.75cm
\hglue 1cm
\epsfxsize=10cm\epsffile{microgeometryskymoving.ps}
\vglue -1.75cm{\small Fig.~11 --- As a background source
(open circle) moves nearly behind a foreground lens (central dot),
the two microimages remain at every moment colinear with the lens
and source. (Adapted from Paczy\'nski 1996.)}
\vskip 0.4cm
For a single lens, the combined magnification
can be shown from Eqs.~17 and 18 to be:
\begin{equation}
A = \frac{u^2+2}{u \sqrt{u^2+4}} ~~~,
\end{equation}
where $u \equiv \theta_S/\theta_E$ is the angular source-lens separation
in units of the Einstein ring radius. For rectilinear motion,
$u(t) = \sqrt{(t - t_0)^2/t_E^2 + u^2_{min}}$, where
$t_0$ is the time at which $u$ is minimum and the magnification is maximum,
and $t_E \equiv \theta_E \, D_L / v_\perp$ is the characteristic
time scale defined as the time required for the lens to travel a
projected distance across the observer-source sightline
equal to the Einstein radius $r_E$.
The result is a symmetric light curve that has a magnification
of 1.34 as the source crosses the Einstein ring radius and a peak amplification
that is approximately inversely proportional to the source impact
parameter $u_{min}$. Since the $u_{min}$ are distributed randomly,
all of the light curves shown in Fig.~12 are equally probable.
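The single-lens magnification and the rectilinear $u(t)$ above can be combined into a short numerical sketch; this Python snippet is illustrative and not part of the original text:

```python
import math

def magnification(u):
    """Point-source point-lens magnification A(u) = (u^2+2)/(u*sqrt(u^2+4))."""
    return (u * u + 2.0) / (u * math.sqrt(u * u + 4.0))

def u_of_t(t, t0, tE, u_min):
    """Rectilinear source-lens separation in units of the Einstein radius."""
    return math.sqrt(((t - t0) / tE) ** 2 + u_min ** 2)

def paczynski_light_curve(times, t0, tE, u_min):
    """Symmetric single-lens light curve A(t) for a list of epochs."""
    return [magnification(u_of_t(t, t0, tE, u_min)) for t in times]
```

A source on the Einstein ring ($u = 1$) is magnified by $A = 3/\sqrt{5} \approx 1.34$, and for $u_{min} \ll 1$ the peak magnification is approximately $1/u_{min}$.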
\vglue -0.3cm
\hglue -2.25cm
\epsfxsize=10.5cm\epsffile{impactparams.ps}
\vglue -10cm
\hglue 4.5cm
\epsfxsize=8cm\epsffile{lcs.ps}
\vglue -3cm{\small Fig.~12 --- {\bf Left:} Equally-probable source
trajectories. {\bf Right:} The corresponding single microlens light curves.}
\vskip 0.4cm
Typical event durations $\hat t = 2 t_E$
for microlensing events detected in the direction of
the Galactic Bulge are on the order of a few weeks to a few months,
generally matching expectations for stellar lenses distributed in
the Galactic disk and bulge.
\subsection{Microlensing by Binary Lenses}
Microlensing was proposed as a method to detect compact baryonic
dark matter in the Milky Way by Paczy\'nski in 1986. In 1991,
Mao and Paczy\'nski suggested that not only dark lenses, but also any
dark planets orbiting them may be detected through their microlensing
influence on background stars.
The magnification patterns of a single lens are axially symmetric
and centered on the lens; the Einstein ring radius, for example,
describes the position of the $A \equiv A_1 + A_2 = 1.34$
magnification contour.
Binary lens structure destroys this symmetry:
the magnification patterns become distorted and are symmetric only
upon reflection about the binary axis.
Positions in the source plane for which the
determinant of the Jacobian (Eq.~18) is zero represent potential
source positions for which the magnification is formally infinite.
The locus of these positions is called a ``caustic.'' For a
single point-lens, the only such position is the point caustic
at $\theta_S = 0$, but the caustics of binary lenses are extended
and complicated in shape. In the lens plane, the condition $|{\rm det}~J| = 0$
defines a locus of points known as the critical curve; when the source
crosses a caustic a pair of new images of high amplification appear
with image positions ${\bf \theta}$ on the critical curve.
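The condition $|{\rm det}~J| = 0$ can be made concrete numerically. In complex notation (a standard device, though not developed explicitly in the text), the critical points of a two-point-mass lens solve a quartic for each phase angle, and mapping them through the lens equation traces the caustic; the sketch below assumes lenses on the real axis in units of the total Einstein radius:

```python
import numpy as np

def critical_and_caustic(m1, m2, b, n_phi=50):
    """Critical curve and caustic of a two-point-mass (binary) lens.

    Lenses sit at z = -b/2 and z = +b/2 with mass fractions m1 + m2 = 1.
    Critical points z satisfy m1/(z-z1)^2 + m2/(z-z2)^2 = exp(i*phi),
    a quartic in z for each phase phi; the caustic is their image under
    the lens equation zeta = z - m1/conj(z-z1) - m2/conj(z-z2).
    """
    z1, z2 = -b / 2.0, b / 2.0
    crit = []
    for phi in np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False):
        e = np.exp(1j * phi)
        # e*(z-z1)^2*(z-z2)^2 - m1*(z-z2)^2 - m2*(z-z1)^2 = 0
        poly = e * np.polymul(np.poly([z1, z1]), np.poly([z2, z2]))
        poly = np.polysub(poly, m1 * np.poly([z2, z2]))
        poly = np.polysub(poly, m2 * np.poly([z1, z1]))
        crit.extend(np.roots(poly))
    crit = np.array(crit)
    caustic = crit - m1 / np.conj(crit - z1) - m2 / np.conj(crit - z2)
    return crit, caustic
```

For equal masses at $b = 1$ this traces a single closed caustic similar to the one shown in Fig.~13.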
A static lens configuration has a fixed magnification pattern
relative to the lens;
the observed light curve is a one-dimensional cut through this pattern that
depends on the source path.
As Fig.~13 illustrates,
the exact path of the source trajectory behind a binary lens
will determine how much its light curve deviates from the simple
symmetric form characterizing a single lens. Due to the finite size of the
source, the magnification during a caustic crossing is not infinite,
but will be quite large for sources that are small compared to the
size of the caustic structure. Several binary-lens light curves have
already been observed and characterized (Udalski {\it et~al.}\ 1994, Alard, Mao \& Guibert 1995, Alcock {\it et~al.}\ 1997, Albrow {\it et~al.}\ 1998b, Albrow {\it et~al.}\ 1999).
\vglue -1cm
\hglue -1.25cm
\epsfxsize=8cm\epsffile{binarypaths.ps}
\vglue -8.25cm
\hglue 5.25cm
\epsfxsize=8cm\epsffile{binarylcs.ps}
\vglue -0.5cm{\small Fig.~13 --- {\bf Left:} The caustic (thick closed line)
for two equal mass lenses (dots) is shown with several possible source
trajectories. Angular distances are scaled to the Einstein ring radius of
the combined lens mass. {\bf Right:} The light curves
resulting from the source trajectories shown at left; the temporal axis
is normalized to the Einstein time $t_E$ for the combined lens. (Adapted from
Paczy\'nski 1996.)}
\vskip 0.4cm
A single lens light curve is described by four parameters:
the Einstein crossing time $t_E$, the impact parameter $u_{min}$,
the time of maximum amplification $t_0$, and
the unlensed flux of the source $F_0$. Only the first of these
contains information about the lens itself. Three additional
parameters are introduced for binary lenses:
the projected separation $b$ of the lenses in
units of $\theta_E$, the mass ratio $q$ of the two lenses, and
the angle $\phi$ that the source trajectory makes with respect
to the binary axis.
Given the large number of free parameters and the variety of complicated
forms that binary light curves can exhibit, it may seem quite
difficult to characterize the binary lens with any degree of
certainty on the basis of a single 1-D cut through its magnification
pattern. In fact, with good data the fitting procedure
is unique enough that the {\it future\/} behavior of
the complicated light curve --- including the timing of future
caustic crossings --- can be predicted in real time. This is important since
the ability to characterize extra-solar planets via microlensing
requires proper determination of the planetary system parameters
$b$ and $q$ through modeling of light curve anomalies.
\subsection{Planetary Microlensing}
The simplest planetary system is a binary consisting of a
stellar lens of mass $M_*$ orbited by a planet of mass $m_p$ at
an orbital separation $a$.
The parameter range of interest
is therefore $q \equiv m_p/M_* \approx 10^{-3}$ for jovian-mass planets and
$q \approx 10^{-5}$ for terrestrial-mass planets.
The normalized projected angular separation $b \mathrel{\raise.3ex\hbox{$<$}\mkern-14mu\lower0.6ex\hbox{$\sim$}} a/(\theta_E D_L)$
depends at any moment on the inclination and phase of the planetary orbit.
The light curve of a source passing behind a lensing planetary system
will depend on the form of the magnification pattern of the lensing
system, which is influenced by the size and position of the
caustics. How do the magnification patterns vary with $b$ and $q$?
\hglue -1.5cm
\epsfxsize=14cm\epsffile{binarycausticpanel.ps}
\vglue -4.25cm{\small Fig.~14 ---
Positive (magenta) and negative (blue) 1\% and 5\% {\it excess\/}
magnification contours for binary lenses (black dots)
of different projected separations $b$ and mass ratios $q$.
Caustics are shown in red. Dimensions are normalized
to the Einstein ring radius of the combined system (green circle).
Dashed and solid lines are two possible source
trajectories. (Adapted from Gaudi \& Sackett 1998.)}
\vskip 0.4cm
Shown in Fig.~14 is the {\it excess\/} magnification pattern of a binary
over that of a single lens for different separations $b$
and mass ratios $q$.
The deviations can be positive or negative. High-mass ratio binaries
({i.e.,}\ $q$ not too much less than 1)
are easier to detect since their excess magnification contours
cover a larger sky area
making it more likely that a source trajectory will cross an
``anomalous'' region. For a given mass ratio $q$, the 1\% and 5\%
excess magnification contours also cover more sky when the binary
separation is comparable to the Einstein ring radius of the system,
{i.e.,}\ whenever $b \approx 1$.
The symmetric caustic structure centered between equal mass ($q = 1$)
binaries becomes elongated along the binary axis for smaller mass
ratios, eventually splitting the caustic into a central caustic
centered on the primary lens and outer ``planetary'' caustics.
For planetary separations larger than the Einstein ring radius $b > 1$,
the planetary caustic is situated on the binary axis between the lens
and planet. For $b < 1$, the
planetary caustics are two ``tips'' that are symmetrically positioned
above and below the binary axis on the opposite side of the lens
from the planet. As the mass ratio decreases, all the caustics
shrink in size and the two ``tips'' approach the binary axis, nearly ---
but not quite --- merging.
\subsubsection{The ``Lensing Zone''}
For the planetary (small $q$) regime, a source that
crosses the central caustic will generate new images near the Einstein
ring of the primary lens; a source crossing a planetary caustic will
generate new images near the Einstein ring of the planet, {i.e.,}\ near
the position of the planet itself. Planets with separations
$0.6 \mathrel{\raise.3ex\hbox{$<$}\mkern-14mu\lower0.6ex\hbox{$\sim$}} b \mathrel{\raise.3ex\hbox{$<$}\mkern-14mu\lower0.6ex\hbox{$\sim$}} 1.6$ create planetary caustics inside
the Einstein ring radius of the parent lensing star; this is
the region through which the source must pass in order for the
event to be alerted by the microlensing survey teams.
For this reason, planets with projected separations
$0.6 \mathrel{\raise.3ex\hbox{$<$}\mkern-14mu\lower0.6ex\hbox{$\sim$}} b \mathrel{\raise.3ex\hbox{$<$}\mkern-14mu\lower0.6ex\hbox{$\sim$}} 1.6$ are said to lie in the
{\it ``lensing zone.''\/}
Since the separation $b$ is normalized to the size of the Einstein
ring, the physical size of the lensing zone will depend on
the lens mass and on the lens and source distances. Most of
the microlensing events in the Milky Way are detected in the direction
of the Galactic bulge where, at least for the bright red clump sources,
it is reasonable to assume that the sources lie at $D_S \approx 8 \, $kpc.
Table I shows the size of the lensing zone for foreground
lenses located in the disk ($D_L = 4 \, $kpc) and bulge ($D_L = 6 \, $kpc)
for typical stellar masses, assuming that $D_S = 8 \, $kpc.
One of the reasons that microlensing is such an attractive method
to search for extra-solar planets is that the typical lensing zone
corresponds to projected separations
of a few times the Earth-Sun distance (AU) --- a good match to many planets
in the Solar System. Planets orbiting at a radius $a$
in a plane inclined by $i$ with respect to the plane of the sky will
traverse a range of projected separations
$a \, \cos{i}/(\theta_E \, D_L) < b < a/ (\theta_E \, D_L)$,
and can thus be brought into the lensing zone of their primary even
if their orbital radius is larger than the values given in Table I.
{\small
\begin{center}
\begin{tabular}{l r r}
\noalign{\medskip\hrule\smallskip}
\multicolumn{3}{c}{TABLE I. Typical Lensing Zones for Galactic Lenses}\\
\noalign{\hrule}
\noalign{\smallskip\hrule\smallskip}
\medskip
~~~~ & disk lens (4 kpc) & bulge lens (6 kpc) \\
1.0 $ \rm {M_{\odot}} $ solar-type & 2.4 - 6.4 AU & 2.1 - 5.5 AU \\
0.3 $ \rm {M_{\odot}} $ dwarf & 1.3 - 3.5 AU & 1.1 - 3.0 AU \\
\noalign{\medskip\hrule\smallskip}
\end{tabular}
\end{center}
}
\vskip 0.25cm
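The entries of Table I follow from the definition of the physical Einstein radius, $r_E = \sqrt{(4GM/c^2) \, D_L (D_S - D_L)/D_S}$; this sketch reproduces them with standard constants (the specific numerical values and rounding are mine):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
KPC = 3.0857e19      # kiloparsec, m
AU = 1.496e11        # astronomical unit, m

def einstein_radius_au(mass_msun, d_l_kpc, d_s_kpc=8.0):
    """Physical Einstein radius r_E = sqrt(4GM/c^2 * D_L(D_S-D_L)/D_S), in AU."""
    m = mass_msun * M_SUN
    d_l, d_s = d_l_kpc * KPC, d_s_kpc * KPC
    r_e = math.sqrt(4.0 * G * m / C ** 2 * d_l * (d_s - d_l) / d_s)
    return r_e / AU

def lensing_zone_au(mass_msun, d_l_kpc, d_s_kpc=8.0):
    """Lensing zone, 0.6 r_E to 1.6 r_E, as projected separations in AU."""
    r_e = einstein_radius_au(mass_msun, d_l_kpc, d_s_kpc)
    return 0.6 * r_e, 1.6 * r_e
```

A solar-mass disk lens at 4 kpc (with the source at 8 kpc) gives a lensing zone of roughly 2.4 to 6.5 AU, matching the first row of Table I to rounding.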
Planets that are seldom or never brought into the lensing zone of
their primary can still be detected by microlensing in one of two ways.
Either the light curve must be monitored for source positions outside
the Einstein ring radius of the primary ({i.e.,}\ for magnifications $A < 1.34$)
in order to have sensitivity to the isolated, outer planetary caustics
(DiStefano \& Scalzo 1999),
or very high amplification events must be monitored in
order to sense the deviations that are caused by any planet on the
central primary caustic (Griest \& Safizadeh 1998).
\subsubsection{Determining the Planet-Star Mass Ratio and Projected Separation}
The generation of caustic structure and the anomalous magnification
pattern associated with it makes planetary masses orbiting stellar lenses
easier to detect than isolated lensing planets. Even so, most planetary
light curves will be anomalous because the source passed near, but
not across a caustic (Fig.~15).
How are the projected planet-star separation $b$
and the planet-star\\
\vglue -4.25cm
\hglue -1cm
\epsfxsize=13cm\epsffile{pgram.eps}
\vglue -3.75cm{\small Fig.~15 ---
{\bf Left:} A background point source travels along the (blue) trajectory
that just misses the (red) caustic structure caused by a ``Jupiter'' with
mass ratio $q=0.001$ located at 1.3 Einstein ring radii (several AU)
from its parent stellar lens. {\bf Right:} The resulting light curve is
shown in the top panel; the excess magnification $\delta$ associated
with the planetary anomaly is shown in the bottom panel; time scale
is in days.}
\noindent
mass ratio $q = m_p/M_*$ extracted
from a planetary anomaly atop an otherwise normal microlensing
light curve? In practice, the morphology of planetary light curve anomalies
is quite complex, and detailed modeling of the excess magnification
pattern (the anomalous change in the pattern due to the planet)
is required, but the general principles can be easily understood.
\hglue -1.5cm
\epsfxsize=14cm\epsffile{binarylcpanel.ps}
\vglue -4.5cm{\small Fig.~16 ---
Excess magnifications $\delta$ for the (solid and dotted) trajectories
of Fig.~14 are shown for (the same) range of planetary mass ratios
and projected separations. ``Super-jupiters'' with $q \sim 0.01$
should create detectable anomalies for a significant fraction of
source trajectories in high quality light curves. (Adapted from
Gaudi \& Sackett 1998.)}
\vskip 0.4cm
Since the planet and parent star lenses are
at the same distance $D_L$ and travel across the line of sight
with the same velocity $v_\perp$ (ignoring the planet's orbital motion),
Eq.~16 shows that
the mass ratio $q$ is equal to the square of the ratio of the Einstein ring
radii $(\theta_p/\theta_E)^2$. Observationally this can
be estimated very roughly by the square of the ratio of the
planetary anomaly duration to the primary event duration, $(t_p/t_E)^2$.
The time difference between the primary and anomalous
peaks (normalized to the Einstein time) gives an indication of the
placement of the caustic structure within the Einstein ring and thus
the position of the planet relative to the primary lens, $b$.
The amplitude of the anomaly $\delta \equiv (A - A_0)/A_0$, where
$A_0$ is the unperturbed amplitude,
indicates the closest approach to the caustic
structure and, together with the temporal placement of the anomaly,
yields the source trajectory angle through the magnification pattern.
Since the magnification pattern associated with planetary caustics
for $b > 1$ and $b < 1$ planets is qualitatively different, detailed
dense monitoring should resolve
any ambiguity in the planetary position.
Light curve anomalies associated with $b > 1$ planets, like the one
in Fig.~15, will have relatively large central peaks in $\delta$ surrounded by
shallow valleys; $b < 1$ anomalies will generally have more rapidly
varying alterations of positive and negative excess magnification,
though individual exceptions can certainly be found.
From the shape of light curve anomalies alone, the mass of the
planet is determined as a fraction of the primary lens
mass; its instantaneous projected separation is determined
as a fraction of the primary Einstein radius.
Reasonable assumptions about the kinematics, distribution, and
masses of the primary stellar lenses, together with measurements of the
primary event duration $2 t_E$ and fraction of blended light
from the lens should allow $r_E$ and $M_*$
to be determined to within a factor $\sim3 - 5$.
Detailed measurements of the planetary anomaly would then yield
the absolute projected separation and planetary mass to about
the same precision.
\subsubsection{Durations and Amplitudes of Planetary Anomalies}
It is clear from Figs.~14 and 16 that, depending on the source
trajectory, a range of anomaly durations $t_p$ and
amplitudes $\delta$ are possible for a planetary
system of given $q$ and $b$ (see also Wambsganss 1997).
Nevertheless, rough scaling relations
can be developed to estimate the time scales and amplitudes that
will determine the photometric sampling rate and precision required
for reasonable detection efficiencies to microlensing planetary systems.
For small mass ratios $q$,
the region of excess magnification associated with
the planetary caustic is a long, roughly linear region with a width
approximately equal to the Einstein ring of the planet, $\theta_p$, and
a length along the planet-lens axis several times larger.
Since $\theta_p = \sqrt{q} \, \theta_E$,
both the time scale of the duration and the cross section
presented to a (small) source vary linearly with $\theta_p/\theta_E$
and thus with $\sqrt{q}$.
Assuming a typical $t_E = 20$ days, the duration of the planetary
anomaly is given roughly by the time to cross the planetary
Einstein diameter, $2 \, \theta_p$,
\begin{equation}
{\rm planet~anomaly~duration~} = 2 \, t_p \approx {\rm 1.7 \, hrs} \,
(m/ \rm {M_{\oplus}} )^{1/2} (M/ \rm {M_{\odot}} )^{-1/2}.
\end{equation}
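Eq.~20 can be evaluated directly; this helper (with the text's assumed $t_E = 20$ days folded into the 1.7 hr prefactor) is illustrative:

```python
def anomaly_duration_hours(m_planet_earths, m_star_suns=1.0):
    """Planetary anomaly duration 2*t_p (Eq. 20), in hours:
    ~1.7 hr * (m_p/M_Earth)^(1/2) * (M_*/M_Sun)^(-1/2),
    assuming a typical primary Einstein time t_E = 20 days."""
    return 1.7 * m_planet_earths ** 0.5 / m_star_suns ** 0.5
```

An Earth-mass planet around a solar-mass lens gives about 1.7 hr; a Jupiter-mass planet ($\approx 318 \, \rm M_{\oplus}$) gives about 30 hr, i.e. the 1 to 2 day anomalies discussed later in this section.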
Caustic crossings can occur for any planetary mass ratio and should be
easy to detect as long as the temporal sampling is well matched to
the time scales above. Most anomalies, however, will be more gentle
perturbations associated with crossing lower amplitude excess
magnification contours.
At the most favorable
projected lens-planet separation of $b=1.3$, and the ideal
lens location (halfway to the Galactic Center), well-sampled
observations able to detect 5\% perturbations in the light curve
would have planet sensitivities given roughly by (Gould \& Loeb 1992):
\begin{equation}
{\rm ideal~detection~sensitivity~} \approx 1\%~
(m/ \rm {M_{\oplus}} )^{1/2} (M/ \rm {M_{\odot}} )^{-1/2}
\end{equation}
\noindent
This ideal sensitivity is relevant only for planets at $b=1.3$; at the
edges of the lensing zone the probabilities are about halved.
Detection with this sensitivity requires photometry at the 1\%
level, well-sampled over the duration of the planetary event.
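The Gould \& Loeb scaling of Eq.~21 is likewise a one-line evaluation; the edge-of-lensing-zone halving mentioned above is included here as an optional, illustrative correction:

```python
def ideal_sensitivity_percent(m_planet_earths, m_star_suns=1.0,
                              at_zone_edge=False):
    """Ideal detection sensitivity (Eq. 21), in percent, for a planet at the
    most favorable separation b = 1.3 with 5% photometric perturbations:
    ~1% * (m_p/M_Earth)^(1/2) * (M_*/M_Sun)^(-1/2).
    At the edges of the lensing zone the probability is roughly halved."""
    s = 1.0 * m_planet_earths ** 0.5 / m_star_suns ** 0.5
    return 0.5 * s if at_zone_edge else s
```

A Jupiter-mass planet at $b = 1.3$ thus has an ideal sensitivity of roughly 18\%, consistent with the "few tens of percent" efficiencies quoted later for lensing-zone jupiters.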
\vglue -0.5cm
\hglue 0.25cm
\epsfxsize=11.5cm\epsffile{m13.ps}
\vglue -1cm{\small Fig.~17 --- PLANET collaboration
monitoring of MACHO-BLG-95-13 in the I (upper) and V (lower) bands.
Insets show a zoom around the peak of the event; arrows indicate
points taken many months to more than a year later. Vertical scale is
magnitudes; horizontal scale is days (Albrow {\it et~al.}\ 1998a).}
\vskip 0.4cm
Can such photometric precision and temporal
sampling be obtained in the crowded fields
of the Galactic bulge where nearly all microlensing events are discovered?
Fig.~17 shows observations of one bright microlensing event
monitored by the PLANET collaboration during its month-long pilot
season in 1995 (Albrow {\it et~al.}\ 1998a).
The residuals from the single point-lens/point source
light curve are less than 1\% for this event, and the typical
sampling rate is on the order of once every 1-2 hours, even
accounting for unavoidable longitudinal and weather gaps.
A true calculation of detection probabilities must integrate over the
projected orbital separations and the distribution of
lenses along the line of sight, must take into account
the actual distribution of source trajectories probed by
a particular set of observations, and the effect of uneven temporal
sampling and photometric precision (Gaudi \& Sackett 1998).
In the following section we
discuss the additional complication of finite source effects that
is encountered for very small mass planets for which the size of
the planetary Einstein ring is comparable to or smaller than the source size,
$\theta_p \mathrel{\raise.3ex\hbox{$<$}\mkern-14mu\lower0.6ex\hbox{$\sim$}} \theta_*$.
\subsection{Observational Rewards and Challenges}
What can be learned by observing a planetary anomaly in a microlensing
light curve? The duration, temporal placement relative to the event
peak, and relative amplitude of the anomaly can be used to determine
the mass ratio $q$ of the planetary companion to the primary (presumably
stellar) lens and their projected angular separation $b$ in units of
the Einstein ring radius $\theta_E$.
Since in general the lens will be far too distant to type spectrally
against the bright background source (except possibly with
very large apertures, see Mao, Reetz \& Lennon 1998), the
absolute mass and separation must be determined statistically by
fitting the properties of an ensemble of events with reasonable
Galactic assumptions. Measurements of other sorts of microlensing
anomalies associated with source resolution, observer motion, or
lens blending can produce additional constraints on the lens properties
and thus on the absolute planetary characteristics.
Except for very large $a$ planets orbiting in nearly face-on
($i \approx 0$) orbits, cooperative lensing effects between the lens and companion boost the detectability of lensing planets over that expected
for planets in isolation. Since current detection and monitoring schemes focus
on those events with an essentially random distribution of impact parameters
$u_{min}$ for $u_{min} < 1$, the anomaly sensitivity is primarily
restricted to planets in the ``lensing zone'' with projected separations of
0.6 -- 1.6 times the Einstein ring radius of the primary lens.
For typical distributions of lens masses, and lens and source distances,
this translates into the highest probabilities for planets with
instantaneous orbital separations {\it projected onto the sky plane\/} of
$a_p \approx 5 \,$AU, with a zone of reduced detectability extending to
higher $a_p$. Since the efficiency of planetary detection in these zones
is likely to be a few or a few tens of percent (Eq.~21), many
microlensing events must be monitored with $\sim$1\% photometry on
the $\sim$hourly time scales (Eq.~20) to achieve statistically meaningful
constraints on the number of planets in the Milky Way, and their distribution
in mass and orbital radius.
What limits the photometric precision and temporal sampling?
Round-the-clock monitoring of events, necessary for maximum sensitivity
to the 1 -- 2 day durations of Jupiter-mass anomalies requires telescopes
scattered in geographical longitude, at the South Pole, or in space.
Temporal sampling is limited by the number of
events monitored at any given time, their brightness, and the desired level of
photometric precision. Higher signal-to-noise can generally be
obtained for brighter stars in less exposure time, but
ultimately, in the crowded fields that typify any
microlensing experiment, photometric precision is limited by
confusion from neighboring sources, not photon statistics. Pushing
below $\sim$1\% relative photometry with current techniques
has proven very difficult in crowded fields.
If an anomaly is detected, it must be distinguished from other
intrinsic effects that could be confused with a lensing planet.
Stellar pulsation on daily to sub-daily time scales in giant
and sub-giant bulge stars is unlikely,
but this and any form of regular variability
would easily be recognized as periodic ({i.e.,}\ non-microlensing)
with the dense and precise
sampling that is required in microlensing monitoring designed to
detect planets. Star-spot activity may be non-negligible in giants,
but will have a time scale characteristic of the rotation period
of giants, and thus much longer than typical planetary anomalies.
In faint dwarf stars spotting activity produces flux changes below
the photometric precision of current experiments. Flare activity
should not be significant for giants and is expected to be chromatic,
whereas the microlensing signal will always be achromatic
(except in the case of source resolution by exceedingly low-mass
planets).
Blending (complete photometric confusion) by random,
unrelated stars along the line-of-sight can dilute the apparent
amplitude of the primary lensing event. This will have some effect
on the detection efficiencies, but most significantly
--- with data of insufficient sampling and photometric
precision --- will lead to underestimates for the time scale
$t_E$ and impact parameter $u_{min}$ of the primary event, and thus also to
mis-estimates of the planetary mass ratio $q$ and projected separation $b$.
\subsubsection{Pushing the Limits: Earth-mass and Multiple Planets}
Planets with very small mass ratio will have caustic structure
smaller than the angular size of typical bulge giants. The ensuing
dilution of the anomaly by finite source effects will present a large,
but perhaps not insurmountable, challenge to pushing the microlensing
planet detection technique into the regime of terrestrial-mass planets
(Peale 1997, Sackett 1997).
Near small-mass planetary caustics,
different parts of the source simultaneously cross regions
of significantly different excess magnification;
an integration over source area is required in order to derive the
total magnification. The severity of the effect can be seen in
Fig.~18. Earth-mass caustic crossings against even the smaller-radii bulge
sources will present a challenge to current photometry in
crowded fields, which is generally limited by seeing to stars above
the turn-off mass.
The most numerous, bright microlensing sources in the bulge
are clump giants with radii
about 13 times that of the Sun (13 $R_ {\odot} $), and thus angular radii
of 7.6 microarcseconds ($\mu$as) at 8 kpc.
Since a Jupiter-mass planet with $m_p = 10^{-3} \rm {M_{\odot}} $ has an
angular Einstein ring radius of 32 $\mu$as at 4 kpc and 19 $\mu$as at 6 kpc,
its caustic structure is large compared to the size of the source.
An Earth-mass
planet with $m = 3 \times 10^{-6} \rm {M_{\odot}} $, on the other hand,
has an angular Einstein ring radius of 1.7 $\mu$as at 4 kpc and
1 $\mu$as at 6 kpc, and will thus suffer slight finite source effects
even against turn-off stars (1.7 $\mu$as), though
the effect will be greatly reduced compared to giant sources
(Bennett \& Rhie 1996).
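The angular scales quoted in this paragraph follow from the angular Einstein radius, $\theta_E = \sqrt{(4Gm/c^2)(D_S - D_L)/(D_L D_S)}$; this sketch uses standard constants (the text's values are rounded):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg
KPC = 3.0857e19    # kiloparsec, m
UAS = math.radians(1.0 / 3600.0) * 1e-6   # one microarcsecond, in radians

def einstein_angle_uas(mass_msun, d_l_kpc, d_s_kpc=8.0):
    """Angular Einstein radius theta_E = sqrt(4Gm/c^2 * (D_S-D_L)/(D_L*D_S)),
    returned in microarcseconds."""
    m = mass_msun * M_SUN
    d_l, d_s = d_l_kpc * KPC, d_s_kpc * KPC
    theta = math.sqrt(4.0 * G * m / C ** 2 * (d_s - d_l) / (d_l * d_s))
    return theta / UAS
```

A Jupiter-mass lens ($10^{-3} \, \rm M_{\odot}$) at 4 kpc gives $\theta_p \approx 32 \, \mu$as, large compared to the $7.6 \, \mu$as clump-giant source; an Earth-mass lens gives only $\approx 1.7 \, \mu$as, comparable to a turn-off star.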
\vglue -0.75cm
\hglue -0.75cm
\epsfxsize=7.5cm\epsffile{eplanetpaths.ps}
\vglue -7.5cm
\hglue 5.5cm
\epsfxsize=7.5cm\epsffile{eplanetlcs.ps}
\vglue -0.75cm{\small Fig.~18 --- {\bf Left:} A source of angular
size $\theta_* = 0.001 \, \theta_E$, typical of turn-off stars in the bulge,
crosses the central caustic caused by a terrestrial-mass planet with
mass ratio $q = 10^{-5}$. {\bf Right:} Due to source resolution effects,
the resulting anomaly differs from single-lens microlensing only
at the $\sim$1\% -- 3\% level. Note the expanded spatial and temporal
scales. (Adapted from Paczy\'nski 1996.)}
\vskip 0.4cm
For extreme source resolution, in which the entire planetary caustic
lies inside the projected source radius, the {\it excess fractional\/}
magnification associated with the planetary anomaly scales with the square of
the ratio of the planetary Einstein ring radius to the angular source size,
$\delta \propto (\theta_p/\theta_*)^2$.
On the other hand, source-resolved small $q$ anomalies will have
longer durations than implied by Eq.~20, since the characteristic time scale
is the time to cross the source $\theta_*$ (not $\theta_p$).
Furthermore, the cross section for magnification at a given threshold now
roughly scales with $\theta_*/\theta_E$ (not $\theta_p/\theta_E$),
and is thus approximately independent of planetary mass.
Because the anomaly amplitude is suppressed by source resolution,
unless the photometry is excellent and continuous, small-mass
planetary caustic crossings can be confused
with large impact parameter large-mass planetary anomalies. This
degeneracy can be removed by performing multi-band observations
(Gaudi \& Gould 1997): large impact parameter events
will be achromatic, but sources resolved by small-mass caustics
will have a chromatic signal due to source limb-darkening that is similar
to (but of opposite sign from) that expected for planetary transits
(\S2.3.1). Source limb-darkening and chromaticity
have now been observed during a caustic crossing of a stellar binary
(Albrow {\it et~al.}\ 1998b).
\vglue -0.5cm
\hglue 1.5cm
\epsfxsize=9.5cm\epsffile{multzoneprobs.ps}
\vglue -1.0cm{\small Fig.~19 --- {\bf Top:} Probability that
two planets with true orbital radii $a_1$ and $a_2$
(in units of the Einstein ring $r_E$)
simultaneously have projected separations,
$b_1$ and $b_2$, in the standard ``lensing
zone,'' defined as $0.6 < b < 1.6$. The probability is shown
as function of $a_2$, for fixed $a_1=1.5$ (solid),
$a_1=0.6$ (dotted) and $a_1=2.7$ (dashed).
The probability for two planets with orbital radii of Jupiter and Saturn
around solar-mass primary (star) and a $0.3M_{\odot}$ primary (dot)
are shown.
{\bf Middle:} Same, but for the extended ``lensing zone,''
$0.5 < b < 2.0$. {\bf Bottom:} The conditional probability that
both $b_1$ and $b_2$ lie in the extended ``lensing
zone,'' given that either $b_1$ or $b_2$ satisfies this criterion.
(Gaudi, Naber \& Sackett 1998.)}
\vskip 0.3cm
Finally, since all planetary lenses create a central caustic, low-impact
parameter (high magnification) microlensing events that project the source
close to the central caustic are especially efficient in producing
detectable planetary anomalies (Griest \& Safizadeh 1998).
For the same reason, however, the central caustic is affected by {\it all\/}
planets in the system, and so --- if possible degeneracies due to the
increased caustic complexity can be removed ---
rare, low impact parameter events offer a promising way of simultaneously
detecting multiple planets brought into or near the lensing zone by
their orbital motion around the primary lens (Gaudi, Naber \& Sackett 1998).
As Fig.~19 demonstrates, the statistical probabilities are large
that a Jupiter or 47UMa
orbiting a solar-mass star (solid and dotted lines, respectively) will
instantaneously share the lensing zone with other planets of
orbital radii of several AU. However, the light curves resulting from crossing
a multiple-planet central caustic may be difficult to interpret since
the caustic structure is so complicated (Fig.~20).
\vglue -0.1cm
\hglue 0.5cm
\epsfxsize=9.5cm\epsffile{multcaustics.ps}
\vglue -0.25cm{\small Fig.~20 ---
Contours of 5\% and 20\% fractional deviation $\delta$,
as a function of source position in units of $\theta_E$.
The parameters of planet 1 are held fixed
at $q_1=0.003$, $b_1=1.2$; the projected separation
$b_2$ and the angle between the axes, $\Delta\theta$, are varied
for a second planet with $q_2=0.001$. Only planet 1 is present
in the bottom offset panel. Positive (red), negative (blue) and caustic
($\delta=\infty$, thick line) contours are shown.
(Gaudi, Naber \& Sackett 1998.) }
\section{Photometric Mapping of Unseen Planetary Systems: \\
$~~~~~~$Matching the Tool to the Task}
Both the transit and microlensing techniques use frequent, high-precision
monitoring of many stars to discover the presence of the
unseen extra-solar planets. The transit method monitors the light
from the parent star in search of occultation by an unseen planet;
the microlensing technique monitors light from an unrelated
background source in search of lensing by an unseen planet orbiting
an unseen parent star.
Indeed, microlensing is the only extra-solar planetary search technique
that requires {\it no photons from either the planet or the parent star\/}
and for this reason is the method most sensitive to the
distant planetary systems in our Galaxy.
The two techniques are complementary, both in terms of the information
that they provide about discovered systems, and in terms of the
range of planetary system parameters to which they are most sensitive.
Multiple transit measurements of the same planet will yield its planetary
radius $R_p$ and orbital radius $a$. Characterization of a microlensing
planetary anomaly gives the mass ratio $q \equiv m_p/M_*$
of the planet to lensing star
and their projected separation $b$ at the time of the anomaly in units
of $\theta_E$.
\hglue 2.0cm
\epsfxsize=8cm\epsffile{vrastlglg.ps}
\vglue -0cm{\small Fig.~21 --- Current detection thresholds
for long-running programs that rely on planet orbital
motion, shown as a function of planetary mass ratio and orbital radius.
The occultation threshold must be multiplied by the
appropriate geometric probability of a transit to derive detection
efficiencies.
Selected Solar System planets are shown.}
\vskip 0.4cm
Current ground-based photometry is sensitive to jovian-size
occultations; space-based missions may extend this into the
terrestrial regime. The transit method is sensitive
to planets with small $a$ because they create a detectable
transit over a wider range of inclinations
of their orbital planes, and because they transit more often
within a typical 5-year experiment. These constraints limit the
range of orbital radii to about $0.02 \mathrel{\raise.3ex\hbox{$<$}\mkern-14mu\lower0.6ex\hbox{$\sim$}} a \mathrel{\raise.3ex\hbox{$<$}\mkern-14mu\lower0.6ex\hbox{$\sim$}} 1.5 \, $AU
for jovian-size planets. If improvements in photometric precision allowed
the detection of Earth-size planetary transits, this range of orbital
radii would still apply, but detections would suffer from noise due to
star-spot activity on time scales that could be confused with transits by
small-mass planets with $a \mathrel{\raise.3ex\hbox{$<$}\mkern-14mu\lower0.6ex\hbox{$\sim$}} 0.3\, {\rm AU}$.
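Both effects favoring small $a$ can be made concrete with a rough numerical sketch (the solar-type stellar radius, circular orbits, and 5-year duration below are illustrative assumptions consistent with the discussion above):

```python
R_STAR_AU = 0.00465   # assumed solar radius in AU (solar-type parent star)
T_EXP_YR = 5.0        # typical experiment duration quoted in the text

def transit_probability(a_au):
    """Geometric probability that a randomly inclined circular orbit of
    radius a produces a transit across the stellar disk: p ~ R_*/a."""
    return R_STAR_AU / a_au

def transits_per_experiment(a_au):
    """Number of transits within T_EXP_YR years; the period is
    P = a^(3/2) yr by Kepler's third law for a solar-mass star."""
    return T_EXP_YR / a_au**1.5

for a in (0.05, 0.3, 1.0, 1.5):
    print(f"a = {a:4.2f} AU: p_transit = {transit_probability(a):.4f}, "
          f"transits per 5 yr = {transits_per_experiment(a):.1f}")
```

Both factors fall steeply with $a$, which is why the practical range for jovian-size planets tops out near 1.5~AU.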
Since the planets in our
own solar system fall roughly on the same log$ \, m_p$--log$ \, R_p$
relationship, it is reasonable to assume that jovian-size planets
may also have jovian masses. This assumption was used to place
the current transit detection capability on the same plot (Fig.~21)
with the current detection thresholds for radial velocity
and astrometric techniques. All three of these techniques require
long-term projects to detect long-period (large $a$) planets since
the measurements of velocity, position, or flux must be collected
over at least one full orbital period. The occultation threshold
must be convolved with the geometric transit probability to
derive efficiencies {\it per observed star\/}.
Photometric precision in crowded fields together with
source resolution effects limit current microlensing planet searches
to Neptune masses and above. The actual efficiency with which
a given planetary system can be detected depends on its mass ratio
and projected orbital separation. Fig.~22 shows estimates of
microlensing detection efficiency contours for planets of a given mass
ratio and true orbital separation (in units of the Einstein radius).
The contours are based on the work of Gaudi \& Gould (1997) for
high-mass ratios, Gould \& Loeb (1992) and
Gaudi \& Sackett (1998) for intermediate mass ratios,
and Bennett \& Rhie (1996) for small ratios. Integrations have
been performed over the unknown but presumably randomly oriented
inclinations and orbital phases.
Although planets with $a$ in
the lensing zone and orbiting in the sky plane
are the easiest to detect, a tail of sensitivity extends to
larger $a$ as well because inclination effects will bring large-$a$ planets
into the projected lensing zone for some phases of
their orbits. The efficiencies assume $\approx$1\% photometry
well-sampled over the post-alert part of the microlensing light curve.
Examination of Fig.~22 makes it clear that different indirect planetary
search techniques are sensitive to different portions of the
log$ \, m_p$--log$ \, a$ domain. Current ground-based
capabilities favor the radial velocity method for short-period
($a \mathrel{\raise.3ex\hbox{$<$}\mkern-14mu\lower0.6ex\hbox{$\sim$}} 3$~AU) planets (see also Queloz, this proceedings).
The occultation method will help populate the short-period part
of the diagram, and if the programs are carried into space, will
begin to probe the regime of terrestrial-sized planets in terrestrial
environments.
Ground-based astrometry is favored for very
long-period ($a \mathrel{\raise.3ex\hbox{$>$}\mkern-14mu\lower0.6ex\hbox{$\sim$}} 40$~AU) planets,
although the time scales for detection and confirmation are
then on the order of decades. Space-based astrometry promises
to make this method substantially more efficient, perhaps by a factor
of 100 (see also Colavita, this proceedings).
Microlensing is the only technique
capable of detecting in the near term substantial numbers of
intermediate-period ($3 \mathrel{\raise.3ex\hbox{$<$}\mkern-14mu\lower0.6ex\hbox{$\sim$}} a \mathrel{\raise.3ex\hbox{$<$}\mkern-14mu\lower0.6ex\hbox{$\sim$}} 12$~AU) planets.
Somewhat longer period planets may also be discovered by
microlensing survey projects as independent ``repeating'' events
in which the primary lens and distant planet act as independent lenses
(DiStefano \& Scalzo 1999).
Very short-period planets interior to 0.1~AU may be detectable using the
light echo technique (Bromley 1992, Seager \& Sasselov 1998,
Charbonneau, Jhu \& Noyes 1998), at least for parent
stars with substantial flare activity, such as late M dwarfs.
\vglue -0.75cm
\hglue 1.5cm
\epsfxsize=10cm\epsffile{scottmicroeff.ps}
\vglue -8.66cm
\hglue 2.2cm
\epsfxsize=7.85cm\epsffile{vrastcompare.ps}
\vglue 0.25cm{\small Fig.~22 --- Estimated detection efficiency
contours for microlensing planet searches as a function of the logarithm
of the planetary mass ratio $q \equiv m_p/M_*$ and the true orbital
separation $a$ in units of the Einstein ring radius. Efficiencies
have been integrated over the phase and inclination of the orbits,
under the assumption that they are circular.
To make comparisons with other techniques, the Einstein ring radius
is taken to be 3.5~AU. Solid lines indicate
what can be achieved in an observational program of 5-years
duration or less. Note that the vertical scale remains logarithmic,
but the horizontal scale is now linear. }
\vskip 0.4cm
The techniques are complementary in another sense as well.
Those that rely on light from the parent star will be limited to
nearby planetary systems, but will benefit from the ability to
do later follow-up studies, including spectroscopy and interferometry.
Microlensing, on the other hand, will see the evidence of a given
planetary system only once, but can probe planetary frequency in
distant parts of the Milky Way, and can collect statistics over a wide range
of orbital separations in a relatively short time.
\vskip 0.25cm
{\small
\begin{center}
\begin{tabular}{l c c}
\noalign{\medskip\hrule\smallskip}
\multicolumn{3}{c}{TABLE II. Comparison of Current Ground-Based Capabilities}\\
\noalign{\hrule}
\noalign{\smallskip\hrule\bigskip}
\medskip
~~~~ & OCCULTATION & MICROLENSING \\
Parameters Determined & $R_p$, $a$, $i$
& $q \equiv m_p/M_*$, \\
& & $b \equiv a_p/R_E$ \\
Photometric Precision of & 0.1\% & 1\% \\
~~~Limits $R_p$ or $m_p$ to & Neptune & Neptune \\
Orbital Radius Sensitivity &$\sim 0.02 - 1.5\,$AU
& $\sim 1 - 12\,$AU \\
Typical Distance of Systems & $ < 1\,$kpc & $ 4 - 7\,$kpc \\
Number of Stars to be Monitored & few $10^3$ & few $10^2$ \\
~~~for Meaningful Jovian Sensitivity at & $\mathrel{\raise.3ex\hbox{$<$}\mkern-14mu\lower0.6ex\hbox{$\sim$}}$1 AU & $\sim$5 AU \\
& & \\
{\it In Principle Possible to Detect:\medskip} & & \\
Multiple Planets & yes & yes \\
Planets around Binary Parent Stars & yes & yes \\
Earth-mass Planets in Future & yes (space) & yes \\
\noalign{\medskip\hrule\smallskip}
\end{tabular}
\end{center}
}
\subsection{Toward the Future}
The field of extra-solar planets is evolving rapidly.
The number of groups conducting transit and microlensing planet
searches, planning future programs, and providing theoretical support
is growing at an ever-increasing rate.
For that reason, this series of lectures has centered on the
principles of the techniques rather than reviewing the current
players. In order to help the reader keep pace with this
accelerating activity, however, a list of relevant Internet Resources
with links to occultation and microlensing planet search groups
is provided at the end of this section.
What can we expect in the next decade from these research teams?
Several ground-based transit searches are already underway
(Villanova University, TEP, WASP, Vulcan, and EXPORT).
Some focus on high-quality
photometry of known eclipsing binaries. This is likely to increase
the probability of transits --- if planets exist in such binary systems.
Two transit-search teams recently issued
(apparently contradictory) claims for a possible planet detection
in the eclipsing M-dwarf system known as CM Draconis
(IAU circulars 6864 and 6875, see also Deeg {\it et~al.}\ 1998), but
no clear, undisputed planetary signal has been seen.
One class of planets known to exist in reasonable numbers and also relatively
easy to detect via occultation is the ``hot jupiter;'' the planet in
51~Peg is a prototype of this class.
If such a planet is the size of Jupiter, its orbital plane would have a
$\sim$10\% chance of being sufficiently inclined to produce
a detectable eclipse of a solar-type parent as seen from Earth.
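The $\sim$10\% figure follows directly from the transit geometry; a one-line check, assuming a solar-radius star and a 51~Peg--like orbital radius of 0.05~AU (both illustrative values):

```python
R_SUN_AU = 0.00465   # solar radius in AU
a_au = 0.05          # assumed 51 Peg-like orbital radius

# Probability that a randomly oriented circular orbit transits: p ~ R_*/a
p_transit = R_SUN_AU / a_au
print(f"transit probability ~ {p_transit:.0%}")  # close to the ~10% quoted
```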
An aggressive ground-based program should be able to detect large
numbers of such planets in the next decade --- planets that could be
studied with the radial velocity technique thereby yielding both planetary
mass and radius.
Space-based missions (COROT and KEPLER) planned for launch within
this decade should have the sensitivity to detect transits from
terrestrial-mass objects, but in order to detect Earth-like planets
in Earth-like environments ({\it i.e.}, orbiting solar-type stars at 1~AU) they
will need long mission times.
Microlensing planet searches are being conducted or aided by international
collaborations (PLANET, GMAN, MPS, MOA, and EXPORT) that intensely monitor the
events discovered by microlensing search teams (EROS, MACHO, OGLE, and MOA).
MACHO and OGLE issue electronic alerts of on-going microlensing events in the
direction of the Galactic Bulge on a regular basis: at any given time during
the bulge observing season, several events are in progress and
available for monitoring. Both the PLANET and GMAN collaborations
have issued real-time secondary alerts of anomalous behavior (including
binary lenses, source resolution, and ``lensing parallax''),
but to date no clear detection of a lensing planet has been announced.
Especially if caustics are crossed, it may be possible to obtain
additional information on microlensing planets from the sky motion
of the caustics induced by planetary orbital motion during the event
(Dominik 1998).
The number of high-quality microlensing light curves monitored
by the PLANET collaboration is already beginning to approach that required
for reasonable jovian detection sensitivities (Albrow {\it et~al.}\ 1998a),
so meaningful results
on Jupiter look-alikes can be expected within the next few years.
As more telescopes, more telescope time, and wider-field detectors are
dedicated to dense, precise photometric monitoring capable of
detecting planetary transits and planetary microlensing, we can feel certain
that --- if jovian planets with orbital radii less than $\sim$6~AU
exist in sufficient numbers --- they will
be detected in the next few years by these techniques.
\newpage
\vglue 0.1cm
\subsection*{Acknowledgments}
I am grateful to NATO for financial support and to the Institute's
efficient and gracious scientific organizers,
Danielle Alloin and (the sorely missed) Jean-Marie Mariotti, for
a productive and pleasant school.
It is also a pleasure to thank B. Scott Gaudi
for assistance in the preparation of some of the figures in the
microlensing section and for permission to
show the results of our work before publication.
\vskip 0.5cm
\subsection*{INTERNET RESOURCES}
\vskip 0.3cm
\noindent
{\it General Extra-Solar Planet News:\/}
\vskip 0.2cm
\noindent
Extrasolar Planets Encyclopedia (maintained by J. Schneider):\\
http://www.obspm.fr/departement/darc/planets/encycl.html
\vskip 0.2cm
\noindent
and the mirror site in the U.S.A.:\\
http://cfa-www.harvard.edu/planets/
\vskip 0.4cm
\noindent
{\it Occultation:\/}
\vskip 0.3cm
\noindent
EXPORT: http://pollux.ft.uam.es/export/
\vskip 0.2cm
\noindent
TEP: http://www.iac.es/proyect/tep/tephome.html
\vskip 0.2cm
\noindent
Villanova University: http://www.phy.vill.edu/astro/index.htm
\vskip 0.2cm
\noindent
VULCAN: http://www.iac.es/proyect/tep/tephome.html
\vskip 0.2cm
\noindent
WASP: http://www.psi.edu/$\sim$esquerdo/wasp/wasp.html
\vskip 0.4cm
\noindent
{\it Microlensing:\/}
\vskip 0.3cm
\noindent
EROS: http://www.lal.in2p3.fr/EROS
\vskip 0.2cm
\noindent
MACHO: http://wwwmacho.anu.edu.au
\vskip 0.2cm
\noindent
MACHO Alert Page: http://darkstar.astro.washington.edu
\vskip 0.2cm
\noindent
OGLE: http://www.astrouw.edu.pl/$\sim$ftp/ogle
\vskip 0.2cm
\noindent
MOA: http://www.phys.vuw.ac.nz/dept/projects/moa/index.html
\vskip 0.2cm
\noindent
MPS: http://bustard.phys.nd.edu/MPS/
\vskip 0.2cm
\noindent
PLANET: http://www.astro.rug.nl/$\sim$planet
\newpage
{\small
\section*{Notations and conventions}
The notations and conventions used in this paper are as
follows: the metric signature is --~+~+~...~+. To facilitate the
comparison with the literature on inflationary cosmology, we use units in
which the speed of light and the reduced Planck constant assume the value
unity. $G$ is Newton's constant and the Planck mass is $m_{pl}=G^{-1/2}$
in these units. Greek indices assume the values 0,~1,~2,~...,~$n-1$, where
$n$ is the dimension of spacetime. When $n=4$, small Latin indices assume
the values 1,~2,~3. While we allow for $n$ spacetime dimensions (only one
of which is timelike), in most of this paper the value $n=4$ is assumed,
except when discussing Kaluza--Klein and string theories prior to
compactification.
A comma denotes ordinary differentiation, and $\nabla_{\mu}$ is the
covariant derivative operator. Round and square brackets around indices
denote, respectively, symmetrization and antisymmetrization, which
include division by the number of permutations of the indices: e.g.
$A_{( \mu\nu )}=\left( A_{\mu\nu}+A_{\nu\mu} \right) /2$.
The Riemann and Ricci tensors are given in terms of the Christoffel
symbols $\Gamma_{\alpha\beta}^{\delta}$ by
$$
{R_{\alpha\beta\gamma}}^{\delta}=\Gamma_{\alpha\gamma
,\beta}^{\delta}-\Gamma_{\beta\gamma , \alpha}^{\delta}
+\Gamma_{\alpha\gamma}^{\sigma} \Gamma_{\sigma\beta}^{\delta}-
\Gamma_{\beta\gamma}^{\sigma}\Gamma_{\sigma\alpha}^{\delta} \; ,
$$
$$R_{\mu\rho}=
\Gamma^{\nu}_{\mu\rho ,\nu}-\Gamma^{\nu}_{\nu\rho ,\mu}+
\Gamma^{\alpha}_{\mu\rho}\Gamma^{\nu}_{\alpha\nu}-
\Gamma^{\alpha}_{\nu\rho}\Gamma^{\nu}_{\alpha\mu} \; , $$
and $R\equiv g^{\alpha\beta}R_{\alpha\beta}$ is the Ricci curvature.
$\Box \equiv g^{\mu\nu}\nabla_{\mu}\nabla_{\nu}$ is d'Alembert's
operator. A tilde denotes quantities defined in the Einstein
frame, while a caret denotes quantities defined in a higher--dimensional
space prior to
the compactification of the extra dimensions.
\section{Introduction}
If $(M, g_{\mu\nu})$ is a spacetime, the point--dependent
rescaling of the metric tensor
\begin{equation} \label{CT}
g_{\mu\nu} \rightarrow \tilde{g}_{\mu\nu}=\Omega^2 g_{\mu\nu} \; ,
\end{equation}
where $\Omega =\Omega(x) $ is a nonvanishing,
regular function, is called a {\em
Weyl} or {\em conformal transformation}. It
affects the lengths of time [space]--like
intervals and the norm of time [space]--like vectors, but it leaves
the light cones unchanged: the spacetimes $(M, g_{\mu\nu})$ and
$(M, \tilde{g}_{\mu\nu})$ have the same causal structure. The converse is
also true (Wald 1984). If $v^{\mu}$ is a null, timelike, or spacelike
vector with respect to the metric $g_{\mu\nu}$, it is
also a null, timelike, or spacelike vector, respectively, in the rescaled
metric $\tilde{g}_{\mu\nu}$.
Denoting by $g$ the determinant det$(g_{\mu\nu})$ one has, under the
action of (\ref{CT}), $\tilde{g}^{\mu\nu}=\Omega^{-2}
g^{\mu\nu}$ and $\tilde{g} \equiv \mbox{det}\left(
\tilde{g}_{\mu\nu}\right) =\Omega^{2n} g $.
It will be useful to remember the transformation properties of the
Christoffel symbols, Riemann and Ricci tensor, and of the Ricci
curvature $R$ under the rescaling (\ref{CT})
(Synge 1955; Birrell and Davies 1982; Wald 1984):
\begin{equation}
\tilde{\Gamma}^{\alpha}_{\beta\gamma}=
\Gamma^{\alpha}_{\beta\gamma}+\Omega^{-1}\left(
\delta^{\alpha}_{\beta} \nabla_{\gamma}\Omega +
\delta^{\alpha}_{\gamma} \nabla_{\beta} \Omega
-g_{\beta\gamma}\nabla^{\alpha} \Omega \right) \; ,
\end{equation}
\begin{eqnarray}
& & \widetilde{ {R_{\alpha\beta\gamma}}^{\delta}}=
{R_{\alpha\beta\gamma}}^{\delta}+2\delta^{\delta}_{[\alpha} \nabla_{\beta
]}\nabla_{\gamma} ( \ln \Omega )
-2g^{\delta\sigma} g_{\gamma [ \alpha}\nabla_{\beta ]}\nabla_{\sigma}
( \ln
\Omega ) +2 \nabla_{[ \alpha} ( \ln \Omega ) \delta^{\delta}_{\beta ]}
\nabla_{\gamma}( \ln \Omega ) \nonumber \\
& & -2\nabla_{[ \alpha}( \ln \Omega )g_{\beta ]
\gamma} g^{\delta \sigma} \nabla_{\sigma}( \ln \Omega )
-2g_{\gamma [ \alpha} \delta^{\delta}_{\beta ]} g^{\sigma \rho}
\nabla_{\sigma} ( \ln \Omega ) \nabla_{\rho} ( \ln \Omega ) \; ,
\end{eqnarray}
\begin{eqnarray}
& & \tilde{R}_{\alpha\beta }=R_{\alpha\beta }
-(n-2) \nabla_{\alpha}\nabla_{\beta }
( \ln \Omega )
-g_{\alpha\beta } g^{\rho\sigma } \nabla_{\rho} \nabla_{\sigma}
( \ln \Omega )
+(n-2) \nabla_{\alpha} ( \ln \Omega ) \nabla_{\beta}( \ln \Omega )
\nonumber \\
& & -(n-2) g_{\alpha\beta }\, g^{\rho\sigma}
\nabla_{\rho}( \ln \Omega ) \nabla_{\sigma}( \ln \Omega ) \; ,
\end{eqnarray}
\begin{equation}
\tilde{R} \equiv \tilde{g}^{\alpha\beta} \tilde{R}_{\alpha\beta }=
\Omega^{-2} \left[ R-2 \left( n-1 \right) \Box \left( \ln \Omega \right) -
\left( n-1 \right) \left( n-2 \right)
\frac{g^{\alpha\beta} \nabla_{\alpha} \Omega \nabla_{\beta}
\Omega}{\Omega^2}
\right] \; ,
\end{equation}
where $n$ ($n \geq 2 $) is the dimension of the spacetime manifold $M$.
In the case $n=4$, the scalar curvature has the expressions
\begin{equation}
\tilde{R} =\Omega^{-2} \left[ R-\frac{6 \Box \Omega}{\Omega} \right] =
\Omega^{-2} \left[ R-\frac{12 \Box ( \sqrt{\Omega})}{\sqrt{\Omega}} -
\frac{3g^{\alpha\beta} \nabla_{\alpha} \Omega \nabla_{\beta} \Omega}
{\Omega^2} \right] \; ,
\end{equation}
which are useful in many applications. The Weyl tensor
${C_{\alpha\beta\gamma}}^{\delta} $ (beware of the position of the
indices~!) is conformally invariant:
\begin{equation}
\widetilde{ {C_{\alpha\beta\gamma}}^{\delta}}=
{C_{\alpha\beta\gamma}}^{\delta} \; ,
\end{equation}
and the null geodesics are also conformally invariant (Lorentz 1937).
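The transformation law for the Ricci curvature can be checked symbolically in a low-dimensional example. The sketch below (an assumed two-dimensional, Euclidean-signature conformally flat metric $\tilde{g}_{\mu\nu}=\Omega^2 \delta_{\mu\nu}$ with $\Omega = {\rm e}^{x^2}$, using the Ricci convention stated above; the formulas are signature-independent) verifies that the directly computed scalar curvature matches $\Omega^{-2}[R - 2(n-1)\Box(\ln\Omega) - (n-1)(n-2)\, g^{\alpha\beta}\nabla_\alpha\Omega\nabla_\beta\Omega/\Omega^2]$ with $n=2$ and flat base metric ($R=0$):

```python
import sympy as sp

x, y = sp.symbols('x y')
coords = (x, y)
n = 2
Omega = sp.exp(x**2)                 # assumed conformal factor
g = sp.diag(Omega**2, Omega**2)      # rescaled metric, flat 2D base
ginv = g.inv()

# Christoffel symbols Gamma^l_{m nn} of the rescaled metric
def christoffel(l, m, nn):
    return sp.Rational(1, 2) * sum(
        ginv[l, s] * (sp.diff(g[nn, s], coords[m])
                      + sp.diff(g[m, s], coords[nn])
                      - sp.diff(g[m, nn], coords[s]))
        for s in range(n))

Gamma = [[[sp.simplify(christoffel(l, m, nn)) for nn in range(n)]
          for m in range(n)] for l in range(n)]

# Ricci tensor in the convention of the text:
# R_{mr} = Gamma^v_{mr,v} - Gamma^v_{vr,m}
#          + Gamma^a_{mr} Gamma^v_{av} - Gamma^a_{vr} Gamma^v_{am}
def ricci(m, r):
    return sum(sp.diff(Gamma[v][m][r], coords[v])
               - sp.diff(Gamma[v][v][r], coords[m])
               + sum(Gamma[a][m][r] * Gamma[v][a][v]
                     - Gamma[a][v][r] * Gamma[v][a][m] for a in range(n))
               for v in range(n))

R_direct = sp.simplify(sum(ginv[m, r] * ricci(m, r)
                           for m in range(n) for r in range(n)))

# Conformal formula with flat base (R = 0); Box and gradients are those
# of the flat base metric, as in the general transformation law:
lnO = sp.log(Omega)
box_lnO = sp.diff(lnO, x, 2) + sp.diff(lnO, y, 2)
grad2 = (sp.diff(Omega, x)**2 + sp.diff(Omega, y)**2) / Omega**2
R_formula = sp.simplify(Omega**-2 * (0 - 2*(n - 1)*box_lnO
                                     - (n - 1)*(n - 2)*grad2))

assert sp.simplify(R_direct - R_formula) == 0
print(R_direct)   # expected: -4*exp(-2*x**2)
```

The same machinery, at higher computational cost, checks the $n=4$ expressions as well.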
The conservation equation $\nabla^{\nu} T_{\mu\nu} =0 $ for a symmetric
stress--energy tensor $T_{\mu\nu} $ is not conformally invariant
unless the
trace $T \equiv {T^{\mu}}_{\mu}$ vanishes (Wald 1984). The Klein--Gordon
equation
$\Box \phi =0 $ for a scalar field $\phi$ is not conformally invariant, but
its
generalization
\begin{equation}
\Box \phi-\frac{n-2}{4(n-1)} \,R \, \phi=0
\end{equation}
($n \geq 2$) is conformally invariant (note that the introduction of a
nonzero cosmological constant in the Einstein action for gravity creates
an effective mass, and a length scale, in the Klein--Gordon equation,
which spoils the conformal invariance (Madsen 1993)). Maxwell's
equations in four dimensions are
conformally invariant (Cunningham 1909; Bateman 1910), but the
equations for the electromagnetic four--potential are
not (it is to be noted that, at the quantum level, the conformal invariance
of the Maxwell equations may be broken by quantum
corrections like the generation of mass or the conformal anomaly).
The conditions for conformal invariance of fields of
arbitrary spin in any
spacetime dimensions were discussed in (Iorio {\em et al.} 1997).
In this review paper, we will limit ourselves to considering special conformal
transformations, in which the dependence of the conformal factor $\Omega
(x) $ on the spacetime point $x$ is obtained via a functional dependence
(usually a power law) on a scalar field $\phi (x) $ present in the theory:
\begin{equation} \label{specialCT}
\Omega (x) =\Omega \left[ \phi (x) \right] \; .
\end{equation}
A redefinition of the scalar field $\phi$ accompanies the conformal
transformation (\ref{CT}).
Theories in which a fundamental scalar field appears and generates
(\ref{CT})
include scalar--tensor and
nonlinear theories of gravity (in which $\phi$ is a Brans--Dicke--like
field) and
Kaluza--Klein theories (in which $\phi$ is the determinant of the metric
of the extra
compact dimensions). Fundamental scalar fields in quantum theories include
$SO(N)$ bosons in dual models, Nambu--Goldstone bosons, Higgs fields, and
dilatons in superstring theories. In addition, almost
all\footnote{The exception is $R^2$ inflation
(Starobinsky 1980; Starobinsky 1986; Maeda, Stein--Schabes
and Futamase 1989), in which the
Lagrangian term $R^2$ itself drives inflation. However, a scalar field is
sometimes added to this scenario to ``help'' inflation
(Maeda 1989; Maeda, Stein--Schabes and Futamase 1989) and the scenario is
often recast as power--law inflation by using a conformal transformation
(Liddle and Lyth 1993).} scenarios of cosmological
inflation (Linde 1990; Kolb and Turner 1990; Liddle and Lyth 1993; Liddle
1996) are based on scalar fields, either in the context of a classical or
high energy theory, or in a phenomenological approach in which a scalar
field is introduced as a source of gravitation in the field equations of
the theory (usually
the Einstein equations of general relativity). By means of a transformation
of the form (\ref{CT}), many of these scenarios are
recast in the form of Einstein gravity with
the scalar field(s) as a source of gravity and a power--law inflationary
potential. The investigation of this
mathematical equivalence has far--reaching consequences, and in many cases
the
mathematical equivalence provides a means to go from a physically
inconsistent
theory to a viable one. Unfortunately, the use of conformal transformations
in gravitational theories is haunted by confusion and ambiguities,
particularly in relation
to the problem of identifying the conformal frame which correctly
describes the physics.
Despite early work on the subject, confusion still persists in the
literature
and considerably detracts from papers that use conformal
techniques incorrectly.
It must be stressed that, in general, conformal transformations are not
diffeomorphisms of the manifold $M$, and the
rescaled metric $ \tilde{g}_{\mu\nu}$ is not simply the metric $g_{\mu\nu}$
written in a different coordinate system: the metrics $
\tilde{g}_{\mu\nu}$ given by Eq.~(\ref{CT}) and the metric
$g_{\mu\nu}$ describe different gravitational fields and different
physics. Special conformal
transformations originating from diffeomorphisms are called {\em conformal
isometries} (Wald 1984). The reader should not be confused by
the fact that some authors use the name ``conformal transformation'' for
special coordinate transformations relating inertial and accelerated
observers
(e.g. Fulton, Rohrlich and Witten
1962{\em a,b}; Wood, Papini and Cai 1989;
Mashoon 1993). In this case the metric is left unchanged,
although its coordinate representation varies.
The possibility of different conformal rescalings for different metric
components has also been considered (Mychelkin 1991), although it appears
doubtful that this procedure can be given a covariant formulation and a
physically sound motivation.
Historically, interest in conformal transformations arose after the
formulation of Weyl's (1919) theory
aimed at unifying gravitation and electromagnetism, especially after its
reformulation by Dirac (1973). Moreover, a conformally
invariant version of special relativity was formulated
(Page 1936{\em a,b}; Page and Adams 1936), but the conformal invariance in
this case was recognized to be meaningless (Pauli 1958). Further
developments of Weyl's theory are more appealing; for example, the
self--consistent, scale--invariant theory of Canuto {\em et al.} (1977)
is, so far, not in contradiction with the observations. It requires that
the astronomical unit of length is related to the atomic unit by a scalar
function which depends on the spacetime point. The theory contains a
time--dependent cosmological ``constant'' $ \Lambda (t)=\Lambda_0
(t_0/t )^2 $, which is sought after by many authors in modern cosmology and
astroparticle physics.
\section{Conformal transformations as a mathematical tool}
\setcounter{equation}{0}
Conformal rescalings and conformal techniques have been widely used in
general
relativity for a long time, especially
in the theory of asymptotic flatness and in the initial value formulation
(Wald 1984
and references therein), and also in studies of the
propagation of massless fields,
including Fermat's principle (Perlick 1990; Schneider, Ehlers and
Falco 1992), gravitational lensing in
the (conformally flat) Friedmann--Lema\^{\i}tre--Robertson--Walker universe
(Perlick 1990; Schneider, Ehlers and Falco 1992),
wave equations (Sonego and Faraoni 1992; Noonan 1995),
studies of the optical geometry near black hole horizons
(Abramowicz, Carter and Lasota 1988; Sonego and Massar 1996; Abramowicz
{\em et al.} 1997{\em a,b}), exact solutions (Van
den Bergh 1986{\em a,b,c,d,e}, 1988) and in other contexts.
Conformal techniques and conformal invariance are important also for
quantum field
theory in curved spaces (Birrell and Davies 1982), for statistical
mechanics and for
string theories (e.g. Dita and Georgescu 1989).
A conformal transformation is often used as a mathematical tool to map the
equations of motion of physical
systems into mathematically equivalent sets of equations
that are more easily solved and
computationally more convenient to study. This situation arises mainly in
three
different areas of gravitational physics: alternative (including nonlinear)
theories of gravity, unified
theories in multidimensional spaces, and studies of scalar fields
nonminimally coupled to gravity. \\ \\
{\bf Brans--Dicke theory:} The conformal rescaling to the
minimally coupled case for the Brans--Dicke field in Brans--Dicke theory
was
found by Dicke (1962). One starts with the Brans--Dicke action in
the so--called ``Jordan frame''
\begin{equation} \label{BDaction}
S_{BD}=\frac{1}{16\pi}\int d^4 x \sqrt{-g} \left[
\phi \, R -\frac{\omega}{\phi} \, \nabla^{\mu}
\phi \nabla_{\mu} \phi \right] + S_{matter} \; ,
\end{equation}
which corresponds to the field equations
\begin{equation} \label{BD1}
R_{\mu\nu}-\frac{1}{2} g_{\mu\nu} R= \frac{8\pi}{\phi} T_{\mu\nu}
+\frac{\omega}{\phi^2} \left( \nabla_{\mu} \phi \nabla_{\nu} \phi
-\frac{1}{2} g_{\mu\nu} \nabla^{\alpha} \phi
\nabla_{\alpha} \phi \right) +\frac{1}{\phi} \left(
\nabla_{\mu}\nabla_{\nu}\phi -g_{\mu\nu} \Box \phi \right) \; ,
\end{equation}
\begin{equation} \label{BD2}
\Box \phi =\frac{8\pi T}{3+2\omega} \; .
\end{equation}
The conformal transformation (\ref{CT}) with
\begin{equation} \label{OmBD}
\Omega =\sqrt{G \phi}
\end{equation}
and the redefinition of the scalar field given in differential form by
\begin{equation} \label{8}
d\tilde{\phi}=\sqrt{ \frac{2\omega +3}{16 \pi G}} \, \, \frac{d\phi}{\phi}
\end{equation}
($\omega > -3/2$) transform the action (\ref{BDaction}) into the
``Einstein frame'' action
\begin{equation} \label{actionBDEframe}
S=\int d^4 x \left\{ \sqrt{-\tilde{g}} \left[
\frac{\tilde{R}}{16 \pi G} -\frac{1}{2} \tilde{\nabla}^{\mu}
\tilde{\phi} \tilde{\nabla}_{\mu} \tilde{\phi} \right] + \exp \left( -8
\sqrt{\frac{\pi G}{2\omega+3}} \tilde{\phi} \right)
{\cal L}_{matter}( \tilde{g}) \right\} \; ,
\end{equation}
where $\tilde{\nabla}_{\mu}$ is the covariant derivative operator
of the rescaled metric $ \tilde{g}_{\mu\nu}$.
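Equation~(\ref{8}) integrates immediately; choosing an integration constant $\phi_0$ (a reference value of the Brans--Dicke field) one has

```latex
\begin{equation}
\tilde{\phi}=\sqrt{ \frac{2\omega +3}{16 \pi G}} \,
\ln \left( \frac{\phi}{\phi_0} \right) \; ,
\qquad
\phi=\phi_0 \exp \left( \sqrt{ \frac{16 \pi G}{2\omega +3}} \,
\tilde{\phi} \right) \; ,
\end{equation}
```

so that $\Omega^{-4}=\left( G\phi \right)^{-2} \propto \exp \left( -8\sqrt{\pi G/(2\omega+3)} \, \tilde{\phi} \right)$, which is precisely the origin of the exponential factor multiplying ${\cal L}_{matter}$ in Eq.~(\ref{actionBDEframe}).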
The gravitational part of the action now contains only Einstein gravity,
but a free scalar field acting as a source of gravitation {\em always}
appears. It permeates spacetime in a way that cannot be eliminated, i.e.
one cannot contemplate solutions of the vacuum Einstein equations
$R_{\mu\nu}=0$ in the Einstein frame.
In the Jordan frame, the gravitational field is described by the metric
tensor
$g_{\mu\nu}$ {\em and} by the Brans--Dicke field $\phi$. In the Einstein
frame, the gravitational field is described only by the metric tensor $
\tilde{g}_{\mu\nu}$, but the scalar field $\tilde{\phi}$, which is now a
form of matter, is always present, a relic of its fundamental role
in the ``old'' frame. In addition, the rest of the matter part of the
Lagrangian is multiplied by an exponential factor,
thus displaying an anomalous coupling to the scalar
$\tilde{\phi}$. This anomalous coupling will be discussed in Sec.~6.\\\\
{\bf Nonminimally coupled scalar field:} By means
of a conformal rescaling, the study of a nonminimally coupled scalar
field can also be reduced to that of a minimally coupled scalar.
The transformation relating a massless conformally coupled and
a minimally coupled scalar field was found by Bekenstein
(1974) and
later rediscovered and generalized to massive fields and arbitrary
values of the coupling constant
(Deser 1984; Schmidt 1988; Maeda 1989; Futamase and Maeda 1989;
Xanthopoulos and Dialynas 1992; Klimcik 1993;
Accioly {\em et al.} 1993). In
this case, the starting point is the action for canonical gravity plus a
scalar field in the Jordan frame:
\begin{equation} \label{nonmincoupl}
S=\int d^4 x \sqrt{-g}\left[ \left( \frac{1}{16\pi G} -\frac{\xi \phi^2}{2}
\right) R
-\frac{1}{2} \nabla^{\mu} \phi \nabla_{\mu} \phi -V( \phi ) \right] \; ,
\end{equation}
where $V( \phi) $ is the scalar field potential (possibly
including a mass
term and the cosmological constant) and $\xi$ is a dimensionless coupling
constant. Note that the dimensions of the scalar field are $ \left[ \phi
\right] = \left[ G^{-1/2} \right] = \left[ m_{pl}\right] $. The equation
satisfied by the scalar $\phi$ is
\begin{equation} \label{ABAB}
\Box \phi-\xi R \phi -\frac{dV}{d\phi}=0 \; .
\end{equation}
Two cases occur most frequently in the literature: ``minimal coupling''
($\xi=0$) and ``conformal coupling'' ($\xi=1/6$); the latter makes the
wave equation (\ref{ABAB}) conformally invariant in four dimensions if
$V=0$ or $V=\lambda \phi^4$ (the
latter potential being used in the chaotic inflationary scenario).
The conformal transformation (\ref{CT}) with
\begin{equation} \label{OM}
\Omega^2=1-8\pi G \xi \phi^2
\end{equation}
and the redefinition of the scalar field, given in differential form by
\begin{equation} \label{redefNMC}
d\tilde{\phi}=\frac{\left[ 1- 8\pi G \xi \left( 1-6\xi \right) \phi^2
\right]^{1/2}} {1- 8\pi G \xi \phi^2} \, d\phi \; ,
\end{equation}
reduce (\ref{nonmincoupl}) to the Einstein frame action
\begin{equation} \label{mincoupl}
S=\int d^4 x \sqrt{-\tilde{g}} \left[ \frac{\tilde{R}}{16\pi G}
-\frac{1}{2} \tilde{\nabla}^{\mu} \tilde{\phi}
\tilde{\nabla}_{\mu}\tilde{\phi}
-\tilde{V}( \tilde{\phi} ) \right] \; ,
\end{equation}
where the scalar field $\tilde{\phi}$ is now minimally coupled and
satisfies the equation
\begin{equation}
\tilde{g}^{\mu\nu} \nabla_{\mu} \nabla_{\nu} \tilde{\phi}
-\frac{d\tilde{V}}{d\tilde{\phi}}=0 \; .
\end{equation}
The new scalar field potential is given by
\begin{equation}
\tilde{V} ( \tilde{\phi})=\frac{V( \phi)}{\left( 1-8\pi G \xi \phi^2
\right)^2} \; ,
\end{equation}
where $\phi=\phi \left( \tilde{\phi} \right)$ is obtained by integrating
and inverting
Eq.~(\ref{redefNMC}). The field equations of
a gravitational theory in the case of a minimally coupled scalar
field as a source of
gravity are computationally much easier to solve than the corresponding
equations for nonminimal coupling, and the transformation (\ref{CT}),
(\ref{OM}), (\ref{redefNMC}) is widely used for this purpose.
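For the conformally coupled case $\xi=1/6$, the numerator in (\ref{redefNMC}) reduces to unity and the redefinition integrates in closed form to an inverse hyperbolic tangent, $\tilde{\phi}=k^{-1}\,{\rm artanh}\,(k\phi)$ with $k=(4\pi G/3)^{1/2}$. A quick symbolic check of this closed form (by differentiating the candidate and comparing with the integrand) might look like:

```python
import sympy as sp

phi, G = sp.symbols('phi G', positive=True)
xi = sp.Rational(1, 6)   # conformal coupling

# Integrand d(phi_tilde)/d(phi) of the field redefinition:
integrand = (sp.sqrt(1 - 8*sp.pi*G*xi*(1 - 6*xi)*phi**2)
             / (1 - 8*sp.pi*G*xi*phi**2))

# Candidate closed form for xi = 1/6:
k = sp.sqrt(4*sp.pi*G/3)
phi_tilde = sp.atanh(k*phi) / k

# Its derivative must reproduce the integrand
assert sp.simplify(sp.diff(phi_tilde, phi) - integrand) == 0
```

The closed form is real only for $|\phi| < (4\pi G/3)^{-1/2}$, consistent with the critical values $\phi_{1,2}$ discussed below.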
The stress--energy tensor of a scalar field can be put in the form
corresponding to a fluid, but the $T_{\mu\nu} $ for a nonminimally
coupled field is considerably more complicated than the minimal coupling
case, for which the form of the $T_{\mu\nu} $ reduces to that of a perfect
fluid (Madsen 1988). It is generally assumed that the scalar field $\phi$
assumes values in a range that makes the right hand side of
Eq.~(\ref{OM}) positive. For $\xi >0$, this range is limited by the
critical values $\phi_{1,2} =\pm \left( 8\pi G \xi \right)^{-1/2}$.
Nonminimal couplings of the electromagnetic field to gravity have also been
considered
(Novello and Salim 1979; Novello and Heintzmann 1984; Turner and Widrow
1988; Novello and Elbaz 1994; Novello, Pereira and Pinto--Neto 1995;
Lafrance and Myers 1995), but
conformal techniques analogous to those developed for scalar fields are
presently unknown. A formal method alternative to conformal transformations is
sometimes useful
for nonminimally coupled scalar fields, which are equivalent to an
effective
flat space field theory with a scalar mass that is $\xi$--dependent
(Hochberg and Kephart 1995).
\\ \\
{\bf Nonlinear theories of gravity:} The mathematical equivalence between a
theory described by the gravitational Lagrangian density
${\cal L}_g=\sqrt{-g} f(R) $ (``higher order theory'') and
Einstein gravity was found in
(Teyssandier and Tourrenc 1983; Schmidt 1987; Starobinsky
1987; Barrow and Cotsakis 1988; Maeda 1989;
Gott, Schmidt and Starobinsky 1990; Schmidt 1990; Cotsakis
and Saich 1994; Wands 1994). The field equations for this theory are of
fourth order:
\begin{equation}
\left( \frac{df}{dR} \right) R_{\mu\nu}-\frac{1}{2} \, f(R)g_{\mu\nu}
-\nabla_{\mu}
\nabla_{\nu} \left( \frac{df}{dR} \right) + g_{\mu\nu} \Box \left(
\frac{df}{dR} \right) =0 \; ,
\end{equation}
and are reduced to the Einstein equations by the conformal transformation.
Quadratic Lagrangian densities with $R^2$ terms arising from quantum
corrections are the most frequently studied cases of
nonlinear gravitational theories; they
can be reduced to the Einstein Lagrangian density
(Higgs 1959; Teyssandier and Tourrenc 1983; Whitt 1984;
Ferraris 1986; Berkin and Maeda 1991). These
results were generalized to supergravity, Lagrangians with terms
$\Box^k R$ ($k \geq 1$) and polynomial Lagrangians in $R$ (Cecotti
1987);
the two--dimensional case was studied in (Mignemi and Schmidt 1995).
This class of theories includes Weyl's theory (Weyl 1919; Dirac 1973)
described by the Lagrangian density $
{\cal L}=\sqrt{-g} ( R^2+\beta F_{\mu\nu}F^{\mu\nu} ) $, and theories of
the form ${\cal L} =R^k$ ($k \geq 1$).
For nonlinear theories of gravity, the conformal transformation that maps the
theory into Einstein gravity becomes a Legendre transformation
(Ferraris, Francaviglia
and Magnano 1988;
Jakubiec and Kijowski 1988; Ferraris,
Francaviglia and Magnano 1990; Magnano, Ferraris
and Francaviglia 1990; Magnano and Sokolowski 1994).
There are obvious advantages in performing this transformation because the
higher order field equations of the nonlinear theory are reduced to the
second order Einstein equations with matter.
One starts with a nonlinear gravitational theory described by the
action
\begin{equation} \label{nonlin}
S=\int d^m x \sqrt{-g}\left[ F( \phi, R) -\frac{\epsilon}{2}
\nabla^{\mu} \phi \nabla_{\mu} \phi \right] \; ,
\end{equation}
in $m$ spacetime dimensions, where $F( \phi, R )$ is an arbitrary (but
sufficiently regular) function of $\phi$ and $R$, and $\epsilon $ is a free
parameter (normally $0$ or $1$).
The corresponding field equations (Maeda 1989) are
\begin{eqnarray}
\left( \frac{\partial F}{\partial R} \right) \left( R_{\mu\nu}
-\frac{1}{2} \, g_{\mu\nu} R \right) & = & \frac{\epsilon}{2} \left(
\nabla_{\mu} \phi \nabla_{\nu} \phi
-\frac{1}{2} g_{\mu\nu} \nabla^{\alpha} \phi \nabla_{\alpha} \phi \right)
+ \frac{1}{2} g_{\mu\nu} \left( F-\frac{\partial F}{\partial R} R \right)
\nonumber \\
& & + \nabla_{\mu}
\nabla_{\nu} \left( \frac{\partial F}{\partial R} \right) - g_{\mu\nu}
\Box \left( \frac{\partial F}{\partial R} \right) \; ,
\end{eqnarray}
\begin{equation}
\epsilon \Box \phi = - \, \frac{\partial F}{\partial \phi} \; .
\end{equation}
The conformal rescaling (\ref{CT}), where
\begin{equation} \label{17}
\Omega^2 =\left[ 16\pi G \left| \frac{\partial F}{\partial R} \right| +
\mbox{constant} \right]^{2/ (m-2)} \; ,
\end{equation}
and the redefinition of the scalar field
\begin{equation} \label{18}
\tilde{\phi} = \frac{1}{\sqrt{8\pi G}} \sqrt{ \frac{m-1}{m-2}} \ln \left[
16\pi G\left| \frac{\partial F}{\partial R} \right| \right]
\end{equation}
(Maeda 1989) reduce the action (\ref{nonlin}) to
\begin{equation} \label{lin}
S=\alpha \int d^m x \sqrt{-\tilde{g}}\left[ \frac{\tilde{R}}{16\pi G}
-\frac{1}{2} \tilde{\nabla}^{\mu} \tilde{\phi} \tilde{\nabla}_{\mu}
\tilde{\phi}
-\frac{\epsilon \alpha}{2} \exp \left[ - \sqrt{8 \pi G \, \frac{m-2}{m-1}}
\, \tilde{\phi} \right] \tilde{\nabla}^{\mu} \phi \,
\tilde{\nabla}_{\mu} \phi -U( \phi, \tilde{\phi} ) \right]
\end{equation}
where the two scalar fields $\phi $ and $\tilde{\phi}$ appear and
\begin{equation}
\alpha = \frac{ \partial F/\partial R}{\left| \partial F/\partial R
\right|} \; ,
\end{equation}
\begin{equation}
U( \phi, \tilde{\phi})= \alpha \exp \left( -\, \frac{m \sqrt{8\pi G} \tilde{\phi}}
{\sqrt{(m-1)(m-2)}} \right)
\left[ \frac{\alpha}{16\pi G} R( \phi, \tilde{\phi}) \exp\left(
\sqrt{\frac{m-2}{m-1} 8\pi G}\, \tilde{\phi} \right) -F( \phi, \tilde{\phi})
\right] \; ,
\end{equation}
and $ F( \phi, \tilde{\phi})= F( \phi, R( \phi,\tilde{\phi})) $.
The resulting system is of
nonlinear $\sigma$--model type: canonical gravity with the two scalar
fields $\phi$ and $\tilde{\phi} $.
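As a concrete illustration of the reduction (\ref{17})--(\ref{lin}),
consider pure quadratic gravity in four dimensions ($m=4$, $\epsilon=0$)
with $F(R)= \left( R+R^2/6M^2 \right) /16\pi G$; the mass scale $M$ and
the vanishing of the integration constant in (\ref{17}) are choices made
here only for illustration. One finds
\begin{equation}
\Omega^2=16\pi G \, \frac{\partial F}{\partial R}=1+\frac{R}{3M^2} \; ,
\;\;\;\;\;\;
\tilde{\phi}=\sqrt{\frac{3}{16\pi G}} \, \ln \Omega^2 \; ,
\end{equation}
and the Einstein frame potential
\begin{equation}
U( \tilde{\phi})=\Omega^{-4} \left[ \frac{R \, \Omega^2}{16\pi G}
-F(R) \right] =\frac{R^2}{96\pi G M^2 \Omega^4}=\frac{3M^2}{32\pi G}
\left[ 1-\exp \left( -\sqrt{\frac{16\pi G}{3}} \, \tilde{\phi} \right)
\right]^2 \; ,
\end{equation}
which is flat for large values of $\tilde{\phi}$.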
In the particular case in which $F( \phi, R)$ is a linear function of the
Ricci curvature,
\begin{equation}
F( \phi, R)=f( \phi) R -V( \phi) \; ,
\end{equation}
the redefinition of the scalar field
\begin{equation} \label{22}
\tilde{\phi}=\frac{1}{\sqrt{8 \pi G}} \int d\phi \left\{ \frac{\epsilon
(m-2) f( \phi)
+2(m-1) \left[ df( \phi) /d\phi \right]^2}{2(m-2)f^2( \phi)}
\right\}^{1/2}
\end{equation}
(where the argument of the square root is assumed to be positive) leads to
the
Einstein action with a single scalar field $\tilde{\phi}$:
\begin{equation} \label{actionEframe}
S=\frac{|f|}{f} \int d^m x \sqrt{-\tilde{g}}\left[ \frac{\tilde{R}}{16\pi
G}-\frac{1}{2} \tilde{\nabla}^{\mu} \tilde{\phi}
\tilde{\nabla}_{\mu} \tilde{\phi} -U( \tilde{\phi} ) \right] \; .
\end{equation}
This action is equivalent to the Einstein equations
\begin{equation}
\tilde{R}_{\mu\nu}-\frac{1}{2} \tilde{g}_{\mu\nu} \tilde{R}=8 \pi G
\, \tilde{T}_{\mu\nu}
\left[ \tilde{\phi} \right] \; ,
\end{equation}
\begin{equation}
\tilde{T}_{\mu\nu} \left[ \tilde{\phi} \right] = \tilde{\nabla}_{\mu}
\tilde{\phi}
\tilde{\nabla}_{\nu} \tilde{\phi} -\frac{1}{2} \,
\tilde{g}_{\mu\nu}\tilde{g}^{\alpha\beta}
\tilde{\nabla}_{\alpha} \tilde{\phi}\tilde{\nabla}_{\beta} \tilde{\phi} - U
\, \tilde{g}_{\mu\nu} \; ,
\end{equation}
where
\begin{equation}
U( \tilde{\phi})= \frac{|f|}{f} \left[ 16\pi G \left| f( \phi) \right|
\right]^{\frac{-m}{m-2}} \, V( \phi )
\end{equation}
and $\phi =\phi \left( \tilde{\phi} \right) $.
The transformations (\ref{OmBD}), (\ref{OM}) and (\ref{redefNMC}) are
recovered
as particular cases of (\ref{22}), (\ref{17}).
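As a check (the intermediate algebra is ours), for $m=4$, $\epsilon =1$
and the nonminimal coupling $f( \phi)= \left( 1-8\pi G \xi \phi^2 \right)
/16\pi G $, Eq.~(\ref{22}) gives
\begin{equation}
\frac{d \tilde{\phi}}{d\phi}=\frac{1}{\sqrt{8\pi G}} \left[ \frac{2f+6
\left( df/d\phi \right)^2}{4f^2} \right]^{1/2} =\frac{ \left[ 1-8\pi G \xi
\left( 1-6\xi \right) \phi^2 \right]^{1/2}}{1-8\pi G \xi \phi^2} \; ,
\end{equation}
which has the form of the redefinition (\ref{redefNMC}) for a nonminimally
coupled scalar field.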
In addition, all the theories described by a four--dimensional action of
the form
\begin{equation}
S=\int d^4 x \sqrt{-g} \left[ f( \phi ) R +A( \phi) {\nabla}^{\mu} \phi
{\nabla}_{\mu} \phi +V( \phi ) \right]
\end{equation}
and satisfying the relation
\begin{equation}
2Af-3\left( \frac{df}{d\phi} \right)^2 =0\; , \;\;\;\;\;\;\; V(
\phi)=\lambda
f^2 ( \phi)
\end{equation}
($\lambda=$constant) are conformally related (Shapiro and Takata 1995);
particular
cases include general relativity and the case of a conformally coupled
scalar
field.
The conformal transformation establishes a {\em mathematical} equivalence
between the theories formulated in the two conformal frames;
the space of solutions of the theory in one frame is isomorphic to the
space of
solutions in the conformally related frame (which is mathematically more
convenient to study). The conformal transformation can also be used as a
solution--generating technique, if solutions are known in one conformal
frame but not in another
(Harrison 1972;
Belinskii and Kalatnikov 1973;
Bekenstein 1974;
Van den Bergh 1980, 1982, 1983{\em a,b,c,d};
Froyland 1982;
Accioly, Vaidya and Som 1983;
Lorentz--Petzold 1984;
Barrow and Maeda 1990;
Klimcik and Kolnik 1993;
Abreu, Crawford and Mimoso 1994).
It is to be stressed that the mathematical equivalence
between the two systems {\em a priori} implies nothing about
their physical equivalence
(Brans 1988; Cotsakis 1993; Magnano and Sokolowski 1994). Moreover,
only the gravitational
(vacuum) part of the action is conformally equivalent to Einstein gravity:
if ordinary matter
(i.e. matter different from the scalar field used in the conformal
transformation) is added to the theory, the coupling of this matter to
gravity and the conservation equations that it satisfies
are different in the two conformally related frames.
The advantage of the conformal transformation in a
non--purely vacuum theory is questionable: it has
been argued that, because the Einstein frame scalar field is
coupled to matter, a simplification of the equations
of motion in this case does not occur (Barrow and Maeda 1990).
Not only is it possible to map the classes of theories considered above
into
canonical Einstein gravity, but it is also possible to find conformal
transformations between each two of these theories (see
Magnano and Sokolowski 1994 for a table of possible transformations).
Indeed, one expects to be able to do this by composing
different maps from gravitational
theories to general relativity with their inverse maps.
We conclude this section with a remark on the terminology: it has become
common to use the word ``frame'' to
denote a set of dynamical variables of the theory considered;
the term ``gauge'' instead of ``frame'' has been
(rather improperly) used (Gibbons and Maeda 1988; Brans 1988). In
some papers
(Cho 1987, 1990, 1992, 1994, 1997; Cho and Yoon 1993)
the metric $\tilde{g}_{\mu\nu} $ in the Einstein frame is called
``Pauli metric'', as opposed to the ``Jordan'' or ``atomic unit'' metric
$g_{\mu\nu} $ of the Jordan frame.
\section{Is the Einstein frame physical~?}
\setcounter{equation}{0}
Many high energy theories and many classical gravity theories are
formulated by
using a conformal transformation mapping the Jordan
frame to the Einstein frame. Typically, the conformal factor of
the transformation is a function of a dilaton, or
Brans--Dicke--like field already present in the theory.
The classical theories of gravity for which a conformal transformation maps
the system into a new conformal frame, in which the gravitational sector
of the
theory reduces to the canonical Einstein form,
include Brans--Dicke theory and its scalar--tensor
generalizations, non--linear gravity theories, classical Kaluza--Klein
theories
and in general, all theories which have an extended gravitational sector or
which involve a dimensional reduction and compactification of extra
spacetime
dimensions. Quantum theories incorporating the conformal transformation
include
superstring and supergravity theories and $\sigma$--models. The
transformation
to the Einstein frame seems to be universally accepted for
supergravity and superstring theories (although field redefinitions may be
an issue for debate (Tseytlin 1993)). It is unknown
whether physics is
conformally invariant at a sufficiently high energy scale, but there are
indications in this sense from string theories (Green, Schwarz and Witten
1987) and
from $SU(N)$ induced gravity models in which, in the high energy limit,
the scalar fields of the theory approach conformal coupling
(Buchbinder, Odintsov and Shapiro 1992; Geyer and Odintsov 1996).
We have no experiments capable of probing the energy scale of string
theories, and
conformal invariance at this energy scale cannot be directly tested.
While low--energy Einstein gravity
contains a dynamical degree of freedom connected with the ``length'' of the
metric tensor (the determinant $g$), this is absent in conformally
invariant gravity (e.g. induced gravity described by the action
(\ref{inducedgravity})).
The conformal invariance of a theory implies that the latter contains no
intrinsic mass; a nonzero mass would introduce a preferred length scale in
the theory,
thus breaking the scale--invariance.
The physical inequivalence of conformal
frames at low energies reflects the fact that the non--negligible
masses of the fields present in the theory break the conformal symmetry
which
is present at higher energies. In classical gravity theories,
there is disagreement and confusion
on the long--standing (Pauli 1955; Fierz 1956)
problem of which conformal frame is the physical one. Is the Jordan frame
physical and the Einstein frame unphysical~? Is the conformal
transformation necessary, and the Einstein frame physical~?
Does any other choice of the conformal factor in Eq.~(\ref{CT}) map the
theory
into a physically significant frame, and how many of these theories are
possible~? Here the term
``physical'' theory denotes one that is theoretically consistent and
predicts the values of some observables that can, at least in principle,
be measured in experiments performed in four macroscopic spacetime
dimensions (definitions that differ from ours are
sometimes adopted in the literature, see e.g. (Garay
and Garcia--Bellido 1993; Overduin and Wesson 1997)). The ambiguity in the
choice of the physical conformal frame raises also problems
of an almost philosophical character (Weinstein 1996).
Before attempting to answer any of these questions, it is important
to recognize that, in general, the
reformulation of the theory in a new conformal frame leads to a different,
physically inequivalent theory.
If one restricts attention to the metric tensor and to physics that does
not involve only conformally invariant fields (e.g. a stress--energy tensor
$T_{\mu\nu} $ with nonvanishing trace, or experiments involving massive
particles and timelike observers), it is obvious that metrics conformally
related by a nontrivial transformation of the kind (\ref{CT}) on a manifold
describe different gravitational
fields and different physical situations. For example, one could consider a
Friedmann--Lema\^{\i}tre--Robertson--Walker metric with flat spatial sections,
given
by the line element
\begin{equation} \label{EdS}
ds^2=a^2 ( \eta ) \left( -d\eta^2+dx^2+dy^2+dz^2 \right) \; ,
\end{equation}
where $\eta$ is the conformal time and $(x,y,z)$ are spatial comoving
coordinates. The metric (\ref{EdS}) is conformally flat, but it is
certainly not
physically equivalent to the Minkowski metric $\eta_{\mu\nu}$, since it
exhibits nontrivial dynamics and significant (observed) cosmological
effects.
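Indeed, while the Minkowski metric has vanishing curvature, a standard
computation (a prime denoting $d/d\eta$) gives, for the metric
(\ref{EdS}), the Ricci scalar
\begin{equation}
R=\frac{6 a''}{a^3} \; ,
\end{equation}
which is in general nonzero: the conformal factor $a( \eta )$ carries the
entire dynamics of the metric (\ref{EdS}).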
The authors working in classical gravitational physics can be grouped
into five categories according to their attitude towards the issue of the
conformal frame (we partially follow a previous classification by
Magnano and Sokolowski (1994)):
\begin{itemize}
\item authors that neglect the issue
(Deruelle and Spindel 1990;
Garcia--Bellido and Quir\'{o}s 1990; Hwang 1990;
Gottl\"{o}ber, M\"{u}ller and Starobinsky 1991;
Suzuki and Yoshimura 1991;
Rothman and Anninos 1991;
Guendelman 1992;
Guth and Jain 1992;
Liddle and Wands 1992;
Capozziello, Occhionero and Amendola 1993;
Capozziello, de Ritis and Rubano 1993;
McDonald 1993{\em a,b};
Barrow, Mimoso and de Garcia Maia 1993;
Garcia--Bellido and Wands 1995;
Laycock and Liddle 1994;
Alvarez and Bel\'{e}n Gavela 1983;
Sadhev 1984;
Deruelle and Madore 1987;
Van den Bergh and Tavakol 1993;
Fabris and Sakellariadou 1997;
Kubyshin and Martin 1995;
Fabris and Martin 1993;
Chatterjee and Banerjee 1993;
Biesiada 1994;
Liddle and Lyth 1993;
Hwang 1996);
\item authors that explicitly support the view that a theory
formulated in one conformal frame is physically equivalent to the
reformulation of the same theory in a different conformal frame
(Buchm\"{u}ller and Dragon 1989; Holman, Kolb and
Wang 1990; Campbell, Linde and Olive 1991;
Casas, Garcia--Bellido
and Quir\'{o}s 1991; Garay
and Garcia--Bellido 1993; Levin 1995{\em
a,b}; Shapiro and Takata 1995; Kaloper and Olive 1998);
\item authors that are aware of the physical non--equivalence of
conformally
related frames but
do not present conclusive arguments in favour of one or the other of the
two versions of the theory (and/or perform computations both in the Jordan
and the Einstein frame)
(Brans 1988; Jakubiec
and Kijowski 1988; Kasper and Schmidt 1989; Deruelle and Spindel
1990; Hwang 1990; Kolb, Salopek and Turner 1990; Gottl\"{o}ber,
M\"{u}ller and Starobinsky 1991;
Suzuki and
Yoshimura 1991; Rothman and Anninos
1991; Guendelman 1992; Guth and Jain 1992; Liddle and Wands 1992;
Piccinelli,
Lucchin and
Matarrese 1992; Damour and Nordvedt 1993{\em a,b}; Cotsakis
and Saich 1994; Hu, Turner and Weinberg 1994;
Turner 1993; Mimoso and Wands
1995{\em b}; Faraoni 1996{\em a}; Weinstein 1996; Turner and Weinberg
1997; Majumdar
1997; Capozziello, de Ritis and Marino 1997;
Dick 1998);
\item authors that identify the Jordan frame as physical (possibly allowing
the use of the conformal transformation as a purely mathematical tool)
(Gross and Perry 1983; Barrow and
Maeda 1992; Berkin, Maeda and Yokoyama 1990;
Damour, Gibbons and Gundlach 1990;
Kalara, Kaloper and Olive 1990;
Berkin and Maeda 1991;
Damour and Gundlach 1991; Holman, Kolb and Wang 1991;
Mollerach and Matarrese 1992; Tao
and Xue 1992; Wu 1992; del Campo 1992; Tkacev 1992;
Mignemi and Whiltshire 1992; Barrow 1993; Bruckman and Velazquez 1993;
Cotsakis and Flessas 1993; Will and Steinhardt 1995; Scheel, Shapiro and
Teukolsky 1995; Barros and Romero 1998);
\item authors that identify the Einstein frame as the physical one
(Van den Bergh 1981, 1983{\em e}; Kunstatter, Lee and Leivo 1986;
Gibbons and Maeda 1988; Sokolowski
1989{\em a,b}; Pimentel and Stein--Schabes 1989; Kubyshin, Rubakov
and Tkachev 1989;
Salopek, Bond and
Bardeen 1989; Kolb, Salopek and Turner 1990; Cho
1990; Deruelle, Garriga and Verdaguer 1991; Cho 1992;
Amendola {\em et al.} 1992;
Amendola, Bellisai and
Occhionero 1993; Cotsakis 1993; Cho and Yoon 1993; Alonso {\em et al.} 1994;
Magnano and Sokolowski
1994; Cho 1994; Occhionero and Amendola 1994;
Lu and Cheng 1996; Fujii 1998;
Cho 1997; Cho and Keum 1998).
\end{itemize}
Sometimes, works by
the same author(s) belong to two different groups; this illustrates the
confusion on the issue that is present in the literature.
The two conformal frames, however, are substantially different.
Furthermore, if a preferred conformal
frame does not exist, it is possible to generate an infinite number of
alternative theories and of cosmological inflationary scenarios by
arbitrarily choosing the conformal
factor (\ref{specialCT}) of the transformation (\ref{CT}). Only
when a physical frame is
uniquely determined are the theory and
its observable predictions meaningful.
Earlier attempts to solve the problem in Brans--Dicke theory
advocated the equivalence principle: to
this end it is essential to consider not only the gravitational, but also
the matter part of the Lagrangian. The use of the equivalence principle
requires a careful analysis (Brans 1988; Magnano and Sokolowski 1994); by
including the Lagrangian for ordinary matter in the
Jordan frame action, one finds that, after the conformal transformation has
been
performed, the scalar field in the Einstein frame couples minimally to
gravity,
but nonminimally to matter (``non--universal coupling''). Historically,
the Jordan frame was selected as physical because the dilaton couples
minimally to ordinary matter in this frame (Brans and Dicke 1961).
Attempts were also made to derive conclusive results
from the conservation laws for the
stress--energy tensor of matter, favouring the Jordan frame (Brans 1988)
or
the Einstein frame (Cotsakis 1993; Cotsakis 1995 -- see
(Teyssandier 1995; Schmidt 1995; Magnano and Sokolowski 1994) for
the correction of a flaw in the proof of
(Cotsakis 1993; Cotsakis 1995)). Indeed,
the conservation laws do not allow one to
draw definite conclusions (Magnano and Sokolowski 1994).
However, the point of view that selects the Jordan frame as physical
is untenable because it leads to a negative definite or indefinite
kinetic energy for the scalar field; on the contrary,
the energy density is positive definite in the Einstein frame. This
result was initially proved for Brans--Dicke and for Kaluza--Klein
theories, and later generalized to gravitational theories with Lagrangian
density ${\cal L}=f(R) \sqrt{-g}
+{\cal L}_{matter}$ (Magnano and Sokolowski 1994). This
implies that the theory does not have a stable ground state, and
that the system decays into a lower energy state {\em ad infinitum}
(Gross and Perry 1983; Appelquist and Chodos 1983; Maeda 1986{\em
b}; Maeda 1987).
While a stable ground state may not be required for certain particular
solutions of the theory (e.g. cosmological solutions (Padmanabhan
1988)), or
for Liouville's theory (D'Hoker and Jackiw 1982), it is certainly
necessary for a viable theory of classical gravity. The ground state of
the system must
be stable against
small fluctuations and not fine--tuned, i.e. nearby solutions of the theory
must have similar properties
(Streater and Wightman 1964; Epstein, Glaser and Jaffe 1965; Abbott
and Deser 1982). The fact that the
energy is not positive definite is usually associated with the formulation
of
the theory in unphysical variables. On the contrary, the energy conditions
(Wald 1984) are
believed to be satisfied by all classical matter and fields (not so in
quantum theories~--~see Sec.~8). This decisive
argument was first used to select the Einstein frame in Kaluza--Klein and
Brans--Dicke theories
(Bombelli {\em et
al.} 1987; Sokolowski and Carr 1986;
Sokolowski 1989{\em a,b}; Sokolowski and Golda 1987; Cho 1990; Cho
1994), and later generalized to scalar--tensor theories
(Cho 1997) and nonlinear gravity theories (Magnano and Sokolowski 1994).
Also, the uniqueness of a physical conformal frame was proved
(Sokolowski 1989{\em a,b}; Magnano and Sokolowski 1994).
For completeness, we mention other arguments supporting the Einstein
frame as physical that have appeared in the literature: however, they are
either highly questionable (sometimes to the point of not being valid),
or not as compelling as the one based on the positivity of
energy. The Hilbert and the Palatini actions for scalar--tensor theories are
equivalent in the Einstein but not in the Jordan frame (Van den Bergh 1981,
1983{\em e}). Some authors choose the Einstein frame on the basis
of the resemblance of its
action with that of general relativity (Gibbons and Maeda
1988; Pimentel and Stein--Schabes
1989; Alonso {\em et al.} 1994; Amendola, Bellisai and Occhionero 1993);
others (Salopek, Bond and Bardeen 1989; Kolb, Salopek and Turner 1990)
find difficulties in quantizing the scalar field fluctuations in the linear
approximation in the Jordan frame, but not in the Einstein frame;
quantization and the conformal transformation do not commute
(Fujii and Nishioka 1990; Nishioka and Fujii 1992; Fakir and Habib 1993).
Other authors claim
that the Einstein frame is forced upon us by the compactification of the
extra dimensions in higher dimensional theories
(Kubyshin, Rubakov and Tkachev 1989; Deruelle, Garriga and Verdaguer 1991).
A possible alternative to the Einstein frame formulation of the complete
theory (gravity plus matter) has been supported (Magnano and Sokolowski
1994), and consists in starting with the introduction of matter
non--minimally coupled to the Brans--Dicke scalar in the Jordan frame,
with
the coupling tuned in such a way that the Einstein frame action exhibits
matter minimally coupled to the Einstein frame scalar field, after the
conformal
transformation has been performed. This procedure arises
from the observation (Magnano and Sokolowski 1994) that the traditional
way of prescribing
matter minimally coupled in the Jordan frame relies on the implicit
assumptions that \\
{\em i)} the equivalence principle holds;\\
{\em ii)} the Jordan frame is the physical one. \\
While these assumptions are not justified {\em a priori}, as noted
by Magnano and Sokolowski (1994), the possibility of adding matter
in the Jordan frame with a
coupling that exactly balances the exponential factor appearing in the
Einstein frame
appears to be completely {\em ad hoc} and is not physically motivated;
by proceeding along these lines, one could arbitrarily change the theory
without theoretical justification.
In summary, the Einstein frame is the physical one (and the Jordan frame
and
all the other conformal frames are unphysical) for the following classes of
theories:
\begin{itemize}
\item scalar--tensor theories of gravity described by the Lagrangian
density
\begin{equation}
{\cal L}=\sqrt{-g} \left[ f( \phi) R-\frac{\omega ( \phi )}{\phi}
\nabla^{\mu}
\phi \nabla_{\mu} \phi +\Lambda ( \phi) \right]
+{\cal L}_{matter} \; ,
\end{equation}
which includes Brans--Dicke theory as a special case (see Sec. 4 for the
corresponding field equations);
\item classical Kaluza--Klein theories;
\item nonlinear theories of gravity whose gravitational part is described
by the Lagrangian density $ {\cal L}=\sqrt{-g} f(R) $ (see Sec. 4 for the
corresponding field equations).
\end{itemize}
Since the Jordan frame formulation of alternative theories of gravity is
unphysical, one reaches the conclusion that the Einstein frame formulation
is
the only possible one for a classical theory. In other words, this amounts
to saying that Einstein gravity is essentially the only viable classical
theory of gravity (Bicknell 1974; Magnano and Sokolowski 1994; Magnano
1995; Sokolowski 1997). We remark that this statement is strictly correct
only if the purely gravitational part of the action (without matter) is
considered: in fact, when matter is included into the action, in general
it exhibits an anomalous coupling to the scalar field which does not
occur in general relativity.
Finally, we comment on the case of a nonminimally coupled scalar field
described by the action (\ref{nonmincoupl}). From the above discussion,
one may be led to believe that the
Einstein frame description is necessary also in this case: this conclusion
would be incorrect because the kinetic term in the action
(\ref{nonmincoupl})
is
canonical and positive definite, and the problem discussed above for
other theories
of gravity of the form ${\cal L}=\sqrt{-g} f(R)$ does not arise.
It is, however, still true that the Einstein and the Jordan frame are
physically inequivalent: the conformal transformation (\ref{CT}),
(\ref{OM}), (\ref{redefNMC}) implies only a
mathematical, not a physical, equivalence, despite strong statements to
the contrary in this regard (Accioly {\em et al.} 1993).
\section{Conformal transformations in gravitational theories}
\setcounter{equation}{0}
In this section, we review in greater detail the arguments that led to the
conclusions of the previous section, devoting more attention to specific
classical theories of gravity.\\ \\
{\bf Brans--Dicke theory:} The Jordan--Fierz--Brans--Dicke theory
(Jordan 1949; Jordan 1955; Fierz 1956; Brans and Dicke 1961) described by
the action (\ref{BDaction}) (where $\phi $ has the dimensions of the
inverse gravitational constant,
$ \left[ \phi \right] =\left[ G^{-1} \right] $)
has been the subject of renewed interest, especially in cosmology in the
extended inflationary scenario
(La and Steinhardt 1989; Kolb, Salopek and Turner 1990; Laycock and Liddle
1994). The recent surge of interest appears to be motivated by a
restricted
conformal invariance of the gravitational part of the
Lagrangian that mimics the conformal invariance of string theories before
conformal symmetry is broken (Cho 1992; Cho 1994; Turner 1993; Kolitch and
Eardley 1995; Brans 1997; Cho and Keum 1998).
The conformal transformation (\ref{CT}) with
$\Omega=\left( G\phi \right)^{\alpha}$,
together with the redefinition of the scalar field
$\tilde{\phi}=G^{-2\alpha} \phi^{1-2\alpha}$ ($\alpha \neq 1/2$)
maps the Brans--Dicke action (\ref{BDaction}) into an action of
the same form, but with parameter
\begin{equation} \label{omegatilde}
\tilde{\omega}=\frac{\omega -6\alpha \left( \alpha -1 \right)}{\left(
1-2\alpha \right)^2} \; .
\end{equation}
If $\omega=-3/2 $, the action (\ref{BDaction}) is invariant under the
conformal
transformation; this case corresponds to the singularity $\alpha
\rightarrow -1/2 $ in the expression (\ref{omegatilde}), but the field
equations (\ref{BD1}), (\ref{BD2}) are not defined in this case.
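The invariance at $\omega=-3/2$ can be verified directly from
(\ref{omegatilde}): the numerator factorizes as
\begin{equation}
-\frac{3}{2}-6\alpha \left( \alpha -1 \right) =-\frac{3}{2} \left( 1-2
\alpha \right)^2 \; ,
\end{equation}
so that $\tilde{\omega}=-3/2$ for every $\alpha \neq 1/2$; moreover,
$\tilde{\omega} \rightarrow -3/2$ in the limit $\alpha \rightarrow \pm
\infty$ for any value of $\omega$.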
This conformal invariance is broken when a term describing
matter with $T \equiv {T^{\mu}}_{\mu} \neq 0$
is added to the purely gravitational part of the Brans--Dicke
Lagrangian. This property of conformal invariance of the
gravitational sector of the theory is enjoyed also by a subclass of more
general tensor--multiscalar theories of gravity (Damour and
Esposito--Far\`{e}se 1992) and has not yet been investigated in depth in the
general case. The study of the conformal invariance property of
Brans--Dicke theory helps to solve the problems arising in the
$\omega \rightarrow \infty $ limit of Brans--Dicke theory (Faraoni 1998).
This limit is supposed to give back general relativity, but it fails to do
so when $T=0$. The differences between the Jordan
and the Einstein frame formulations of Brans--Dicke theory
have been pointed out clearly in (Guth and Jain 1992). It has been noted
in studies of gravitational collapse to a black hole in Brans--Dicke
theory
that the noncanonical form of the Brans--Dicke scalar energy--momentum
tensor in the Jordan frame violates the null energy
condition ($ R_{\alpha\beta} l^{\alpha} l^{\beta}\geq 0 $ for all null
vectors $ l^{\alpha}$). This fact is responsible for a {\em decrease} in
time of the horizon area during the dynamical phase of the collapse,
contrarily to the case of general relativity (Scheel, Shapiro and
Teukolsky 1995).
The violation of the weak energy condition in the Jordan frame has also
been pointed out (Weinstein 1996; Faraoni and Gunzig 1998{\em a}).
Brans--Dicke theory must necessarily be
reformulated in the Einstein frame; the strongest argument supporting
this conclusion is obtained by observing that the kinetic energy term for
the Brans--Dicke field in the Jordan frame Brans--Dicke Lagrangian
does not have the canonical form for a scalar field, and it is negative
definite
(Gross and Perry 1983; Appelquist and Chodos
1983; Maeda 1986{\em b}; Maeda 1987; Sokolowski and Carr 1986;
Maeda and Pang 1986;
Sokolowski 1989{\em a,b}; Cho 1992, 1993; Magnano and Sokolowski
1994).
The fact that this energy argument was originally developed for
Brans--Dicke and for Kaluza--Klein theories is
not surprising, owing to the fact that
Brans--Dicke theory can be derived from a Kaluza--Klein
theory with $n$ extra dimensions and Brans--Dicke parameter
$\omega=-(n-1) / n $
(Jordan 1959; Brans and Dicke 1961; Bergmann 1968; Wagoner 1970;
Harrison 1972;
Belinskii and Kalatnikov 1973; Freund 1982; Gross and Perry 1983; Cho
1992); this derivation is
seen as a motivation for Brans--Dicke theory, and provides a useful way
of generating exact solutions in one theory from known solutions in the
other (Billyard and Coley 1997). Despite this derivation from
Kaluza--Klein theory, the Jordan frame Brans--Dicke theory is sometimes
considered in $ D>4 $ spacetime dimensions (e.g. Majumdar 1997).
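For illustration, the values of the Brans--Dicke parameter
$\omega=-(n-1)/n $ induced by compactification are
\begin{equation}
\omega(1)=0 \; , \;\;\;\; \omega(2)=-\frac{1}{2} \; , \;\;\;\;
\omega(n) \rightarrow -1 \;\;\; \mbox{as} \;\;\; n \rightarrow \infty \; ;
\end{equation}
the limiting value $\omega=-1$ coincides with the Brans--Dicke parameter
of the tree--level, low--energy string effective action.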
An independent argument supporting the choice of the Einstein
frame as the
physical one is obtained by considering (Cho 1992; Damour and
Nordvedt 1993{\em a,b})
the linearized version of the theory. In the Jordan frame the metric is
$\gamma_{\mu\nu}=\eta_{\mu\nu} +h_{\mu\nu} $ (where $\eta_{\mu\nu} $ is
the Minkowski metric), while in the Einstein frame the conformally
transformed metric is
\begin{equation}
\tilde{\gamma}_{\mu\nu}=\gamma_{\mu\nu}
\exp \left( \sqrt{\frac{16\pi G}{2\omega +3}} \, \tilde{\phi} \right)
\simeq \eta_{\mu\nu} +\rho_{\mu\nu} \; ,
\end{equation}
where
\begin{equation}
\rho_{\mu\nu}=h_{\mu\nu} + \left( \sqrt{ \frac{16\pi G}{2\omega +3}} \,
\tilde{\phi} \right) \eta_{\mu\nu} \; .
\end{equation}
The canonical action for a spin~2 field is obtained not from
the perturbations $h_{\mu\nu}$, but from $\rho_{\mu\nu}$, and the
spin~2 gravitational field is described by the Einstein frame corrections
$\rho_{\mu\nu} $ to the flat metric. The Jordan frame corrections
$h_{\mu\nu}$
to $\eta_{\mu\nu}$ describe a mixture of spin~0 and spin~2 fields (the
fact that spin~0 and spin~2 modes are mixed together can also be seen from
the full equations of motion of the theory).
A third argument has been proposed against the choice of the Jordan frame
as the physical one: when quantum corrections are taken into account,
one cannot
maintain the minimal coupling of ordinary (i.e. other than the dilaton)
matter to the Jordan metric (Cho 1997). This nullifies the
traditional
statement that the Jordan frame is to be preferred because the scalar
couples
minimally to all forms of matter in this frame.
These results are of the utmost importance for the experiments aimed at
testing Einstein's theory: the Jordan frame versions of alternative
classical theories of gravity are simply nonviable. However,
despite the necessity of
formulating Brans--Dicke theory in the Einstein frame, the
classical
tests of gravity for this theory are studied only for the Jordan frame
formulation. In general, the authors working on the
experimental tests of general relativity and alternative gravity theories
do
not seem to be aware of this paradoxical situation (e.g.
Reasenberg {\em et al.} 1979; Will 1993).
The conformal rescaling has been used as a mathematical technique to
generate exact
solutions of Brans--Dicke theory from known solutions of the Einstein
equations (Harrison 1972; Belinskii and Kalatnikov 1973; Lorentz--Petzold
1984) and approximate solutions in the linearized theory (Barros and Romero
1998). \\\\
{\bf (Generalized) scalar--tensor theories:} This class of theories
(Bergmann 1968; Wagoner 1970; Nordvedt 1970; Will 1993) is described by
the action
\begin{equation} \label{nonlin2}
S= \int d^4 x \sqrt{ -g}\left[ f( \phi ) R -\frac{\omega}{2}
\nabla^{\alpha} \phi \nabla_{\alpha} \phi -V( \phi ) \right] +S_{matter} \; ,
\end{equation}
where $\omega=\omega( \phi)$ and
$V =V ( \phi)$ (or by the more
general action (\ref{nonlin})). The corresponding field equations are
\begin{eqnarray}
f( \phi) \left( R_{\mu\nu} -\frac{1}{2} g_{\mu\nu} R \right) & = &
\frac{1}{2}\,
T_{\mu\nu} + \frac{\omega}{2} \left( \nabla_{\mu} \phi \nabla_{\nu} \phi -\frac{1}{2}
\, g_{\mu\nu} \nabla^{\alpha} \phi \nabla_{\alpha} \phi \right)
\nonumber \\
& & +\frac{1}{2}\, g_{\mu\nu} \left[ Rf( \phi) -2V \right] +
\nabla_{\mu} \nabla_{\nu} f -g_{\mu\nu} \Box f \; ,
\end{eqnarray}
\begin{equation}
\Box \phi +\frac{1}{\omega}\left( \frac{1}{2} \, \frac{ d\omega}{d\phi}
\nabla^{\alpha}\phi \nabla_{\alpha}\phi + R \, \frac{df}{d\phi} - \frac{d
V}{d\phi}\right) =0 \; ,
\end{equation}
where $T_{\mu\nu} = 2 (-g )^{-1/2} \delta {\cal L}_{matter}/\delta
g^{\mu\nu}$. The action
(\ref{nonlin2}) contains Brans--Dicke
theory (\ref{BDaction}) and the nonminimally coupled scalar field theory
(\ref{nonmincoupl}) as particular cases. Theories with more
than one scalar field have also been investigated
(Damour and Esposito--Far\`{e}se 1992; Berezin {\em et al.} 1989; Rainer
and Zuhk 1996).
A revival of interest in scalar--tensor theories
was generated by the fact that
in supergravity and superstring theories, scalar fields are associated with
the
metric tensor field, and that a coupling between a
scalar field and gravity seems unavoidable in string theories
(Green, Schwarz and Witten 1987). Indeed, scalar fields have been present in
relativistic gravitational theories even before general relativity was
formulated (see Brans 1997 for an historical perspective).
The necessity of
the conformal transformation to the Einstein frame has been advocated in
(Cho 1992, 1997) by investigating which linearized
metric describes the physical spin~2
gravitons. A similar argument was presented in
(Damour and Nordvedt 1993{\em a,b}),
although these authors did not see it as a
compelling reason to select the
Einstein frame as the physical one. It has also been pointed out
(Teyssandier and Tourrenc 1983; Damour and
Esposito--Far\`{e}se 1992; Damour and Nordvedt 1993{\em a,b})
that the mixing of
$g_{\mu\nu}$
and $ \phi$ in the Jordan frame equations of motion
makes the Jordan frame
variables an inconvenient set for formulating the Cauchy problem.
Moreover, the generalization to the case of tensor--multi scalar theories
of gravitation, where several scalar fields instead of a single one
appear, is straightforward
in the Einstein frame but not so in the Jordan frame (Damour and
Esposito--Far\`{e}se 1992). In the Einstein frame, ordinary matter does not
obey the conservation law
$\tilde{\nabla}^{\nu} \tilde{T}_{\mu\nu}=0$ (with the exception of a
radiative fluid with $\tilde{T}=0$, which is conformally invariant)
because of the coupling to the dilaton $\phi$. Instead, the equation
\begin{equation} \label{qq}
\tilde{\nabla}_{\nu} \tilde{T}^{\mu\nu}=-\frac{1}{\Omega}
\frac{\partial \Omega}{\partial \phi} \,
\tilde{T} \, \tilde{\nabla}^{\mu}\phi
\end{equation}
is satisfied. The total energy--momentum tensor of matter plus the
scalar field is conserved (see Magnano and Sokolowski 1994 for a detailed
discussion of conservation laws in both conformal frames).
The phenomenon of the propagation of light through scalar--tensor
gravitational waves and the resulting time--dependent amplification
of the light source provide an example of the physical difference between
the Jordan and the Einstein frame. In the Jordan frame the amplification
effect is of first order in the gravitational wave amplitude (Faraoni
1996{\em a}), while it is only of second order in the Einstein frame
(Faraoni and Gunzig 1998{\em a}).
It is interesting to note that, while the observational constraint on the
Brans--Dicke parameter $\omega$ is $\omega > 500 $ (Reasenberg {\em et
al.} 1979),
Brans--Dicke theory in the Einstein frame is subject to the much more
stringent
constraint $ \omega > 10^8 $ (Cho 1997). However, since the Einstein
frame is the physical one, it is not very meaningful to present constraints
on the Jordan frame parameter $\omega $.
Other formal and physical differences occur in the Jordan and the Einstein
frame: the singular points $\omega \rightarrow \infty$ in the
$\omega$--parameter space of
the Jordan frame correspond to a minimum of the coupling factor $\ln
\Omega(
\phi) $ in the Einstein frame (Damour and Nordvedt 1993{\em a,b}).
Singularities of the
scalar--tensor theory may be smoothed out in the Jordan frame, but they
reappear in the Einstein frame and plague the theory again due to
the fact that the kinetic terms are canonical and the energy conditions
(which are crucial
in the singularity theorems) are satisfied in the Einstein frame
(Kaloper and Olive 1998).
In (Bose and Lohiya 1997), the quasi--local mass defined in general
relativity by the recent Hawking--Horowitz prescription (Hawking and
Horowitz 1996) was generalized to $n$--dimensional scalar--tensor
theories. It was shown that this quasi--local mass is invariant under the
conformal transformation that reduces the gravitational part
of the scalar--tensor theory to canonical Einstein gravity. The result
holds under the assumptions that the conformal factor $\Omega \left( \phi
\right) $ is a monotonic function of the scalar field $\phi$, and that a
global foliation of the spacetime manifold with spacelike hypersurfaces
exists, but it does not require asymptotic flatness. Conformal invariance
of the quasi--local mass was previously found in another generalization to
scalar--tensor theories of the quasi--local mass (Chan,
Creighton and Mann 1996).
The conformal transformation technique has been used to derive new
solutions to
scalar--tensor theories from known solutions of Einstein's theory (Van den
Bergh 1980, 1982, 1983{\em a,b,c,d}; Barrow and Maeda 1990).\\ \\ {\bf
Nonlinear gravitational theories:} A yet more general class of theories
than the previous one is described by the Lagrangian density
\begin{equation} \label{f}
{\cal L}=f(R) \sqrt{-g} +{\cal L}_{matter} \; ,
\end{equation}
which generates the field equations
\begin{equation}
\left( \frac{df}{dR} \right) R_{\mu\nu} -\frac{1}{2}\, f(R) g_{\mu\nu}
-\nabla_{\mu}
\nabla_{\nu} \left( \frac{df}{dR} \right) +g_{\mu\nu} \Box \left( \frac{df}{dR}
\right) = T_{\mu\nu} \; ,
\end{equation}
\begin{equation}
T_{\mu\nu}=\frac{2}{\sqrt{-g}} \frac{\delta {\cal L}_{matter}}{\delta
g^{\mu\nu}} \; .
\end{equation}
It is claimed in (Magnano and Sokolowski 1994) that the Einstein frame is
the only physical one for this class of theories, using the energy
argument of Sec.~3. The idea
underlying the proof is to expand the function $f(R)$ as
\begin{equation}
f(R)=R+aR^2+\ldots \; , \;\;\;\;\;\;\;\; a>0\; ,
\end{equation}
and then prove a positive energy theorem in the Einstein frame and the
indefiniteness of the energy sign in the Jordan frame.
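The role of the sign of $a$ can be made explicit by tracing the field equations above: for $f(R)=R+aR^2$ one has $\Box f'=2a\Box R$ exactly (since $f''=2a$ is constant), and the trace $f'R-2f+3\Box f'=T$ collapses to $-R+6a\Box R=T$, so the extra scalar degree of freedom (the ``scalaron'') carries squared mass $1/(6a)$, non--tachyonic only for $a>0$. A quick symbolic check with sympy:

```python
import sympy as sp

R, a, BoxR = sp.symbols('R a BoxR')

f = R + a * R**2
fp = sp.diff(f, R)          # f'(R) = 1 + 2aR

# Trace of f' R_{mu nu} - (1/2) f g_{mu nu} - grad grad f' + g Box f' = T_{mu nu}
# gives f' R - 2 f + 3 Box f' = T, with Box f' = 2a Box R here.
trace = sp.expand(fp * R - 2 * f + 3 * (2 * a * BoxR))

assert sp.simplify(trace - (-R + 6 * a * BoxR)) == 0
print("trace of field equations:", trace)
```

In vacuum this reads $\Box R = R/(6a)$, a Klein--Gordon-type equation with $m^2=1/(6a)$, which makes the requirement $a>0$ in the expansion above transparent.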
The occurrence of singularities in higher order theories of gravity of the
form (\ref{f}) has been studied in
(Barrow and Cotsakis 1988; Miritzis and Cotsakis 1996; Kaloper and Olive
1998), both in the Jordan and in the Einstein frame.\\ \\
{\bf Kaluza--Klein theories:} In
classical Kaluza--Klein theories
(Appelquist, Chodos and Freund 1987; Bailin and Love 1987; Overduin
and Wesson 1997), the scalar field (dilaton) has a geometrical
origin and corresponds to the scale factor of the extra spatial
dimensions.
The extra dimensions manifest themselves as matter (scalar fields) in the
4--dimensional spacetime. In
the simplest version of the theory with a single scalar
field\footnote{See
e.g. (Berezin {\em et al.} 1989; Rainer and Zuhk 1996) for Kaluza--Klein
theories with multiple dilatons.},
one starts with the $(4+d)$--dimensional action of vacuum general relativity
\begin{equation}
\hat{S}=\frac{1}{16 \pi G}\int d^{(4+d)}x \left( \hat{R}+
\hat{\Lambda} \right) \sqrt{-\hat{g}} \; ,
\end{equation}
where a caret denotes higher--dimensional quantities,
the $(4+d)$--dimensional metric has the form
\begin{equation}
\left(\hat{g}_{AB}\right)=\left(
\begin{array}{cc}
\hat{g}_{\mu\nu} & 0 \\
0 & \hat{\phi}_{ab}
\end{array} \right) \; ,
\end{equation}
and $\hat{\Lambda} $ is the cosmological constant of the
$(4+d)$--dimensional
spacetime manifold. The latter is assumed to have the structure
$ M\otimes K$, where $M$ is 4--dimensional and
$K$ is $d$--dimensional. Here the notations depart from those introduced
at the beginning of this paper: the indices $A,
B$,~...~$=0,1,2,3,$~...~,$(4+d)$;
$\mu,\nu$,~...~$=0,1,2,3$, and $a,b,... =4,5,$~...,~$(4+d)$.
Dimensional reduction and the conformal transformation (\ref{CT}) with
$\Omega =\sqrt{ \phi}$, $\phi =\left| \hat{\phi}_{ab} \right| $,
together with the redefinition of the scalar field
\begin{equation} \label{sigma}
d \sigma= \frac{1}{2} \left( \frac{d+2}{16\pi G\, d} \right)^{1/2} \frac{d
\phi}{\phi} \; ,
\end{equation}
leads to the Einstein frame action
\begin{equation}
S=\int d^4 x \left[\frac{R}{16\pi G} -\frac{1}{2}\, \nabla_{\mu}\sigma
\nabla^{\mu} \sigma -V( \sigma ) \right] \sqrt{-g} \; ,
\end{equation}
\begin{equation} \label{VKK}
V( \sigma )=\frac{R_K}{16\pi G} \exp \left( -\sqrt{\frac{16\pi G (d+2)}{d}}
\,\sigma \right) +\frac{\hat{\Lambda}}{16\pi G}
\exp \left(-\sqrt{\frac{16\pi Gd}{d+2} }\, \sigma \right) \; ,
\end{equation}
where $R_K$ is the Ricci curvature of the metric on the submanifold $K$.
Note that $\phi$ is dimensionless. However, the redefined scalar field
$\sigma $ has the dimensions $\left[ \sigma \right] =\left[ G^{-1/2}
\right] $, and is usually measured in Planck masses.
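As a consistency check, the exponents in (\ref{VKK}) follow directly from the redefinition (\ref{sigma}): integrating gives $\sigma = \frac{1}{2}\sqrt{(d+2)/(16\pi G d)}\, \ln\phi$, so the two exponentials reduce to the powers $\phi^{-(d+2)/(2d)}$ and $\phi^{-1/2}=\Omega^{-1}$. A symbolic sketch with sympy:

```python
import sympy as sp

d, G, phi = sp.symbols('d G phi', positive=True)

# Integrating d(sigma) = (1/2) sqrt((d+2)/(16 pi G d)) dphi/phi:
k = sp.Rational(1, 2) * sp.sqrt((d + 2) / (16 * sp.pi * G * d))
sigma = k * sp.log(phi)

# Arguments of the two exponentials in V(sigma); both are positive
# multiples of ln(phi), so comparing squares is sufficient.
e1 = sp.sqrt(16 * sp.pi * G * (d + 2) / d) * sigma   # first term
e2 = sp.sqrt(16 * sp.pi * G * d / (d + 2)) * sigma   # second term

assert sp.simplify((e1 / sp.log(phi))**2 - ((d + 2) / (2 * d))**2) == 0
assert sp.simplify((e2 / sp.log(phi))**2 - sp.Rational(1, 4)) == 0
# Hence exp(-e1) = phi**(-(d+2)/(2*d)) and exp(-e2) = phi**(-1/2) = 1/Omega.
print("exponents consistent with the redefinition of sigma")
```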
Unfortunately, the omission of a factor $1/\sqrt{16\pi G}$ in the right
hand side of Eq.~(\ref{sigma}) seems to be common in the
literature on Kaluza--Klein cosmology (cf. (Faraoni, Cooperstock and
Overduin 1995)
and footnote~11 of (Kolb, Salopek and Turner 1990)) and it leads to
a non--canonical kinetic term
$(16 \pi G )^{-1} \nabla_{\mu}\sigma \nabla^{\mu} \sigma $ instead of
$ \nabla_{\mu}\sigma \nabla^{\mu} \sigma /2 $ in the final
action, and to a dimensionless field $\sigma$ instead of one with the
correct
dimensions $\left[ \sigma \right]=\left[ G^{-1/2} \right]$.
The error is perhaps due to the different notations used
by particle physicists and by relativists; however insignificant it may
appear to be, it profoundly affects the viability of the Kaluza--Klein
cosmological model
considered, since the spectral index of density perturbations is affected
through the arguments of the exponentials in the scalar field potential
(\ref{VKK}) (Faraoni, Cooperstock and Overduin 1995). In the Jordan
frame, the
scalar field originating from the presence of the extra dimensions has
kinetic energy that is negative definite or indefinite and an energy
spectrum
which is unbounded from below, implying that the ground state is unstable
(Maeda 1986{\em a}; Maeda and Pang 1986; Sokolowski and Carr 1986;
Sokolowski
and Golda 1987). These defects are removed by the conformal rescaling
(\ref{CT}) of the 4--dimensional metric. The requirement that
the conformally rescaled system in 4 dimensions has positive definite
energy (a
purely classical argument) singles out a {\em unique} conformal factor.
A proof of the uniqueness in 5--dimensional Kaluza--Klein theory was
given in (Bombelli {\em et al.} 1987) and later generalized to an
arbitrary number of extra spatial dimensions (Sokolowski 1989{\em a,b}).
From a quantum point of view, arguments in favour of the conformal
rescaling have been pointed out (Maeda 1986{\em b}) and, in the context of
10-- and 11--dimensional supergravity, the need for a conformal
transformation in order
to identify the physical fields was recognized
(Scherk and Schwarz 1979; Chamseddine 1981; Dine {\em et al.} 1985). The
requirement that the supersymmetry
transformation of 11--dimensional supergravity take an $SU(8)$ covariant
form leads to the same conformal factor (de Witt and Nicolai 1986). The
conformal transformation which works as a cure for the dimensionally
reduced $(4+d)$--dimensional Einstein gravity does not work for the
dimensionally reduced Gauss--Bonnet theory (Sokolowski {\em et al.} 1991).
It is unfortunate that in the literature on Kaluza--Klein theories many
authors
neglected the conformal rescaling and only performed computations in the
Jordan frame. Many results of classical Kaluza--Klein theories should be
reanalysed in the Einstein frame (e.g.
Alvarez and Bel\'{e}n Gavela 1983; Sadhev 1984; Deruelle and Madore 1987;
Van den Bergh and Tavakol 1993; Fabris and Sakellariadou 1997; Kubyshin
and Martin 1995; Fabris and Martin 1993; Chatterjee and Banerjee 1993;
Biesiada 1994).\\ \\
{\bf Torsion gravity:} Theories of gravity with torsion have been studied
in
order to incorporate the quantum mechanical spin of elementary particles,
or in attempts to formulate gauge theories of gravity (Hehl {\em et
al.} 1976). An
example is given by a theory of gravity with torsion, related to string
theories, recently formulated both in
the Jordan and in the Einstein frame
(Hammond 1990, 1996). Torsion acts as a source of the scalar
field; ordinary (i.e. different from the scalar
field appearing in (\ref{specialCT})) matter is added to the theory
formulated in the Jordan or in the
Einstein frame. This possibility differs from a conformal transformation
of the total (gravity plus matter) system to the Einstein frame, and it
does not appear to be legitimate since ordinary matter cannot be created
as an effect of a conformal transformation.
Although
mathematically possible, this procedure appears to be very artificial; it
has also been considered in (Magnano and Sokolowski 1994) by including a
nonminimal coupling of the scalar field to matter in the Jordan frame. The
coupling was tuned in such a way that the Einstein frame matter is
minimally coupled to the corresponding scalar field.
The Jordan frame formulation of this theory is unviable because the large
effects of the dilaton contradict the observations (Hammond 1996), and the
Einstein frame version of this theory is the only possible option.
Induced gravity, which is described by the action
\begin{equation} \label{inducedgravity}
S=\int d^4x \sqrt{-g} \left[ -\, \frac{\xi}{2} R\phi^2 -\frac{1}{2}
\nabla^{\mu} \phi \nabla_{\mu} \phi - V( \phi) \right] \; ,
\end{equation}
is conformally invariant if $\xi=1/6$. The field equations are
\begin{equation}
R_{\mu\nu}-\frac{1}{2} g_{\mu\nu} R = -\frac{1}{\xi \phi^2} \left[ \left(
1-4\xi \right) \nabla_{\mu}
\nabla_{\nu} \phi + g_{\mu\nu} \left( 2\xi -\frac{1}{2}
\right) \nabla^{\alpha} \phi \nabla_{\alpha} \phi - V g_{\mu\nu} +2 \xi
g_{\mu\nu} \phi \Box \phi \right] \; ,
\end{equation}
\begin{equation}
\Box \phi -\xi R \phi - \frac{dV}{d\phi} =0 \; .
\end{equation}
Induced gravity with torsion in Riemann--Cartan spacetimes has been
studied in (Park and Yoon 1997), and a generalization of the concept of
conformal invariance has been
formulated.\\ \\
{\bf Superstring theories:} Although superstring theories are not
classical theories of gravity, the effective action in the low
energy limit is used to make predictions in the classical domain, and we
comment upon this. The low--energy effective action for the
bosonic string theory is given by (Callan {\em et al.} 1985)
\begin{equation} \label{stringaction}
S=\int d^{10}x \sqrt{-g} \left[ {\mbox e}^{-2\Phi} R+4 \nabla^{\mu}\Phi
\nabla_{\mu} \Phi \right] + S_{matter} \; ,
\end{equation}
where $\Phi$ is the dimensionless string dilaton and the totally
antisymmetric 3--form
$H_{\mu\nu\lambda}$ appearing in the theory has been set equal to zero
together
with the cosmological constant (however, this is not always the case in
the literature). By means of dimensional reduction and a conformal
transformation,
this model is reduced to 4--dimensional canonical gravity with two scalar
fields:
\begin{equation}
\psi_1=\frac{1}{\sqrt{16\pi G}} \left( 6\ln b - \frac{\Phi}{2} \right) \; ,
\end{equation}
\begin{equation}
\psi_2=\sqrt{\frac{3}{8\pi G}} \left( 2\ln b + \frac{\Phi}{2} \right) \; ,
\end{equation}
where $b$ is the radius of the manifold of the compactified extra
dimensions.
The action (\ref{stringaction}) has provided theoreticians with several
cosmological models
(Gasperini, Maharana and Veneziano 1991;
Garcia--Bellido and Quir\`{o}s 1992;
Gasperini and Veneziano 1992;
Gasperini, Ricci and Veneziano 1993;
Gasperini and Ricci 1993;
Copeland, Lahiri and Wands 1994, 1995;
Batakis 1995;
Batakis and Kehagias 1995;
Barrow and Kunze 1997). The issue of which conformal frame is physical in
the low energy limit of string theories was raised in (Dick 1998).
\section{Conformal transformations in cosmology}
\setcounter{equation}{0}
The standard big--bang cosmology based on general relativity is
a very successful description of the universe that we observe,
although cosmological solutions have been studied also in alternative
theories of
gravity. However, the need to solve the horizon, flatness and monopole
problems, and to find a viable mechanism for the generation of density
fluctuations evolving into the structures that we see today (galaxies,
clusters, superclusters and voids) motivated research beyond the big--bang
model and led to the idea of cosmological inflation (see Linde 1990; Kolb
and Turner 1990; Liddle and Lyth 1993; Liddle 1996 for reviews). There is
no universally accepted
model of inflation, and several scenarios based either on general
relativity or on alternative theories of gravity have been proposed.
Since many of the alternative theories used involve a conformal
transformation to a new conformal frame, it is natural that the problem of
whether the Jordan or the Einstein frame is the physical one resurfaces in
cosmology,
together with the use of conformal rescalings to simplify the study of the
equations of motion.
It is possible that general relativity behaves as an attractor for
scalar--tensor theories of
gravity, and that a theory which departs from general
relativity at early times in the history of the universe approaches general
relativity during the matter--dominated era
(Garcia--Bellido and Quir\'{o}s 1990;
Damour and Nordvedt 1993{\em a,b};
Mimoso and Wands 1995{\em a}; Oukuiss 1997) or even during inflation
(Bekenstein and Meisels 1978;
Garcia--Bellido and Quir\'{o}s 1990;
Barrow and Maeda 1990;
Steinhardt and Accetta 1990;
Damour and Vilenkin 1996) (unfortunately only the
Jordan frame was considered in (Garcia--Bellido and
Quir\'{o}s 1990; Mimoso and Wands 1995{\em a})). The convergence to
general relativity cannot occur during the radiation--dominated era
(Faraoni 1998).
One of the most important predictions of an inflationary scenario is the
spectral index of density perturbations, which can
already be compared with the observations of cosmic microwave background
anisotropies and of large scale structures (Liddle and Lyth 1993). The
spectral index is, in general,
different in versions of the same scalar--tensor theory formulated
in different conformal frames. For example,
it is known that most classical Kaluza--Klein inflationary models based
on the Jordan frame are allowed by
the observations but are theoretically unviable
(Sokolowski 1989{\em a,b}; Cho 1992) because of the energy argument
discussed in
Sec.~3; on the contrary, their Einstein frame counterparts are
theoretically consistent but they are severely restricted or even
forbidden by the
observations of cosmic microwave background anisotropies
(Faraoni, Cooperstock and Overduin 1995).
In extended (La and Steinhardt 1989; Kolb, Salopek and Turner 1990;
Laycock and Liddle 1994) and
hyperextended
(Steinhardt and Accetta 1990; Liddle and Wands 1992; Crittenden
and Steinhardt 1992) inflation, differences between the density
perturbations in the two frames have been pointed out
(Kolb, Salopek and Turner 1990).
The existing confusion on the problem of whether the
Jordan or the Einstein frame is the physical one is particularly
evident
in the literature on inflation, and deeply affects the viability of the
inflationary scenarios based on a theory of gravity which has a conformal
transformation as an ingredient. Among these
are extended (La and Steinhardt 1989; Laycock and Liddle 1994)
and hyperextended
(Kolb, Salopek and Turner 1990;
Steinhardt and Accetta 1990;
Liddle and Wands 1992;
Crittenden and Steinhardt 1992) inflation,
Kaluza--Klein (Yoon and Brill 1990; Cho and Yoon 1993; Cho 1994),
$R^2$--inflation (Starobinski
1980; Starobinski 1986; Maeda, Stein--Schabes and Futamase
1989; Liddle and Lyth 1993), soft and induced gravity inflation
(Accetta, Zoller and Turner 1985; Accetta and
Trester 1989; Salopek, Bond and Bardeen 1989).
While several authors completely neglect the problem of which frame is
physical, other authors present calculations
in only one frame, and others again perform calculations in both frames,
without deciding whether one of
the two is physically preferred. Sometimes, the two frames are
implicitly
treated as if they were both simultaneously physical,
and part of the results are presented in the Jordan frame, part in
the Einstein frame. It is often remarked that all models of inflation based on a
first order phase transition can be recast as slow--roll inflation using a
conformal transformation (Kolb, Salopek and
Turner 1990; Kalara, Kaloper and Olive 1990; Turner 1993; Liddle 1996),
but the conformal rescaling is often performed without physical
motivation. The justification for studying the original (i.e. prior to the
conformal
transformation) theory of gravity or inflationary scenario, which often
relies on a specific theory of high energy physics, is then completely
lost in this way. For example, one can start with a perturbatively
renormalizable potential in the Jordan frame and most likely one
ends up with a non--renormalizable potential in the Einstein frame.
The conformal rescaling has even been used to vary the Jordan frame
gravitational theory in order to obtain a pre--determined scalar field
potential in the Einstein frame (Cotsakis and Saich 1994).
It is to be noted that the conformal transformation to a new conformal
frame is
sometimes used as a purely mathematical device to compute cosmological
solutions by reducing the problem to a familiar (and computationally more
convenient) scenario. The conformal transformation technique has been
used to study also cosmological perturbations in generalized
gravity theories
(Hwang 1990; Mukhanov, Feldman and Brandenberger 1992; Hwang 1997{\em a}).
This technique is certainly legitimate and convenient at the classical
level, but it leads to problems when quantum fluctuations of the inflaton
field are computed in the new conformal frame, and the result is mapped
back into the ``old'' frame. Problems arise already at the
semiclassical level (Duff 1981).
This difficulty does not come as a surprise, since
the conformal transformation introduces a mixing
of the degrees of freedom corresponding to the scalar and the
tensor modes. In general, the fluctuations in the two frames are
physically
inequivalent (Fujii and Nishioka 1990; Makino and Sasaki 1991; Nishioka
and Fujii 1992; Fakir and Habib 1993; Fabris and Tossa 1997).
There is ambiguity in the choice of vacuum states for the quantum fields:
if a vacuum is chosen in one frame, it is unclear into what state the
field is mapped in the other conformal frame, and one will end up, in
general, with two
different quantum states. The use of gauge--invariant quantities does not
fix this problem (Fakir, Habib and Unruh 1992).
The problem that plagues quantum fluctuations becomes relevant for
present--day observations because the quantum perturbations eventually
become classical (Kolb and Turner 1990; Liddle and Lyth 1993; Tanaka and
Sakagami 1997) and
seed galaxies, clusters and superclusters.
Although the problem is not solved in general, the situation is not so
bad in certain specific inflationary scenarios. In (Sasaki 1986; Makino
and Sasaki
1991; Fakir, Habib and Unruh 1992),
chaotic inflation with the quartic potential $V=\lambda \phi^4$ and
nonminimal coupling of the scalar field was studied, and it was found that
the amplitude of the density perturbations does not change under the
conformal
transformation. This result, however, relies on the assumption that one can
split the inflaton field into a classical background plus quantum
fluctuations
(preliminary results when the decomposition is not possible have been
obtained
in (Nambu and Sasaki 1990)).
Under slow--roll conditions in induced gravity inflation, the spectral
index of density perturbations is frame--independent to first order in the
slow--roll parameters (Kaiser 1995{\em a}).
When the expansion of the universe is de Sitter--like,
$a(t) \propto {\mbox e}^{Ht}$, $\dot{H} \approx 0$, it was found that the
magnitude of the two--point correlation function is affected by the
conformal
transformation, but its dependence on the wavenumber, and consequently
also the spectral index, is not affected (Kaiser 1995{\em b}). The
spectral indices
differ in the two conformal frames when
the expansion of the scale factor is close to a power
law\footnote{If inflation occurs in the early universe, it
is not necessarily of
the slow--roll type. The best studied case of inflation without slow
rolling is power--law inflation, which occurs for the exponential potentials
obtained in almost
all theories formulated in the Einstein frame.} (Kaiser 1995{\em b}); often,
workers in the field have not been sufficiently careful in this
regard. Certain gauge--invariant quantities related to the cosmological
perturbations turn out to be also conformally invariant under a
mathematical condition satisfied by power law inflation and by the
pole--like inflation encountered in the pre--big bang scenario of low
energy string theory (Hwang 1997{\em b}).
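For reference, the Einstein frame result for an exponential potential $V=V_0 \,{\mbox e}^{-\lambda \kappa \phi}$ (with $\kappa^2 = 8\pi G$) can be computed with the textbook slow--roll formulae, which are assumed here rather than taken from the text above: $\epsilon = \lambda^2/2$, $\eta_{sr}=\lambda^2$, and $n_s \simeq 1-6\epsilon+2\eta_{sr} = 1-\lambda^2$. A sketch with sympy:

```python
import sympy as sp

lam, kappa, phi, V0 = sp.symbols('lambda kappa phi V0', positive=True)

# Exponential potential: the generic Einstein frame outcome, driving
# power-law inflation. Standard slow-roll parameters (assumed formulae):
V = V0 * sp.exp(-lam * kappa * phi)

eps = sp.simplify((sp.diff(V, phi) / V)**2 / (2 * kappa**2))
eta_sr = sp.simplify(sp.diff(V, phi, 2) / (kappa**2 * V))
n_s = sp.simplify(1 - 6 * eps + 2 * eta_sr)

assert sp.simplify(eps - lam**2 / 2) == 0
assert sp.simplify(eta_sr - lam**2) == 0
assert sp.simplify(n_s - (1 - lam**2)) == 0
print("n_s =", n_s)
```

Since the slow--roll parameters are constant for this potential, the first-order spectral index is exactly scale invariant in $\phi$, which is the sense in which power-law inflation proceeds without slow--roll corrections changing along the trajectory.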
At the level of the classical, {\em unperturbed}
cosmological model, the occurrence of slow--roll inflation in the
Einstein frame does
not necessarily imply that inflation occurs also in the Jordan frame, or
that
it is of the slow--roll type, and the expansion law of the scale factor is
in
general different in the two conformal frames\footnote{For extended
inflation
in Brans--Dicke theory with $\omega \gg 1$, it has been proved that
slow--roll
inflation in the Einstein frame implies slow--roll inflation in the Jordan
frame (but not vice versa) (Lidsey 1992; Green and Liddle 1996).}
(see Abreu, Crawford and Mimoso 1994 for an example).
Possible approaches to this problem are outlined in
(Fakir and Habib 1993).
Even if the same expansion law is achieved in the Jordan and the
Einstein frame, the corresponding scalar field potentials can be quite
different in the two frames. For example, power--law inflation is achieved
by an exponential potential for a minimally coupled scalar field in the
Einstein frame, and by a polynomial potential for its nonminimally coupled
cousin in the Jordan frame (Abreu, Crawford and Mimoso 1994;
Futamase and Maeda 1989; Faraoni 1996{\em b}).
Another cosmologically relevant aspect of the scalar field appearing in
(\ref{specialCT}) is that it may contribute a significant fraction of the
dark matter in the universe
(Cho 1990; Cho
and Yoon 1993; Delgado 1994; McDonald 1993{\em a,b};
Gasperini and Veneziano 1994; Gasperini 1994; Cho and Keum 1998).
If one accepts the idea that the scalar field appearing in the expression
for
the conformal factor (\ref{specialCT}) is the field driving inflation
(Salopek 1992; Cho 1992, 1994), then the inflationary scenario is
completely determined. In fact, the conformal transformation to the
Einstein frame in cosmology leads to either $a)$ an exponential potential
for the scalar field and to power--law inflation; $b)$ a potential with
more than one exponential term in Kaluza--Klein theories
(Yoon and Brill 1990; Cho 1990; Cho and Yoon 1993), and to a kind of
inflation that interpolates
between power--law and de Sitter inflation (Easther 1994).
It is also to be noted that, if a cosmological constant is present in a
theory formulated in the Jordan frame, the new version of the theory in the
Einstein frame has no cosmological constant
(Collins, Martin and Squires 1989; Fujii 1998; Maeda 1992)
but, instead, it exhibits an exponential term in the potential for the
``new'' scalar field.
The problem of whether a Noether symmetry is preserved by the conformal
transformation has been analysed in
(de Ritis {\em et al.} 1990;
Demianski {\em et al.} 1991;
Capozziello, de Ritis and Rubano 1993;
Capozziello and de Ritis 1993;
Capozziello, de Ritis and Marino 1997;
Capozziello and de Ritis 1996, 1997{\em b}).
The asymptotic evolution to an isotropic state of anisotropic Bianchi
cosmologies in higher order theories with Lagrangian density of the
form $ {\cal L}=f(R) \sqrt{-g} +{\cal L}_{matter}$ was studied in
(Miritzis and Cotsakis 1996) using the conformal rescaling
as a mathematical
tool. This study is relevant to the issue of cosmic no--hair theorems in
these gravitational theories. In the Einstein frame, a homogeneous
universe with matter satisfying the strong and dominant energy conditions
and with a
scalar field with a potential $V( \phi)$ locally convex
and with zero minimum, can isotropize
only if it is of Bianchi type I, V or VII. This result holds also in the
Jordan
frame if, in addition, the pressure of matter is positive
(Miritzis and Cotsakis 1996).
\section{Experimental consequences of the Einstein frame reformulation of
gravitational theories}
\setcounter{equation}{0}
In most unified field theories, the conformal factor used
in the conformal transformation
is constructed using a physical field present in the gravitational theory
(like a dilaton or a Brans--Dicke field) and therefore
it is not surprising that it has certain physical effects which are, in
principle, amenable to experimental verification. The reality of the
interaction with gravitational strength described by the dilaton was
already stressed by Jordan (1949; 1955).
The dilaton field in the Einstein frame couples differently to gravity and
to matter (e.g. Horowitz 1990; Garfinkle, Horowitz and Strominger 1991),
and the
anomalous coupling results in a violation
of the equivalence principle. Consider for example the action
(\ref{nonlin2})
plus a matter term in the Jordan frame: after the rescaling (\ref{CT}),
(\ref{17}), (\ref{22}) has been performed, the scalar $\tilde{\phi}$
is minimally coupled to gravity,
but it couples nonminimally to the other forms of matter via a
field--dependent exponential factor:
\begin{equation}
S=\int d^4 x \left\{ \sqrt{-\tilde{g}} \left[
\frac{\tilde{R}}{16 \pi G} -\frac{1}{2} \, \tilde{\nabla}^{\mu}
\tilde{\phi} \tilde{\nabla}_{\mu} \tilde{\phi} \right] + {\mbox e}^{-\alpha
\sqrt{G}\, \tilde{\phi}} {\cal L}_{matter} \right\} \; .
\end{equation}
This leads to a violation of the equivalence principle which can, in
principle, be tested by free fall experiments
(Taylor and Veneziano 1988;
Brans 1988;
Cvetic 1989;
Ellis {\em et al.} 1989;
Cho and Park 1991;
Cho 1992;
Damour and Esposito--Far\`{e}se 1992;
Cho 1994;
Damour and Polyakov 1994{\em a,b};
Brans 1997).
It is probably this anomalous coupling and the subsequent violation of
the equivalence principle that explain the prejudice of many
theoreticians against the use of the Einstein frame (which is not,
however, a matter of taste, but is motivated by the independent energy
arguments of Sec.~3). However, it is well known that although the
Brans--Dicke scalar couples universally to all forms of ordinary matter
in the Jordan frame, the {\em strong} equivalence principle is violated
in this frame. This is sometimes understood as the fact that gravity
determines a {\em local} value of the effective gravitational
``constant'' $G=\phi^{-1}$ (e.g. Brans 1997). In any case, the dilaton
dependence of the coupling
constants is to be regarded as an important prediction of string theories
in the low energy limit, and as a new motivation for improving the
present precision of tests of the equivalence principle.
By describing the gravitational interaction between two point masses $m_1$
and
$m_2$ with the force law
\begin{equation}
F=\frac{G m_1 m_2}{r^2}\left( 1+\lambda {\mbox e}^{-\mu r} \right) \; ,
\end{equation}
where $\lambda $ and $\mu^{-1} $ are, respectively, the strength and the range
of the fifth force, one obtains constraints on the values of these
parameters. Due to the smallness of the values of $\lambda $ allowed by
the theory, the null results of the
experiments looking for a fifth force still leave room for a theory
formulated in the Einstein frame and with anomalous coupling
(Cho 1992, 1994; Damour and Polyakov 1994{\em a,b}; Cho and Keum 1998).
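The content of this parametrization is easily visualized: the fractional deviation from the Newtonian force is $\lambda \, e^{-\mu r}$, which saturates at $\lambda$ for $r \ll \mu^{-1}$ and is exponentially suppressed for $r \gg \mu^{-1}$. A minimal numerical sketch (the parameter values below are purely illustrative, not experimental):

```python
import math

# Yukawa-corrected force law F = (G m1 m2 / r^2) * (1 + lam * exp(-mu * r)):
# the fractional deviation from the Newtonian value is lam * exp(-mu * r).
def fractional_deviation(r, lam, mu):
    return lam * math.exp(-mu * r)

lam, mu = 1.0e-3, 1.0   # illustrative strength and inverse range
dev_short = fractional_deviation(1.0e-6, lam, mu)  # r << 1/mu: deviation -> lam
dev_long = fractional_deviation(100.0, lam, mu)    # r >> 1/mu: suppressed
```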
There are also post--Newtonian effects and departures from general
relativity
in the strong gravity regime (Damour and Esposito--Far\`{e}se 1992), as
well as differences in
the gravitational radiation emitted and absorbed as compared to
general relativity
(Eardley 1975;
Will and Eardley 1977;
Will 1977;
Will and Zaglauer 1989;
Damour and Esposito--Far\`{e}se 1992).
If $\alpha ( \phi )=\partial ( \ln \Omega ) /\partial \phi$, where
$\Omega$ is the conformal factor in (\ref{CT}), then the post--Newtonian
parameters $\gamma$ and $\beta$ (Will 1993) are given by (Damour and
Esposito--Far\`{e}se 1992; Damour and Nordtvedt 1993{\em a,b})
\begin{equation}
\gamma -1=\left. -\, \frac{2 \alpha^2}{1+\alpha^2} \right|_{\phi_0} \; ,
\end{equation}
\begin{equation}
\beta=1+ \frac{\alpha^2}{2\left( 1+\alpha^2 \right)^2}\, \left.
\frac{\partial^2 ( \ln \Omega )}{\partial \phi^2} \right|_{\phi_0} \; ,
\end{equation}
where $\phi_0=\phi (t_0 )$ is the value of the scalar field at the present
time
$t_0$, and it is assumed that the Brans--Dicke field only depends on time.
The 1$\sigma$
limits on $\gamma $ from the Shapiro time delay experiment in the
Solar System (Will 1993) are $|\gamma -1 |< 2\cdot 10^{-3}$ (which implies
$\alpha^2 <10^{-3}$) and the combination $\eta \equiv 4\beta -\gamma -3$
is subject to the constraint $|\eta|< 5\cdot 10^{-3}$.
By contrast, in a scalar--tensor theory, one expects
$\alpha \approx 1$. This value of $\alpha$ could have been realistic early
in the history of the universe with scalar--tensor gravity converging to
general relativity at a later time during the matter--dominated epoch
(Damour and
Nordtvedt 1993{\em a,b}). Accordingly, the Jordan and the Einstein frames
would coincide
today, the rescaling (\ref{CT}) differing from the identity only before the
end of the matter--dominated era.
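The bound $\alpha^2 < 10^{-3}$ quoted above follows by inverting $\gamma -1 = -2\alpha^2 /( 1+\alpha^2 )$ at the $1\sigma$ Shapiro time delay limit; a two--line check:

```python
# Invert |gamma - 1| = 2 a2 / (1 + a2) for a2 = alpha^2:
# a2 = |gamma - 1| / (2 - |gamma - 1|).
def alpha_sq_from_gamma_dev(gamma_dev):
    return gamma_dev / (2.0 - gamma_dev)

a2_max = alpha_sq_from_gamma_dev(2.0e-3)  # ~1.0e-3, i.e. alpha^2 < 10^-3
```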
\section{Nonminimal coupling of the scalar field}
\setcounter{equation}{0}
The material contained in this section is a summary of the state of the
art on issues that have been only partially explored, results whose
consequences are largely unknown, and problems that are still open. We
try to point out the directions that, at present, appear most
promising for future research. The reader should be aware of the fact
that due to
the nature of such a discussion, the selection of topics presented here
does not exhaust all the aspects involved.
The generalization to a curved spacetime of the flat space Klein--Gordon
equation for a scalar field $\phi$,
\begin{equation} \label{KleinGordon}
\Box \phi -\xi R \phi -\frac{dV}{d\phi}=0 \; ,
\end{equation}
includes the possibility of an explicit coupling term $\xi R\phi $
between the field $\phi$ and the Ricci curvature
of spacetime (Callan, Coleman and Jackiw 1970). There are many reasons to
believe
that a nonminimal (i.e. $\xi \neq 0$) coupling term appears:
a nonminimal coupling is generated by quantum corrections even if it is
absent
in the classical action, or it is required in order to
renormalize the theory (Freedman, Muzinich and Weinberg 1974;
Freedman and Weinberg 1974). It has also been argued in quantum
field theory in curved spaces that a nonminimal coupling term is to be
expected whenever the spacetime curvature is large. This leads to what we
will
call the ``$\xi$--problem'', i.e. the problem
of whether physics uniquely determines the value of $\xi$. The
answer to this question is affirmative in many theories; several
prescriptions for the coupling constant $\xi$ exist and they differ
according
to the theory of the scalar field adopted. In general relativity and
in all metric theories of gravity in which the scalar field $\phi$ has a
non--gravitational origin, the value of $\xi$ is fixed to the value $1/6$
by
the Einstein equivalence principle
(Chernikov and Tagirov 1968;
Sonego and Faraoni 1993;
Grib and Poberii 1995;
Grib and Rodrigues 1995;
Faraoni 1996{\em b}).
This is in contrast with a previous claim that
nonminimal coupling spoils the equivalence principle (Lightman
{\em et al.} 1975).
However, this claim has been shown to be based on flawed arguments;
instead, it is the minimal coupling of the scalar field that leads to
pathological behaviour (Grib and Poberii 1995; Grib and Rodrigues 1995).
It is interesting
that the derivation of the value $\xi=1/6$ is completely
independent of conformal
transformations, the conformal structure of spacetime, the spacetime metric
and the field equations for the metric tensor of the theory. The fact that the
conformal coupling constant $\xi=1/6$ emerges from these considerations is
extremely
unlikely to be a coincidence, but at present there is no
satisfactory understanding of the reason why this happens, apart from the
following naive consideration. No preferred length scale is present in the
flat space massless Klein--Gordon equation and therefore no such scale
can appear in the limit of the corresponding curved space massless
equation when small regions of spacetime are considered, if the Einstein
equivalence principle holds.
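For completeness we recall the standard generalization of this result (see e.g. Birrell and Davies 1982): in $n$ spacetime dimensions the massless Klein--Gordon equation is conformally invariant for the coupling
\begin{equation}
\xi (n)=\frac{n-2}{4 \left( n-1 \right)} \; ,
\end{equation}
which reduces to $\xi =1/6$ for $n=4$ and vanishes for $n=2$, where minimal and conformal coupling coincide.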
In all theories formulated in the Einstein frame, instead, the scalar field
is minimally coupled ($\xi =0$) to the Ricci curvature, as is evident from
the actions (\ref{actionBDEframe}), (\ref{mincoupl}), (\ref{lin}),
(\ref{actionEframe}).
In many quantum theories of the scalar field $\phi$ there is a unique
solution
to the $\xi$--problem, or there is a restricted range of values of $\xi$.
If $\phi$ is a Goldstone boson in a theory with spontaneous symmetry
breaking,
$\xi=0$ (Voloshin and Dolgov 1982).
If $\phi$ represents a composite particle,
the value of $\xi$ should be fixed by the known dynamics of its
constituents:
for example, for the Nambu--Jona--Lasinio model,
$\xi=1/6$ in the large $N$
approximation (Hill and Salopek 1992). In the $O(N)$--symmetric model with
$V=\alpha \phi^4$, in which the constituents of the $\phi$ boson are
scalars
themselves, $\xi$ depends on the coupling constants of the elementary
scalars (Reuter 1994): if the coupling of the elementary scalars is
$\xi_0=0$, then $
\xi\in \left[ -1,0 \right]$ while, if $\xi_0=1/6 $, then $\xi =0 $.
For Higgs scalar fields in the standard model and canonical gravity, the
allowed
range of values of $\xi$ is $\xi \leq 0$, $\xi \geq 1/6$ (Hosotani 1985).
The back reaction of gravity on the stability of the scalar $\phi$ in the
potential $ V( \phi)=\eta \phi^3 $ leads to $\xi=0$ (Hosotani 1985).
The stability of a nonminimally coupled scalar field with the
self--interaction
potential
\begin{equation}
V( \phi)=\alpha \phi + m^2 \phi^2/2 +\beta \phi^3 +\lambda \phi^4 -
\Lambda
\end{equation}
was shown to restrict the possible values of $\xi $ and of the other
parameters of this model (Bertolami 1987). Quantum corrections lead to a
typical value of $\xi$ of order $10^{-1}$ (Allen 1983; Ishikawa 1983).
In general, in a quantum theory $\xi$ is renormalized together with the
other coupling constants of the theory and the particles' masses
(Birrell and Davies 1980; Nelson and Panangaden 1982;
Parker and Toms 1985; Hosotani 1985; Reuter 1994);
this makes an unambiguous solution of the
$\xi$--problem more difficult. In the context of cosmological inflation, a
significant simplification occurs due to the fact that inflation is a
classical, rather than quantum, phenomenon: the energy scale involved is
well below the Planck
scale. The potential energy density of the inflaton field 50 e--folds
before the end of inflation is subject to the constraint $V_{50} \leq 6
\cdot 10^{-11} m_{Pl}^4$, where $m_{Pl}$ is the Planck mass
(Kolb and Turner 1990; Turner 1993; Liddle and Lyth 1993). Moreover,
the trajectory of the inflaton is peaked around classical trajectories
(Mazenko, Unruh and Wald 1985;
Evans and McCarthy 1985;
Guth and Pi 1985; Pi 1985;
Mazenko 1985{\em a,b};
Semenoff and Weiss 1985).
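The sub--Planckian character of inflation is easily quantified; a short numerical sketch (assuming the standard value $m_{Pl} \simeq 1.22 \cdot 10^{19}$~GeV):

```python
# Energy scale corresponding to the bound V_50 <= 6e-11 m_Pl^4.
m_Pl_GeV = 1.22e19              # Planck mass in GeV (assumed standard value)
scale = (6.0e-11) ** 0.25       # V_50^(1/4) in Planck units, ~2.8e-3
scale_GeV = scale * m_Pl_GeV    # ~3.4e16 GeV, well below the Planck scale
```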
Nevertheless, attempts have been made to begin the inflationary epoch in
the context of string theory or quantum cosmology. A running coupling
constant in inflationary
cosmology was introduced in (Hill and Salopek 1992) and used to
improve the chaotic inflationary scenario in (Futamase and Tanaka 1997).
Asymptotically free theories in an external gravitational field described
by
the Lagrangian density
\begin{equation}
{\cal L}=\sqrt{-g} \left( aR^2+b\, G_{GB} +c \, C_{\alpha\beta\gamma\delta}
\, C^{\alpha\beta\gamma\delta} +\xi R \phi^2 \right) +{\cal L}_{matter} \;
,
\end{equation}
where $G_{GB}$ is the Gauss--Bonnet invariant, have a coupling constant
$\xi (t) $ that
depends on time and tends to $1/6$ when $t \rightarrow \infty $
(Buchbinder 1986; Buchbinder, Odintsov and Shapiro 1986). In the
renormalization group
approach to grand unification theories in curved spaces it was found that,
at the one loop level, $ \xi (t) \rightarrow 1/6 $ or
$ \xi (t) \rightarrow \infty $ exponentially
(Buchbinder and Odintsov 1983, 1985;
Buchbinder, Odintsov and Lichzier 1989;
Odintsov 1991; Muta and Odintsov 1991; Elizalde and Odintsov 1994).
However, this result is not free of controversies (Bonanno 1995; Bonanno
and Zappal\`a 1997).
Nonminimal couplings of the scalar field have been widely used in
cosmology, and therefore the above prescriptions have important
consequences for the
viability of inflationary scenarios. In fact, the nonminimal coupling
constant $\xi$ becomes an extra parameter of inflation, and it is well
known that it affects the viability of many scenarios
(Abbott 1981; Starobinsky 1981; Yokoyama 1988; Futamase
and Maeda 1989; Futamase, Rothman and Matzner 1989; Amendola, Litterio
and Occhionero 1990;
Accioly and Pimentel 1990; Barroso {\em et al.}
1992; Garcia--Bellido and Linde 1995; Faraoni 1996{\em b}). The occurrence
of inflation in anisotropic spaces is also affected by the
value of $\xi$ (Starobinsky 1981; Futamase, Rothman and Matzner 1989;
Capozziello and de Ritis 1997{\em b}), which is
relevant for the cosmic no--hair theorems. In many papers on inflation,
the nonminimal coupling was used to improve the inflationary scenario;
however,
the feeling is that, in general, it actually works in the opposite
direction
(Faraoni 1997{\em a}). In some cases it may be possible to
compare the spectral
index of
density perturbations predicted by the inflationary theory with
the available observations of cosmic microwave background anisotropies
in order to determine the value of
$\xi $ (Kaiser 1995{\em a}; Faraoni 1996{\em b}), or to
obtain other observational constraints (Fukuyama {\em et al.} 1996).
In cosmology, for chaotic inflation with the potential $V=\lambda \phi^4$,
a nonminimal coupling to the curvature lessens the fine--tuning on the
self--coupling
parameter $\lambda$ imposed by the cosmic
microwave background anisotropies
(Salopek, Bond and Bardeen 1989;
Fakir and Unruh 1990{\em a,b};
Kolb, Salopek and Turner 1990;
Makino and Sasaki 1991),
$\lambda < 10^{-12}$. A nonminimal coupling term can also enhance the
growth of density perturbations (Maeda 1992; Hirai and Maeda 1994; Hirai
and Maeda 1997).
For scalar fields in a Friedmann universe, the long wavelengths $\lambda$
do not scale according to the usual redshift formula $\lambda /\lambda_0=a(t) /a(
t_0)$,
but exhibit diffractive corrections if $\xi \neq 1/6$
(Hochberg and Kephart 1991).
The value of the coupling constant $\xi$ affects also the success of
the so--called ``geometric reheating'' of the universe after inflation
(Bassett and Liberati 1998), which is achieved via a nonminimal coupling
of the inflaton with the Ricci curvature, instead of the usual coupling
to a second scalar field.
The ``late time mild inflationary'' scenario predicts very
short periods of exponential expansion of the universe interrupting the
matter era (Fukuyama {\em et al.} 1997). The model is based on a massive
nonminimally coupled scalar field
acting as dark matter. The success of the scenario depends on the value of
$\xi$, and a negative sign of $\xi$ is necessary. However, the mechanism
proposed in (Fukuyama {\em et al.} 1997) to achieve late time mild
inflation
turns out to be physically pathological from the point of view of wave
propagation in curved spaces (Faraoni and Gunzig 1998{\em b}). At present,
it is unclear whether alternative mechanisms can successfully
implement the idea of late time mild inflation.
The case $\xi \neq 0$ for a scalar field in the Jordan frame of higher
dimensional models has been shown to have desirable properties in
shrinking the extra dimensions (Sunahara, Kasai and Futamase 1990;
Majumdar 1997), and has been used also for the Brans--Dicke field in
generalized theories of gravity (Linde 1994; Laycock and Liddle 1994;
Garcia--Bellido and Linde 1995). Exact solutions in cosmology have been
obtained by using the conformal transformation (\ref{CT}), (\ref{OM}),
(\ref{redefNMC}) and starting from known solutions in the Einstein frame,
in which the scalar field is minimally coupled (Bekenstein 1974; Froyland
1992; Accioly, Vaidya and Som 1983; Futamase and Maeda 1989; Abreu,
Crawford and Mimoso 1994). From what we have already said in the previous
sections, it is clear that, in general relativity with a nonminimally
coupled scalar field, the Einstein and the Jordan frames are physically
inequivalent but neither is physically preferred on the basis of energy
arguments.
Nonminimal couplings of the scalar field in cosmology have been explored
also in contexts different from inflation
(Dolgov 1983; Ford 1987; Suen and Will 1988;
Fujii and Nishioka 1990;
Morikawa 1990;
Hill, Steinhardt and Turner 1990;
Morikawa 1991;
Maeda 1992;
Sudarsky 1992;
Salgado, Sudarsky and Quevedo 1996, 1997;
Faraoni 1997{\em b}) during the
matter--dominated era (in the
radiation--dominated era of a Friedmann--Lemaitre--Robertson--Walker
solution, or in any spacetime with Ricci curvature $R=0$, the
explicit coupling of the
scalar field to the curvature becomes irrelevant). In particular, a
nonself--interacting, massless scalar field nonminimally coupled to the
curvature with negative $\xi $ has been considered as a mechanism to damp
the cosmological constant (Dolgov 1983; Ford 1987; Suen and Will 1988) and
solve the cosmological constant problem.
Another property of the nonminimally coupled scalar field is remarkable:
while a big--bang singularity is present in many inflationary scenarios
employing a minimally coupled scalar field, it appears that a
nonminimally coupled scalar is a form of matter that can circumvent
the null energy condition and avoid the initial singularity (Fakir 1998).
From the mathematical point of view, the action (\ref{nonmincoupl})
is the only action such that the nonminimal coupling of $\phi$ to $R$
involves
only the scalar field but not its derivatives, and the coupling is
characterized by a dimensionless constant (Birrell and Davies 1982). The
Klein--Gordon equation arising from the action (\ref{nonmincoupl})
is conformally invariant if $\xi=1/6$ and $V( \phi)=0$, or
$V( \phi)=\lambda \phi^4$. Many authors choose
to reason in terms of an effective renormalization of the
gravitational coupling constant
\begin{equation}
G_{eff}=\frac{G}{1-8\pi G \xi \phi^2} \; .
\end{equation}
If $\phi=\phi (t) $, as in spatially homogeneous cosmologies or in
homogeneous regions of spacetime, then the effective gravitational
coupling $G_{eff}=G_{eff}(t)$ varies on a cosmological time scale.
The possibility of a negative $G_{eff}$ at high energies, corresponding to
an antigravity regime in the early universe has also been considered
(Pollock 1982; Novello 1982), also at the semiclassical level
(Gunzig and Nardone 1984).
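The two regimes of $G_{eff}$ can be exhibited with a minimal numerical sketch (units $G=1$; the field values are purely illustrative): the coupling is enhanced for small $\xi \phi^2$, diverges at $8\pi G \xi \phi^2 =1$, and becomes negative beyond it, corresponding to the antigravity regime mentioned above.

```python
import math

# Effective gravitational coupling G_eff = G / (1 - 8*pi*G*xi*phi^2), G = 1.
def G_eff(phi, xi, G=1.0):
    return G / (1.0 - 8.0 * math.pi * G * xi * phi ** 2)

xi = 1.0 / 6.0                             # conformal coupling
phi_c = math.sqrt(3.0 / (4.0 * math.pi))   # 8*pi*xi*phi_c^2 = 1: G_eff diverges
g_weak = G_eff(0.1, xi)   # small xi*phi^2: G_eff slightly larger than G
g_anti = G_eff(1.0, xi)   # 8*pi*xi*phi^2 > 1: G_eff < 0, antigravity regime
```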
The solution of the $\xi$--problem is also relevant for the problem of
backscattering of waves of the scalar $\phi$ off the background curvature of
spacetime, and the creation of ``tails'' of radiation. If the
Klein--Gordon wave equation (\ref{KleinGordon}) is conformally invariant,
tails are absent in any conformally flat spacetime, including the
cosmologically relevant case of Friedmann--Lemaitre--Robertson--Walker
metrics (Sonego and Faraoni 1992; Noonan 1995).
Other areas of gravitational physics for which the solution of the
$\xi$--problem is relevant include the collapse of scalar fields
(Frolov 1998) and the theory of the structure and
stability of boson stars (Van der Bij and Gleiser 1987; Liddle and Madsen
1992; Jetzer 1992), which is linked to inflation by the hypothesis
that particles associated with the inflaton field may survive as dark
matter in the form of boson stars. The $\xi$--problem is also relevant for
the field of classical and quantum wormholes, in
which negative energy fluxes are eliminated by restricting the allowed
range of values of $\xi$ (Ford 1987; Hiscock 1990; Coule 1992; Bleyer,
Rainer and Zhuk 1994). Also the Ashtekar formulation of
general relativity has been studied in the presence of nonminimally coupled
scalar fields using a conformal transformation; the field
equations in these variables are nonpolynomial, in contrast to the
polynomial case of minimal coupling (Capovilla 1992).
\section{Conclusions}
\setcounter{equation}{0}
Conformal transformations are extensively used in classical theories of
gravity, higher--dimensional theories and cosmology. Sometimes, the
conformal transformation is a purely mathematical tool that allows one to
map
complicated equations of motion into simpler equations, and constitutes an
isomorphism between spaces of solutions of these equations. In this sense,
the
conformal transformation is a powerful solution--generating technique. More
often, the conformal transformation to the Einstein frame is a map from a
nonviable classical theory of gravity formulated in the Jordan frame to a
viable one which, however, is not as well motivated as the starting one
from the
physical perspective. A key role in establishing the viability of the
Einstein frame version of the theory is played by the positivity of the
energy and by the existence and stability of a ground state in the
Einstein frame. It is to be remarked that the energy argument of Sec.~3
selecting the Einstein frame as the physical one
is not applicable to quantum theories; in fact, the positivity of energy
and the energy conditions do not hold for quantum theories. The weak energy
condition is violated by quantum states (Ford and Roman 1992, 1993, 1995)
and a theory can
be unstable in the semiclassical regime (Witten 1982), or not have a
ground
state (e.g. Liouville's theory (D'Hoker and Jackiw 1982)).
Conformal transformations, nonminimal coupling, and the related aspects are
important also for quantum and string theories (e.g. Stahlofen and Schramm
1989~--~see Fulton, Rohrlich and Witten 1962 for an early review) and for
statistical mechanics (Dita and Georgescu 1989). For example, the
conformal degree of freedom of a conformally flat metric has been studied
in (Padmanabhan 1988) in order to get insight into the quantization of
gravity in the particularly simple case when the spacetime
metric is conformally flat:
$g_{\mu\nu}=\Omega^2 \eta_{\mu\nu}$. In the context of quantum gravity,
lower--dimensional theories of gravity have been under scrutiny for several
years: when the spacetime dimension is 2 or 3, the Weyl tensor vanishes
identically and the metric has only the conformal degree of freedom
(Brown, Henneaux and Teitelboim 1986). Every two--dimensional metric is
locally conformally flat, while in three dimensions conformal flatness
requires, in addition, the vanishing of the Cotton tensor (Wald 1984).
The properties of the quantum--corrected Vlasov equation under conformal
transformations have been studied in (Fonarev 1994).
A nonminimal coupling of a quantum scalar field in a curved space can
induce
spontaneous symmetry breaking without negative squared masses
(Madsen 1988; Moniz, Crawford and Barroso 1990; Grib and Poberii 1995).
However, all these topics are beyond the purpose of the present work,
which is limited to classical theories.
Many works that appeared and still appear in the literature are affected
by confusion about the conformal transformation technique and the issue
of which conformal frame is physical. Hopefully, these papers
will be reanalysed in the near future in the updated
perspective on the issue of conformal transformations summarized
in this article.
A change in the point of view is particularly urgent in the analysis of
experimental tests of gravitational theories: most of the current
literature refers to the Jordan frame formulation of Brans--Dicke and
scalar--tensor theories, but it is the Einstein frame which has been
established to be the physical one. A revision is also needed in the
applications of gravitational theories to inflation; the predicted
spectrum of density perturbations must be computed in the physical frame.
In fact, only in this case it is meaningful to compare the theoretical
predictions with the data from the high precision satellite experiments
which map the anisotropies in the cosmic microwave background -- those
already ongoing, like {\em COBE} (Smoot {\em et al.} 1992; Bennett {\em et
al.} 1996), and those planned for the early 2000s, NASA's {\em MAP} (MAP
1998) and ESA's {\em PLANCK} (PLANCK 1998) -- and from the observations of
large scale structures.
\section*{Acknowledgments}
We are grateful to M. Bruni for suggestions leading to improvements in the
manuscript. VF acknowledges also Y.M. Cho, S. Sonego, and the colleagues
at the Department of Physics and Astronomy, University of Victoria, for
helpful discussions. This work was partially supported by EEC grants
numbers PSS*~0992 and CT1*--CT94--0004 and by OLAM, Fondation pour la
Recherche Fondamentale, Brussels.
\clearpage
\section*{References}
\noindent Abbott, L.F. (1981), {\em Nucl. Phys. B} {\bf 185}, 233
\\Abbott, L.F. and Deser, S. (1982), {\em Nucl.
Phys. B} {\bf 195}, 76
\\Abramowicz, M.A., Carter, B. and
Lasota, J.P. (1988), {\em Gen. Rel. Grav.} {\bf 20}, 1173
\\Abramowicz, M.A., Lanza, A., Miller, J.C. and Sonego, S. (1997{\em a}),
{\em Gen. Rel. Grav.} {\bf 29}, 1585
\\Abramowicz, M.A., Andersson, N., Bruni, M., Ghosh, P. and Sonego, S.
(1997{\em b}), {\em Class. Quant. Grav.} {\bf 14}, L189
\\Abreu, J.P., Crawford, P. and Mimoso, J.P. (1994),
{\em Class. Quant. Grav.} {\bf 11}, 1919
\\Accetta, F.S. and Trester, J.S. (1989), {\em
Phys. Rev. D} {\bf 39}, 2854
\\Accetta, F.S., Zoller, D.J. and
Turner, M.S. (1985), {\em Phys. Rev. D} {\bf 31}, 3046
\\Accioly, A.J. and Pimentel, B.M. (1990), {\em Can.
J. Phys.} {\bf 68}, 1183
\\Accioly, A.J., Vaidya, A.N. and Som, M.M. (1983), {\em Phys.
Rev. D} {\bf 27}, 2282
\\Accioly, A.J., Wichowski, U.F., Kwok, S.F. and
Pereira da Silva, N.L. (1993), {\em Class. Quant. Grav.} {\bf 10},
L215
\\Allen, B. (1983), {\em Nucl. Phys. B} {\bf 226}, 282
\\Alonso, J.S., Barbero, F., Julve, J. and
Tiemblo, A. (1994), {\em Class. Quant. Grav.} {\bf 11}, 865
\\Alvarez, E. and Bel\'{e}n Gavela, M. (1983), {\em Phys.
Rev. Lett.} {\bf 51}, 931
\\Amendola, L., Bellisai, D. and
Occhionero, F. (1993), {\em Phys. Rev. D} {\bf 47}, 4267
\\Amendola, L., Capozziello, S., Occhionero, F. and Litterio, M.
(1992), {\em Phys. Rev. D} {\bf 45}, 417
\\Amendola, L., Litterio, M. and Occhionero, F. (1990), {\em Int.
J. Mod. Phys. A} {\bf 5}, 3861
\\Appelquist, T. and Chodos, A. (1983), {\em
Phys. Rev. Lett.} {\bf 50}, 141
\\Appelquist, T., Chodos, A. and
Freund, P.G.O. (Eds.) (1987), {\em Modern Kaluza--Klein
Theories}, Addison--Wesley, Menlo Park.
\\Bailin, D. and Love, A. (1987), {\em Rep. Prog. Phys.}
{\bf 50}, 1087
\\Barros, A. and Romero, C. (1998), {\em Phys. Lett. A} {\bf 245}, 31.
\\Barroso, A., Casayas, J., Crawford, P., Moniz, P. and
Nunes, A. (1992), {\em Phys. Lett. B} {\bf 275}, 264
\\Barrow, J.D. (1993), {\em Phys. Rev. D}
{\bf 47}, 5329
\\Barrow, J.D. and Cotsakis, S. (1988), {\em Phys. Lett. B} {\bf 214},
515
\\Barrow, J.D. and Kunze, K.E. (1997), preprint hep-th/9710018.
\\Barrow, J.D. and Maeda, K. (1990), {\em Nucl.
Phys. B} {\bf 341}, 294
\\Barrow, J.D., Mimoso, J.P. and de
Garcia Maia, M.R. (1993), {\em Phys. Rev. D} {\bf 48}, 3630
\\Bassett, A.B. and Liberati, S. (1998), {\em Phys. Rev. D} {\bf 58},
021302.
\\Batakis, N.A. (1995), {\em Phys. Lett. B} {\bf 353}, 450
\\Batakis, N.A. and Kehagias, A.A. (1995), {\em Nucl. Phys. B} {\bf 449},
248--264
\\Bateman, M. (1910), {\em Proc. Lon. Math. Soc.}
{\bf 8}, 223.
\\Bekenstein, J.D. (1974), {\em Ann. Phys. (NY)}
{\bf 82}, 535
\\Bekenstein, J.D. and Meisels, A. (1978),
{\em Phys. Rev. D} {\bf 18}, 4378
\\Belinskii, V.A. and Khalatnikov,
I.M. (1973), {\em Sov. Phys. JETP} {\bf 36}, 591
\\Bennett, C.L. {\em et al.} (1996), {\em Astrophys. J. (Lett.)} {\bf 464}, L1.
\\Berezin, V.A., Domenech, G., Levinas, M.L., Lousto, C.O.
and Um\'{e}rez, N.D. (1989), {\em Gen. Rel. Grav.} {\bf 21}, 1177
\\Bergmann, P.G. (1968), {\em Int. J. Theor. Phys.}
{\bf 1}, 25
\\Berkin, A.L. and Maeda, K. (1991), {\em
Phys. Rev. D} {\bf 44}, 1691
\\Berkin, A.L., Maeda, K. and
Yokoyama, J. (1990), {\em Phys. Rev. Lett.} {\bf 65}, 141
\\Bertolami, O. (1987), {\em Phys. Lett. B} {\bf 186}, 161
\\Bicknell, G. (1974), {\em J. Phys. A} {\bf 7}, 1061
\\Biesiada, M. (1994), {\em Class. Quant. Grav.} {\bf 11}, 2589
\\Billyard, A. and Coley, A. (1997), {\em Mod.
Phys. Lett. A} {\bf 12}, 2121
\\Birrell, N.D. and Davies, P.C. (1980), {\em Phys. Rev. D} {\bf 22},
322
\\Birrell, N.D. and Davies, P.C. (1982), {\em Quantum
Fields in Curved Space}, Cambridge University Press, Cambridge.
\\Bleyer, U., Rainer, M. and Zhuk, A.
(1994), preprint gr-qc/9405011.
\\Bombelli, L., Koul, R.K., Kunstatter, G.,
Lee, J. and Sorkin, R.D. (1987), {\em Nucl. Phys. B} {\bf 289},
735
\\Bonanno A. (1995), {\em Phys. Rev. D} {\bf 52}, 969.
\\Bonanno, A. and Zappal\`a, D. (1997), {\em Phys. Rev. D} {\bf 55},
6135
\\Bose, S. and Lohiya, D. (1997), preprint IUCAA 44/97.
\\Brans, C.H. (1988), {\em Class. Quant. Grav.} {\bf 5},
L197
\\Brans, C.H. (1997), preprint gr-qc/9705069.
\\Brans, C.H. and Dicke, R.H. (1961), {\em Phys. Rev.}
{\bf 124}, 925
\\Brown, J.D., Henneaux, M. and
Teitelboim, C. (1986), {\em Phys. Rev. D} {\bf 33}, 319
\\Bruckman, W.F. and Velazquez, E.S.
(1993), {\em Gen. Rel. Grav.} {\bf 25}, 901
\\Buchbinder, I.L. (1986), {\em Fortschr. Phys.} {\bf 34}, 605.
\\Buchbinder, I.L. and Odintsov, S.D. (1983), {\em Sov. J. Nucl. Phys.}
{\bf 40}, 848
\\Buchbinder, I.L. and Odintsov, S.D. (1985),
{\em Lett. Nuovo Cimento} {\bf 42}, 379
\\Buchbinder, I.L., Odintsov, S.D. and
Lichzier, I.M. (1989), {\em Class. Quant. Grav.}, {\bf 6}, 605
\\Buchbinder, I.L., Odintsov, S.D. and
Shapiro, I.L. (1986), in {\em Group--Theoretical Methods in Physics},
Markov, M. (Ed.), Moscow, p.~115
\\Buchbinder, I.L., Odintsov, S.D. and Shapiro, I.L. (1992), {\em Effective
Action in Quantum Gravity}, IOP, Bristol.
\\Buchm\"{u}ller, W. and Dragon, N. (1989),
{\em Nucl. Phys. B} {\bf 321}, 207
\\Callan, C.G. Jr., Coleman, S. and
Jackiw, R. (1970), {\em Ann. Phys. (NY)} {\bf 59}, 42
\\Callan, C.G., Friedan, D., Martinec, E.J. and
Perry, M.J. (1985), {\em Nucl. Phys. B} {\bf 262}, 593
\\Campbell, B.A., Linde, A.D. and Olive, K. (1991), {\em Nucl. Phys. B}
{\bf 355}, 146
\\Canuto, V., Adams, P.J., Hsieh, S.--H. and Tsiang, E. (1977), {\em Phys.
Rev. D} {\bf 16}, 1643
\\Capovilla, R. (1992), {\em Phys. Rev. D} {\bf 46}, 1450
\\Capozziello, S. and de Ritis, R. (1993),
{\em Phys. Lett. A} {\bf 177}, 1
\\Capozziello, S. and de Ritis, R. (1996), preprint
astro-ph/9605070.
\\Capozziello, S. and de Ritis, R. (1997{\em a}),
{\em Int. J. Mod. Phys. D} {\bf 6}, 491
\\Capozziello, S. and de Ritis, R. (1997{\em b}), {\em Gen. Rel. Grav.}
{\bf 29}, 1425
\\Capozziello, S., de Ritis, R. and Marino, A.A. (1997),
{\em Class. Quant. Grav.} {\bf 14}, 3243
\\Capozziello, S., de Ritis, R. and Rubano, C.
(1993), {\em Phys. Lett. A} {\bf 177}, 8
\\Capozziello, S., Occhionero, F. and Amendola, L.
(1993), {\em Int. J. Mod. Phys. D} {\bf 1}, 615
\\Casas, J.A., Garcia--Bellido, J. and Quir\'{o}s, M.
(1991), {\em Nucl. Phys. B} {\bf 361}, 713
\\Cecotti, S. (1987), {\em Phys. Lett. B} {\bf 190}, 86
\\Chamseddine, A.H. (1981), {\em Nucl. Phys. B} {\bf 185}, 403
\\Chan, K.C.K., Creighton, J.D.E. and Mann, R.B. (1996), {\em Phys. Rev.
D} {\bf 54}, 3892.
\\Chatterjee, S. and Banerjee, A. (1993), {\em Class. Quant. Grav.} {\bf
10}, L1
\\Chernikov, N.A. and Tagirov, E.A. (1968),
{\em Ann. Inst. H. Poincar\'{e} A} {\bf 9}, 109.
\\Cho, Y.M. (1987), {\em Phys. Lett. B} {\bf 199}, 358
\\Cho, Y.M. (1990), {\em Phys. Rev. D} {\bf 41}, 2462
\\Cho, Y.M. (1992), {\em Phys. Rev. Lett.} {\bf 68},
3133
\\Cho, Y.M. (1994), in {\em Evolution of the
Universe and its Observational Quest}, Yamada, Japan 1993,
Sato, H. (Ed.), Universal Academy Press, Tokyo, p.~99
\\Cho, Y.M. (1997), {\em Class. Quant. Grav.} {\bf 14}, 2963
\\Cho, Y.M. and Keum, Y.Y. (1998), {\em Mod. Phys. Lett. A} {\bf 13}, 109.
\\Cho, Y.M. and Park, D.H. (1991), {\em Gen. Rel. Grav.} {\bf
23}, 741
\\Cho, Y.M. and Yoon, J.H. (1993), {\em Phys.
Rev. D} {\bf 47}, 3465
\\Collins, P.D.B., Martin, A.D. and Squires, E.J. (1989), {\em Particle
Physics and Cosmology}, J. Wiley, New York, p.~293.
\\Copeland, E.J., Lahiri, A. and Wands, D. (1994), {\em Phys. Rev. D}
{\bf 50}, 4868
\\Copeland, E.J., Lahiri, A. and Wands, D. (1995), {\em Phys. Rev. D}
{\bf 51}, 1569
\\Cotsakis, S. (1993), {\em Phys. Rev. D} {\bf 47},
1437;
{\em errata} (1994), {\em Phys. Rev. D} {\bf 49}, 1145.
\\Cotsakis, S. (1995), {\em Phys. Rev. D} {\bf 52}, 6199
\\Cotsakis, S. and Flessas, G. (1993), {\em
Phys. Rev. D} {\bf 48}, 3577
\\Cotsakis, S. and Saich, P.J. (1994), {\em
Class. Quant. Grav.} {\bf 11}, 383
\\Coule, D.H. (1992), {\em Class. Quant. Grav.} {\bf 9}, 2352
\\Crittenden, R. and Steinhardt, P.J.
(1992), {\em Phys. Lett. B} {\bf 293}, 32
\\Cunningham, E. (1909), {\em Proc. Lon. Math. Soc.}
{\bf 8}, 77.
\\Cvetic, M. (1989), {\em Phys. Lett. B} {\bf 229}, 41
\\D'Hoker, E. and Jackiw, R. (1982), {\em Phys.
Rev. D} {\bf 26}, 3517
\\Damour, T. and Esposito--Far\`{e}se, G. (1992), {\em
Class. Quant. Grav.} {\bf 9}, 2093
\\Damour, T., Gibbons, G. and
Gundlach, C. (1990), {\em Phys. Rev. Lett.} {\bf 64}, 123
\\Damour, T. and Gundlach, C. (1991), {\em Phys. Rev.
D} {\bf 43}, 3873
\\Damour, T. and Nordtvedt, K. (1993{\em a}), {\em Phys. Rev.
Lett.} {\bf 70}, 2217
\\Damour, T. and Nordtvedt, K. (1993{\em b}), {\em Phys. Rev. D} {\bf 48},
3436
\\Damour, T. and Polyakov, A.M. (1994{\em a}), {\em Nucl. Phys.
B} {\bf 423}, 532
\\Damour, T. and Polyakov, A.M. (1994{\em b}), {\em Gen. Rel. Grav.} {\bf
26}, 1171
\\Damour, T. and Vilenkin, A. (1996), {\em Phys.
Rev. D} {\bf 53}, 2981
\\del Campo, S. (1992), {\em Phys. Rev. D}
{\bf 45}, 3386
\\Delgado, V. (1994), preprint ULLFT--1/94, hep--ph/9403247.
\\Demianski, M., de Ritis, R., Marmo, G. Platania, G.,
Rubano, C., Scudellaro, P. and Stornaiolo, P. (1991), {\em Phys. Rev. D}
{\bf 44}, 3136
\\de Ritis, R., Marmo, G., Platania, G., Rubano, C.,
Scudellaro, P. and Stornaiolo, C. (1990), {\em Phys. Rev. D}
{\bf 42}, 1091
\\Deruelle, N., Garriga, J. and Verdaguer, E.
(1991), {\em Phys. Rev. D} {\bf 43}, 1032
\\Deruelle, N. and Madore, J. (1987), {\em Phys. Lett. B} {\bf 186},
25
\\Deruelle, N. and Spindel, P. (1990), {\em
Class. Quant. Grav.} {\bf 7}, 1599
\\Deser, S. (1984), {\em Phys. Lett. B} {\bf 134},
419
\\de Witt, B. and Nicolai, H. (1986), {\em Nucl.
Phys. B} {\bf 274}, 363
\\Dick, R. (1988), {\em Gen. Rel. Grav.} {\bf 30}, 435
\\Dicke, R.H. (1962), {\em Phys. Rev.} {\bf 125},
2163
\\Dine, M., Rohm, R., Seiberg, N. and Witten, E.
(1985), {\em Phys. Lett. B} {\bf 156}, 55
\\Dirac, P.A.M. (1973), {\em Proc. R. Soc. Lond. A}
{\bf 333}, 403
\\Dita, P. and Georgescu, V. (Eds.) (1989), {\em Conformal Invariance and
String Theory}, Proceedings, Poiana Brasov, Romania 1987, Academic Press,
Boston.
\\Dolgov, A.D. (1983), in {\em The Very Early Universe}, Gibbons, G.W.,
Hawking, S.W. and Siklos, S.T.C. (Eds.), Cambridge University Press,
Cambridge.
\\Duff, M.J. (1981), in {\em Quantum Gravity 2: A Second Oxford Symposium},
Isham, C.J., Penrose, R. and Sciama, D.W. (Eds.), Oxford University
Press, Oxford.
\\Eardley, D.M. (1975), {\em Astrophys. J. (Lett.)} {\bf 196}, L59.
\\Easther, R. (1994), preprint NZ--CAN--RE--94/1,
astro-ph/9405034.
\\Elizalde, E. and Odintsov, S.D. (1994), {\em
Phys. Lett. B} {\bf 333}, 331
\\Ellis, J. {\em et al.} (1989), {\em Phys. Lett. B} {\bf
228}, 264
\\Epstein, H., Glaser, V. and Jaffe, A.
(1965), {\em Nuovo Cimento} {\bf 36}, 1016
\\Evans, M. and McCarthy, J.G. (1985), {\em Phys. Rev. D} {\bf 31},
1799
\\Fabris, J.C. and Martin, J. (1993), {\em Phys. Lett. B} {\bf 316},
476
\\Fabris, J.C. and Sakellariadou, M. (1997), {\em Class. Quant. Grav.}
{\bf 14}, 725
\\Fabris, J.C. and Tossa, J. (1997), {\em Gravit. Cosmol.}
{\bf 3}, 165
\\ Fakir, R. 1998, preprint gr--qc/9810054.
\\Fakir, R. and Habib, S. (1993), {\em Mod. Phys. Lett.
A} {\bf 8}, 2827
\\Fakir, R., Habib, S. and Unruh, W.G. (1992), {\em
Astrophys. J.} {\bf 394}, 396
\\Fakir, R. and Unruh, W.G. (1990{\em a}), {\em Phys.
Rev. D} {\bf 41}, 1783
\\Fakir, R. and Unruh, W.G. (1990{\em b}), {\em Phys.
Rev. D} {\bf 41}, 1792
\\Faraoni, V. (1996{\em a}), {\em Astrophys. Lett.
Comm.} {\bf 35}, 305
\\Faraoni, V. (1996{\em b}), {\em Phys. Rev. D} {\bf
53}, 6813
\\Faraoni, V. (1997{\em a}), in {\em Proceedings of the 7th
Canadian Conference on General Relativity and Relativistic Astrophysics},
Calgary, Canada 1997, Hobill, D. (Ed.), in press.
\\Faraoni, V. (1997{\em b}), {\em Gen. Rel. Grav.} {\bf 29}, 251
\\Faraoni, V. (1998), preprint IUCAA 22/98, gr--qc/9805057, to appear in {\em
Phys. Lett. A}.
\\Faraoni, V., Cooperstock, F.I. and Overduin, J.M. (1995), {\em Int. J.
Mod. Phys. A} {\bf 4}, 387
\\Faraoni, V. and Gunzig, E. (1998{\em a}), {\em Astron. Astrophys.} {\bf
332}, 1154.
\\Faraoni, V. and Gunzig, E. (1998{\em b}), preprint IUCAA 23/98.
\\Ferraris, M. (1986), in {\em Atti del 6$^o$ Convegno
Nazionale di Relativit\`a Generale e Fisica della Gravitazione}, Firenze
1984, Modugno, M. (Ed.), Tecnoprint, Bologna, p.~127.
\\Ferraris, M., Francaviglia, M. and Magnano, G.
(1988), {\em Class. Quant. Grav.} {\bf 5}, L95
\\Ferraris, M., Francaviglia, M. and Magnano, G.
(1990), {\em Class. Quant. Grav.} {\bf 7}, 261
\\Fierz, M. (1956), {\em Helv. Phys. Acta} {\bf 29}, 128
\\Fonarev, O.A. (1994), {\em Class. Quant. Grav.}
{\bf 11}. 2597
\\Ford, L.H. (1987), {\em Phys. Rev. D} {\bf 35}, 2339
\\Ford, L.H. and Roman, T.A. (1992), {\em Phys.
Rev. D} {\bf 46}, 1328
\\Ford, L.H. and Roman, T.A. (1993), {\em Phys. Rev. D}
{\bf 48}, 776
\\Ford, L.H. and Roman, T.A. (1995), {\em Phys. Rev. D}
{\bf 51}, 4277
\\Freedman, D.Z., Muzinich, I.J. and Weinberg, E.J.
(1974), {\em Ann. Phys. (NY)} {\bf 87}, 95
\\Freedman, D.Z. and Weinberg, E.J. (1974), {\em Ann. Phys. (NY)}
{\bf 87}, 354
\\Freund, P.G.O. (1982), {\em Nucl. Phys. B} {\bf
209}, 146
\\Frolov, A.V. (1998), preprint gr--qc/9806112.
\\Froyland, J. (1982), {\em Phys. Rev. D} {\bf 25}, 1470
\\Fujii, Y. (1998), {\em Progr. Theor. Phys.} {\bf 99}, 599.
\\Fujii, Y. and Nishioka, T. (1990), {\em Phys. Rev. D}
{\bf 42}, 361
\\Fukuyama, T., Hatakeyama, M., Miyoshi, M.,
Morikawa, M. and Nakamichi, A. (1997), {\em Int. J. Mod. Phys. D} {\bf
6}, 69
\\Fulton, T., Rohrlich, F. and
Witten, L. (1962{\em a}), {\em Rev. Mod. Phys.} {\bf 34}, 442
\\Fulton, T., Rohrlich, F.
and Witten, L. (1962{\em b}), {\em Nuovo Cimento} {\bf 26}, 652
\\Futamase, T. and Maeda, K. (1989), {\em
Phys. Rev. D} {\bf 39}, 399
\\Futamase, T., Rothman, T. and Matzner, R. (1989), {\em Phys. Rev. D} {\bf
39}, 405
\\Futamase, T. and Tanaka, M. (1997), preprint
OCHA--PP--95, hep--ph/9704303.
\\Garay, L. and Garcia--Bellido, J. (1993),
{\em Nucl. Phys. B} {\bf 400}, 416
\\Garcia--Bellido, J. and Linde, A.D. (1995), {\em Phys.
Rev. D} {\bf 51}, 429
\\Garcia--Bellido, J. and Linde, A.D. (1995),
{\em Phys. Rev. D} {\bf 52}, 6730
\\Garcia--Bellido, J. and Quir\'{o}s, M.
(1990), {\em Phys. Lett. B} {\bf 243}, 45
\\Garcia--Bellido, J. and Quir\`{o}s, M.
(1992), {\em Nucl. Phys. B} {\bf 368}, 463
\\Garcia--Bellido, J. and Wands, D. (1995), {\em
Phys. Rev. D} {\bf 52}, 5636
\\Garfinkle, D., Horowitz, G. and Strominger, A. (1991), {\em Phys. Rev.
D} {\bf 43}, 3140; {\em erratum} (1992), {\em Phys. Rev. D} {\bf 45},
3888
\\Gasperini, M. (1994), {\em Phys. Lett. B} {\bf 327},
214
\\Gasperini, M., Maharana, J. and Veneziano, G. (1991),
{\em Phys. Lett. B} {\bf 272}, 277
\\Gasperini, M. and Ricci, R. (1993), {\em Class. Quant. Grav.} {\bf 12},
677
\\Gasperini, M., Ricci, R. and Veneziano, G. (1993), {\em Phys. Lett. B} {\bf
319}, 438
\\Gasperini, M. and Veneziano, G. (1992), {\em Phys. Lett. B} {\bf 277},
256
\\Gasperini, M. and Veneziano, G. (1994),
{\em Phys. Rev. D} {\bf 50}, 2519
\\Geyer, B. and Odintsov, S.D. (1996), {\em Phys. Rev. D} {\bf 53},
7321
\\Gibbons, G.W. and Maeda, K. (1988), {\em Nucl.
Phys. B} {\bf 298}, 741
\\Gott, S., Schmidt, H.--J. and
Starobinsky, A.A. (1990), {\em Class. Quant. Grav.} {\bf 7}, 893
\\Gottl\"{o}ber, S., M\"{u}ller, V. and A.A.
Starobinsky, A.A. (1991), {\em Phys. Rev. D} {\bf 43}, 2510
\\Green, A.M. and Liddle, A.R. (1996), {\em
Phys. Rev. D} {\bf 54}, 2557
\\Green, B., Schwarz, J.M. and Witten, E.
(1987), {\em Superstring Theory}, Cambridge University Press, Cambridge.
\\Grib, A.A. and Poberii, E.A. (1995), {\em Helv. Phys.
Acta} {\bf 68}, 380
\\Grib, A.A. and Rodrigues, W.A. (1995), {\em Gravit.
Cosmol.} {\bf 1}, 273
\\Gross, D.J. and Perry, M.J. (1983), {\em
Nucl. Phys. B} {\bf 226}, 29
\\Guendelman, E.I. (1992),
{\em Phys. Lett. B} {\bf 279}, 254
\\Gunzig, E. and Nardone, P. (1984), {\em Phys. Lett. B}
{\bf 134}, 412
\\Guth, A.H. and Jain, B. (1992), {\em Phys. Rev. D} {\bf 45}, 426
\\Guth, A.H. and Pi, S.--Y. (1985), {\em Phys. Rev. D} {\bf 32},
1899
\\Hammond, R.T. (1990), {\em Gen. Rel. Grav.} {\bf 7}, 2107
\\Hammond, R.T. (1996), {\em Class. Quant. Grav.} {\bf 13}, L73
\\Harrison, E.R. (1972), {\em Phys. Rev. D} {\bf 6}, 2077
\\Hawking, S.W. and Horowitz, G.T. (1996), {\em Class. Quant. Grav.} {\bf
13}, 1487.
\\Hehl, E.W., von der Heyde, P., Kerlick, G.D. and
Nester, J.M. (1976), {\em Rev. Mod. Phys.} {\bf 48}, 393
\\Higgs, P.W. (1959), {\em Nuovo Cimento} {\bf 11}, 816.
\\Hill, C.T. and Salopek, D.S. (1992), {\em Ann.
Phys. (NY)} {\bf 213}, 21
\\Hill, C.T., Steinhardt, P.J. and Turner, M.S. (1990), {\em Phys. Lett.
B} {\bf 252}, 343.
\\Hirai, T. and Maeda, K. (1993), preprint WU-AP/32/93.
\\Hirai, T. and Maeda, K. (1994), {\em Astrophys. J.} {\bf 431}, 6
\\Hirai, T. and Maeda, K. (1997), in {\em Proceedings of the 7th Marcel
Grossman
Meeting}, Stanford, USA 1994, World Scientific, Singapore, p.~477
\\Hiscock, W.A. (1990), {\em Class. Quant. Grav.} {\bf 7},
L35
\\Hochberg, D. and Kephart, T.W. (1991), {\em Phys.
Rev. Lett.} {\bf 66}, 2553
\\Hochberg, D. and Kephart, T.W. (1995),
{\em Phys. Rev. D} {\bf 51}, 2687
\\Holman, R., Kolb, E.W., Vadas, S. and
Wang, Y. (1991), {\em Phys. Rev. D} {\bf 43}, 995
\\Holman, R., Kolb, E.W. and Wang, Y.
(1990), {\em Phys. Rev. Lett.} {\bf 65}, 17
\\Horowitz, G. (1990), in {\em Proceedings of the 12th International
Conference on General Relativity and Gravitation}, Boulder, USA 1989, N.
Ashby, D. Bartlett and W. Wyss eds. (Cambridge University Press,
Cambridge).
\\Hosotani, Y. (1985), {\em Phys. Rev. D} {\bf 32}, 1949
\\Hu, Y., Turner, M.S. and
Weinberg, E.J. (1994), {\em Phys. Rev. D} {\bf 49}, 3830
\\Hwang, J. (1990), {\em Class. Quant. Grav.} {\bf 7}, 1613
\\Hwang, J. (1996), {\em Phys. Rev. D} {\bf 53}, 762
\\Hwang, J. (1997{\em a}), {\em Class. Quant. Grav.} {\bf 14},
1981
\\Hwang, J. (1997{\em b}), {\em Class. Quant. Grav.} {\bf 14},
3327
\\Iorio, A., O'Raifeartaigh, L., Sachs, I.
and Wiesendanger, C. (1997), {\em Nucl. Phys. B} {\bf 495}, 433
\\Ishikawa, J. (1983), {\em Phys. Rev. D} {\bf 28},
2445
\\Jakubiec, A. and Kijowski, J. (1988),
{\em Phys. Rev. D} {\bf 37}, 1406
\\Jetzer, P. (1992), {\em Phys. Rep.} {\bf 220}, 163
\\Jordan, P. (1949), {\em Nature} {\bf 164},
637
\\Jordan, P. (1955), {\em Schwerkraft und Weltall}, F. Vieweg und Sohn,
Braunschweig.
\\Jordan, P. (1959), {\em Z. Phys.} {\bf 157}, 112.
\\Kaiser, D.I. (1995{\em a}), preprint astro--ph/9507048.
\\Kaiser, D.I. (1995{\em b}), {\em Phys. Rev. D} {\bf 52},
4295
\\Kalara, S., Kaloper, N. and Olive,
K.A. (1990), {\em Nucl. Phys. B} {\bf 341}, 252
\\Kaloper, N. and K.A. Olive, K.A. (1998), {\em Phys. Rev. D} {\bf 57},
811
\\Kasper, U. and Schmidt, H.--J. (1989), {\em
Nuovo Cimento B} {\bf 104}, 563
\\Klimcik, C. (1993), {\em J. Math. Phys.} {\bf 34}, 1914
\\Klimcik, C.K. and Kolnik, P. (1993), {\em
Phys. Rev. D} {\bf 48}, 616
\\Kolb, E.W., Salopek, D. and
Turner, M.S. (1990), {\em Phys. Rev. D} {\bf 42}, 3925
\\Kolb, E.W. and Turner, M.S. (1990), {\em The
Early Universe}, Addison--Wesley, Reading, Mass.
\\Kolitch, S.J. and Eardley, D.M. (1995), {\em Ann.
Phys. (NY)} {\bf 241}, 128
\\Kubyshin, Yu. and Martin, J. (1995), preprint UB--ECM--PF 95/13, LGCR
95/06/05, DAMPT R95, gr--qc/9507010.
\\Kubyshin, Y., Rubakov, V. and Tkachev, I.
(1989), {\em Int. J. Mod. Phys. A} {\bf 4}, 1409
\\Kunstatter, G., Lee. H.C. and
Leivo, H.P. (1986), {\em Phys. Rev. D} {\bf 33}, 1018
\\La, D. and Steinhardt, P.J. (1989), {\em Phys.
Rev. Lett.} {\bf 62}, 376
\\Lafrance, R. and Myers, R.C. (1995), {\em Phys. Rev. D} {\bf 51}, 2584.
\\Laycock, A.M. and Liddle, A.R. (1994),
{\em Phys. Rev. D}, {\bf 49}, 1827
\\Levin, J.J. (1995{\em a}), {\em Phys. Rev. D} {\bf 51},
462
\\Levin, J.J. (1995{\em b}), {\em Phys. Rev. D} {\bf 51}, 1536
\\Liddle, A.R. (1996), preprint SUSSEX--AST~96/12--1, astro--ph/9612093,
to appear in Proceedings, {\em From Quantum Fluctuations to Cosmological
Structures}, Casablanca, Morocco 1996.
\\Liddle, A.R. and Lyth, D.H. (1993), {\em
Phys. Rep.} {\bf 231}, 1
\\Liddle, A.R. and Madsen, M.S. (1992), {\em Int. J. Mod. Phys.} {\bf 1},
101
\\Liddle, A.R. and Wands, D. (1992), {\em
Phys. Rev. D} {\bf 45}, 2665
\\Lidsey, E.J. (1992), {\em Class. Quant. Grav.}
{\bf 9}, 149
\\Lightman, A.P. Press, W.H., Price, R.H. and
Teukolsky, S.A. (1975), {\em Problem Book in Relativity and
Gravitation}, Princeton University Press, Princeton NJ, p.~85.
\\Linde, A.D. (1990), {\em Particle Physics and Inflationary Cosmology},
Hardwood, Chur, Switzerland.
\\Linde, A.D. (1994), {\em Phys. Rev. D} {\bf 49}, 748
\\Lorentz, H.A. (1937), {\em Collected
Papers}, Nijhoff, The Hague, vol. 5, p.~363.
\\Lorentz--Petzold, D. (1984), in {\em Lecture Notes in
Physics}, Vol.~105, C. Hoenselaers, C. and W. Dietz, W. (Eds.), Springer,
Berlin.
\\Lu, H.Q. and Cheng, K.S. (1996), {\em
Astrophys. Sp. Sci} {\bf 235}, 207
\\Madsen, M.S. (1988), {\em Class. Quant. Grav.} {\bf 5}, 627
\\Madsen, M.S. (1993), {\em Gen. Rel. Grav.} {\bf 25}, 855.
\\Maeda, K. (1986{\em a}), {\em Class. Quant. Grav.} {\bf 3}, 651
\\Maeda, K. (1986{\em b}), {\em Phys. Lett. B} {\bf 166}, 59
\\Maeda, K. (1987), {\em Phys. Lett. B} {\bf 186}, 33
\\Maeda, K. (1989), {\em Phys. Rev. D} {\bf 39},
3159
\\Maeda, K. (1992), in {\em Relativistic Astrophysics and
Cosmology}, Proceedings, Potsdam 1991, Gottl\"{o}ber, S., M\"{u}cket,
J.P. and M\"{u}ller, V. (Eds.), World Scientific, Singapore,
p.~157
\\Maeda, K. and Pang, P.Y.T. (1986), {\em Phys. Lett. B}
{\bf 180}, 29
\\Maeda,K., Stein--Schabes, J.A. and Futamase, T. (1989), {\em Phys.
Rev. D} {\bf 39}, 2848
\\Magnano, G. 1995, in {\em Proceedings of the XI Italian
Conference
on General Relativity and Gravitation}, Trieste, Italy 1994, in press
(preprint gr--qc/9511027).
\\Magnano, G., Ferraris, M. and
Francaviglia, M. (1990), {\em J. Math. Phys.} {\bf 31}, 378
\\Magnano, G. and Sokolowski, L.M. (1994),
{\em Phys. Rev. D} {\bf 50}, 5039
\\Majumdar, A.S. (1997), {\em Phys. Rev. D} {\bf 55}, 6092.
\\Makino, N. and Sasaki, M. (1991), {\em Progr. Theor.
Phys.} {\bf 86}, 103
\\MAP homepage (1998) http://map.gsfc.nasa.gov/
\\Mashoon, B. (1993), in {\em Quantum Gravity and Beyond,
Essays in Honour of Louis Witten on His Retirement}, Mansouri, F. and
Scanio, J. (Eds.), World Scientific, Singapore.
\\Mazenko, G.F. (1985{\em a}), {\em Phys. Rev. Lett.} {\bf 54},
2163
\\Mazenko, G.F. (1985{\em b}), {\em Phys. Rev. D} {\bf 34}, 2223.
\\Mazenko, G.F., Unruh, W.G. and Wald, R.M. (1985), {\em Phys.
Rev. D} {\bf 31}, 273
\\McDonald, J. (1993{\em a}), {\em Phys. Rev. D} {\bf 48},
2462
\\McDonald, J. (1993{\em b}), {\em Phys. Rev. D} {\bf 48},
2573
\\Mignemi, S. and Schmidt, H.--J. (1995), {\em
Class. Quant. Grav.} {\bf 12}, 849
\\Mignemi, S. and Whiltshire, D. (1992),
{\em Phys. Rev. D} {\bf 46}, 1475
\\Mimoso, J.P. and Wands, D. (1995{\em a}), {\em
Phys. Rev. D} {\bf 51}, 477
\\Mimoso, J.P. and Wands, D (1995{\em b}), {\em Phys.
Rev. D} {\bf 52}, 5612
\\Miritzis, J.M. and Cotsakis, S. (1996),
{\em Phys. Lett. B} {\bf 383}, 377
\\Mollerach, S. and Matarrese, S. (1992),
{\em Phys. Rev. D} {\bf 45}, 1961
\\Moniz, P., Crawford, P. and Barroso, A. (1990),
{\em Class. Quant. Grav.} {\bf 7}, L143
\\Morikawa, M. (1990), {\em Astrophys. J. (Lett.)} {\bf
362}, L37
\\Morikawa, M. (1991), {\em Astrophys. J.} {\bf 369}, 20
\\Mukhanov, V.F., Feldman, H.A. and
Brandenberger, R.H. (1992), {\em Phys. Rep.} {\bf 215}, 203
\\Muta, T. and Odintsov, S.D. (1991), {\em
Mod. Phys. Lett. A} {\bf 6}, 3641
\\Mychelkin, E.G. (1991), {\em Astrophys. Sp. Sci.}
{\bf 184}, 235
\\Nambu, Y. and Sasaki, M. (1990), {\em Progr. Theor. Phys.} {\bf 83},
37
\\Nelson, B. and Panangaden, P. (1982), {\em
Phys. Rev. D} {\bf 25}, 1019
\\Nishioka, T. and Fujii, Y. (1992), {\em Phys.
Rev. D} {\bf 45}, 2140
\\Noonan, T.W. (1995), {\em Class. Quant. Grav.} {\bf
12}, 1087
\\Nordvedt, K. (1970), {\em Astrophys. J.} {\bf 161}, 1059
\\Novello, M. (1982), {\em Phys. Lett. A} {\bf 90}, 347
\\Novello, M. and Elbaz, E. (1994), {\em Nuovo
Cimento} {\bf 109}, 741
\\Novello, M. and Heintzmann, H. (1984),
{\em Gen. Rel. Grav.} {\bf 16}, 535
\\Novello, M., Pereira, V.M.C.
and Pinto--Neto, N. (1995), {\em Int. J. Mod. Phys. D} {\bf 4},
673
\\Novello, M. and J.M. Salim, J.M. (1979), {\em
Phys. Rev. D} {\bf 20}, 377
\\Occhionero, F. and Amendola, L. (1994),
{\em Phys. Rev. D} {\bf 50}, 4846
\\Odintsov, S.D. (1991), {\em Fortschr. Phys.} {\bf 39},
621
\\Oukuiss, A. (1997), {\em Nucl. Phys. B} {\bf 486}, 413.
\\Overduin, J.M. and Wesson, P.S. (1997), {\em
Phys. Rep.} {\bf 283}, 303
\\Padmanabhan, T. (1988), in {\em Highlights in
Gravitation and Cosmology}, Proceedings, Goa, India 1987, Iyer, B.R.,
Kembhavi, A.K., Narlikar, J.V. and Vishveshwara, C.V. (Eds.), Cambridge
University Press, Cambridge, p.~156
\\Page, L. (1936{\em a}), {\em Phys. Rev.} {\bf 49}, 254.
\\Page, L. (1936{\em b}), {\em Phys. Rev.} {\bf 49}, 946.
\\Page, L. and Adams, N.I. (1936), {\em Phys. Rev.} {\bf 49}, 466.
\\Park, C.J. and Yoon, Y. (1997), {\em Gen. Rel.
Grav.} {\bf 29}, 765
\\Parker, L. and Toms, D.J. (1985), {\em Phys. Rev. D}
{\bf 32}, 1409
\\Pauli, W. (1955), quoted in {\em Schwerkraft und
Weltall}, F. Vieweg und Sohn, Braunschweig.
\\Pauli, W. (1958), {\em Theory of Relativity}, Pergamon
Press, New York, p.~224.
\\Perlick, V. (1990), {\em Class. Quant. Grav.}
{\bf 7}, 1849
\\Pi, S.--Y. (1985), {\em Nucl. Phys. B} {\bf 252}, 127
\\Piccinelli, G., Lucchin, F. and Matarrese, S.
(1992), {\em Phys. Lett. B} {\bf 277}, 58
\\Pimentel, L.O. and Stein--Schabes, J.
(1989), {\em Phys. Lett. B} {\bf 216}, 27
\\PLANCK homepage (1998) http://astro.estec.esa.nl/SA--general/Projects/Planck
\\Pollock, M.D. (1982), {\em Phys. Lett. B} {\bf 108}, 386
\\Rainer, M. and Zuhk, A. (1996), {\em Phys. Rev. D}
{\bf 54}, 6186
\\Reasenberg, R.D. {\em et al.} (1979), {\em
Astrophys. J. (Lett.)} {\bf 234}, L219
\\Reuter, M. (1994), {\em Phys. Rev. D} {\bf 49}, 6379
\\Rothman, T. and Anninos, P. (1991),
{\em Phys. Rev. D} {\bf 44}, 3087
\\Sadhev, D. (1984), {\em Phys. Lett. B} {\bf 137}, 155
\\Salgado, M., Sudarsky, D. and Quevedo, H. (1996), {\em Phys. Rev. D} {\bf
53}, 6771
\\Salgado, M., Sudarsky, D. and Quevedo, H. (1997), {\em Phys. Lett. B}
{\bf 408}, 69
\\Salopek, D.S. (1992), {\em Phys. Rev. Lett.} {\bf 69},
3602
\\Salopek, D.S., Bond, J.R. and
Bardeen, J.M. (1989), {\em Phys. Rev. D} {\bf 40}, 1753
\\Sasaki, M. (1986), {\em Progr. Theor. Phys.} {\bf 76}, 1036
\\Shapiro, L.L. and Takata, H. (1995), {\em Phys. Lett. B} {\bf 361}, 31.
\\Scheel, M.A., Shapiro, S.L. and Teukolsky, S.A. (1995), {\em Phys. Rev.
D} {\bf 51}, 4236
\\Scherk, J. and Schwarz, J.H. (1979), {\em Nucl.
Phys. B} {\bf 153}, 61
\\Schmidt, H.--J. (1987), {\em Astr. Nachr.} {\bf 308}, 183.
\\Schmidt, H.--J. (1988), {\em Phys. Lett. B} {\bf
214}, 519
\\Schmidt, H.--J. (1990), {\em Class. Quant. Grav.}
{\bf 7}, 1023
\\Schmidt, H.--J. (1995), {\em Phys. Rev. D} {\bf 52}, 6196.
\\Schneider, P., Ehlers, J. and Falco, E.E. (1992),
{\em Gravitational Lenses}, Springer, Berlin.
\\Semenoff, G. and Weiss, N. (1985), {\em Phys. Rev. D}
{\bf 31}, 699
\\Shapiro, I.L. and Takata, H. (1995), {\em
Phys. Lett. B} {\bf 361}, 31
\\Smoot, G.F. {\em et al.} (1992), {\em Astrophys. J. (Lett.)}
{\bf 396}, L1.
\\Sokolowski, L. (1989{\em a}), {\em Class. Quant. Grav.} {\bf 6}, 59
\\Sokolowski, L. (1989{\em b}), {\em Class. Quant. Grav.} {\bf 6},
2045
\\Sokolowski, L.M. (1997), in {\em Proceedings of the
14th International Conference on General Relativity and Gravitation},
Firenze, Italy
1995, M. Francaviglia, G. Longhi, L. Lusanna and E. Sorace eds. (World
Scientific, Singapore), P.~337
\\Sokolowski, L.M. and Carr, B. (1986), {\em Phys. Lett. B} {\bf 176},
334
\\Sokolowski, L.M. and Golda, Z.A. (1987), {\em Phys. Lett. B} {\bf
195}, 349
\\Sokolowski, L.M., Golda, Z.A., Litterio, A.M. and Amendola, L. (1991),
{\em Int. J. Mod. Phys. A} {\bf 6}, 4517
\\Sonego, S. and Faraoni, V. (1992), {\em J.
Math. Phys.} {\bf 33}, 625
\\Sonego, S. and Faraoni, V. (1993), {\em Class. Quant.
Grav.} {\bf 10}, 1185
\\Sonego, S. and Massar, M. (1996), {\em Mon. Not.
R. Astr. Soc.} {\bf 281}, 659
\\Stahlofen, A.A. and Schramm, A.J. (1989),
{\em Phys. Rev. A} {\bf 40}, 1220
\\Starobinsky, A.A. (1980), {\em Phys. Lett. B} {\bf 91}, 99
\\Starobinski, A.A. (1981), {\em Sov. Astron. Lett.} {\bf 7}, 36.
\\Starobinsky, A.A. (1986), {\em Sov. Astr. Lett.} {\bf 29}, 34.
\\Starobinsky, A.A. (1987), in {\em Proceedings of the
4th Seminar on Quantum Gravity}, Markov, M.A. and Frolov, V.P.
(Eds.), World Scientific, Singapore.
\\Steinhardt, P.J. and Accetta, F.S.
(1990), {\em Phys. Rev. Lett.} {\bf 64}, 2740
\\Streater, R. and Wightman, A. (1964), {\em
PCT, Spin and Statistics, and All That}, Benjamin, New York.
\\Sudarsky, D. (1992), {\em Phys. Lett. B} {\bf 281}, 281.
\\Suen, W.--M. and Will, C.M. (1988), {\em Phys. Lett. B} {\bf 205}, 447.
\\Sunahara, K., Kasai, M. and Futamase, T. (1990),
{\em Progr. Theor. Phys.} {\bf 83}, 353
\\Suzuki, Y. and Yoshimura, M. (1991),
{\em Phys. Rev. D} {\bf 43}, 2549
\\Synge, J.L. (1955), {\em Relativity: the General Theory}, North Holland,
Amsterdam.
\\Tanaka, T. and Sakagami, M. (1997), preprint
OU--TAP 50, kucp0107, gr--qc/9705054.
\\Tao, Z.--J. and Xue, X. (1992), {\em Phys. Rev. D}
{\bf 45}, 1878
\\Taylor, T.R. and Veneziano, G. (1988), {\em
Phys. Lett. B} {\bf 213}, 450
\\Teyssandier, P. (1995), {\em Phys. Rev. D} {\bf
52}, 6195
\\Teyssandier, P. and Tourrenc, P.
(1983), {\em J. Math. Phys.} {\bf 24}, 2793
\\Tkacev, I. (1992), {\em Phys. Rev. D}
{\bf 45}, 4367
\\Tseytlin, A.A. (1993), {\em Phys. Lett. B} {\bf 317}, 559
\\Turner, M.S. (1993), in {\em Recent Directions in
Particle Theory -- From
Superstrings and Black Holes
to the Standard Model}, Proceedings of the
Theoretical Advanced Study Institute in Elementary
Particle Physics, Boulder,
Colorado 1992, Harvey, J. and Polchinski, J. (Eds.), World Scientific,
Singapore (preprint FERMILAB--Conf--92/313--A, astro--ph/9304012).
\\Turner, M.S. and Weinberg, E.J. (1997), {\em Phys. Rev. D} {\bf 56},
4604
\\Turner, M.S. and Widrow, L.M. (1988), {\em Phys. Rev. D} {\bf 37}, 2743.
\\Van den Bergh, N. (1980), {\em Gen. Rel. Grav.} {\bf 12}, 863
\\Van den Bergh, N. (1982), {\em Gen. Rel. Grav.} {\bf 14}, 17
\\Van den Bergh, N. (1983{\em a}), {\em Gen. Rel. Grav.} {\bf 15}, 441
\\Van den Bergh, N. (1983{\em b}), {\em Gen. Rel. Grav.} {\bf 15}, 449
\\Van den Bergh, N. (1983{\em c}), {\em Gen. Rel. Grav.} {\bf 15},
1043
\\Van den Bergh, N. (1983{\em d}), {\em Gen. Rel. Grav.} {\bf 16}, 2191
\\Van den Bergh, N. (1986{\em a}), {\em J. Math. Phys.} {\bf 27}, 1076
\\Van den Bergh, N. (1986{\em b}), {\em Lett. Math. Phys.} {\bf 11},
141
\\Van den Bergh, N. (1986{\em c}), {\em Gen. Rel. Grav.} {\bf 18}, 649
\\Van den Bergh, N. (1986{\em d}), {\em Gen. Rel. Grav.} {\bf 18},
1105
\\Van den Bergh, N. (1986{\em e}), {\em Lett. Math. Phys.} {\bf 12}, 43
\\Van den Bergh, N. (1988), {\em J. Math. Phys.} {\bf 29}, 1451
\\Van den Bergh, N. and Tavakol, R.K. (1993), {\em Class. Quant. Grav.}
{\bf 10}, 183
\\Van der Bij, J.J. and Gleiser, M. (1987), {\em Phys. Lett. B} {\bf 194},
482.
\\Voloshin, M.B. and Dolgov, A.D. (1982), {\em Sov.
J. Nucl. Phys.} {\bf 35}, 120
\\Wagoner, R.V. (1970), {\em Phys. Rev. D} {\bf 1}, 3209
\\Wald, R.M. (1984), {\em General Relativity}, Chicago University Press,
Chicago.
\\Wands, D. (1994), {\em Class. Quant. Grav.} {\bf 11}, 269
\\Weinstein, S. (1996), {\em Phil. Sci.} {\bf 63}, S63.
\\Weyl, H. (1919), {\em Ann. Phys. (Leipzig)} {\bf 59}, 101.
\\Whitt, B. (1984), {\em Phys. Lett. B} {\bf 145}, 176
\\Will, C.M. (1977), {\em Astrophys. J.} {\bf 214}, 826
\\Will, C.M. (1993), {\em Theory and Experiment in Gravitational
Physics} (revised edition), Cambridge University Press, Cambridge.
\\Will, C.M. and Eardley, D.M. (1977), {\em Astrophys. J.
(Lett.)} {\bf 212}, L9
\\Will, C.M. and P.J. Steinhardt, P.J. (1995),
{\em Phys. Rev. D} {\bf 52}, 628
\\Will, C.M. and Zaglauer, H.W. (1989), {\em Astrophys. J.}
{\bf 346}, 366.
\\Witten, E. (1982), {\em Nucl. Phys. B} {\bf 195}, 481
\\Wood, R.W., Papini, G. and Cai, Y.Q.
(1989), {\em Nuovo Cimento B} {\bf 104}, 653
\\Wu, A. (1992), {\em Phys. Rev. D} {\bf 45}, 2653
\\Xanthopoulos, B.C. and
Dialynas, T.E. (1992), {\em J. Math. Phys.} {\bf 33}, 1463
\\Yokoyama, J. (1988), {\em Phys. Lett. B} {\bf 212}, 273
\\Yoon, J.H. and Brill, D.R. (1990), {\em Class.
Quant. Grav.} {\bf 7}, 1253
\end{document}
\section{Introduction}
There has been remarkable interest in the theory of solitons
ever since the discovery of the Inverse Scattering Method
(ISM) for the Korteweg-de Vries (KdV) equation (see, for example, \cite{NMPZ}).
Although the ISM has been extended to many nonlinear systems
which describe phenomena in many branches of science,
the KdV equation still plays an important role in the development
of modern soliton theory (see, for example, \cite{95}).
In particular, many concepts were established first for the KdV equation
and then generalized to other systems in different ways.
A convenient approach to formulating the KdV hierarchy relies on
the use of fractional-power pseudo-differential operators associated
with the scalar Lax operator $L=\partial^2+u$, which was further generalized
by Gelfand and Dickey \cite{D} to the $N$-th KdV hierarchy with a Lax
operator of the form $L_N=\partial^N+u_{N-1}\partial^{N-1}+\cdots+u_0$.
In the past few years, there have been several works concerning extensions of
the KdV hierarchy in the Lax formulation, such as the Drinfeld-Sokolov theory \cite{DS}
and supersymmetric generalizations \cite{MR}.
The common feature of these extensions is that they preserve
the integrable structure of the KdV hierarchy and contain the KdV
hierarchy as a reduction in certain limiting cases.
Recently, a new kind of extension, the $q$-deformed KdV ($q$-KdV)
hierarchy, has been proposed and studied \cite{Z,JM,F,FR,KLR,HI,Ia,Ib,AHM}.
In this extension the partial
derivative $\partial$ is replaced by the $q$-deformed differential operator ($q$-DO)
$\partial_q$ such that
\begin{equation}
(\partial_qf(x))=\frac{f(qx)-f(x)}{x(q-1)}
\label{diff}
\end{equation}
which recovers the ordinary derivative $(\partial_xf(x))$ as $q$ goes to 1.
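As a quick illustration (a Python sketch; the helper name is ours), $\partial_q$ acts on monomials as $(\partial_qx^n)=[n]_qx^{n-1}$ and approaches the ordinary derivative as $q\to1$:

```python
def dq(f, x, q):
    """(d_q f)(x) = (f(qx) - f(x)) / (x(q - 1))."""
    return (f(q * x) - f(x)) / (x * (q - 1))

f = lambda t: t**3
# d_q x^3 = [3]_q x^2 with [3]_q = 1 + q + q^2
print(dq(f, 2.0, 1.5))     # 19.0  ( = (1 + 1.5 + 2.25) * 2^2 )
print(dq(f, 2.0, 1.0001))  # close to 12.0, the classical derivative 3x^2
```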
Although many structures of the $q$-KdV hierarchy,
such as infinite conservation laws \cite{Z},
the bi-Hamiltonian structure \cite{JM}, the tau-function \cite{HI,Ia,Ib,AHM},
vertex operators \cite{AHM}, and Virasoro and $W$-algebras \cite{FR},
have been studied, the $q$-version of the Darboux-B\"acklund
transformations (DBTs) for this system is still unexplored.
It is well known that the DBT is an important property characterizing
the integrability of a hierarchy \cite{MS}.
Thus it is worthwhile to investigate the DBTs associated with
the $q$-KdV hierarchy; achieving this goal will deepen our
understanding of the soliton solutions of the hierarchy.
Our paper is organized as follows: In Sec. II, we recall
the basic facts concerning the $q$-deformed pseudodifferential
operators ($q$-PDO)
and define the $N$th $q$-deformed Korteweg-de Vries
($q$-KdV) hierarchy. In Sec. III, we construct the
Darboux-B\"acklund transformations (DBTs) for the $N$th
$q$-KdV hierarchy, which preserve the form of the Lax operator
and the hierarchy flows. Iteration of these DBTs generates
the $q$-analogue of soliton solutions of the hierarchy.
In Sec. IV, the case for $N=2$ ($q$-KdV hierarchy) is studied
in detail to illustrate the $q$-deformed formulation.
Concluding remarks are presented in Sec. V.
\section{$q$-deformed pseudodifferential operators}
Let us define the $q$-shift operator $\theta$ as
\begin{equation}
\theta(f(x))=f(qx)
\label{shift}
\end{equation}
then it is easy to show that $\theta$ and $\partial_q$ do not commute
but satisfy
\begin{equation}
(\partial_q\theta^k(f))=q^k\theta^k(\partial_qf),\qquad k\in Z
\end{equation}
Using (\ref{diff}) and (\ref{shift}) we have
$(\partial_qfg)=(\partial_qf)g+\theta(f)(\partial_qg)$
which implies that
\begin{equation}
\partial_qf=(\partial_qf)+\theta(f)\partial_q
\end{equation}
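A quick numerical check of this $q$-deformed product rule (a Python sketch; the function names are ours):

```python
def dq(f, x, q):
    """(d_q f)(x) = (f(qx) - f(x)) / (x(q - 1))."""
    return (f(q * x) - f(x)) / (x * (q - 1))

q, x = 1.3, 2.0
f = lambda t: t**2
g = lambda t: t**3 + t
# (d_q fg) = (d_q f) g + theta(f) (d_q g), with (theta f)(x) = f(qx)
lhs = dq(lambda t: f(t) * g(t), x, q)
rhs = dq(f, x, q) * g(x) + f(q * x) * dq(g, x, q)
print(abs(lhs - rhs) < 1e-9)  # True
```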
We also denote $\partial_q^{-1}$ as the formal inverse of $\partial_q$
such that $\partial_q\partial_q^{-1}f=\partial_q^{-1}\partial_qf=f$ and hence
\begin{equation}
\partial_q^{-1}f=\sum_{k\geq 0}(-1)^kq^{-k(k+1)/2}\theta^{-k-1}(\partial_q^kf)
\partial_q^{-k-1}.
\end{equation}
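The coefficients in this expansion are the $q$-binomials ${-1 \brack k}_q$ of the Leibniz rule below, whose closed form $(-1)^kq^{-k(k+1)/2}$ can be checked numerically (a Python sketch; helper names are ours):

```python
def q_number(n, q):
    """[n]_q = (q^n - 1)/(q - 1), valid for any integer n."""
    return (q**n - 1) / (q - 1)

def q_binomial(n, k, q):
    """Product form [n]_q [n-1]_q ... [n-k+1]_q / ([1]_q ... [k]_q)."""
    out = 1.0
    for j in range(k):
        out *= q_number(n - j, q) / q_number(j + 1, q)
    return out

q = 1.4
for k in range(6):
    closed_form = (-1)**k * q**(-k * (k + 1) / 2)
    print(abs(q_binomial(-1, k, q) - closed_form) < 1e-12)  # True for each k
```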
In general, one can establish the following $q$-deformed Leibniz
rule:
\begin{equation}
\partial_q^nf=\sum_{k\geq 0}{n \brack k}_q\theta^{n-k}(\partial_q^kf)\partial_q^{n-k}\qquad
n\in Z
\end{equation}
where we introduce the $q$-number and the $q$-binomial as follows
\begin{eqnarray}
[n]_q&=&\frac{q^n-1}{q-1}\\
{n \brack k}_q&=&\frac{[n]_q[n-1]_q\cdots[n-k+1]_q}
{[1]_q[2]_q\cdots[k]_q},\qquad {n \brack 0}_q= 1
\end{eqnarray}
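These definitions can be checked numerically against the Leibniz rule, e.g. for $n=2$ (a Python sketch; the helper names are ours):

```python
from functools import reduce

def dq(f, q):
    """Return d_q f as a new function."""
    return lambda x: (f(q * x) - f(x)) / (x * (q - 1))

def dq_n(f, q, n):
    """n-fold q-derivative d_q^n f."""
    return reduce(lambda g, _: dq(g, q), range(n), f)

def theta_k(f, q, k):
    """q-shift: (theta^k f)(x) = f(q^k x)."""
    return lambda x: f(q**k * x)

def q_number(n, q):
    return (q**n - 1) / (q - 1)

def q_binomial(n, k, q):
    out = 1.0
    for j in range(k):
        out *= q_number(n - j, q) / q_number(j + 1, q)
    return out

# verify d_q^n(fg) = sum_k [n k]_q theta^{n-k}(d_q^k f) d_q^{n-k} g  for n = 2
q, x, n = 1.3, 2.0, 2
f = lambda t: t**2
g = lambda t: t**3
lhs = dq_n(lambda t: f(t) * g(t), q, n)(x)
rhs = sum(q_binomial(n, k, q)
          * theta_k(dq_n(f, q, k), q, n - k)(x)
          * dq_n(g, q, n - k)(x)
          for k in range(n + 1))
print(abs(lhs - rhs) < 1e-6)  # True
```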
For a $q$-PDO of the form
\begin{equation}
P=\sum_{i=-\infty}^nu_i\partial_q^i
\end{equation}
it is convenient to separate $P$ into the differential
and integral parts as follows
\begin{equation}
P_+=\sum_{i\geq 0}u_i\partial_q^i\qquad P_-=\sum_{i\leq -1}u_i\partial_q^i
\end{equation}
and denote $(P)_0$ as the zeroth-order term of $P$.
The residue of $P$ is defined by $\mbox{res}(P)=u_{-1}$ and the conjugate
operation ``$*$" for $P$ is defined by $P^*=\sum_i(\partial_q^*)^iu_i$
with
\begin{equation}
\partial_q^*=-\partial_q\theta^{-1}
\end{equation}
Then a straightforward calculation shows that
\begin{equation}
(PQ)^*=Q^*P^*
\end{equation}
where $P$ and $Q$ are any two $q$-PDOs.
Finally, for a set of functions $f_1,f_2,\cdots,f_n$,
we define the $q$-deformed Wronskian determinant
$W_q[f_1,f_2,\cdots,f_n]$ as
\begin{equation}
W_q[f_1,f_2,\cdots,f_n]=
\left|
\begin{array}{ccc}
f_1 & \cdots & f_n\\
(\partial_qf_1)& \cdots &(\partial_qf_n)\\
\vdots& &\vdots\\
(\partial_q^{n-1}f_1)& \cdots &(\partial_q^{n-1}f_n)
\end{array}
\right|
\end{equation}
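A minimal check of this definition in the $2\times2$ case (a Python sketch; names are ours): for $f_1=x$, $f_2=x^2$ one has $W_q[x,x^2]=qx^2$.

```python
def dq(f, q):
    """Return d_q f as a new function."""
    return lambda x: (f(q * x) - f(x)) / (x * (q - 1))

def wronskian_q2(f1, f2, x, q):
    """2x2 q-Wronskian W_q[f1, f2](x)."""
    return f1(x) * dq(f2, q)(x) - f2(x) * dq(f1, q)(x)

q, x = 1.5, 2.0
w = wronskian_q2(lambda t: t, lambda t: t**2, x, q)
print(abs(w - q * x**2) < 1e-9)  # True: W_q[x, x^2] = q x^2
```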
With these definitions in hand, we have the following
identities which will simplify the computations involving
compositions of $q$-PDOs.
{\bf Proposition 1:\/}
\begin{eqnarray}
& &(P^*)_+=(P_+)^*,\qquad (P^*)_-=(P_-)^* \\
& &(\partial_q^{-1}P)_-=\partial_q^{-1}(P^*)_0+\partial_q^{-1}P_-\\
& &(P \partial_q^{-1})_-=(P)_0\partial_q^{-1}+P_-\partial_q^{-1}\\
& &\mbox{res}(P^*)=-\theta^{-1}(\mbox{res}(P)),\qquad (\partial_q\mbox{res}(P))=\mbox{res}(\partial_qP)
-\theta(\mbox{res}(P \partial_q))\\
& &\mbox{res}(P \partial_q^{-1})=(P)_0\qquad \mbox{res}(\partial_q^{-1}P)=\theta^{-1}((P^*)_0)\\
& &\mbox{res}(\partial_q^{-1}P_1P_2\partial_q^{-1})
=\mbox{res}(\partial_q^{-1}(P_1^*)_0P_2\partial_q^{-1})+\mbox{res}(\partial_q^{-1}P_1(P_2)_0\partial_q^{-1})
\end{eqnarray}
where $P_1=(P_1)_+$ and $P_2=(P_2)_+$.
The $N$-th $q$-KdV hierarchy is defined, in Lax form, as
\begin{equation}
\partial_{t_n}L=[B_n, L],\qquad n=1,2,3,\cdots
\label{laxeq}
\end{equation}
with
\begin{equation}
L=\partial_q^N+u_{N-1}\partial_q^{N-1}+\cdots+u_0,
\qquad B_n\equiv L^{n/N}_+
\label{lax}
\end{equation}
where the coefficients $u_i$ are functions of the variables $(x,t_1,t_2,\cdots)$
but do not depend on $(t_N,t_{2N},t_{3N},\cdots)$.
In fact, we can rewrite the hierarchy equations (\ref{laxeq}) as follows:
\begin{equation}
\partial_{t_m}B_n-\partial_{t_n}B_m-[B_n,B_m]=0
\label{zero}
\end{equation}
which is called the zero-curvature condition and is equivalent to
the whole set of equations of (\ref{laxeq}).
If we can find a set of functions $\{u_i, i=0,1,\cdots,N-1\}$ and
hence a corresponding Lax operator $L$ (or $B_n$)
satisfying (\ref{laxeq}) (or (\ref{zero})),
then we have a solution to the $N$-th $q$-KdV hierarchy.
For the Lax operator (\ref{lax}), we can formally expand $L^{1/N}$
in powers of $\partial_q$ as follows
\begin{equation}
L^{1/N}=\partial_q+s_0+s_1\partial_q^{-1}+\cdots
\end{equation}
such that $(L^{1/N})^N=L$, which determines all the $s_i$ as
$q$-deformed differential polynomials in $\{u_i\}$.
In particular, for the coefficient of $\partial_q^{N-1}$ we have
\begin{equation}
u_{N-1}=s_0+\theta(s_0)+\cdots+\theta^{N-1}(s_0).
\end{equation}
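For instance, for $N=2$ the first few coefficients can be worked out
explicitly as a consistency check. Writing
$L^{1/2}=\partial_q+s_0+s_1\partial_q^{-1}+\cdots$ and squaring, using
$\partial_qf=(\partial_qf)+\theta(f)\partial_q$, one finds
\begin{equation}
(L^{1/2})^2=\partial_q^2+\left[s_0+\theta(s_0)\right]\partial_q
+\left[(\partial_qs_0)+s_0^2+s_1+\theta(s_1)\right]+O(\partial_q^{-1})
\end{equation}
so that $u_1=s_0+\theta(s_0)$, in agreement with the formula above, while
$u_0=(\partial_qs_0)+s_0^2+s_1+\theta(s_1)$ determines $s_1$ recursively.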
The Lax equations (\ref{laxeq}) can be viewed as the
compatibility condition of the linear system
\begin{equation}
L\phi=\lambda\phi,\qquad \partial_{t_n}\phi=(B_n\phi)
\label{wave}
\end{equation}
where $\phi$ and $\lambda$ are called wave function and eigenvalue of
the linear system, respectively.
On the other hand, we can also introduce
an adjoint wave function $\psi$, which satisfies the adjoint
linear system
\begin{equation}
L^*\psi=\mu\psi,\qquad \partial_{t_n}\psi=-(B_n^*\psi)
\label{adwave}
\end{equation}
For convenience, throughout this paper, $\phi_i$ $(\psi_i)$
will stand for (adjoint) wave functions with eigenvalues $\lambda_i$ ($\mu_i$),
respectively, without further mention.
\section{elementary Darboux-B\"acklund transformations}
In this section we would like to construct DBTs for
the $N$-th $q$-KdV hierarchy.
To attain this purpose, let us consider the following transformation
\begin{equation}
L\to L^{(1)}=TLT^{-1}
\label{laxgt}
\end{equation}
where $T$ is any reasonable $q$-PDO and $T^{-1}$
denotes its inverse.
In order to obtain a new solution ($L^{(1)}$) from the old
one ($L$), the gauge operator $T$ cannot be chosen arbitrarily.
It should be constructed in such a way that the transformed Lax
operator $L^{(1)}$ preserves the form of $L$ and satisfies
the Lax equation (\ref{laxeq}).
From the point of view of the zero-curvature condition (\ref{zero}),
the operator $B_n$ should transform according to
\begin{equation}
B_n\to B^{(1)}_n=TB_nT^{-1}+\partial_{t_n}TT^{-1}
\label{bngt}
\end{equation}
which will, in general, not be a pure $q$-DO
even though $B_n$ is. However, if we choose the gauge
operator $T$ suitably, so that $B^{(1)}_n$, as defined by (\ref{bngt}), is also a
pure $q$-DO, then $L^{(1)}$
represents a valid new solution to the $N$-th $q$-KdV hierarchy.
This is the goal we want to achieve in this letter.
To formulate the DBTs, we follow \cite{O} to introduce a $q$-version
of the bilinear potential $\Omega(\phi,\psi)$ which is constructed from a wave
function $\phi$ and an adjoint wave function $\psi$.
The usefulness of this bilinear potential will
be clear when we use it to construct DBTs (see below).
{\bf Proposition 2:\/} For any pair of $\phi$ and $\psi$,
there exists a bilinear potential $\Omega(\phi, \psi)$
satisfying
\begin{eqnarray}
(\partial_q\Omega(\phi, \psi))&=&\phi\psi\\
\partial_{t_n}\Omega(\phi, \psi)&=&\mbox{res}(\partial_q^{-1}\psi B_n\phi \partial_q^{-1})
\end{eqnarray}
In fact the bilinear potential $\Omega(\phi, \psi)$ can be formally
represented by a $q$-integration as $\Omega(\phi,\psi)=(\partial_q^{-1}\phi\psi)$.
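The formal $q$-integration $\Omega(\phi,\psi)=(\partial_q^{-1}\phi\psi)$ of Proposition 2 can be realized concretely, for $0<q<1$, by a (truncated) Jackson integral. The following sketch is only an illustration of that realization; the sample functions, the value of $q$, and the truncation depth are invented.

```python
# Hedged sketch: the bilinear potential as a truncated Jackson integral,
# Omega(x) = int_0^x phi(t) psi(t) d_q t, checked against d_q Omega = phi psi.
q, K = 0.5, 200                                  # 0 < q < 1; K terms of the Jackson sum

def jackson_integral(f, x):
    # int_0^x f(t) d_q t = (1 - q) x sum_{k>=0} q^k f(q^k x)
    return (1 - q) * x * sum(q**k * f(q**k * x) for k in range(K))

d_q = lambda f: (lambda x: (f(q * x) - f(x)) / ((q - 1) * x))

phi = lambda x: x**2 + 1.0
psi = lambda x: 3.0 * x

Omega = lambda x: jackson_integral(lambda t: phi(t) * psi(t), x)
x0 = 0.8
err = abs(d_q(Omega)(x0) - phi(x0) * psi(x0))
print(err)   # ~ 0: d_q inverts the q-integration
```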
Motivated by the DBTs for the ordinary KdV \cite{MS}
(or Kadomtsev-Petviashvili (KP) \cite{O,OS,CSY})
hierarchy, we can construct a qualified gauge operator $T$ as follows
\begin{equation}
T_1=\theta(\phi_1)\partial_q\phi_1^{-1}=\partial_q-\alpha_1,\qquad
\alpha_1\equiv \frac{(\partial_q\phi_1)}{\phi_1}
\label{tdb}
\end{equation}
where $\phi_1$ is a wave function associated with the linear
system (\ref{wave}). It is not hard to show that the transformed
Lax operator $L^{(1)}$ is a pure $q$-DO of order
$N$ and that the Lax equation (\ref{laxeq}) transforms covariantly, i.e.
$\partial_{t_n}L^{(1)}=[(L^{(1)})_+^{n/N}, L^{(1)}]$.
The transformed coefficients $\{u_i^{(1)}\}$
then can be expressed in terms of $\{u_i\}$ and $\phi_1$.
On the other hand, for a given generic wave function $\phi\neq \phi_1$
(or adjoint function $\psi$) its transformed result can be expressed
in terms of $\phi_1$ and itself.
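As a concrete check of the factorized gauge operator, the sketch below verifies numerically that $T_1=\theta(\phi_1)\partial_q\phi_1^{-1}$ coincides with $\partial_q-\alpha_1$ and annihilates $\phi_1$. It is only an illustration under the Jackson conventions; the sample functions and the evaluation point are invented.

```python
# Hedged sketch: the two forms of the gauge operator T_1 agree, and T_1 phi_1 = 0.
import math

q = 0.6
theta = lambda f: (lambda x: f(q * x))
d_q = lambda f: (lambda x: (f(q * x) - f(x)) / ((q - 1) * x))

phi1 = lambda x: math.exp(0.3 * x) + x           # plays the wave function
phi = lambda x: x**2 + 5.0                       # a generic second function

alpha1 = lambda x: d_q(phi1)(x) / phi1(x)

def T1(f):
    # composed form: multiply by phi1^{-1}, apply d_q, multiply by theta(phi1)
    return lambda x: theta(phi1)(x) * d_q(lambda t: f(t) / phi1(t))(x)

x0 = 1.1
gap = abs(T1(phi)(x0) - (d_q(phi)(x0) - alpha1(x0) * phi(x0)))
print(gap, T1(phi1)(x0))   # both ~ 0
```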
{\bf Proposition 3:\/} Suppose $\phi_1$ is a wave function of the
linear system (\ref{wave}).
Then the gauge operator $T_1$ triggers the following DBT:
\begin{eqnarray}
& &L^{(1)}=T_1LT_1^{-1}=
\partial_q^N+u_{N-1}^{(1)}\partial_q^{N-1}+\cdots+u_0^{(1)}\\
& &\phi^{(1)}=(T_1\phi)=\frac{W_q[\phi_1, \phi]}{\phi_1},\qquad
\phi\neq \phi_1\\
& &\psi^{(1)}=((T_1^{-1})^*\psi)=-\frac{\theta(\Omega(\phi_1, \psi))}{\theta(\phi_1)}
\end{eqnarray}
where $L^{(1)}$, $\phi^{(1)}$ and $\psi^{(1)}$
satisfy (\ref{laxeq}), (\ref{wave}) and (\ref{adwave}) respectively.
Just like the DBTs for the ordinary KdV hierarchy, the DBT triggered by
the gauge operator $T_1$ is by no means the only transformation in this
$q$-analogue framework. We have another construction of DBT by
using the adjoint wave function associated with the adjoint linear
system (\ref{adwave}). Indeed, for a given adjoint wave
function $\psi_1$ we can construct a gauge operator
\begin{equation}
S_1=\theta^{-1}(\psi_1^{-1})\partial_q^{-1}\psi_1=(\partial_q+\beta_1)^{-1},
\qquad \beta_1\equiv \frac{(\partial_q\theta^{-1}(\psi_1))}{\psi_1}
\label{sdb}
\end{equation}
which preserves the form of the Lax operator and the Lax equation.
{\bf Proposition 4:\/} Suppose $\psi_1$ is an adjoint wave function of the
adjoint linear system (\ref{adwave}).
Then the gauge operator $S_1$ triggers the following adjoint DBT:
\begin{eqnarray}
& &L^{(1)}=S_1LS_1^{-1}\\
& &\phi^{(1)}=(S_1\phi)=\frac{\Omega(\phi, \psi_1)}{\theta^{-1}(\psi_1)}\\
& &\psi^{(1)}=((S_1^{-1})^*\psi)=\frac{\tilde{W}_q[\psi_1, \psi]}{\psi_1},
\qquad \psi\neq \psi_1
\end{eqnarray}
where $L^{(1)}$, $\phi^{(1)}$ and $\psi^{(1)}$
satisfy (\ref{laxeq}), (\ref{wave}) and (\ref{adwave}), respectively and
$\tilde{W}_q$ is obtained from $W_q$ by replacing $\partial_q$ with $\partial_q^*$.
So far, we have constructed two elementary DBTs triggered by the
gauge operators $T_1$ and $S_1$. Regarding them as the building
blocks, we can construct more complicated transformations from
the compositions of these elementary DBTs. However, we will see
that it is convenient to consider a DBT followed by an adjoint
DBT and vice versa, because such a combination will frequently
appear in more complicated DBTs. So let us compose
them into a single operator $R_1$, which we call the binary gauge operator.
The construction of the binary gauge operator $R_1$ can be realized as follows:
first we perform a DBT triggered by the gauge operator
$T_1=\theta(\phi_1)\partial_q\phi_1^{-1}$ and the adjoint
wave function $\psi_1$ is thus transformed to
$\psi_1^{(1)}=((T_1^{-1})^*\psi_1)=-\theta(\phi_1^{-1})\theta(\Omega(\phi_1, \psi_1))$.
Then a subsequent adjoint DBT triggered by
$S_1^{(1)}=\theta^{-1}((\psi_1^{(1)})^{-1})\partial_q^{-1}(\psi_1^{(1)})$
is performed and the composition of these two transformations gives
\begin{equation}
R_1=S_1^{(1)}T_1=1-\phi_1\Omega(\phi_1,\psi_1)^{-1}\partial_q^{-1}\psi_1
\label{rdb}
\end{equation}
{\bf Proposition 5:\/} Let $\phi_1$ and $\psi_1$ be wave function and
adjoint wave function associated with the linear
systems (\ref{wave}) and (\ref{adwave}), respectively.
Then the gauge operator $R_1$ triggers the following binary DBT:
\begin{eqnarray}
& &L^{(1)}=R_1LR_1^{-1}\\
& &\phi^{(1)}=(R\phi)=\phi-\Omega(\phi_1,\psi_1)^{-1}\phi_1\Omega(\phi,\psi_1),
\qquad \phi\neq \phi_1\\
& &\psi^{(1)}=((R^{-1})^*\psi)=
\psi-\theta(\Omega(\phi_1,\psi_1)^{-1})\psi_1\theta(\Omega(\phi_1,\psi)),
\qquad \psi\neq \psi_1
\end{eqnarray}
where $L^{(1)}$, $\phi^{(1)}$ and $\psi^{(1)}$
satisfy (\ref{laxeq}), (\ref{wave}) and (\ref{adwave}) respectively.
We would like to remark that the construction of the binary gauge
operator $R_1$ is independent of the order of the gauge
operators $T$ and $S$. If we apply the gauge operator $S_1$ followed
by $T_1^{(1)}$, then a direct calculation shows that $R_1=T_1^{(1)}S_1$
has the same form as (\ref{rdb}).
In the remaining part of this section we consider the iteration of
the DBT, the adjoint DBT, and the binary DBT
triggered by the gauge operators $T$, $S$, and $R$, respectively.
For example, by iterating the DBT triggered by the gauge operator $T$,
we can express the solution of the $N$-th $q$-KdV hierarchy through
the $q$-deformed Wronskian representation. This construction
starts with $n$ wave functions $\phi_1, \phi_2,\cdots, \phi_n$
of the linear system (\ref{wave}). Using $\phi_1$, say, to perform
the first DBT of Proposition 3, we transform all $\phi_i$
to $\phi_i^{(1)}=(T_1\phi_i)$. Obviously, we have $\phi_1^{(1)}=0$.
The next step is to perform a subsequent DBT triggered by $\phi_2^{(1)}$,
which leads to the new wave functions $\phi_i^{(2)}$ with $\phi_2^{(2)}=0$.
Iterating this process until all the wave functions are exhausted,
we obtain an $n$-step DBT triggered by the gauge operator
$T_n=(\partial_q-\alpha_n^{(n-1)})(\partial_q-\alpha_{n-1}^{(n-2)})
\cdots(\partial_q-\alpha_1)$, where
$\alpha_i^{(j)}\equiv (\partial_q\phi_i^{(j)})/\phi_i^{(j)}$.
It is easy to see that $T_n$ is an $n$th-order $q$-DO
of the form $T_n=\partial_q^n+a_{n-1}\partial_q^{n-1}+\cdots+a_0$ with
$a_i$ defined by the conditions
$(T_n\phi_j)=0$, $j=1,2,\cdots,n$. By Cramer's rule
it turns out that $a_i=-W_q^{(i)}[\phi_1,\cdots,\phi_n]/
W_q[\phi_1,\cdots,\phi_n]$ where $W_q^{(i)}$ is obtained from
$W_q$ with its $i$-th row replaced by
$(\partial_q^n\phi_1),\cdots,(\partial_q^n\phi_n)$.
This implies that the $n$-step transformed wave function
$\phi^{(n)}$ ($\phi\neq \phi_i$) is given by
\begin{equation}
\phi^{(n)}=(T_n\phi)=\frac{W_q[\phi_1,\cdots,\phi_n,\phi]}
{W_q[\phi_1,\cdots,\phi_n]}
\label{nwf}
\end{equation}
and the $n$-step gauge operator $T_n$ can be expressed as
\begin{equation}
T_n=\frac{1}{W_q[\phi_1,\cdots,\phi_n]}
\left|
\begin{array}{cccc}
\phi_1 &\cdots&\phi_n &1 \\
(\partial_q\phi_1) &\cdots&(\partial_q\phi_n) &\partial_q \\
\vdots & & \vdots& \vdots \\
(\partial_q^n\phi_1) &\cdots&(\partial_q^n\phi_n) &\partial_q^n
\end{array}
\right|
\end{equation}
where it should be realized that in the expansion of the
determinant by the elements of the last column,
$\partial_q^i$ have to be written to the right of the minors.
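For $n=2$ the determination of the coefficients $a_i$ can be checked numerically. The sketch below solves the two conditions $(T_2\phi_j)=0$ as a $2\times2$ linear system (Cramer's rule, with the $q$-Wronskian as the system determinant) at a sample point, rather than committing to a particular row-index convention; the functions and the evaluation point are invented.

```python
# Hedged sketch of the n = 2 step: a_0, a_1 of T_2 = d_q^2 + a_1 d_q + a_0
# are fixed pointwise by (T_2 phi_j) = 0, j = 1, 2, via Cramer's rule.
import math

q = 0.6
d_q = lambda f: (lambda x: (f(q * x) - f(x)) / ((q - 1) * x))

phi1 = lambda x: math.exp(0.2 * x)
phi2 = lambda x: x**3 + 1.0

x0 = 0.9
c0 = [phi1(x0), phi2(x0)]                        # multiplies a_0
c1 = [d_q(phi1)(x0), d_q(phi2)(x0)]              # multiplies a_1
c2 = [d_q(d_q(phi1))(x0), d_q(d_q(phi2))(x0)]    # inhomogeneous part

W = c0[0] * c1[1] - c0[1] * c1[0]                # q-Wronskian W_q[phi1, phi2] at x0
# Cramer's rule for the system a_0 c0[j] + a_1 c1[j] = -c2[j]
a0 = (-c2[0] * c1[1] + c1[0] * c2[1]) / W
a1 = (-c0[0] * c2[1] + c2[0] * c0[1]) / W

residuals = [d_q(d_q(f))(x0) + a1 * d_q(f)(x0) + a0 * f(x0)
             for f in (phi1, phi2)]
print(residuals)   # both entries ~ 0
```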
Next let us turn to the iteration of the adjoint DBT. In this case,
the $n$-step gauge operator can be constructed in a similar manner
by preparing $n$ initial adjoint wave functions
$\psi_1,\cdots,\psi_n$ such that
$S^{-1}_n=(\partial_q+\beta_n^{(n-1)})(\partial_q+\beta_{n-1}^{(n-2)})\cdots
(\partial_q+\beta_1)=\partial_q^n+\sum_{i=0}^{n-1}\partial_q^ib_i$.
Using the required conditions $((S^{-1}_n)^*\psi_j)=0$, $j=1,\cdots,n$,
we obtain $b_i=-\tilde{W}_q^{(i)}[\psi_1,\cdots,\psi_n]/
\tilde{W}_q[\psi_1,\cdots,\psi_n]$
which give the $n$-step transformed adjoint wave function
\begin{equation}
\psi^{(n)}=((S_n^{-1})^*\psi)=\frac{\tilde{W}_q[\psi_1,\cdots,\psi_n,\psi]}
{\tilde{W}_q[\psi_1,\cdots,\psi_n]}
\label{nawf}
\end{equation}
and the gauge operator
\begin{equation}
(S_n^{-1})^*=\frac{1}{\tilde{W}_q[\psi_1,\cdots,\psi_n]}
\left|
\begin{array}{cccc}
\psi_1 &\cdots&\psi_n &1\\
(\partial_q^*\psi_1) &\cdots&(\partial_q^*\psi_n) &\partial_q^* \\
\vdots & & \vdots & \vdots \\
((\partial_q^*)^n\psi_1) &\cdots&((\partial_q^*)^n\psi_n)&(\partial_q^*)^n
\end{array}
\right|
\end{equation}
Finally, we shall construct the $n$-step binary DBT by preparing
$n$ wave functions $\phi_i$ and $n$ adjoint wave functions $\psi_i$ at the beginning.
Then we perform the first binary DBT by using the gauge operator
$R_1$ which is constructed from $\phi_1$ and $\psi_1$ as in
(\ref{rdb}). At the same time, all $\phi_i$ and $\psi_i$
are transformed to $\phi_i^{(1)}=(R_1\phi_i)$
and $\psi_i^{(1)}=((R_1^{-1})^*\psi_i)$, respectively
except $\phi_1^{(1)}=\psi_1^{(1)}=0$. We then use the pair
$\{\phi_2^{(1)},\psi_2^{(1)}\}$ to perform the next binary DBT.
Iterating this process until all the pairs $\{\phi_i,\psi_i\}$
are exhausted, we obtain an $n$-step gauge operator of
the form $R_n=1-\sum_{j=1}^{n}c_j\partial_q^{-1}\psi_j$.
Solving the conditions $(R_n\phi_j)=0, j=1,\cdots,n$,
we obtain the coefficients $c_j=\det\Omega^{(j)}/\det \Omega$
where $\Omega_{ij}\equiv \Omega(\phi_i,\psi_j)$ and $\Omega^{(j)}$
is obtained from $\Omega$ with its $j$-th column replaced by
$(\phi_1,\cdots,\phi_n)^t$. This leads to the following
representation for $R_n$:
\begin{equation}
R_n=\frac{1}{\det \Omega}
\left|
\begin{array}{cccc}
\Omega_{11}&\cdots&\Omega_{1n}&\phi_1 \\
\vdots& &\vdots& \vdots \\
\Omega_{n1}&\cdots&\Omega_{nn}&\phi_n \\
\partial_q^{-1}\psi_1&\cdots&\partial_q^{-1}\psi_n&1
\end{array}
\right|
\end{equation}
Moreover, the $n$-step transformed bilinear potential
$\Omega(\phi^{(n)},\psi^{(n)})$ can be expressed in terms of
a binary-type determinant as
\begin{equation}
\Omega(\phi^{(n)}, \psi^{(n)})=
\frac{1}{\det \Omega}
\left|
\begin{array}{cccc}
\Omega_{11} & \cdots&\Omega_{1n}&\Omega(\phi_1,\psi)\\
\vdots & &\vdots &\vdots\\
\Omega_{n1} & \cdots&\Omega_{nn}&\Omega(\phi_n,\psi)\\
\Omega(\phi, \psi_1) & \cdots&\Omega(\phi, \psi_n)&\Omega(\phi,\psi)\\
\end{array}
\right|
\label{nbf}
\end{equation}
where $\phi^{(n)}=(R_n\phi)$ and $\psi^{(n)}=((R_n^{-1})^*\psi)$.
\section{$q$-deformed KdV hierarchy ($N=2$)}
This section is devoted to illustrating the DBTs for the simplest
nontrivial example: the $q$-deformed KdV hierarchy (the $N=2$ case).
Let
\begin{equation}
L=\partial_q^2+u_1\partial_q+u_0
\label{laxkdv}
\end{equation}
then the Lax equations
\begin{equation}
\partial_{t_n}L=[L^{n/2}_+, L],\qquad n=1,3,5,\cdots
\label{laxeqkdv}
\end{equation}
define the evolution equations for $u_1$ and $u_0$. In particular,
for the $t_1$-flow, we have
\begin{eqnarray}
\partial_{t_1}u_1&=&x(q-1)\partial_{t_1}u_0\\
\partial_{t_1}u_0&=&(\partial_qu_0)-(\partial_q^2s_0)-(\partial_qs_0^2)
\end{eqnarray}
which is nontrivial and recovers the ordinary case as $q$ goes to 1.
For higher hierarchy flows the evolution equations for $u_1$ and $u_0$
become more complicated due to the non-commutative nature of
the $q$-deformed formulation.
We now perform the DBT of Proposition 3 to the Lax operator
(\ref{laxkdv}), then the transformed coefficients become
\begin{eqnarray}
u_1^{(1)}&=&\theta(u_1)-\alpha_1+\theta^2(\alpha_1)
\label{u1}\\
u_0^{(1)}&=&\theta(u_0)+(\partial_qu_1)+(q+1)\theta(\partial_q\alpha_1)-
\alpha_1u_1+\theta(\alpha_1)u_1^{(1)}
\label{u0}
\end{eqnarray}
Since $\phi_1$ is a wave function associated with the
Lax operator (\ref{laxkdv}), i.e. $L\phi_1=\lambda_1\phi_1$,
one can easily verify that
$(\partial_q\alpha_1)+\theta(\alpha_1)\alpha_1+u_1\alpha_1+u_0=\lambda_1$
and hence Eqs.(\ref{u1}) and (\ref{u0}) can be simplified as
\begin{eqnarray}
u_1^{(1)}-u_1&=&x(q-1)(u_0^{(1)}-u_0)
\label{u11}\\
u_0^{(1)}-u_0&=&\partial_q(u_1+\alpha_1+\theta(\alpha_1))
\label{u01}
\end{eqnarray}
Furthermore, using the facts that
$\partial_{t_1}\phi_1=(L_+^{1/2}\phi_1)=(\partial_q\phi_1)+s_0\phi_1$
and $u_1=\theta(s_0)+s_0$, we can rewrite (\ref{u01}) as
\begin{equation}
u_0^{(1)}=u_0+\partial_q\partial_{t_1}\ln\phi_1\theta(\phi_1)
\label{u02}
\end{equation}
Eqs.(\ref{u11}) and (\ref{u02}) are just the desired
result announced in Sec. III.
Similarly, by applying the above analysis to the
adjoint and binary DBTs, we obtain the following result:
{\bf Proposition 6:\/} Let $\phi_1$ and $\psi_1$ be a wave function and an
adjoint wave function associated with the Lax operator (\ref{laxkdv}).
Then under the DBT, adjoint DBT, and binary DBT, the transformed
coefficients are given by
\begin{equation}
u_1^{(1)}-u_1=x(q-1)(u_0^{(1)}-u_0)
\end{equation}
with
\begin{eqnarray}
u_0^{(1)}&=&u_0+\partial_q\partial_{t_1}\ln\phi_1\theta(\phi_1),
\qquad (\mbox{DBT})
\label{1db}\\
u_0^{(1)}&=&u_0+\partial_q\partial_{t_1}\ln\psi_1\theta^{-1}(\psi_1)
,\qquad (\mbox{adjoint DBT})
\label{1adb}\\
u_0^{(1)}&=&u_0+\partial_q\partial_{t_1}\ln\Omega_{11}\theta(\Omega_{11})
,\qquad (\mbox{binary DBT})
\label{1bdb}
\end{eqnarray}
Eqs.(\ref{1db})-(\ref{1bdb}) effectively represent the 1-step
transformations. To obtain the $n$-step DBT, adjoint DBT and
binary DBT we just need to iterate the corresponding 1-step transformations
successively, inserting the wave function (\ref{nwf}), the adjoint wave
function (\ref{nawf}), and the bilinear potential (\ref{nbf}) into the logarithm in Eqs.
(\ref{1db}), (\ref{1adb}) and (\ref{1bdb}), respectively.
{\bf Proposition 7:\/} Let $\phi_i$ and $\psi_i$
($i=1,2,\cdots,n$) be (adjoint)
wave functions associated with the Lax operator (\ref{laxkdv}).
Then under the successive DBT, adjoint DBT and binary DBT
of Proposition 6, the $n$-step transformed coefficients are given by
\begin{equation}
u_1^{(n)}-u_1=x(q-1)(u_0^{(n)}-u_0)
\end{equation}
with
\begin{eqnarray}
u_0^{(n)}&=&u_0+\partial_q\partial_{t_1}\ln W_q[\phi_1,\cdots,\phi_n]
\theta(W_q[\phi_1,\cdots,\phi_n]),
\qquad (\mbox{DBT})
\label{ndb}\\
u_0^{(n)}&=&u_0+\partial_q\partial_{t_1}\ln\tilde{W}_q[\psi_1,\cdots,\psi_n]
\theta^{-1}(\tilde{W}_q[\psi_1,\cdots,\psi_n])
,\qquad (\mbox{adjoint DBT})
\label{nadb}\\
u_0^{(n)}&=&u_0+\partial_q\partial_{t_1}\ln\det\Omega\theta(\det\Omega)
,\qquad (\mbox{binary DBT})
\label{nbdb}
\end{eqnarray}
Eqs.(\ref{ndb})-(\ref{nbdb}) provide us with a convenient way to
construct new solutions from old ones. In particular,
starting from the trivial solution ($u_1=u_0=0$)
we can obtain nontrivial multi-soliton solutions just by
putting the (adjoint) wave functions into the formulas
(\ref{ndb})-(\ref{nbdb}). For example, the wave functions
$\phi_i$ ($i=1,\cdots,n$) associated with the trivial
Lax operator $L=\partial_q^2$ satisfy
\begin{equation}
\partial_q^2\phi_i=p_i^2\phi_i,\qquad \partial_{t_n}\phi_i=(\partial_q^n\phi_i)
\qquad p_i\neq p_j
\end{equation}
which give the following solutions
\begin{equation}
\phi_i(x,t)=E_q(p_ix)\exp(\sum_{k=0}^{\infty}p_i^{2k+1}t_{2k+1})+
\gamma_i E_q(-p_ix)\exp(-\sum_{k=0}^{\infty}p_i^{2k+1}t_{2k+1})
\label{solution}
\end{equation}
where $\gamma_i$ are constants and $E_q(x)$ denotes the $q$-exponential function
which satisfies $\partial_qE_q(px)=pE_q(px)$ and has the following
representation:
\begin{equation}
E_q(x)=\exp[\sum_{k=1}^{\infty}\frac{(1-q)^k}{k(1-q^k)}x^k]
\end{equation}
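As a consistency check, the defining property $\partial_qE_q(px)=pE_q(px)$ can be verified directly from the exponential representation above. The sketch below is purely illustrative and assumes $|q|<1$, where the series converges; the values of $q$, $p$, $x_0$ and the truncation depth are invented.

```python
# Hedged sketch: the q-exponential built from the quoted series satisfies
# d_q E_q(p x) = p E_q(p x); requires |q| < 1 for convergence.
import math

q, p, K = 0.5, 1.3, 120                          # illustrative choices

def E_q(x):
    # E_q(x) = exp( sum_{k>=1} (1-q)^k x^k / (k (1 - q^k)) )
    return math.exp(sum((1 - q)**k * x**k / (k * (1 - q**k))
                        for k in range(1, K)))

d_q = lambda f: (lambda x: (f(q * x) - f(x)) / ((q - 1) * x))

x0 = 0.4
lhs = d_q(lambda x: E_q(p * x))(x0)
rhs = p * E_q(p * x0)
print(lhs, rhs)   # the two values agree to high accuracy
```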
Substituting (\ref{solution}) into (\ref{ndb})-(\ref{nbdb})
gives us $n$-soliton solutions.
Note that the soliton solutions constructed from
the trivial one ($u_1=u_0=0$) as described above
satisfy a simple relation: $u_1^{(n)}=x(q-1)u_0^{(n)}$.
This is just the case considered by Haine and Iliev in \cite{HI,Ib}.
In general, it can be shown \cite{AHM} that the solutions of the $q$-KdV
hierarchy can be expressed in terms of a single function $\tau_q$ called
tau-function such that
\begin{eqnarray}
u_1(x,t)&=&\partial_{t_1}\ln\frac{\theta^2(\tau_q)}{\tau_q}\nonumber\\
&=&x(q-1)\partial_q\partial_{t_1}\ln\tau_q(x,t)\theta(\tau_q(x,t))
\label{tau}
\end{eqnarray}
Eq. (\ref{tau}) together with Proposition 7 shows that
for a given solution (or $\tau_q$) of the $q$-KdV hierarchy,
the transformation properties of $\tau_q$ can be summarized
as follows
\begin{eqnarray}
\tau_q &\to& \tau_q^{(n)}=W_q[\phi_1,\cdots,\phi_n]\cdot\tau_q
\qquad (\mbox{DBT})\nonumber\\
\tau_q &\to& \tau_q^{(n)}=\theta^{-1}(\tilde{W}_q[\psi_1,\cdots,\psi_n])
\cdot\tau_q\qquad (\mbox{adjoint DBT})
\label{transform}\\
\tau_q &\to& \tau_q^{(n)}=\det\Omega\cdot\tau_q
\qquad (\mbox{binary DBT})\nonumber
\end{eqnarray}
which implies that the Wronskian-type (or binary-type) tau-function
can be viewed as the $n$-step transformed tau-function constructed
from the trivial solution ($\tau_q=1$).
\section{concluding remarks}
We have constructed the elementary DBTs for the $q$-KdV hierarchy, which
preserve the form of the Lax operator and are compatible with the Lax equations.
Iterated application of these elementary DBTs produces new soliton solutions
(tau-functions) of the $q$-KdV hierarchy out of given ones. Following a
similar treatment and using the tau-function representation
$u_{N-1}=\partial_{t_1}\ln\theta^N(\tau_q)/\tau_q$ \cite{AHM} for $N>2$,
we can reach the same result as (\ref{transform}) except now
$\partial\tau_q/\partial t_{iN}=0$.
In fact these DBTs can be applied to the $q$-deformed KP hierarchy
without difficulty by considering the Lax operator of the form
$L=\partial_q+\sum_{i=0}^{\infty}u_i\partial_q^{-i}$. The
$q$-KdV is just a reduction of the $q$-KP by imposing
the condition $(L^N)_+=L^N$.
Since the ordinary KP hierarchy admits other reductions which
are also integrable, it is quite natural to ask whether
$q$-analogues of these reductions exist and what
the DBTs associated with them are.
We hope we can answer this question in the near future.
{\bf Acknowledgments\/}
This work is supported by the National Science Council
of the Republic of China under Grant
Numbers NSC 88-2811-M-194-0003 (MHT) and
NSC 88-2112-M-009-008 (JCS).
\section{Introduction}
Galaxies and systems of galaxies are formed due to initial density
perturbations of different scale. Short perturbations of a scale of
several Megaparsecs give rise to the formation of galaxies, medium
scale perturbations lead to the formation of clusters of galaxies.
Perturbations of a characteristic scale of $\sim 100$\ $h^{-1}$~{\rm Mpc}\ are
related to superclusters of galaxies. Still larger perturbations have
a lower amplitude and modulate densities and masses of smaller systems
(Frisch {\it et al\/}\ 1995).
We use the term superclusters of galaxies for the largest relatively
isolated density enhancements in the Universe (Einasto {\it et al\/}\
1997b). Superclusters consist of filamentary chains of galaxies,
groups and clusters of galaxies, and voids between them.
Superclusters are not completely isolated in space. Galaxy and cluster
filaments connect neighbouring superclusters to a single network.
Superclusters and voids form a continuous web which extends over the
whole observable part of the Universe.
Let us accept the inflationary paradigm, and assume that the Universe
is dominated by cold dark matter (CDM) and that initial density
perturbations have a Gaussian distribution. Under these assumptions
large-scale high- and low-density regions should be randomly
distributed. It was a great surprise when Broadhurst {\it et al\/}\ (1990)
found that the distribution of high-density regions in a small area
around the northern and southern Galactic pole is fairly regular:
high- and low-density regions alternate with a rather constant step of
128~$h^{-1}$~{\rm Mpc}. The power spectrum of galaxies of the Broadhurst survey has
a sharp peak on the scale equal to the step of supercluster network along
the line of sight. Recent analysis of deep galaxy and cluster samples
has shown that the power spectrum of these objects also has a spike on
a similar scale. The presence of such a spike is difficult to explain.
In this review I shall give a summary of recent work on the distribution
of galaxies and clusters of galaxies on large scales. The distribution
is characterised quantitatively by the power spectrum of density
perturbations of these objects. Our goal is to explain the spike in
the observed power spectrum.
\section{The distribution of high-density regions}
Here we shall discuss the distribution of high-density regions in the
Universe. High-density regions can be traced by
clusters belonging to very rich superclusters.
The Abell-ACO cluster sample (Abell 1958, Abell, Corwin and Olowin
1989) is presently the largest and deepest sample of extragalactic
objects which covers the whole sky outside the Milky Way zone of
avoidance. Thus the study of the distribution of Abell-ACO clusters is
of special interest. We have used the compilation of all available
redshifts of Abell-ACO clusters by Andernach and Tago (1998, in
preparation), and the supercluster catalogue by Einasto {\it et al\/}\ (1997b)
based on this compilation.
The distribution of Abell-ACO clusters shows that very rich
superclusters form a fairly regular lattice (Einasto {\it et al\/}\ 1997b,
1998d). In Figures~1 and 2 we give the distribution of superclusters
consisting of 8 or more clusters of galaxies. A regular lattice is
very well seen in the southern Galactic hemisphere. In the northern
hemisphere the regularity can be best seen if we use a thin slice as
in Figure~1.
\begin{figure*}[t]
\vspace*{6cm}
\caption{ Distribution of clusters in high-density regions in
supergalactic coordinates. A 100~$h^{-1}$~{\rm Mpc}\ thick slice is shown to avoid
projection effects. Only clusters in superclusters with at least 8 members
are plotted. The supergalactic $Y=0$ plane coincides almost exactly with
the Galactic equatorial plane; Galactic zone of avoidance is marked by
dashed lines. }
\special{psfile=d98_fig1.ps voffset=60 hoffset=50 hscale=60 vscale=60}
\label{fig1}
\end{figure*}
Clusters of galaxies in less rich superclusters and isolated clusters
are basically located in walls between voids formed by very rich
superclusters, see Figure~2. Their distribution complements the
distribution of very rich superclusters, together they form a fairly
regular lattice. Cells of this lattice have a characteristic diameter
about 120~$h^{-1}$~{\rm Mpc}.
\begin{figure*}[t]
\vspace*{6cm}
\caption{ Distribution of Abell-ACO and APM clusters in high-density
regions in supergalactic coordinates. Abell-ACO clusters in superclusters
with at least 8 members are plotted; APM clusters are plotted if located in
superclusters of richness 4 and higher. The supergalactic $Y=0$ plane
coincides almost exactly with the Galactic equatorial plane and marks the
Galactic zone of avoidance. In the left panel some superclusters overlap
due to projection, in the right panel only clusters in the southern
Galactic hemisphere are plotted and the projection effect is small. }
\special{psfile=d98_fig2.eps voffset=80 hoffset=-20 hscale=60 vscale=60}
\label{fig2}
\end{figure*}
A regular distribution of objects in high-density regions has been
detected so far only for Abell-ACO clusters of galaxies. It has been
argued that this sample is distorted by selection effects and that the
regular location of clusters may be spurious. To check this possibility we
compare the distribution of Abell-ACO clusters with that of APM clusters.
Results for clusters in rich superclusters are shown in Figure~2. This
comparison shows that all rich superclusters are well seen in both cluster
samples. There is a systematic difference only in the distribution of more
isolated clusters since the APM sample contains some clusters also in
less-dense environment (in voids defined by Abell-ACO clusters) as seen
in Figure~2.
Figure~2 also demonstrates why the overall regularity of the distribution
of APM clusters is much less pronounced. The APM sample covers only a
small fraction of the volume of the Abell-ACO cluster sample: the APM
sample is located in the southern hemisphere and even here it covers a
smaller volume. Thus it is not surprising that the grand-design of the
distribution of high-density regions is not well seen in the APM sample.
This comparison of the distribution of Abell-ACO and APM clusters
shows that Abell-ACO clusters are good tracers of the large-scale
distribution of high-density regions. Presently the Abell-ACO sample
provides the best candidate for a fair sample of the Universe.
\section{The power spectrum of galaxies and clusters of galaxies}
Einasto {\it et al\/}\ (1998a) have analysed recent determinations of power
spectra of galaxies and clusters of galaxies; a summary is given in
Figure~3. Here the power spectra of different objects are shifted
vertically so that their amplitudes coincide near the wavenumber $k=2\pi /\lambda =
0.1$~$h$~{\rm Mpc$^{-1}$}. On medium scales the spectra have a negative spectral index;
near the scale $l\approx 120$ $h^{-1}$~{\rm Mpc}, i.e. wavenumber $k=2\pi/l \approx
0.05$~$h$~{\rm Mpc$^{-1}$}, the spectra have a turnover; on very large scales the
spectral index approaches the theoretically predicted value $n=1$.
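For orientation, the quoted scales and wavenumbers are related by $k=2\pi/\lambda$; a trivial numerical sketch (units $h^{-1}$~Mpc for scales and $h$~Mpc$^{-1}$ for wavenumbers, as in the text):

```python
# Hedged arithmetic sketch: converting the quoted scales to wavenumbers.
import math

def wavenumber(scale_mpc_h):
    # k = 2 pi / lambda, with lambda in h^-1 Mpc and k in h Mpc^-1
    return 2.0 * math.pi / scale_mpc_h

print(wavenumber(120.0))   # ~ 0.052, the turnover scale
print(wavenumber(128.0))   # ~ 0.049, the Broadhurst step
```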
\begin{figure*}[t]
\vspace*{5cm}
\caption{Power spectra of galaxies and clusters of galaxies scaled to
match the amplitude of the 2-D APM galaxy power spectrum (Einasto
{\it et al\/}\ 1998a). Spectra are shown as smooth curves. Bold lines show
spectra for clusters data: short dashed line for Abell-ACO clusters
according to Einasto {\it et al\/}\ (1997a), long-dashed line according to
Retzlaff {\it et al\/}\ (1998), dot-dashed line for APM clusters (Tadros {\it et al\/}\
1998); thin lines for galaxies: short dashed line for IRAS galaxies
(Peacock 1997), long-dashed line for SSRS-CfA2 galaxy survey (da Costa
{\it et al\/}\ 1994), dot-dashed line for LCRS (Landy {\it et al\/}\ 1996). The solid
thick line shows the mean power spectrum, the dashed thick line
indicates the power spectrum derived from the APM 2-D galaxy
distribution (Peacock 1997, Gazta\~naga and Baugh 1998). }
\special{psfile=d98_fig3.eps
voffset=130 hoffset=40 hscale=40 vscale=40}
\label{fig3}
\end{figure*}
A closer inspection shows that, after scaling, the samples have different
amplitudes near the turnover. Cluster samples as well as the APM and
SSRS-CfA2 galaxy samples have a high amplitude near the maximum.
Spectra of cluster samples have here a sharp spike. In contrast, IRAS
and LCRS galaxy samples have a much shallower transition of lower
amplitude; the power spectrum calculated from the 2-D distribution of
APM galaxies has also a shallow maximum. The reason for this
difference is not fully clear. It is possible that the spatial
distribution of different objects is different. On the other hand, it
is not excluded that differences may be partly caused by disturbing
effects in data reduction. For instance, the window function of the
LCRS is very broad in Fourier space which can smear out the spike in
the power spectrum.
We have formed two mean power spectra of galaxies. One mean spectrum
is based on the cluster samples and the APM and SSRS-CfA2 galaxy samples; it
has a relatively sharp maximum at $k=0.05$~$h$~{\rm Mpc$^{-1}$}\ and a power-law
behaviour on medium scales with index $n=-1.9$. Cluster samples cover
a large volume where rich superclusters are present; we call this
power spectrum characteristic for populations in high-density
regions, and designate the spectrum $P_{HD}(k)$. The second mean
power spectrum was derived from spectra of the LCRS sample, the IRAS
galaxy sample, and the APM 2-D sample. These catalogs sample regions
of the Universe characteristic for medium-rich superclusters, we
designate this spectrum $P_{MD}(k)$. Both mean power spectra are shown
in Figure~3.
\section{Reduction of mean power spectra to matter}
It is well known that the evolution of matter in low- and high-density
regions is different. Gravity attracts matter toward high-density
regions, thus particles flow away from low-density regions, and
density in high-density regions increases until contracting objects
collapse and form galaxies and clusters of galaxies. The collapse
occurs along caustics (Zeldovich 1970). Bond, Kofman and Pogosyan
(1996) demonstrated that the gravitational evolution leads to the
formation of a web of high-density filaments and under-dense regions
outside of the web. The contraction occurs if the over-density
exceeds a factor of 1.68 in a sphere of radius $r$ which determines
the mass of the system or the galaxy (Press \& Schechter 1974). In a
low-density environment the matter remains primordial. This
difference between the distribution of the matter and galaxies is
called biasing. The gravitational character of the density evolution
of the Universe leads us to the conclusion that galaxy formation is a
threshold phenomenon: galaxies form in high-density environment, and
in low-density environment the matter remains in primordial dark form.
The power spectrum of clusters of galaxies has an amplitude larger
than that for galaxies and matter. We can define the bias
parameter $b_c$ through the power spectra of mass, $P_m(k)$, and of
the clustered matter, $P_c(k)$,
$$
P_c(k) = b_c^2(k) P_m(k),
\eqno(1)
$$
where $k$ is the wavenumber in units of $h$~Mpc$^{-1}$, and the Hubble
constant is expressed as usual, $100~h$ km~s$^{-1}$~Mpc$^{-1}$. In
general the biasing parameter is a function of wavenumber $k$.
Gramann \& Einasto (1992), Einasto {\it et al\/}\ (1994) and Einasto {\it et al\/}\
(1998b) investigated the power spectra of matter and simulated
galaxies and clusters. They showed that the exclusion of non-clustered
matter in voids from the population shifts the power spectrum of the
clustered matter in high-density regions to higher amplitude. The bias
factor calculated from equation (1) is surprisingly constant, and can
be found from the fraction of matter in the high-density clustered
population, $F_c= N/N_{tot}$,
$$
b_c = 1/F_c.
\eqno(2)
$$
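A toy numerical illustration of equations (1) and (2), not taken from the cited simulations (the random field, the seed, and the value of $F_c$ are invented): if a fraction $F_c$ of the matter carries all the fluctuations while the rest stays uniform, the rms fluctuation amplitude of the clustered component exceeds that of the total matter by exactly $1/F_c$, independently of scale.

```python
# Hedged toy model of eqns (1)-(2): all fluctuations reside in the clustered
# fraction F_c, the remaining 1 - F_c stays uniform, so delta_m = F_c * delta_c.
import random

random.seed(1)
F_c = 0.75                                        # illustrative clustered fraction
delta_c = [random.gauss(0.0, 1.0) for _ in range(10000)]
delta_m = [F_c * d for d in delta_c]              # total-matter overdensity

rms = lambda v: (sum(x * x for x in v) / len(v)) ** 0.5
b_c = rms(delta_c) / rms(delta_m)                 # amplitude bias; power bias is b_c^2
print(b_c)   # = 1/F_c ~ 1.333
```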
\begin{figure}[ht]
\vspace*{5cm}
\caption{ Left: Power spectra of simulated galaxies. The solid bold
line shows the spectrum derived for all test particles (the matter
power spectrum); dashed and dotted bold lines give the power spectrum
of all clustered particles (sample Gal-1), and clustered galaxies in
high-density regions (sample Clust). Thin solid and dashed lines show
the power spectra of samples of particles with various threshold
densities and sampling rules, see Table~1 and text for details. Right:
the biasing parameter as a function of wavenumber, calculated from
definition eqn. (1). Samples and designations as in left panel.
}
\special{psfile=d98_fig4a.eps voffset=100 hoffset=-20 hscale=34
vscale=34}
\special{psfile=d98_fig4b.eps voffset=100 hoffset=145
hscale=34 vscale=34}
\label{figure4}
\end{figure}
In Figure~4 we show the power spectra and related biasing factors for
several simulated galaxy populations, calculated using various
threshold densities to separate the unclustered matter in voids and
the clustered matter associated with galaxies in high-density regions
(Einasto {\it et al\/}\ 1998b). In calculating densities a small smoothing
length $\approx 1$~$h^{-1}$~{\rm Mpc}\ was used; in this case the clustered matter
associated with galaxies and their dark haloes is not mixed with
non-clustered matter in voids. This analysis shows that the biasing
factor is almost independent of scale and can be calculated from
equation (1). The flow of matter from low-density regions was studied
by Einasto {\it et al\/}\ (1994) and Einasto {\it et al\/}\ (1998b). They showed that
this flow depends slightly on the density parameter of the structure
evolution model. In the standard CDM model the speed of void
evacuation is slightly faster than in open CDM models and spatially
flat CDM models with cosmological constant. The present epoch of the
model was found using the calibration through the $\sigma_8$ parameter
-- rms density fluctuations within a sphere of radius 8~$h^{-1}$~{\rm Mpc}. This
parameter was determined for the present epoch for galaxies,
$(\sigma_8)_{gal} = 0.89 \pm 0.05$. Using this estimate and CDM models
with a lambda term and density parameter $\Omega_0 \approx 0.4$, Einasto
{\it et al\/}\ (1998b) obtained $(\sigma_8)_m = 0.68 \pm 0.06$, $F_{gal} = 0.75 \pm
0.05$ and $b_{gal} = 1.3 \pm 0.1$.
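The internal consistency of these numbers can be checked directly: the bias estimated as the ratio of the $\sigma_8$ amplitudes agrees, within the quoted errors, with $1/F_{gal}$ from equation (2). A quick numerical check with the values quoted above:

```python
sigma8_gal = 0.89   # rms galaxy fluctuations at 8 Mpc/h (from the text)
sigma8_m = 0.68     # inferred matter value (Einasto et al. 1998b)
F_gal = 0.75        # clustered mass fraction

b_from_sigma8 = sigma8_gal / sigma8_m   # bias as a ratio of rms amplitudes
b_from_fraction = 1.0 / F_gal           # bias from equation (2)

# both estimates are consistent with the quoted b_gal = 1.3 +/- 0.1
print(round(b_from_sigma8, 2), round(b_from_fraction, 2))
```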
\section{Primordial power spectrum of matter}
The semi-empirical power spectrum of matter, found from galaxy and
cluster surveys and reduced to the matter, can be used to compare with
theoretical models. Such analysis has been done by Einasto {\it et al\/}\
(1998c). CDM models with and without cosmological constant, mixed dark
matter models, and open CDM models were used for comparison. Models
were normalised to four year COBE observations; a Hubble parameter $h
= 0.6$ was used; the density in baryons was taken $\Omega_{b} = 0.04$
(in units of the critical density); the density parameter
$\Omega_0=\Omega_b + \Omega_{DM}$ was varied, the model was kept flat
using cosmological constant $\Omega_{\Lambda}$. In mixed DM models the
density of neutrinos was $\Omega_{\nu}=0.1$; in open models the
density parameter was varied between $\Omega_{0} = 0.2$ and
$\Omega_{0} = 1$.
The observed power spectrum is influenced on small scales by the
non-linear evolution of the structure; on these scales the spectrum
was reduced to the linear case using analytical models. The best agreement
was achieved by an MDM model with density parameter $\Omega_0 = 0.4$.
In Figure~5 we show the semi-empirical linear power spectra of matter,
which represent galaxy populations in high- and medium-density
regions. These empirical spectra are compared with analytical power
spectra for MDM models calculated for various density parameter values
from 0.25 to 1.0.
\begin{figure}[ht]
\vspace*{5cm}
\caption{The semi-empirical matter power spectra compared with
theoretical and primordial power spectra. Left: present power spectra;
right: primordial spectra; for MDM models. Solid bold line shows the
linear matter power spectrum found for regions including rich
superclusters, $P_{HD}(k)$; dashed bold line shows the linear power
spectrum of matter $P_{MD}(k)$ for medium dense regions in the
Universe. Model spectra with $\Omega_{0}= 0.9, ~0.8 \dots ~0.25$ are
plotted with thin lines; for clarity the models with $\Omega_0 = 1.0$
and $\Omega_0 = 0.5$ are drawn with dashed lines. Primordial power
spectra are shown for the peaked matter power spectrum, $P_{HD}(k)$;
they are divided by the scale-free spectrum, $P(k) \sim k$. The mean
error of semi-empirical power spectra is about 11~\%.
}
\special{psfile=d98_fig5a.eps voffset=120 hoffset=-20 hscale=34
vscale=34}
\special{psfile=d98_fig5b.eps voffset=120 hoffset=145
hscale=34 vscale=34}
\label{figure5}
\end{figure}
The comparison of observed and theoretical power spectra gives us also
the possibility to calculate the primordial or initial power spectra
$$
P_{init}(k) = P(k)/T^{2}(k),
\eqno(3)
$$
where $T(k)$ is the transfer function. In right panels of Figure~5 we
plot the ratio of the primordial power spectrum to the scale-free
primordial power spectrum, $P_{init}(k)/ P_0(k)$, where $P_0(k) \sim
k$. Only results for the peaked power spectrum $P_{HD}(k)$ are
shown. The primordial power spectrum calculated for the shallower
power spectrum $P_{MD}(k)$ is flatter; on large and small scales
it is similar to the spectrum shown in Figure~5.
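Equation (3) can be sketched numerically. The block below uses the BBKS fitting formula as an assumed stand-in for the transfer function $T(k)$ of a CDM-type model (the shape parameter and the toy observed spectrum are illustrative, not from the text):

```python
import math

def transfer_bbks(k, gamma=0.25):
    """BBKS CDM transfer-function fit, an assumed stand-in for T(k) in eq. (3);
    k in h/Mpc, gamma the shape parameter (illustrative value)."""
    q = k / gamma
    if q == 0.0:
        return 1.0
    poly = 1.0 + 3.89 * q + (16.1 * q) ** 2 + (5.46 * q) ** 3 + (6.71 * q) ** 4
    return math.log(1.0 + 2.34 * q) / (2.34 * q) * poly ** -0.25

def primordial_ratio(k, P_obs):
    """P_init(k)/P_0(k): eq. (3) divided by the scale-free spectrum P_0(k) ~ k."""
    return (P_obs / transfer_bbks(k) ** 2) / k

# toy observed spectrum: a strictly scale-free P_init processed by T(k)
k = 0.05
P_obs = k * transfer_bbks(k) ** 2
print(primordial_ratio(k, P_obs))   # recovers 1: no spike, no break
```

A spike or break in the real $P_{init}(k)/P_0(k)$, as found in the text, is precisely a departure of this ratio from a constant.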
The main feature of primordial power spectra is the presence of a
spike at the same wavenumber as that of the maximum of the observed
power spectrum. On scales shorter than that of the spike the
primordial spectrum can be well approximated by a straight line (on
log--log plot), i.e. by a tilted model with index $n<1$, if $\Omega_0
\geq 0.5$; and $n>1$, if $\Omega_0 <0.5$. This approximation breaks
down, however, if we consider the whole scale interval. Additionally,
there is a considerable difference in the amplitude of the primordial
spectrum on small and large scales. For most values of the density
parameter the amplitude on small scales is lower than on large scales;
for very low values of $\Omega_0$ the effect has opposite sign.
In conclusion we can say that it seems impossible to avoid a break in
the primordial power spectrum. If the peaked power spectrum
$P_{HD}(k)$, based on cluster and deep galaxy data, represents the
spectrum of a fair sample of the Universe, then the break of the
primordial power spectrum is sudden, otherwise it is smooth. The
relative amplitude of the break, and the respective change in the
power index of the spectrum depends on models and cosmological
parameters used. In the framework of the inflationary paradigm the
primordial power spectrum is generated during inflation. A broken-scale
primordial power spectrum suggests that inflation may be more
complicated than previously thought. One possibility for a broken-scale
primordial power spectrum has been suggested by Lesgourgues {\it et al\/}\
(1997). Future deep surveys of galaxies will fix the initial power
spectrum of matter more precisely. Presently we can say that a
broken-scale primordial power spectrum deserves serious attention.
\section*{Acknowledgments}
I thank H. Andernach, R. Cen, M. Einasto, S. Gottl\"ober, A.Knebe,
V. M\"uller, A. Starobinsky, E. Tago and D. Tucker for fruitful
collaboration and permission to use our joint results in this talk.
This study was supported by the Estonian Science Foundation.
\section{Introduction}
The effective potentials in quantum field theories are
off-shell quantities. Therefore, in gauge field theories,
effective potentials are gauge-dependent as
pointed out
by Jackiw in the early seventies\cite{jackiw}. This property
raised concerns about the physical
significance of effective potentials.
Dolan and Jackiw \cite{DJ}, the effective potential of scalar
QED was calculated in a set of $R_{\xi}$ gauges. It was concluded that only
the limiting unitary gauge gives a sensible result on
the symmetry-breaking behaviour of the theory.
This difficulty was partially resolved
by the work of Nielsen\cite{niel}.
In his paper, Nielsen derived
the following identity governing the behaviour of effective potential
in a gauge
field theory:
\begin{equation}
(\xi{\partial \over \partial \xi}+C(\phi,\xi)
{\partial\over \partial \phi})V(\phi, \xi)=0,
\end{equation}
where $\xi$ is the gauge-fixing parameter, $\phi$ is the
order-parameter of the effective potential, and $C(\phi, \xi)$ is the
Green's function for certain composite operators containing a ghost field.
The above identity implies that, for different $\xi$,
the local extrema of $V$ are located along
the same characteristic
curve on $(\phi,\xi)$ plane, which satisfies
$d\xi={d\phi\over C(\phi,\xi)/\xi}$.
Hence covariant gauges with different $\xi$ are equally
good for computing $V$. On the other hand, a choice of the multi-parameter
gauge $L_{gf}=-{1\over 2 \xi}
(\partial_{\mu}A^{\mu}+\sigma \phi_1 +\rho \phi_2)^2$\cite{DJ},
with $\phi_{1,2}$ the components of the scalar field, would break the
homogeneity of Eq. (1)\cite{niel}.
Therefore an effective potential calculated in such a
gauge does not have a physical significance.
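The content of the Nielsen identity can be illustrated with a toy example: for an assumed $C(\phi,\xi)=c_0\,\xi\,\phi$, any potential of the form $V=f(\phi\,e^{-c_0\xi})$ satisfies Eq. (1) exactly, so its extrema move along the characteristic curve while the value of $V$ at an extremum is $\xi$-independent. A finite-difference check of this toy case (all parameter values arbitrary):

```python
import math

c0 = 0.3  # arbitrary coefficient of the toy C(phi, xi) = c0 * xi * phi

def V(phi, xi):
    """Toy potential obeying the Nielsen identity by construction:
    it depends on (phi, xi) only through the invariant u = phi * exp(-c0*xi)."""
    u = phi * math.exp(-c0 * xi)
    return (u * u - 1.0) ** 2

def nielsen_residual(phi, xi, h=1e-6):
    """xi dV/dxi + C(phi, xi) dV/dphi, evaluated by central differences."""
    dV_dxi = (V(phi, xi + h) - V(phi, xi - h)) / (2.0 * h)
    dV_dphi = (V(phi + h, xi) - V(phi - h, xi)) / (2.0 * h)
    return xi * dV_dxi + (c0 * xi * phi) * dV_dphi

print(abs(nielsen_residual(1.7, 0.9)))   # ~0 up to finite-difference error
```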
Recently, it was pointed out\cite{LW} that the Higgs boson mass bound,
which one derives
from the effective potential, is gauge-dependent.
The gauge dependence resides in the expression for the
one-loop effective potential.
Boyanovsky, Loinaz and Willey proposed a resolution\cite{BLW} to the
problem, which is based upon
the {\it Physical
Effective Potential} constructed as the expectation value of
the Hamiltonian in physical states\cite{BBHL}.
They computed the {\it Physical Effective Potential} of
an Abelian Higgs model with an axial coupling of the gauge fields to
the fermions. A gauge-independent lower bound for the
Higgs boson mass is then determined from the effective potential.
We note that their approach requires the identification
of first-class constraints of the model and a projection to
the physical states. Such a procedure is not manifestly
Lorentz covariant. Consequently we expect that it is
highly non-trivial to apply their approach to the Standard Model (SM).
In our work, we shall introduce the Vilkovisky-DeWitt formalism
\cite{vil,dw2} for constructing a gauge-independent effective potential,
and therefore obtain a gauge-independent lower bound for the Higgs boson
mass.
In the Vilkovisky-DeWitt formalism, fields are treated as vectors
in the configuration space,
and the {\it affine connection} of the configuration space is identified to
facilitate the construction of an invariant effective action. Since
this procedure
is completely Lorentz covariant, the computations for the effective potential
and the effective action are straightforward.
We shall perform a calculation with respect to a toy model\cite{LSY}
which disregards all charged boson fields in the SM.
It is easy to generalize our calculations to the full SM case.
In fact,
the applicability of Vilkovisky-DeWitt formalism
to non-Abelian gauge theories has been
extensively demonstrated in the literature\cite{rebhan}.
The outline of this paper is as follows. In Sec. II, we briefly
review the Vilkovisky-DeWitt formalism using the scalar QED as
an example. We shall illustrate that the effective action of Vilkovisky
and DeWitt is equivalent to the ordinary effective action
constructed in the Landau-DeWitt gauge\cite{FT}. In Sec. III, we calculate
the effective potential of the simplified standard model, and the
relevant renormalization constants of the theory using the Landau-DeWitt
gauge.
The effective potential
is then improved by the renormalization group analysis.
In Sec. IV, the Higgs boson mass bound
is derived and compared to that given by the ordinary
effective potential in the Landau gauge. We conclude in Sec. V, with
some technical details
discussed in the Appendix.
\section{Vilkovisky-DeWitt Effective Action of Scalar QED}
The formulation of Vilkovisky and DeWitt is motivated
by the parametrization dependence of the ordinary effective action,
which can be written generically as\cite{kun}
\begin{eqnarray}
\exp{i\over \hbar}\Gamma[\Phi]&=&\exp{i\over \hbar}(W[j]+\Phi^i{\delta \Gamma
\over \delta \Phi^i})\nonumber \\
&=& \int [D\phi]\exp{i\over \hbar}(S[\phi]-(\phi^i-\Phi^i)\cdot
{\delta \Gamma
\over \delta \Phi^i}),
\label{INTEG}
\end{eqnarray}
where $S[\phi]$ is the classical action, and $\Phi^i$ denote the
background fields.
The dependence on the parametrization
arises because the quantum fluctuation $\eta^i\equiv (\phi^i-\Phi^i)$ is not a
vector
in the field configuration space, hence the product
$\eta^i \cdot {\delta \Gamma
\over \delta \Phi^i}$ is not a scalar under a reparametrization of fields.
The remedy to this problem is to replace $\eta^i$
with a two-point function $\sigma^i(\Phi,\phi)$ \cite{vil,dw2,com1}
which, at the
point $\Phi$,
is tangent to the geodesic connecting $\Phi$ and $\phi$.
The precise form of $\sigma^i(\Phi,\phi)$ depends on the
connection of the configuration space, $\Gamma^i_{jk}$. It is easy to
show that\cite{kun}
\begin{equation}
\sigma^i(\Phi, \phi)=\eta^i-{1\over 2}\Gamma^i_{jk}\eta^j \eta^k
+ O(\eta^3).
\end{equation}
For scalar QED described
by the Lagrangian:
\begin{eqnarray}
L&=&-{1\over 4}F_{\mu\nu}F^{\mu\nu}+(D_{\mu}\phi)^{\dagger}
(D^{\mu}\phi)\nonumber \\
&-& \lambda (\phi^{\dagger}\phi-\mu^2)^2,
\label{SQED}
\end{eqnarray}
with $D_{\mu}=\partial_{\mu}+ie A_{\mu}$ and $\phi={\phi_1+i\phi_2\over
\sqrt{2}}$, the connection of the configuration space is given by\cite{vil,kun}
\begin{equation}
\Gamma^i_{jk}= {i\brace j k}+T^i_{jk},
\label{gijk}
\end{equation}
where ${i\brace j k}$ is the Christoffel symbol of the field
configuration space and $T^i_{jk}$ is a quantity induced by generators of
the gauge transformation. The Christoffel symbol ${i\brace j k}$ can be
computed
from the following metric tensor of scalar QED:
\begin{eqnarray}
G_{\phi_a(x)\phi_b(y)}&=&\delta_{ab}\delta^4(x-y),\nonumber \\
G_{A_{\mu}(x)A_{\nu}(y)}&=&-g^{\mu\nu}\delta^4(x-y),\nonumber \\
G_{A_{\mu}(x)\phi_a(y)}&=&0.
\label{metric}
\end{eqnarray}
According to
Vilkovisky's prescription\cite{vil}, the metric tensor of the
field configuration space
is obtained
by differentiating twice with respect to the fields in the
kinetic Lagrangian.
For the above metric tensor, we have ${i\brace jk}=0$ since each component
of the tensor is
field-independent.
However, the
Christoffel symbol would be non-vanishing
if one parametrizes Eq. (\ref{SQED})
with
polar variables $\rho$ and $\chi$ such that
$\phi_1=\rho \cos\chi$ and $\phi_2=\rho \sin\chi$.
Finally, to determine $T^i_{jk}$, let us specify generators $g^i_{\alpha}$ of
the scalar-QED gauge transformations:
\begin{eqnarray}
g^{\phi_a(x)}_y&=&-\epsilon^{ab}e\phi_b(x)\delta^4(x-y),\nonumber \\
g^{A_{\mu}(x)}_y&=&-\partial_{\mu}\delta^4(x-y),
\label{gener}
\end{eqnarray}
where $\epsilon^{ab}$ is a skew-symmetric tensor with $\epsilon^{12}=1$.
The quantity $T^i_{jk}$ is related to the generators $g^i_{\alpha}$
via\cite{vil}
\begin{equation}
T^i_{jk}=-B^{\alpha}_jD_kg^i_{\alpha}+
{1\over 2}g^l_{\alpha}D_lg^i_{\beta}
B^{\alpha}_jB^{\beta}_k+ j\leftrightarrow k,
\label{tijk}
\end{equation}
where $B^{\alpha}_k=N^{\alpha\beta}g_{k\beta}$ with
$N^{\alpha\beta}$ being the inverse of $N_{\alpha\beta}\equiv
g^k_{\alpha}g^l_{\beta}G_{kl}$. The expression for $T^i_{jk}$
can be easily understood
by realizing that $i, j,\cdots, l$ are function-space indices, while
$\alpha$ and $\beta$ are space-time indices. Hence, for example,
\begin{equation}
D_{\phi_1(z)}g^{A_{\mu}(x)}_y={\partial g^{A_{\mu}(x)}_y\over \partial
\phi_1(z)}+ {A_{\mu}(x)\brace j \; \phi_1(z)}g^j_y,
\end{equation}
where the summation over $j$ also implies an integration over the space-time
variable in the function $j$.
The one-loop effective action of scalar QED can be calculated from Eq.
(\ref{INTEG}) with each quantum fluctuation $\eta^i$ replaced by
$\sigma^i(\Phi, \phi)$. The result is written as\cite{kun}:
\begin{equation}
\Gamma[\Phi]=S[\Phi]-{i\hbar\over 2}\ln\det G+
{i\hbar\over 2}\ln\det \tilde{D}^{-1}
_{ij},
\label{ACTION}
\end{equation}
where $S[\Phi]$ is the classical action with $\Phi$ denoting
generically the background fields; $\ln\det G$ arises
from the function-space measure $[D\phi]\equiv \prod_x d\phi(x)
\sqrt{\det G}$; and $\tilde{D}^{-1}_{ij}$ is the modified inverse-propagator:
\begin{equation}
\tilde{D}^{-1}_{ij}={\delta^2 S\over \delta\Phi^i \delta\Phi^j}
-\Gamma^k_{ij}[\Phi]{\delta S\over \delta \Phi^k}.
\label{INVE}
\end{equation}
To study the symmetry-breaking behaviour of the theory, we focus on
the effective potential which is obtained
from $\Gamma[\Phi]$ by setting each
background field $\Phi^i$ to a constant.
The Vilkovisky-DeWitt effective potential of scalar QED has been calculated
in various gauges and different scalar-field parametrizations
\cite{FT,kun,rt}.
The results all agree with one another. In this work, we
calculate the effective potential and other relevant
quantities in the Landau-DeWitt gauge\cite{com2}, which is characterized by the
gauge-fixing term:
$L_{gf}=-{1\over 2\xi}(\partial_{\mu}B^{\mu}-ie\eta^{\dagger}
\Phi+ie\Phi^{\dagger}\eta)^2$,
with $\xi\to 0$. In $L_{gf}$,
$B^{\mu}\equiv A^{\mu}-A^{\mu}_{cl}$, and $\eta \equiv \phi-\Phi$
are quantum fluctuations while $A^{\mu}_{cl}$ and $\Phi$ are background
fields. For the scalar fields, we further write
$\Phi={\rho_{cl}+i\chi_{cl}\over
\sqrt 2}$ and $\eta={\rho+i\chi\over
\sqrt 2}$.
The advantage of performing calculations in the Landau-DeWitt gauge
is that $T^i_{jk}$ vanishes\cite{FT} in this case.
In other words, the Vilkovisky-DeWitt formalism coincides with the
conventional formalism in the Landau-DeWitt gauge.
For computing the effective potential, we choose
$A^{\mu}_{cl}=\chi_{cl}=0$, i.e., $\Phi={\rho_{cl}\over \sqrt{2}}$.
In this set of background fields,
$L_{gf}$ becomes
\begin{equation}
L_{gf}=-{1\over 2\xi}\left(\partial_{\mu}B^{\mu}\partial_{\nu}B^{\nu}
-2e\rho_{cl}\chi\partial_{\mu}B^{\mu}+e^2\rho_{cl}^2\chi^2\right).
\label{GAUGE}
\end{equation}
We note that the $B_{\mu}$--$\chi$ mixing in $L_{gf}$ is
$\xi$-dependent, and therefore does not cancel
the corresponding mixing term in the classical Lagrangian of
Eq. (\ref{SQED}). This induces mixed propagators such as
$<0\vert T(A_{\mu}(x)\chi(y)) \vert 0>$
or $<0\vert T(\chi(x)A_{\mu}(y)) \vert 0>$. The Faddeev-Popov ghost
Lagrangian in this gauge reads:
\begin{equation}
L_{FP}=\omega^*(-\partial^2-e^2\rho_{cl}^2)\omega.
\label{FADPOP}
\end{equation}
With each part of the Lagrangian determined, we are ready to
compute the effective potential. Since we choose a field-independent
flat-metric,
the one-loop effective potential is completely determined by
the modified inverse-propagators $\tilde{D}^{-1}_{ij}$\cite{grassmann}. From
Eqs. (\ref{SQED}), (\ref{INVE}), (\ref{GAUGE})
and (\ref{FADPOP}), we arrive at
\begin{eqnarray}
\tilde{D}^{-1}_{B_{\mu}B_{\nu}}&=&(-k^2+e^2\rho_0^2)g^{\mu\nu}
+(1-{1\over \xi})k^{\mu}k^{\nu},\nonumber \\
\tilde{D}^{-1}_{B_{\mu}\chi}&=&ik^{\mu}e\rho_0(1-{1\over \xi}),
\nonumber \\
\tilde{D}^{-1}_{\chi\chi}&=&(k^2-m_G^2-{1\over \xi}e^2\rho_0^2),
\nonumber \\
\tilde{D}^{-1}_{\rho\rho}&=&(k^2-m_H^2),\nonumber \\
\tilde{D}_{\omega^*\omega}&=&(k^2-e^2\rho_0^2)^{-2},
\label{PROP}
\end{eqnarray}
where we have set $\rho_{cl}=\rho_0$, which is a space-time independent
constant, and defined
$m_G^2= \lambda (\rho_0^2-2\mu^2)$,
$m_H^2=\lambda (3\rho_0^2-2\mu^2)$.
Using the definition $\Gamma[\rho_0]=-(2\pi)^4\delta^4(0)V_{eff}(\rho_0)$
along with Eqs. (\ref{ACTION}) and
(\ref{PROP}), and taking the limit $\xi\to 0$, we obtain
$V_{eff}(\rho_0)=V_{tree}(\rho_0)+V_{1-loop}(\rho_0)$ with
\begin{equation}
V_{1-loop}(\rho_0)={-i\hbar\over 2}\int {d^nk\over (2\pi)^n}
\ln\left[(k^2-e^2\rho_0^2)^{n-3}(k^2-m_H^2)(k^2-m_+^2)(k^2-m_-^2)\right],
\label{EFFECTIVE}
\end{equation}
where $m_+^2$ and $m_-^2$ are solutions of the quadratic equation
$(k^2)^2-(2e^2\rho_0^2+m_G^2)k^2+e^4\rho_0^4=0$. One notices that the
gauge-boson's degree of freedom in $V_{1-loop}$
has been continued to $n-3$ in order to
preserve the relevant Ward identities. For example, this continuation is
crucial to ensure the Ward identity which relates
the scalar self-energy to the contribution of the tadpole diagram.
Our expression for $V_{1-loop}(\rho_0)$
agrees with previous results obtained in the unitary gauge\cite{rt}.
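The roots $m_\pm^2$ can be checked numerically through Vieta's relations for the quadratic; the parameter values below are arbitrary illustrations, not part of the derivation:

```python
import math

def mixed_mode_masses(e, rho0, mG2):
    """Roots m_+^2, m_-^2 of (k^2)^2 - (2 e^2 rho0^2 + mG^2) k^2 + e^4 rho0^4 = 0."""
    s = 2.0 * e**2 * rho0**2 + mG2          # sum of the roots
    p = e**4 * rho0**4                      # product of the roots
    r = math.sqrt(s * s - 4.0 * p)
    return (s + r) / 2.0, (s - r) / 2.0

e, rho0, mG2 = 0.5, 2.0, 0.3                # arbitrary illustrative values
mp2, mm2 = mixed_mode_masses(e, rho0, mG2)
print(mp2 + mm2, 2.0 * e**2 * rho0**2 + mG2)   # Vieta: sum of roots
print(mp2 * mm2, e**4 * rho0**4)               # Vieta: product of roots
```

Note that for $m_G^2=0$ the discriminant vanishes and both roots collapse to the gauge-boson mass squared $e^2\rho_0^2$.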
One could also calculate the effective potential
in the {\it ghost-free} Lorentz
gauge with $L_{gf}=-{1\over 2\xi}(\partial_{\mu}B^{\mu})^2$.
The cancellation of the gauge-parameter($\xi$) dependence in the effective
potential has been demonstrated in
the case of massless
scalar QED where $\mu^2=0$\cite{FT,kun}. It can be easily extended to the
massive case, and the resultant effective potential coincides
with Eq. (\ref{EFFECTIVE}).
In the Appendix, we will also
demonstrate the cancellation of gauge-parameter dependence in the
calculation of Higgs-boson self-energy. The
obtained self-energy will be shown to coincide with its counterpart obtained
from the Landau-DeWitt gauge. We do this not only
to show that the Vilkovisky-DeWitt formulation coincides with the ordinary
formulation in the Landau-DeWitt
gauge, but also to illustrate how it gives rise to identical effective action
in spite of beginning with different gauges.
It is instructive to rewrite Eq. (\ref{EFFECTIVE}) as
\begin{equation}
V_{1-loop}[\rho_0]={\hbar \over 2}\int {d^{n-1}\vec{k}\over (2\pi)^{n-1}}
\left((n-3)\omega_B(\vec{k})+\omega_H(\vec{k})+\omega_+(\vec{k})
+\omega_-(\vec{k})\right),
\end{equation}
where $\omega_B(\vec{k})=\sqrt{\vec{k}^2+e^2\rho_0^2}$,
$\omega_H(\vec{k})=\sqrt{\vec{k}^2+m_H^2}$ and $\omega_{\pm}(\vec{k})
=\sqrt{\vec{k}^2+m_{\pm}^2}$. One can see that $V_{1-loop}$ is
a sum of the zero-point energies of four excitations with masses
$m_B\equiv e\rho_0$, $m_H$, $m_+$ and $m_-$. Since there
are precisely four physical degrees of freedom in the scalar QED,
we see that the Vilkovisky-DeWitt effective potential does exhibit a
correct number of physical degrees of freedom. Such a nice property is
not shared by the ordinary effective potential calculated in the
{\it ghost free} Lorentz gauge just mentioned. As will be shown later,
the ordinary effective potential in this gauge
contains unphysical degrees of freedom.
\section{Vilkovisky-DeWitt Effective Potential of the Simplified Standard
Model}
In this section, we compute the effective potential of the simplified
standard model where charged boson fields and all fermion fields except
the top-quark field are disregarded. The gauge interactions for the top quark
and the neutral scalar bosons are
prescribed by the following covariant derivatives\cite{LSY}:
\begin{eqnarray}
D_{\mu}t_L&=&(\partial_{\mu}+ig_LZ_{\mu}-{2\over 3}ieA_{\mu})t_L,
\nonumber \\
D_{\mu}t_R&=&(\partial_{\mu}+ig_RZ_{\mu}-{2\over 3}ieA_{\mu})t_R,
\nonumber \\
D_{\mu}\phi&=&(\partial_{\mu}+i(g_L-g_R)Z_{\mu})\phi,
\end{eqnarray}
where $Z_{\mu}$ and $A_{\mu}$ denote the $Z$ boson and the photon
respectively; the coupling constants $g_L$ and $g_R$ are given by
$g_L=(-g_1/2+g_2/3)$ and $g_R=g_2/3$ with $g_1=g/\cos\theta_W$ and
$g_2=2e\tan\theta_W$ respectively.
The self-interactions of scalar fields are described by the same potential
term as that in Eq. (\ref{SQED}).
Clearly this toy model exhibits a $U(1)_A\times U(1)_Z$
symmetry where each $U(1)$ symmetry is
associated with a neutral gauge boson. The $U(1)_Z$-charges of $t_L$, $t_R$
and $\phi$ are related in such a way that the following
Yukawa interactions are
invariant under $U(1)_A\times U(1)_Z$:
\begin{equation}
L_Y=-y\bar{t}_L\phi t_R-y\bar{t}_R\phi^* t_L.
\end{equation}
Since the Vilkovisky-DeWitt effective action coincides with the
ordinary effective
action in the Landau-DeWitt gauge, we
calculate the effective potential in this gauge, which is defined by
the following gauge-fixing terms
\cite{fermion}:
\begin{eqnarray}
L_{gf}=&-&{1\over 2\alpha}(\partial_{\mu}{\tilde Z}^{\mu}+
{ig_1\over 2}\eta^{\dagger}
\Phi-{ig_1\over 2}\Phi^{\dagger}\eta)^2\nonumber \\
&-&{1\over 2\beta}(\partial_{\mu}{\tilde A}^{\mu})^2,
\label{FIXG}
\end{eqnarray}
with $\alpha, \ \beta\to 0$.
We note that ${\tilde A}^{\mu}$,
${\tilde Z}^{\mu}$ and $\eta$ are quantum fluctuations associated
with the photon, the $Z$ boson and the scalar boson respectively, i.e.,
$A^{\mu}=A_{cl}^{\mu}+{\tilde A}^{\mu}$,
$Z^{\mu}=Z_{cl}^{\mu}+{\tilde Z}^{\mu}$, and $\phi=\Phi+\eta$ with
$A_{cl}^{\mu}$, $Z_{cl}^{\mu}$ and $\Phi$ being the background fields.
For computing the effective potential, we take $\Phi$ as
a space-time-independent constant denoted as $\rho_0$, and set
$A_{cl}^{\mu}=Z_{cl}^{\mu}=0$.
Following the method
outlined in the previous section, we obtain the one-loop effective
potential
\begin{equation}
V_{VD}(\rho_0)={\hbar \over 2}\int {d^{n-1}\vec{k}\over (2\pi)^{n-1}}
\left((n-3)\omega_Z(\vec{k})+\omega_H(\vec{k})+\omega_+(\vec{k})
+\omega_-(\vec{k})-4\omega_F(\vec{k})\right),
\label{POTENTIAL2}
\end{equation}
where $\omega_i(\vec{k})=\sqrt{\vec{k}^2+m_i^2}$ with
$m_Z^2={g_1^2\over 4}\rho_0^2$, $m_{\pm}^2=m_Z^2+{1\over 2}(m_G^2\pm
m_G\sqrt{m_G^2+4m_Z^2})$ and $m_F^2\equiv m_t^2={y^2\rho_0^2\over 2}$.
The Goldstone boson mass $m_G$ is defined as before, i.e., $m_G^2=\lambda
(\rho_0^2-2\mu^2)$ with $\mu$ being the mass parameter of the Lagrangian.
One may notice the absence of
photon contributions in the above effective potential. This is
not surprising since photons do not couple directly to the Higgs boson.
Performing the integration in Eq. (\ref{POTENTIAL2})
and subtracting the infinities with $\overline{MS}$ prescription, we obtain
\begin{eqnarray}
V_{VD}(\rho_0)&=&{\hbar\over 64\pi^2}\left(m_H^4\ln{m_H^2\over \kappa^2}
+m_Z^4\ln{m_Z^2\over \kappa^2}+m_+^4\ln{m_+^2\over \kappa^2}+
m_-^4\ln{m_-^2\over \kappa^2}-4m_t^4\ln{m_t^2\over \kappa^2}\right)\nonumber \\
&-&{\hbar\over 128\pi^2}\left(3m_H^4+5m_Z^4+3m_G^4+12m_G^2m_Z^2-12m_t^4\right),
\end{eqnarray}
where $\kappa$ is the mass scale introduced in the dimensional regularization.
Although $V_{VD}(\rho_0)$ is obtained in the Landau-DeWitt gauge, we
should stress that any other gauge with non-vanishing $T^i_{jk}$ should lead to
the same result. For later comparisons, let us write down the ordinary
one-loop effective
potential in the Lorentz gauge (removing the scalar part of Eq. (\ref{FIXG}))
as follows\cite{DJ}:
\begin{equation}
V_{L}(\rho_0)={\hbar \over 2}\int {d^{n-1}\vec{k}\over (2\pi)^{n-1}}
\left((n-1)\omega_Z(\vec{k})+\omega_H(\vec{k})+\omega_a(\vec{k},\alpha)
+\omega_b(\vec{k},\alpha)-4\omega_F(\vec{k})\right),
\end{equation}
where
$\alpha$ is the gauge-fixing parameter and
$\omega_{a,b}(\vec{k},\alpha)=\sqrt{\vec{k}^2+m_{a,b}^2(\alpha)}$ with
$m_a^2(\alpha)=1/2\cdot (m_G^2+\sqrt{m_G^4-4\alpha m_Z^2m_G^2})$
and $m_b^2(\alpha)=1/2\cdot (m_G^2-\sqrt{m_G^4-4\alpha m_Z^2m_G^2})$.
It is easily seen that there are 6 bosonic degrees of freedom in $V_{L}$,
i.e., two extra degrees of freedom emerge as a result of
choosing the Lorentz gauge.
In the Landau gauge, which is a special case of
the Lorentz gauge with $\alpha=0$,
there is still one extra
degree of freedom in the effective potential\cite{BBHL}.
Since
the Landau gauge is adopted most frequently for computing
the ordinary effective potential,
we shall take $\alpha=0$ in $V_L$ hereafter.
Performing the integrations in $V_{L}$ and subtracting the
infinities, we obtain
\begin{eqnarray}
V_{L}(\rho_0)&=&{\hbar\over 64\pi^2}\left(m_H^4\ln{m_H^2\over \kappa^2}
+3m_Z^4\ln{m_Z^2\over \kappa^2}+m_G^4\ln{m_G^2\over \kappa^2}
-4m_t^4\ln{m_t^2\over \kappa^2}\right)\nonumber \\
&-&{\hbar\over 128\pi^2}\left(3m_H^4+5m_Z^4+3m_G^4-12m_t^4\right).
\end{eqnarray}
We remark that $V_{L}$ differs from $V_{VD}$ except
at the point of extremum where $\rho_0^2=2\mu^2$.
At this
point, one has
$m_G^2=0$ and $m_{\pm}^2
=m_Z^2$, which lead to $V_{VD}(\rho_0^2=2\mu^2)=V_L(\rho_0^2=2\mu^2)$.
That
$V_{VD}=V_L$ at the point of extremum is a consequence of the Nielsen
identity\cite{niel} mentioned earlier.
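This equality at the extremum can be verified directly from the two closed-form expressions above: setting $m_G^2=0$ (and hence $m_\pm^2=m_Z^2$) makes the one-loop potentials coincide, while any $m_G^2\neq 0$ separates them. A sketch with arbitrary illustrative mass values:

```python
import math

def m4ln(m2, kappa2):
    """m^4 ln(m^2/kappa^2), with the massless limit m^4 ln m^2 -> 0."""
    return 0.0 if m2 == 0.0 else m2 * m2 * math.log(m2 / kappa2)

def V_VD(mG2, mH2, mZ2, mt2, kappa2):
    """One-loop part of the Vilkovisky-DeWitt potential (closed form above)."""
    r = math.sqrt(mG2 * (mG2 + 4.0 * mZ2))
    mp2, mm2 = mZ2 + 0.5 * (mG2 + r), mZ2 + 0.5 * (mG2 - r)
    logs = (m4ln(mH2, kappa2) + m4ln(mZ2, kappa2) + m4ln(mp2, kappa2)
            + m4ln(mm2, kappa2) - 4.0 * m4ln(mt2, kappa2))
    const = 3*mH2**2 + 5*mZ2**2 + 3*mG2**2 + 12*mG2*mZ2 - 12*mt2**2
    return logs / (64.0 * math.pi**2) - const / (128.0 * math.pi**2)

def V_L(mG2, mH2, mZ2, mt2, kappa2):
    """One-loop part of the ordinary Landau-gauge potential (closed form above)."""
    logs = (m4ln(mH2, kappa2) + 3.0 * m4ln(mZ2, kappa2)
            + m4ln(mG2, kappa2) - 4.0 * m4ln(mt2, kappa2))
    const = 3*mH2**2 + 5*mZ2**2 + 3*mG2**2 - 12*mt2**2
    return logs / (64.0 * math.pi**2) - const / (128.0 * math.pi**2)

args = dict(mH2=1.0, mZ2=0.5, mt2=1.5, kappa2=1.0)   # arbitrary mass values
print(V_VD(0.0, **args) - V_L(0.0, **args))   # extremum (m_G^2 = 0): identical
print(V_VD(0.2, **args) - V_L(0.2, **args))   # away from it: different
```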
To derive the Higgs boson mass bound from $V_{VD}(\rho_0)$ or $V_L(\rho_0)$,
one encounters a breakdown of the perturbation theory at the point
of sufficiently large $\rho_0$ such that, for instance,
${\lambda\over 16\pi^2}\ln{\lambda\rho_0^2\over \kappa^2}>1$.
To extend the validity of the effective potential to the
large-$\rho_0$ region, the effective potential has to be improved by
the renormalization group (RG) analysis. Let us denote the effective potential
as $V_{eff}$ which includes the tree-level contribution
and quantum corrections.
The renormalization-scale independence of $V_{eff}$
implies the following equation\cite{cw,BLW}:
\begin{equation}
\left(
-\mu(\gamma_{\mu}+1){\partial \over \partial \mu}
+\beta_{\hat{g}}{\partial
\over \partial \hat{g}}
-(\gamma_{\rho}+1)t{\partial \over \partial t}
+4\right)V_{eff}
(t\rho_0^i,\mu,\hat{g},\kappa)=0,
\end{equation}
where $\mu$ is the mass parameter of the Lagrangian as shown in
Eq. (\ref{SQED}), and
\begin{eqnarray}
& & \beta_{\hat{g}}=\kappa {d\hat{g}\over d\kappa},\nonumber \\
& & \gamma_{\rho}=-\kappa {d\ln \rho \over d\kappa},\nonumber \\
& & \gamma_{\mu}=-\kappa {d\ln \mu\over d\kappa},
\end{eqnarray}
with $\hat{g}$ denoting collectively the coupling constants $\lambda$, $g_1$,
$g_2$ and $y$; $\rho_0^i$ is an arbitrarily chosen initial value for $\rho_0$.
Solving this differential equation gives
\begin{equation}
V_{eff}(t\rho_0^i,\mu_i,\hat{g}_i,\kappa)=\exp\left(\int_0^{\ln t}
{4\over 1+\gamma_{\rho}(x)}dx\right)V_{eff}(\rho_0^i,\mu(t,\mu_i),
\hat{g}(t,\hat{g}_i),\kappa),
\label{IMPROVE}
\end{equation}
with $x=\ln(\rho_0^{\prime}/\rho_0^i)$ for an intermediate scale
$\rho_0^{\prime}$, and
\begin{equation}
t{d\hat{g}\over dt}={\beta_{\hat{g}}(\hat{g}(t))\over 1+\gamma_{\rho}
(\hat{g}(t))} \ {\rm with} \ \hat{g}(0)=\hat{g}_i,
\label{BEGA}
\end{equation}
\begin{equation}
\mu(t,\mu_i)=\mu_i\exp\left(-\int_0^{\ln t}
{1+\gamma_{\mu}(x)\over 1+\gamma_{\rho}(x)}dx\right).
\label{SCALE}
\end{equation}
To fully determine $V_{eff}$ at a large $\rho_0$, we need to calculate
the $\beta$ functions of $\lambda$, $g_1$, $g_2$ and $y$, and the anomalous
dimensions $\gamma_{\mu}$ and $\gamma_{\rho}$. It has been demonstrated
that the $n$-loop effective potential is
improved by the $(n+1)$-loop $\beta$ and $\gamma$ functions\cite{ka,bkmn}.
Since
the effective potential is calculated to the one-loop order,
a consistent RG analysis
requires the knowledge of $\beta$ and $\gamma$ functions up to a two-loop
accuracy.
However, as the computations of two-loop $\beta$ and $\gamma$ functions
are quite involved, we will only improve the tree-level
effective potential with one-loop $\beta$ and $\gamma$ functions. After all,
the main focus of this paper is to show how to
obtain a gauge-independent Higgs boson mass
bound rather than a detailed calculation of this quantity.
To compute one-loop $\beta$ and $\gamma$ functions,
we first calculate the renormalization constants $Z_{\lambda}$, $Z_{g_1}$,
$Z_{g_2}$, $Z_y$, $Z_{\mu^2}$ and $Z_{\rho}$, which are defined by
\begin{eqnarray}
\lambda^{bare}&=&Z_{\lambda}\lambda, \;\; g_1^{bare}=Z_{g_1}g_1, \;\;
g_2^{bare}=Z_{g_2}g_2, \nonumber \\
y^{bare}&=&Z_yy, \;\; (\mu^2)^{bare}=Z_{\mu^2}\mu^2, \;\; \rho^{bare}
=\sqrt{Z_{\rho}}\rho.
\end{eqnarray}
In the ordinary formalism of the effective action,
all of the above renormalization
constants except $Z_{\rho}$ are in fact gauge-independent at
the one-loop order
in the $\overline{MS}$ scheme. For $Z_{\rho}$,
the result given by the commonly adopted Landau gauge differs
from that obtained from
the Landau-DeWitt gauge. In Appendix A, we shall reproduce $Z_{\rho}$
obtained in the Landau-DeWitt gauge
with the general Vilkovisky-DeWitt formulation.
The calculation of the various renormalization constants is straightforward.
In the $\overline{MS}$ scheme, we have (we will set
$\hbar=1$ from this point on):
\begin{eqnarray}
Z_{\lambda}&=&1-{1\over 128\pi^2\epsilon'}\left({3g_1^4\over \lambda}
-24g_1^2-{16y^4\over \lambda}+32y^2+160\lambda\right),\nonumber \\
Z_{g_1}&=&Z_{g_2}=1-{1\over 216\pi^2\epsilon'}\left({27g_1^2\over 8}
+2g_2^2-3g_1g_2\right), \nonumber \\
Z_y&=&1+{1\over 192\pi^2\epsilon'}\left(9g_1^2+4g_1g_2-24y^2\right),\nonumber
\\
Z_{\mu^2}&=&1+{1\over 128\pi^2\epsilon'}\left({3g_1^4\over \lambda}
-12g_1^2-{16y^4\over \lambda}+16y^2+96\lambda
\right),\nonumber \\
Z_{\rho}&=&1+{1\over 32\pi^2\epsilon'}\left(-5g_1^2+4y^2\right),
\label{renc}
\end{eqnarray}
where $1/\epsilon'\equiv 1/\epsilon+{1\over 2}\gamma_E-{1\over 2}\ln(4\pi)$
with $\epsilon=n-4$. The one-loop $\beta$ and $\gamma$ functions resulting
from the above renormalization constants are:
\begin{eqnarray}
\beta_{\lambda}&=&{1\over 16\pi^2}\left({3\over 8}g_1^4-3\lambda g_1^2
-2y^4+4\lambda y^2+20\lambda^2\right),\nonumber \\
\beta_{g_1}&=&{g_1\over 4\pi^2}\left({g_1^2\over 16}-{g_1g_2\over 18}
+{g_2^2\over 27}\right),\nonumber \\
\beta_{g_2}&=&{g_2\over 4\pi^2}\left({g_1^2\over 16}-{g_1g_2\over 18}
+{g_2^2\over 27}\right),\nonumber \\
\beta_y&=&{y\over 8\pi^2}\left(y^2-{3g_1^2\over 8}+{g_1g_2\over 12}\right),
\nonumber \\
\gamma_{\mu}&=&{1\over 2\pi^2}\left({3\lambda\over 4}+{3g_1^4\over 128\lambda}
-{3g_1^2\over 32}-{y^4\over 8\lambda}+{y^2\over 8}\right),\nonumber \\
\gamma_{\rho}&=&{1\over 64\pi^2}\left(-5g_1^2+4y^2\right).
\label{BETA}
\end{eqnarray}
As mentioned earlier,
all of the above quantities
are gauge-independent in the $\overline{MS}$
scheme except $\gamma_{\rho}$, the anomalous dimension of the scalar field.
In the Landau gauge of the ordinary formulation,
we have
\begin{equation}
\gamma_{\rho}={1\over 64\pi^2}\left(-3g_1^2+4y^2\right).
\end{equation}
\section{The Higgs Boson Mass Bound}
The lower bound of the Higgs boson mass can be derived from the
vacuum instability
condition of the electroweak effective potential\cite{SHER}.
In this derivation, there exist different criteria
for determining the instability scale of the electroweak vacuum.
The first criterion is to identify the instability
scale as the critical value of
the Higgs-field strength beyond which the RG-improved tree-level effective
potential becomes negative\cite{FJSE,SHER2,AI}. To implement this criterion,
the
tree-level effective potential is improved by the leading\cite{AI} or
next-to-leading order
\cite{FJSE,SHER2} renormalization group equations, where one-loop or
two-loop $\beta$ and $\gamma$ functions are employed.
Furthermore, one-loop corrections to parameters of the effective
potential are also taken into account\cite{SHER2,AI}. However, the effect of
one-loop effective potential is not considered.
To improve the above treatment, Casas {\it et al.}\cite{ceq}
considered the effect of the RG-improved one-loop effective potential.
The vacuum-instability scale is then identified as the value of
the Higgs-field strength
at which the sum of tree-level and one-loop effective potentials vanishes.
In our subsequent analysis, we will follow this
criterion except that the one-loop effective potential is not RG-improved.
To derive the Higgs boson mass bound, one begins with Eq. (\ref{IMPROVE})
which implies
\begin{equation}
V_{tree}(t\rho_0^i,\mu_i,\lambda_i)={1\over 4}
\chi(t)\lambda(t,\lambda_i)
\left((\rho_0^i)^2-
2\mu^2(t,\mu_i)\right)^2,
\end{equation}
with $\chi(t)=
\exp\left(\int_0^{\ln t}
{4\over 1+\gamma_{\rho}(x)}dx\right)$. Since Eq. (\ref{SCALE}) implies that
$\mu(t,\mu_i)$ decreases
as $t$ increases, we then have $V_{tree}(t\rho_0^i,\mu_i,\lambda_i)
\approx {1\over 4}\chi(t)\lambda(t,\lambda_i)(\rho_0^i)^4$ for a sufficiently
large $t$. Similarly, the one-loop effective potential
$V_{1-loop}(t\rho_0^i,\mu_i,\hat{g}_i,\kappa)$ is also proportional to
$V_{1-loop}(\rho_0^i,\mu(t,\mu_i),\hat{g}(t,\hat{g}_i),\kappa)$ with
the same proportionality constant $\chi(t)$. Because
we shall ignore all running effects in $V_{1-loop}$, we can take
$\hat{g}(t,\hat{g}_i)=\hat{g}_i$ and $\mu(t,\mu_i)={1\over t}\mu_i$
in $V_{1-loop}$. For
a sufficiently large $t$, $V_{1-loop}$ can also be approximated by its quartic
terms. In the Landau-DeWitt gauge with the choice $\kappa=\rho_0^i$,
we obtain
\begin{eqnarray}
V_{VD}&\approx&{(\rho_0^i)^4\over 64\pi^2}\left[9\lambda_i^2\ln(3\lambda_i)
+{g_{1i}^4\over 16}\ln({g_{1i}^2\over 4})-y_i^4\ln({y_i^2\over 2})\right.
\nonumber \\
&+&\left. A_+^2(g_{1i},\lambda_i)\ln A_+(g_{1i},\lambda_i)
+A_-^2(g_{1i},\lambda_i)\ln A_-(g_{1i},\lambda_i)\right. \nonumber \\
&-&\left. {3\over 2}(10\lambda_i^2+
\lambda_i g_{1i}^2+{5\over 48}g_{1i}^4-y_i^4)\right],
\label{VVD}
\end{eqnarray}
where $A_{\pm}(g_1,\lambda)=g_1^2/4+\lambda/2\cdot
(1\pm \sqrt{1+g_1^2/\lambda})$.
Similarly, the effective potential in the Landau gauge is given by
\begin{eqnarray}
V_{L}&\approx&{(\rho_0^i)^4\over 64\pi^2}\left[9\lambda_i^2\ln(3\lambda_i)
+{3g_{1i}^4\over 16}\ln({g_{1i}^2\over 4})
-y_i^4\ln({y_i^2\over 2})\right. \nonumber \\
&+&\left.\lambda_i^2\ln(\lambda_i)
-{3\over 2}(10\lambda_i^2+\lambda_i g_{1i}^2+{5\over 48}g_{1i}^4-y_i^4)\right].
\label{VL}
\end{eqnarray}
Combining the tree-level and the one-loop effective potentials, we arrive at
\begin{equation}
V_{eff}(t\rho_0^i,\mu_i,\hat{g}_i,\kappa)\approx{1\over 4}\chi(t)
\left(\lambda(t,\lambda_i)+\Delta \lambda(\hat{g}_i)\right)(\rho_0^i)^4,
\end{equation}
where $\Delta \lambda$ represents the one-loop corrections obtained from Eqs.
(\ref{VVD}) or (\ref{VL}). Let $t_{VI}=\rho_{VI}/\rho_0^i$; the condition
for the vacuum
instability of the effective potential is then\cite{ceq}
\begin{equation}
\lambda(t_{VI},\lambda_i)+\Delta \lambda(\hat{g}_i)=0.
\label{VI}
\end{equation}
We note that the couplings $\hat{g}_i$ in $\Delta \lambda$ are evaluated
at $\kappa=\rho_0^i$, which can be taken as the electroweak scale. Hence
we have
$g_{1i}\equiv g/\cos\theta_W=0.67$, $g_{2i}\equiv 2e\tan\theta_W=0.31$,
and $y_i=1$. The running coupling $\lambda(t_{VI},\lambda_i)$ also depends upon
$g_1$, $g_2$ and $y$ through $\beta_{\lambda}$, and $\gamma_{\rho}$
shown in Eq. (\ref{BETA}).
To
solve Eq. (\ref{VI}), we first determine the running behaviours of the
coupling
constants $g_1$,
$g_2$ and $y$. For $g_1$ and $g_2$, we have
\begin{equation}
t {d\left(g_l^2(t)\right)\over dt}=2g_l(t){\beta_{g_l}
(\hat{g}(t))\over 1+\gamma_{\rho}(\hat{g}(t))}\approx \beta_{g_l^2},
\end{equation}
where $l=1, \ 2$, and
the contribution of $\gamma_{\rho}$ is neglected in accordance
with our leading-logarithmic approximation. Also $\beta_{g_l^2}
=g_l^2/2\pi^2\cdot (g_1^2/16-g_1g_2/18+g_2^2/27)$. Although the differential
equations for $g_1^2$ and $g_2^2$ are coupled, they can be easily
disentangled by observing that $g_1^2/g_2^2$ is an RG invariant.
Numerically, we have
$\beta_{g_l^2}=c_lg_l^4$ with $c_1=2.3\times 10^{-3}$ and $c_2=
1.1\times 10^{-2}$. Solving the differential equations gives
\begin{equation}
g_l^{-2}(t)=g_l^{-2}(0)-c_l\ln t.
\end{equation}
With $g_1(t)$ and $g_2(t)$ determined, the running behaviour of
$y$ can be calculated analytically\cite{LW}. Given $\beta_{y^2}\equiv
2y\beta_y=c_3y^4-c_4g_1^2y^2$ with $c_3=2.5\times 10^{-2}$ and
$c_4=8.5\times 10^{-3}$, we obtain
\begin{equation}
y^2(t)=\left[\left({g_1^2(t)\over g_{1i}^2}\right)^{c_4\over c_1}
\left(y_i^{-2}-{c_3\over c_1+c_4}g_{1i}^{-2}\right)+
{c_3\over c_1+c_4}g_1^{-2}(t)
\right]^{-1}.
\end{equation}
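The closed-form solutions above can be evaluated directly. The following is a minimal numerical sketch in Python, using the constants $c_1$--$c_4$ and the electroweak-scale initial values $g_{1i}=0.67$, $g_{2i}=0.31$, $y_i=1$ quoted in the text:

```python
import math

# One-loop running couplings of the toy model, using the closed-form
# solutions quoted in the text (leading-logarithmic approximation).
C1, C2 = 2.3e-3, 1.1e-2         # beta_{g_l^2} = c_l g_l^4
C3, C4 = 2.5e-2, 8.5e-3         # beta_{y^2} = c3 y^4 - c4 g1^2 y^2
G1I, G2I, YI = 0.67, 0.31, 1.0  # couplings at the electroweak scale

def g_sq(t, g_i_sq, c):
    """g_l^2(t) from  g_l^{-2}(t) = g_l^{-2}(0) - c_l ln t."""
    return 1.0 / (1.0 / g_i_sq - c * math.log(t))

def y_sq(t):
    """y^2(t) from the analytic solution driven by g_1(t)."""
    g1t_sq = g_sq(t, G1I**2, C1)
    ratio = (g1t_sq / G1I**2) ** (C4 / C1)
    k = C3 / (C1 + C4)
    return 1.0 / (ratio * (1.0 / YI**2 - k / G1I**2) + k / g1t_sq)
```

Since $\beta_{g_l^2}>0$ and $\beta_{y^2}>0$ at these initial values, all three couplings grow slowly with $t$.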
Now the strategy for solving Eq. (\ref{VI}) is to make an initial guess
for $\lambda_i$, which enters into both $\lambda(t)$ and $\Delta \lambda$, and
to adjust $\lambda_i$ repeatedly until $\lambda(t)$ completely
cancels $\Delta \lambda$.
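The iteration just described can be sketched numerically. The sketch below is a simplification of the procedure, not the full calculation: it freezes $g_1$, $g_2$, $y$ at their initial values inside $\beta_{\lambda}$ and treats $\Delta\lambda$ as an externally supplied one-loop constant.

```python
import math

# Bisect on lambda_i until lambda(t_VI) + Delta_lambda = 0, integrating
# d lambda / d ln t = beta_lambda with couplings frozen at their
# initial values (a simplification of the text's procedure).
G1, G2, Y = 0.67, 0.31, 1.0

def beta_lambda(lam, g1=G1, y=Y):
    return (0.375 * g1**4 - 3.0 * lam * g1**2
            - 2.0 * y**4 + 4.0 * lam * y**2
            + 20.0 * lam**2) / (16.0 * math.pi**2)

def run_lambda(lam_i, t_vi, steps=2000):
    """Euler integration of d lambda / d ln t = beta_lambda."""
    lam, dlnt = lam_i, math.log(t_vi) / steps
    for _ in range(steps):
        lam += beta_lambda(lam) * dlnt
    return lam

def solve_lambda_i(t_vi, delta_lam, lo=0.0, hi=1.0, iters=60):
    """Bisect on lambda_i so that lambda(t_VI) + delta_lam = 0."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if run_lambda(mid, t_vi) + delta_lam < 0.0:
            lo = mid   # lambda runs too negative: raise lambda_i
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Bisection is justified because $\partial\beta_{\lambda}/\partial\lambda>0$ here, so $\lambda(t_{VI})$ is monotone in $\lambda_i$.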
For $t_{VI}=10^2$ (or $\rho_0\approx 10^4$ GeV), which is
the new-physics scale reachable by the
LHC, we find $\lambda_i=4.83\times 10^{-2}$ for the Landau-DeWitt gauge,
and $\lambda_i=4.8\times 10^{-2}$ for the Landau gauge. For a higher
instability scale such as the scale of grand unification,
we have $t_{VI}=10^{13}$
or $\rho_0\approx 10^{15}$ GeV. In this case, we find $\lambda_i=3.13\times
10^{-1}$ for both the Landau-DeWitt and Landau gauges.
The numerical similarity
between the $\lambda_i$ of each gauge can be attributed to
an identical $\beta$ function for the running of $\lambda(t)$, and
a small difference between the $\Delta \lambda$ of each gauge.
We recall from Eq. (\ref{BEGA}) that the evolutions of $\lambda$ in the above
two
gauges will be different if the effects of next-to-leading logarithms are
taken into account. In that case, the difference between the $\gamma_{\rho}$
of each gauge gives rise to different evolutions for $\lambda$.
For a large $t_{VI}$, one may expect
to see a non-negligible difference between the $\lambda_i$ of each gauge.
The critical value $\lambda_i=4.83\times 10^{-2}$ corresponds to a lower
bound for the $\overline{MS}$ mass of the Higgs boson. Since $m_H=2\sqrt{
\lambda}\mu$, we have $(m_H)_{\overline{MS}}\geq 77$ GeV. For
$\lambda_i=3.13\times 10^{-1}$, we have $(m_H)_{\overline{MS}}\geq 196$ GeV.
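As a quick arithmetic check, both quoted bounds follow from $m_H=2\sqrt{\lambda}\mu$ with a common scale $\mu\approx 175$ GeV; note that this value of $\mu$ is inferred from the quoted numbers and is not stated explicitly in the text.

```python
import math

# Consistency check of m_H = 2 sqrt(lambda) mu for the two quoted
# critical couplings.  MU ~ 175 GeV is an assumed value inferred
# from the numbers in the text.
MU = 175.0  # GeV, assumed

def higgs_mass_bound(lam_i, mu=MU):
    return 2.0 * math.sqrt(lam_i) * mu

print(round(higgs_mass_bound(4.83e-2)))  # ~77 GeV  (t_VI = 10^2)
print(round(higgs_mass_bound(3.13e-1)))  # ~196 GeV (t_VI = 10^13)
```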
To obtain the lower bound for the physical mass of the Higgs boson,
finite radiative corrections must be added to the above bounds\cite{LW}.
We will not pursue these finite corrections any further since we are
simply dealing with a toy model. However we would like to point out
that such corrections are gauge-independent as ensured by
the Nielsen identity\cite{niel}.
\section{Conclusion}
We have computed the one-loop effective potential of
an Abelian $U(1)\times U(1)$
model in the Landau-DeWitt gauge, which reproduces the result
given by the gauge-independent
Vilkovisky-DeWitt formulation. One-loop $\beta$ and $\gamma$ functions
were also computed to facilitate the RG-improvement of the effective
potential. A gauge-independent lower bound for the Higgs-boson self-coupling
or equivalently the $\overline{MS}$ mass of the Higgs boson was derived.
We compared this bound to that obtained using the ordinary
Landau-gauge effective potential.
The numerical values of both bounds are
almost identical due to the leading-logarithmic approximation we have taken.
A complete next-to-leading-order analysis
should better distinguish the two bounds. This improvement, as well as
the extension of the current analysis to the full Standard Model,
will be reported in future
publications.
Finally we would like to comment on the issue of comparing our result
with that of
Ref.\cite{BLW}.
So far, we have not found a practical way of
relating the effective potentials
calculated in both approaches. In Ref.\cite{BLW}, to achieve
a {\it gauge-invariant}
formulation, the theory is written in terms of a new set of fields
which are related to the original fields through non-local transformations.
Taking scalar QED as an example,
the new scalar field $\phi^{\prime}(\vec{x})$ is related to the
original field through\cite{BBHL}
\begin{equation}
\phi^{\prime}(\vec{x})=\phi(\vec{x})\exp\left(ie\int d^3y
\vec{A}(\vec{y})\cdot \vec{\nabla}_yG(\vec{y}-\vec{x})\right),
\end{equation}
with
$G(\vec{y}-\vec{x})$ satisfying $\nabla^2 G(\vec{y}-\vec{x})=\delta^3
(\vec{y}-\vec{x})$. To our knowledge, it is not obvious how one
might
incorporate the above non-local and non-Lorentz-covariant transformation
into the Vilkovisky-DeWitt formulation. This issue deserves
further investigation.
\acknowledgments
We thank W.-F. Kao for discussions.
This work is supported in part by
National Science Council of R.O.C. under grant numbers
NSC 87-2112-M-009-038, and NSC 88-2112-M-009-002.
\newpage
\section{Introduction}
\paragraph{}
As the most amazing and even paradoxical of the elementary particles, the
neutrino has been known to us for a long time, since 1931, when Wolfgang Pauli
proposed a new particle -- very penetrating and neutral -- to account for
$\beta$-decay.
Nevertheless, we do not know much about the neutrino itself, mainly
because of the experimental difficulty of detecting the neutrino and neutrino
events. Up to now, we are not quite sure whether the neutrino has a rest mass,
and we also cannot explain the {\it solar neutrino deficit\/}\cite{Winter}
\cite{Kamio}\cite{SNE}\cite{Sun}
contrary to the expectation from the Standard Solar Model\cite{Sun}.
Furthermore, Nature does not seem to be fair in the {\it Weak Interaction\/},
in which the neutrino is involved; this has been known as
{\it Parity Violation\/}.
\paragraph{}
As one of the trial solutions proposed to explain the {\it solar neutrino
deficit\/}, neutrino {\it mass\/} or {\it flavor\/} oscillation has been
suggested, and many experiments have been performed and are still under way.
However, in the theory of neutrino flavor oscillation, it is supposed that a
neutrino, for instance $\nu_e$, $\nu_{\mu}$, or $\nu_{\tau}$, is not in a
fundamental state from the Quantum mechanical point of view, but in a mixed
state of {\it mass eigenstates\/} $m_1$, $m_2$, and $m_3$
($m_1\not=m_2\not=m_3$).
This supposition implies that the neutrino has an intrinsic structure and
that, thus, one flavor spontaneously transforms into another, like the
conversion of the Meson $K^o$ into its antiparticle $\bar K^o$.
If neutrino flavor mixing is true, we face another dilemma in {\it lepton
number\/} conservation, which has been quite acceptable phenomenologically.
In section(1.2), we are going to compare neutrino flavor mixing with
the $K^o$--$\bar K^o$ mixing, and also review whether the theory itself
is physically sound.
\paragraph{}
In the Weak Interaction, we know that {\it parity\/} is violated.
When C. S. Wu {\it et al.\/} showed parity violation in their experiment\cite
{Wu}, it was astonishing, because we had believed in a fairness in natural
phenomena. Up to now, we do not know the reason for {\it parity violation\/},
nor can we explain why it is detected only in
the Weak Interaction.
Should we accept this fact as nature itself, or is there a
deeper reason?
\paragraph{}
As mentioned above, experimentation is difficult because neutrinos
have extremely small cross-sections (a typical cross-section is $10^{-44}$
$m^2$ at 1 GeV\cite{Perkins}); furthermore, neutrinos participate only in the
Weak Interaction, whose interaction range and relative strength are $\ll 1
fm$ ($\sim 10^{-18} m$) and $10^{-15}$, respectively\cite{Greiner}. Meanwhile,
if this range and strength are compared with those of the
Electromagnetic Interaction (range: $\infty$, relative strength:
$10^{-2}$)\cite{Greiner}, it is not easy to understand why the
Electromagnetic Interaction -- intermediated by the {\it photon\/} -- is
suppressed within the Weak Interaction range, even though its interaction
range is $\infty$ and its relative strength of $10^{-2}$ is $10^{13}$ times
larger than that of the Weak Interaction.\cite{Hans}
\subsection{Fermion and Boson}
\paragraph{}
Through the Complex Space model\cite{Kim-1}, it was possible to understand how
the Special Theory of Relativity and Quantum Mechanics can be connected,
what the physical entity of the wave function is -- the ontological point of
view -- in Quantum Mechanics, and how the Schr{\"o}dinger equation is related
to the Complex Space model\cite{Kim-2}.
In that work, Dirac's {\it hole theory\/}\cite{Dirac} was re-interpreted
as follows: in the Complex Space, vacuum particles -- vacuum electrons
in the model -- are not packed completely; they can transfer energy (for
instance, electromagnetic energy) through a wave motion such as
{\it vacuum-string-oscillation\/}. In this picture, the electromagnetic
energy is propagated along the imaginary axis and is realized in real space
through U(1) symmetry.
\paragraph{}
In Quantum Physics, we know that the {\it phase factor\/} of the wave function
is arbitrary if we consider only the probability density
($\psi\cdot \psi^{*}$); yet it needs to be considered in physical interactions.
In particular, its peculiarity was shown by Berry, M.V., who considered a
slowly evolving Quantum phase in the interaction of a fermion (for example, an
electron) with a magnetic field; this has become known as the
{\it Berry phase\/}.\cite{Berry}\cite{Thomas}
\par
\medskip
Now, let us think about what the electron's spin is and, in general, what a
fermion (electron, proton, neutron, etc.) is. First of all, we know that
the electron's spin can be represented as a rotation in {\it two dimensional
complex space\/}, that is, SU(2); and the spin vector is realized through an
interaction in which the interaction Hamiltonian is $\alpha (\vec\sigma \cdot
\vec r)$ ($\sigma$: Pauli matrix, $\alpha$: constant).\cite{Berry}
Furthermore, we can physically surmise that the peculiarity of the Quantum
phase occurs when the spin vector in SU(2) is realized through the interaction
with the magnetic field, because of the difference between SU(2) and SO(3) --
the two-to-one correspondence of their parameter spaces even though both are
similar groups. Here we can be sure, at least, that the electron's spin
vector (an axial vector) resides in the Complex Space.
\begin{figure}[h]
\begin{center}
\leavevmode
\hbox{%
\epsfxsize=4.9in
\epsffile{fermion.eps}}
\end{center}
\caption{phase change through space inversion}
\label{Fermion}
\end{figure}
According to the interpretation of the Quantum formalism in the Complex Space
\cite{Kim-2}, the Quantum phase corresponds to the phase itself in the Complex
Space. Although the Quantum phase in a stationary system is arbitrary -- U(1)
-- and does not affect the physical interpretation, in a dynamical system the
phase is no longer free to choose. As a specific and simple example of the
Berry phase, let us imagine the following: one electron (spin $\hbar/2$) is
at the origin under the influence of a magnetic field ($\vec H$) which lies
in the $x$-$y$ plane, pointing toward the origin. Now let the field direction
rotate slowly around the $z$ axis; the system is then in SO(2) $\subset$ SO(3)
in real space and U(1) $\subset$ SU(2) in the Complex Space, because of the
correspondence between SU(2) and SO(3), and between U(1) and SO(2), in their
transformations. Here, U(1) is different from the one we traditionally use,
since our concern is U(1) in the internal Complex Space. Now let us say the
electron's spin points along the positive $x$ axis at the beginning, and let
the field direction rotate by an angle $\phi$, which corresponds to a
simultaneous rotation by $\phi /2$ in the internal Complex Space.
After a round trip of the system ($\phi = 2\pi$) in real space, we can easily
recognize that the spin points in the opposite direction -- along the
negative $x$ axis.
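The $\phi/2$ half-angle behaviour of SU(2) invoked here can be illustrated directly: a $2\pi$ rotation of the field direction about $z$ multiplies the spinor by $-1$. The geometric reading of that sign in the Complex Space is the model's own interpretation; the sketch below only shows the standard SU(2) sign flip.

```python
import numpy as np

# A 2*pi rotation about z in SU(2) is -1, not the identity.
def U_z(phi):
    """SU(2) rotation about z: exp(-i phi sigma_z / 2)."""
    return np.diag([np.exp(-1j * phi / 2), np.exp(1j * phi / 2)])

spin_x_up = np.array([1, 1], dtype=complex) / np.sqrt(2)  # spin along +x

rotated = U_z(2 * np.pi) @ spin_x_up
assert np.allclose(rotated, -spin_x_up)   # overall sign flip after 2*pi
# SO(3) needs only 2*pi, SU(2) needs 4*pi to return to the identity:
assert np.allclose(U_z(4 * np.pi) @ spin_x_up, spin_x_up)
```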
\paragraph{}
From the Pauli exclusion principle we know that any two identical fermions
cannot be in the same Quantum state, whereas bosons can share the same state.
That means the wave function representing a system of two identical fermions
is antisymmetric, while the wave function of two identical bosons is
symmetric. To demonstrate this fact, let us say $\Psi(1,2)$ represents
the Quantum state of two identical fermions. Then $\Psi(2,1) = - \Pi
\makebox[0.4 mm]{}\Psi(1,2) = - {\rm P}\makebox[0.4 mm]{}\Psi(1,2)$,
where $\Pi$ and ${\rm P}$ are the {\it exchange\/} and {\it parity\/}
operators, respectively, and we have used the fact that the exchange operator
($\Pi$) is identical to the parity operator (${\rm P}$) for this system of
two identical fermions.
In Fig.(\ref{Fermion}) the total spin vector (axial) of the two-identical-%
fermion system lies in the internal Complex Space. After space inversion, or
the parity operation, the Quantum phase is changed from $\theta$ to $\pi +
\theta$, showing how the antisymmetric wave function (two-fermion system) and
the parity operation are connected.
\paragraph{}
With this fact we can distinguish a {\it fermion\/} from a {\it boson\/}. A
fermion has a spin -- an intrinsic angular momentum (half integer $\times
\hbar$) -- in the Complex Space; a boson can also have a spin, but its spin
vector (axial) is in real space.
\subsection{Neutrino Oscillation}
\paragraph{}
One of the long-standing questions related to the elusive neutrino is neutrino
oscillation. To understand the theory of neutrino oscillation,
we first have to assume that each neutrino flavor ($\nu_{e}, \nu_{\mu},
\nu_{\tau}$) has a rest mass -- of course, $m_{\nu_{e}} \not =
m_{\nu_{\mu}} \not = m_{\nu_{\tau}}$ for the theory to be meaningful.
Furthermore, we have to suppose that there are fundamental Quantum states,
whose individual mass eigenstates do not appear in phenomena, but whose
linear combinations emerge as $\nu_{e}, \nu_{\mu}$, or
$\nu_{\tau}$.
\paragraph{}
{\bf In respect of the Quantum formalism:\/} To describe a {\it one\/}-particle
system, the wave function is a linear combination of all possible Quantum
states -- an orthogonal and complete set of bases --
in which each state represents {\it the particle\/} itself with a probability.
Now, if the particle changes its identity, for instance from $\nu_{e}$ to
$\nu_{\mu}$, can we still use the same complete basis set to describe the new
one, or should we change the basis set as well?
According to the theory, we can use the same basis set to describe these
phenomenologically different particles, because they are
fundamentally identical but in different states.
However, $K^o$--$\bar K^o$ mixing differs from the neutrino flavor mixing
case, because those mesons have internal structures and thus can have
different intrinsic parities. Phenomenologically, they appear to have
different properties and transform spontaneously into each other by
exchanging $\pi^{\pm}$ (Pions).
Now what about neutrinos? Are they known to have internal
structures and different intrinsic parities?
Although it has been claimed that neutrinos have imaginary intrinsic
parities\cite{Boehm}, this has no phenomenological consequence.
And what happens in the middle of an oscillation? Is a state of
``$50 \%$ $\nu_{e}$ and $50 \%$ $\nu_{\mu}$'' possible? Although we can
use the Quantum formalism to describe a physical system, the particle's
identity in that system should be clear at any time. Otherwise,
an additional explanation is needed for why the fundamental mass states
lie beyond physical phenomena.
\paragraph{}
{\bf Energy and Momentum conservation:\/} Let us say, rest mass $m_{\nu_{e}}$
-- electron neutrino -- is moving in an isolated system with velocity
$\beta_e$ at $t = 0$, and after $\tau$ seconds($\tau$ is arbitrary)
the mass is changed to rest mass $m_{\nu_{\mu}}$($m_{\nu_{\mu}} \not=
m_{\nu_{e}}$) -- muon neutrino -- and the velocity to $\beta_{\mu}$.
First of all, energy conservation should be satisfied as
\begin{equation}
m_{\nu_{e}} \gamma_e = m_{\nu_{\mu}} \gamma_{\mu}
\label{E-conser}
\end{equation}
where $\gamma_e = 1/\sqrt{1-\beta_{e}^2}$,
$\gamma_{\mu} = 1/\sqrt{1-\beta_{\mu}^2}$, and $ c \equiv 1$.
Moreover, their momenta should also be conserved:
\begin{equation}
m_{\nu_{e}} \gamma_e \beta_{e} = m_{\nu_{\mu}} \gamma_{\mu} \beta_{\mu}
\label{P-conser}
\end{equation}
If the masses $m_{\nu_{e}}$ and $m_{\nu_{\mu}}$ are different, the two
equations, Eqn.(\ref{E-conser}) and Eqn.(\ref{P-conser}), cannot be satisfied
simultaneously.
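The incompatibility follows because conserving both $E=m\gamma$ and $p=m\gamma\beta$ fixes the invariant mass $m^2=E^2-p^2$ (with $c\equiv 1$). A numerical illustration, with assumed toy values for the mass and velocity:

```python
import math

# If both E = m*gamma and p = m*gamma*beta were conserved, the invariant
# mass m^2 = E^2 - p^2 (c = 1) would be conserved too, contradicting
# m_nu_e != m_nu_mu.
def energy_momentum(m, beta):
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return m * gamma, m * gamma * beta

def invariant_mass(E, p):
    return math.sqrt(E**2 - p**2)

m_nu_e, beta_e = 1.0, 0.8            # toy units, assumed values
E, p = energy_momentum(m_nu_e, beta_e)
# Any state with the same (E, p) necessarily has the same rest mass:
assert abs(invariant_mass(E, p) - m_nu_e) < 1e-12
```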
\paragraph{}
Even though the Special Theory of Relativity and Quantum Physics address
different concerns, both theories should be equally satisfied by any new
theory, because they are basic theories of physics and are fundamentally
connected.\cite{Kim-1}
\section{Dirac equation and Majorana neutrino}
\paragraph{}
To describe a spin-${1\over 2} \hbar$ particle (fermion), two formalisms --
the Dirac\cite{Itzykson_1}\cite{Lewis} and Majorana\cite{Majorana}\cite{Case}
\cite{Boehm} formalisms -- have been known.
Since neither formalism is derived directly from the Schr{\"o}dinger equation
-- they are inductive rather than deductive -- it is natural to check whether
these two formalisms are physically meaningful.
\paragraph{}
In the Majorana case, neutrino and antineutrino are identical, i.e.
$\nu_{_L} \equiv \bar \nu_{_L}$ and $\nu_{_R} \equiv \bar \nu_{_R}$
(Majorana abbreviation),
and the neutrino has a rest mass ($m_{\nu} \not = 0$). Although the first
condition is tolerable in view of the phenomenological fact that
$\bar \nu_{_L}$ and $\nu_{_R}$ have never been detected, the second one is
not compatible with the Special Theory of Relativity.
\par
\medskip
If $m_{\nu} \not = 0$, no matter how small it is\footnote{our main concern
here is the neutrino.}, then we can find the neutrino rest frame through a
proper Lorentz transformation; furthermore, we can even flip the helicity
from right-handed ($\bar\nu_{_R}$) to left-handed ($\nu_{_L}$) or
{\it vice versa\/}.
Because there are only two kinds of neutrinos, $\nu_{_L}$ and $\bar \nu_{_R}$,
the flipped one always corresponds to the other one.
However, we can clearly distinguish $\nu_{_L}$ from $\bar \nu_{_R}$ by their
properties.\cite{Cowan}\cite{LA}
For instance, in the interactions $\nu_{\mu} + e \longrightarrow \nu_{\mu} +
e$ and $\bar \nu_{\mu} + e \longrightarrow \bar \nu_{\mu} + e$,
the total cross sections $\sigma_{\nu e}$ and $\sigma_{\bar \nu e}$ are
different.\cite{LA} If Majorana neutrinos are to be applicable in physics,
they should be identical in their physical properties, that is,
$\nu_{_L} \equiv \bar \nu_{_R}$. Yet the total cross section would then change
suddenly at a critical boost velocity under the transformation, despite
the fact that the cross section should be invariant. That means a physical
fact, which should be unique in all Lorentz frames, could be changed by the
transformation. Therefore, both conditions of the Majorana formalism cannot
be realized in a physical situation as long as the Special Theory of
Relativity is impeccable. Concerning the neutrino mass, a similar argument
was already made in terms of Group Theory in 1957\cite{Lee}. Therefore,
neutrinos satisfying both conditions are not appropriate in physics.
\par
\medskip
If we abandon the first condition but still assume that the neutrino mass is
not zero, we have to treat neutrinos like other spin-${1\over 2} \hbar$
fermions -- not Majorana-type fermions.
However, what if we give up the mass of the neutrino but keep the first
condition, $\nu_{_L} \equiv \bar \nu_{_L}$ and $\nu_{_R} \equiv
\bar \nu_{_R}$? Then we have to ask again the old questions: do $\nu_{_R}$
and $\bar\nu_{_L}$ exist; if these two neutrinos exist, why can we not detect
them; and why is parity violated in the Weak Interaction? As a possible case,
let us assume that the neutrino has no mass ($m_{\nu} = 0$) and that
$\nu_{_R}$ and $\bar\nu_{_L}$ exist but have not yet been
detected. If this supposition, which corresponds to a Majorana
neutrino with $m_{\nu}$ set to zero\cite{Case}, is true, we can understand
why parity violation happens in the Weak Interaction; moreover, we can find
a clue to understanding the {\it solar neutrino deficit\/} problem.
\par
\medskip
In the Dirac formalism, the four spinors are closed under the {\it parity\/}
operation in general, and in the neutrino case ($m_{\nu} = 0$) the Dirac
equation decouples into a two-component theory -- the Weyl equations. To
compare the spinor parts among the neutrinos ($\nu_{_R}$, $\nu_{_L}$,
$\bar\nu_{_R}$, and $\bar\nu_{_L}$) in free space, let us write:
\begin{eqnarray}
\nu_{_R}(x)& = & \nu_{_R}(k) e^{-ikx} \makebox[1 cm]{;}
\nu_{_L}(x) \makebox[2 mm]{} = \makebox[2 mm]{} \nu_{_L}(k) e^{-ikx}
\nonumber \\
\bar\nu_{_R}(x) & = & \bar\nu_{_R}(k) e^{ikx} \makebox[2 mm]{}\makebox[1.0 cm]{;}
\bar \nu_{_L}(x) \makebox[2 mm]{} = \makebox[2 mm]{} \bar\nu_{_L}(k) e^{ikx}
\label{X-K}
\end{eqnarray}
for a given $k$. Here, $\nu_{_R}$ represents a positive-energy right-handed
neutrino; $\bar\nu_{_L}$, a negative-energy left-handed antineutrino;
and so on.
Using the {\it charge conjugation\/} operator $C$, which is consistent with
Dirac's hole interpretation, we can see that there are only two independent
solutions\cite{Itzykson-2}:
\begin{eqnarray}
\bar\nu_{_R}(k) &=& C \gamma^o \nu^*_L(k) \makebox[2 mm]{}
= - \nu_{_R}(k) \makebox[2 cm]{and} \nonumber \\
\bar\nu_{_L}(k) & = & C \gamma^o \nu^*_R(k) \makebox[2 mm]{}
= - \nu_{_L}(k),
\label{C-C}
\end{eqnarray}
in which $\gamma^o\makebox[1 mm]{}=\makebox[1 mm]{}
\pmatrix{ & I & 0 \cr
& 0 & -I \cr} $ and $C\makebox[1 mm]{}=\makebox[1 mm]{}
\pmatrix{& 0 & -i \sigma_{_2} \cr
& -i \sigma_{_2} & 0 \cr} $ ;
the minus sign on RHS can be interpreted as the phase factor came from
{\it Pauli exclusion\/} principle as in Sec.(1.1). With a normalization
condition given for massless particles(neutrinos)\cite{Itzykson-2}, that is,
$ \nu_{_R}(k)^{\dag} \nu_{_R}(k) = \nu_{_L}(k)^{\dag} \nu_{_L}(k) =
\bar \nu_{_R}(k)^{\dag} \bar \nu_{_R}(k) = \bar \nu_{_L}(k)^{\dag} \bar
\nu_{_L}(k) = 2 E_{_k}$, plane wave solutions in Standard representation
can be constructed as
$$
\nu_{_R}(k) = \sqrt{E_{_k}} \left(\begin{array}{c}
+1 \\ 0 \\ +1 \\ 0 \end{array}\right)
\makebox[2cm]{}
\bar \nu_{_R}(k) = \sqrt{E_{_k}} \left(\begin{array}{c}
-1 \\ 0 \\ -1 \\ 0 \end{array}\right)
$$
$$
\nu_{_L}(k) = \sqrt{E_{_k}} \left(\begin{array}{c}
0 \\ +1 \\ 0 \\ -1 \end{array}\right)
\makebox[2cm]{}
\bar \nu_{_L}(k) = \sqrt{E_{_k}} \left(\begin{array}{c}
0 \\ -1 \\ 0 \\ +1 \end{array}\right)
$$
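The relations of Eq.(\ref{C-C}) and the explicit spinors above can be verified directly in the Standard representation. A short numerical check, with $E_k$ set to 1 so that the normalization drops out:

```python
import numpy as np

# Check of Eq. (C-C) in the Standard representation, E_k = 1.
I2 = np.eye(2)
sigma2 = np.array([[0, -1j], [1j, 0]])
gamma0 = np.block([[I2, np.zeros((2, 2))], [np.zeros((2, 2)), -I2]])
C = np.block([[np.zeros((2, 2)), -1j * sigma2],
              [-1j * sigma2, np.zeros((2, 2))]])

nu_R = np.array([1, 0, 1, 0], dtype=complex)
nu_L = np.array([0, 1, 0, -1], dtype=complex)
nu_bar_R = np.array([-1, 0, -1, 0], dtype=complex)
nu_bar_L = np.array([0, -1, 0, 1], dtype=complex)

# bar nu_R = C gamma^0 nu_L* = -nu_R ;  bar nu_L = C gamma^0 nu_R* = -nu_L
assert np.allclose(C @ gamma0 @ nu_L.conj(), nu_bar_R)
assert np.allclose(C @ gamma0 @ nu_L.conj(), -nu_R)
assert np.allclose(C @ gamma0 @ nu_R.conj(), nu_bar_L)
assert np.allclose(C @ gamma0 @ nu_R.conj(), -nu_L)
```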
\section{Neutrino}
If the neutrino mass is zero, as assumed in the last Section, there are only
two physically distinct neutrinos of each kind, and their helicities are
uniquely fixed to {\it right-handed\/} or {\it left-handed\/} once and for all
because $m_{\nu} = 0$. Moreover, if we assume that $\nu_{_R}$ and
$\bar \nu_{_L}$ exist, only the helicity of the neutrino needs to be
considered to distinguish neutrinos of each kind, and the Dirac equation is
closed under the parity operation like that of other spin-${1\over 2}\hbar$
fermions. The phenomenological facts
-- neutrinos have an extremely small cross section, and the Electromagnetic
Interaction is highly suppressed at the Weak Interaction limit
($\sim 10^{-18} m$)
-- can be a good clue to understanding the neutrino. In what follows, we
propose what the {\it neutrino\/} is and present a possible model of the
$W^{\pm}$ boson with which {\it parity violation\/} can be explained.
\subsection{Propagation mode}
Since the Electromagnetic wave, for which a rest mass cannot be defined, was
considered as a {\it vacuum string oscillation\/}\cite{Kim-1}, we might
consider the similarity of the neutrino to the photon. In the model of
electromagnetic wave propagation\cite{Kim-1}, Planck's constant ($h$) is
given as
\begin{equation}
h = 2 \pi^2 m_e c \left( {A^2 \over d} \right),
\label{Plank}
\end{equation}
where $A$ is the {\it constant\/} amplitude of each wave string, $d$ is the
distance between two adjacent vacuum electrons, and $m_e$ is the bare electron
mass.
For a rough estimate, if we use the electron mass instead of the bare mass,
the reduced electron Compton wavelength ($\lambda_c /2 \pi$) satisfies
\begin{eqnarray}
{S \over d} & \approx & {\hbar \over {m_e c}}, \nonumber \\
& = & 3.86 \times 10^{-13} m,
\label{P_C}
\end{eqnarray}
in which $S = \pi A^2$. Now, if we take $A \sim 10^{-18} m$
\footnote{in fact, it can be even bigger than $\sim 10^{-18} m$.} -- because
the Electromagnetic Interaction is highly suppressed at this scale and, thus,
we can surmise that the electromagnetic wave cannot propagate there --
then $d$ is estimated as $\sim 10^{-23} m$ from Eqn.(\ref{P_C}).
Furthermore, there is a limit to the photon energy, because $d$ is not zero --
the vacuum string oscillation is not through a continuous medium.
For example, the wavelengths of T$eV$ and P$eV$ energy photons\footnote{T$eV =
10^{12} eV$, P$eV = 10^{15} eV$, E$eV = 10^{18} eV$} are $\sim 10^{-18} m$ and
$\sim 10^{-21} m$, respectively; but an E$eV$ energy photon is not possible,
because the wavelength ($\lambda \sim 10^{-24} m $) is smaller than $d$.
As a reference, $\gamma$ rays of P$eV$-order energy have been detected in
cosmic ray experiments.\cite{PEV}
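The order-of-magnitude estimates in this paragraph can be reproduced directly. The sketch below assumes $A\sim 10^{-18}$ m as in the text and uses the standard values of the reduced electron Compton wavelength and $hc$:

```python
import math

# Order-of-magnitude check of the numbers quoted in the text.
HBAR_OVER_MC = 3.86e-13   # reduced electron Compton wavelength, m
A = 1e-18                 # assumed vacuum-string amplitude, m
HC = 1.24e-6              # h*c in eV*m

d = math.pi * A**2 / HBAR_OVER_MC   # from S/d ~ hbar/(m_e c), S = pi A^2

def wavelength(E_eV):
    """Photon wavelength lambda = h c / E."""
    return HC / E_eV

print(f"d ~ {d:.1e} m")             # order 1e-23 m
print(wavelength(1e15) > d)         # PeV photon: wavelength still above d
print(wavelength(1e18) > d)         # EeV photon: wavelength below d
```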
\par
For the neutrino to propagate in the Weak Interaction region
($\sim 10^{-18} m$), there is a mode with only two independent states, namely
the {\it longitudinal vacuum string oscillation\/}, in which the spin state
(helicity) is also transferred through the string oscillation. With this model
we can estimate the bare electron size by a crude comparison of Compton
scattering with neutrino-electron elastic scattering ($\nu_e, e$). Let us say
the ratio of the Compton scattering total cross section\footnote{low energy
$\gamma$ ray to avoid hadronic interaction} to the elastic scattering cross
section $\sigma({\nu_e} e)$ is
$$
R = {\sigma_{_{com}} \over \sigma_{_{\nu e}}} \sim {A^2 \over \delta^2},
$$
where $\delta$ is the bare electron radius. For a 1 M$eV$ $\nu_e$ and photon
in the electron rest frame, $\sigma_{_{com}} \sim 10^{-29} m^2$ and
$\sigma_{_{\nu e}} \sim 10^{-48} m^2$.\cite{John} Then the ratio is
$R \sim 10^{19}$, and the bare electron radius can be estimated as $\delta
\sim 10^{-27} m$.
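The arithmetic of this estimate can be verified directly (the amplitude $A \sim 10^{-18}\,m$ is the earlier assumption):

```python
# Bare electron radius from R = sigma_com / sigma_nue ~ A^2 / delta^2.
sigma_com = 1e-29   # m^2, Compton total cross section at ~1 MeV
sigma_nue = 1e-48   # m^2, nu_e - e elastic cross section at ~1 MeV
A = 1e-18           # m, assumed string amplitude
R = sigma_com / sigma_nue   # ~1e19
delta = A / R**0.5          # of order 1e-27 m
print(f"R = {R:.1e}, delta = {delta:.1e} m")
```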
\subsection{$W^{\pm}$ boson and parity violation}
From experimental facts we know that $\bar \nu_{_L}$ and $\nu_{_R}$
do not exist in nature. Yet we need to distinguish existence in
phenomenology from existence in ontology. If $\bar \nu_{_L}$ and $\nu_{_R}$
exist, we necessarily need an explanation of why these two neutrinos could
not have been detected. For example, in the $\beta^+$ decay ${^{8}B}
\rightarrow {^{8}Be^*} + e^+ + \nu_e$, one proton is converted to a neutron
inside the nucleus, emitting a positron $e^+$ and an electron neutrino
$\nu_e$; that is, $p \rightarrow n + e^+ + \nu_e$. In this Weak Interaction
process it is known that the $W^+$ boson mediates the process and the
neutrino $\nu_e$ is the {\it left handed\/} electron neutrino.
However, if we regard the $W^+$ boson($\Gamma^{e\nu}_{_W} \simeq 0.23$ G$eV$,
$\tau \sim 3 \times 10^{-24} sec$\cite{Gordon}) as a momentary interacting
state of a positron and {\it virtual positive charge strings\/} moving around
the positron, we can see from Fig.(\ref{W_boson}) that there is no longer any
preference in the neutrino helicity once the magnetic field is turned off.
\paragraph{}
In Fig.(\ref{W_boson}) $e^+$ represents the bare charge of the positron, and
$q^+_v$ stands for the virtual positive charge strings induced by vacuum
polarization\cite{Sakura}; these strings may even have a distribution
depending on the radial distance from the positron.
Although we do not expect positively induced virtual charges from vacuum
polarization when the positron is in free space, inside the Weak Interaction
region($r \leq 10^{-18} m$) the virtual positive charges can accumulate
near the edge of the region and even experience a repulsive force from the
edge. According to the mirror image in Dirac's hole theory,
the virtual-positive-charges behave as if they had {\it positive\/}
masses in phenomena.
\begin{figure}[h]
\begin{center}
\leavevmode
\hbox{%
\epsfxsize=3.6in
\epsffile{W_boson.eps}}
\end{center}
\caption{$W^+$ in $\beta^+ $ decay}
\label{W_boson}
\end{figure}
Since the state, an interaction between the positron
and the virtual-positive-charges, is not stable, it soon decays to a free
positron and a neutrino, in which the kinetic energy of
the virtual-positive-charges is transferred to the longitudinal vacuum string
oscillation, which can be interpreted as the neutrino, as assumed before.
Now, if a magnetic field $\vec H$ is applied as in Fig.(\ref{W_boson}) to
investigate the positron's spin orientation, we can use the Faraday Induction
law, an empirical and macroscopic law, because the radius of $10^{-18} m$ is
much bigger than the bare electron size($10^{-27} m$).
Then the virtual-positive-charges rotating around the positron move to
the negative z-axis(attractive in the Faraday Induction law) and finally
their motion is transferred to the longitudinal vacuum string oscillation --
the neutrino -- along the negative imaginary z-axis,
as in the case of the photon\cite{Kim-1}, because the neutrino mass
$m_{_{\nu}}$ is assumed to be zero.
On the other hand, the positron must move to the positive z-axis
because of momentum conservation. If the total spin of the system was $1 \hbar$
pointing to the positive z-axis, the emitted electron neutrino definitely has
{\it left handed\/} helicity. What if the magnetic field is turned off?
We cannot say in which direction the neutrino is emitted;
from a reasonable guess, each direction should be equally probable.
\par
\medskip
In $\beta^-$ decay, that is $n \rightarrow p + e^- + \bar\nu_e$, the $W^-$
boson can be treated similarly to the $W^+$ boson case if we recall the
mirror image in Dirac's hole theory: the $W^-$ boson is considered as a
momentary state of an electron $e^-$ and virtual-negative-charge strings
moving around the electron, and the virtual-negative-charge strings behave
as if they had negative masses. With a setup like Fig.(\ref{W_boson}),
except with negative charges and the opposite rotation direction of the
virtual-negative-charge strings, we find that the emitted electron neutrino
is now {\it right handed\/} if the magnetic field is on, and equally likely
to be {\it right handed\/} or {\it left handed\/} if the magnetic field is
off. Similarly, we can further assume that the $Z^0$ boson is a momentary
interacting state($\sim 3 \times 10^{-24} sec$) between virtual-positive-charge
strings and virtual-negative-charge strings inside the Weak Interaction
region($r \le 10^{-18} m$).
\paragraph{}
From the above reasoning, it is reasonable to say that {\it parity
violation\/} can appear if we consider only one part of the phenomena,
in which we investigate the spin orientation of the lepton using a magnetic
field; however, {\it intrinsically\/} parity is conserved
if the vacuum is free of magnetic fields.
\subsection{neutrino flavors and solar neutrinos}
From experimental results\cite{Dandy} we can be sure that there are
at least three flavors -- $\nu_e, \nu_{\mu}, \nu_{\tau}$. Up to now
we have assumed only one kind of neutrino, because we cannot
distinguish neutrino flavors with the single longitudinal vacuum string
oscillation model. Yet there is a way to distinguish them: if neutrino
propagation is taken as a bundle of vacuum string oscillations in general,
each neutrino flavor can be distinguished as a bulk motion with a different
number of vacuum string oscillations.
\paragraph{}
If the counterparts of the left-handed neutrino $\nu_{_L}$ and the
right-handed antineutrino $\bar \nu_{_R}$ exist, as assumed before, parity
should be conserved. Moreover, we found in the model of the $W^{\pm}$ bosons
that the helicity of the neutrino is affected by the external magnetic field.
\par
\medskip
With these facts, let us try to find a possible explanation for the solar
neutrino problem mentioned before. For example, the $^8$B-neutrino
deficit\cite{Sun},
${\Phi_{_{exp.}} \over \Phi_{_{SSM}}} \approx 0.47 \pm 0.10(1 \sigma)$,
was confirmed again by the Super-Kamiokande\cite{Kamio} experimental result.
If the solar magnetic field does not affect the helicities of the neutrinos,
half of the neutrinos are {\it right handed\/} and the other half
{\it left handed\/}. Now, comparing the cross-sections(leptonic
interaction) of {\it right handed\/} and {\it left handed\/} neutrinos
on electrons,
the ratio of $\sigma(\bar\nu_{_R} e^-)$ to $\sigma(\nu_{_L} e^-)$ is
$\sim 0.416$ for $^8$B-neutrinos.\cite{Winter} Hence, with this fact we can
estimate that $\Phi_{_{exp.}}$ should be
$\sim 3.65 \times 10^6 cm^{-2} s^{-1}$, which is still bigger than
the experimental result of $2.42\pm0.06 \times 10^6 cm^{-2} s^{-1}$.
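The suppression factor $(1+0.416)/2$ can be checked against the quoted fluxes; here $\Phi_{_{SSM}}$ is inferred from the quoted ratio $0.47$ and the measured flux, an assumption of this sketch rather than a value stated in the text:

```python
# Flux estimate: with half the neutrinos right handed and
# sigma(nu_R e)/sigma(nu_L e) ~ 0.416, the detected flux is suppressed
# by (1 + 0.416)/2 relative to the standard solar model prediction.
ratio = 0.416
phi_ssm = 2.42e6 / 0.47          # SSM flux implied by the quoted numbers, cm^-2 s^-1
phi_est = (1 + ratio) / 2 * phi_ssm
print(f"estimated flux = {phi_est:.2e} cm^-2 s^-1")  # ~3.65e6, as quoted
```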
\par
\medskip
Although it is not easy to estimate how much the solar magnetic field
affects solar neutrinos(helicity and radiation direction), we can suppose
that the neutrino flux from the Sun is not {\it isotropic\/}.
In the model of the $W^+$ boson in Fig.(\ref{W_boson}), we considered that
the virtual positive-charge-string is affected during the disintegration
into $e^+$ and $\nu_e$ ($\sim 10^{-24} sec$) -- the Faraday Induction law.
If the neutrinos we look for on earth are emitted from the equatorial zone
of the Sun, the direction of emission can possibly deviate from the
equatorial plane of the Sun. This effect should reduce the neutrino flux
to $66 \%$.
Moreover, we can expect that the smaller the neutrino energy, the stronger
the deviation.
In the Kamiokande experimental result(1994)\cite{Kamio}, we can confirm this
effect by comparing the neutrino spectra in the measurements with those from
the Monte Carlo simulation, that is, the expectation from the standard solar
model.
\section{Summary}
\paragraph{}
It is well known that the Dirac equation represents spin ${1\over 2} \hbar$
fermions. In this letter we compared the Dirac formalism and the Majorana
formalism for the neutrino case. Before that, we investigated how the Pauli
exclusion principle is related to fermions in the Complex Space\cite{Kim-1}
and whether neutrino oscillation is physically feasible or not. In short,
neutrino oscillation is not compatible with the Special Theory of Relativity:
as long as the Special Theory of Relativity is impeccable, neutrino
flavor oscillation is not possible.
\par
\medskip
\noindent
For the neutrino, the Majorana type was rejected in view of experimental
results\cite{Cowan}\cite{LA}. If the neutrino had a rest mass, it should be
treated like the other spin ${1\over2} \hbar$ fermions. However, we assumed
that the neutrino has no mass and that the right-handed neutrino $\nu_{_R}$
and the left-handed antineutrino $\bar \nu_{_L}$ exist.
Here we use the terms neutrino and antineutrino, but they are not different,
even though we use them traditionally in the Dirac four spinor formalism;
there are only two physically distinct neutrinos for each kind --
{\it right-handed\/} and {\it left-handed\/}. From this fact, and under the
assumption that the neutrino has no rest mass, the longitudinal vacuum
string oscillation was suggested for the neutrino. Moreover, the
intermediate bosons $W^{\pm}$ were also modeled, because {\it parity\/}
must be conserved if the right-handed neutrino $\nu_{_R}$ and the
left-handed antineutrino $\bar \nu_{_L}$ exist. In the model of the
$W^{\pm}$ bosons, we also found how we might have overlooked the truth.
Meanwhile, V-A theory, with which the interaction Hamiltonian of the Weak
Interaction has been formulated, is not enough to include $\nu_{_R}$ and
$\bar\nu_{_L}$. Instead, both V-A and V+A theories should be considered,
because we are assuming {\it parity\/} conservation --
that is, the mirror image exists.
Moreover, the {\it lepton number\/} conservation law should be extended
to include the other part, which might be a pair of separate {\it lepton
number\/} conservation laws.
\paragraph{}
Neutrino oscillation has been considered as a candidate for new physics
to explain the discrepancy of the standard solar model\cite{Sun} in
solar neutrino experiments.\cite{William}\cite{SNE}
Here, instead, we investigated the {\it neutrino\/} itself
and suggested one way to solve the solar neutrino problem.
\newpage
\section{Introduction}
The nature of the low energy excitations in the Hubbard model has
attracted a great deal of attention. It is well established that
at half--filling the ground state is an antiferromagnetic (AF) insulator.
Also, there exists conclusive evidence which indicates that
antiferromagnetism is rapidly suppressed upon doping \cite{Ka94,Sc95}.
Close to half filling,
a large amount of work suggests the existence of spin polarons,
made of dressed holes, which propagate within a given sublattice
with a kinetic energy which, in the strong coupling limit, is of the order of
$J = \frac{4 t^2}{U}$ \cite{BS94,PL95}, where $t$ is the hopping
integral and $U$ the on site Coulomb repulsion.
These results are consistent with similar calculations
in the strong coupling,
low doping limit of the Hubbard model, the $t-J$
model\cite{DN94,LG95,DN97}.
There is also evidence for an effective attraction between these
spin polarons\cite{FO90,BM93,PR94,Da94,KA97,GM98}. However, recent
and extensive
Monte Carlo calculations for 0.85 filling and $U=2-8t$,
have shown that the pairing correlations vanish as the system size or the
interaction strength increases \cite{ZC97}.
We have recently analyzed the dynamics of spin polarons \cite{LC93a,LC93b}
and the interactions between them \cite{LG98} by means of a systematic
expansion around mean field calculations of the Hubbard model.
Two spin polarons in neighboring sites experience an increase
in their internal kinetic energy, due to the overlap of the
charge cloud. This repulsion is of the order of $t$.
In addition, a polaron reduces the obstacles
for the diffusion of another, leading to an assisted hopping
term which is also of the same order. The combination of these
effects is an attractive interaction at intermediate values of
$U/t$. The purpose of this work is to discuss in detail the
results and the approach proposed in \cite{LG98}. We present new results
which support the validity of our approach, highlighting
the physically appealing picture of pairing that it provides.
An alternative scheme to go beyond the unrestricted Hartree Fock approximation
is to supplement it with the Gutzwiller projection method, or,
equivalently, slave boson techniques~\cite{SSH98,S98}.
These results are in agreement with the existence of
significant effects due to the delocalization of the solutions,
as reported here.
The rest of the paper is organized as follows. In Section II we discuss
the physical basis of our proposal and the way in which we implement
the Configuration Interaction method. A discussion of the limit of large
$U/t$ in the undoped case is presented in Section III.
It is shown that, contrary to some expectations,
the Hartree-Fock scheme reproduces correctly the mean field
solution of the Heisenberg model. The systematic
corrections analyzed here can be put in precise correspondence
with similar terms discussed for quantum antiferromagnets.
Results for the $4 \times 4$ cluster
are compared with exact results in Section IV. Section V is devoted to
discuss our results for a single hole (spin polaron) and for two or more
holes. The hole--hole correlations are also presented
in this Section. The last Section is devoted to the conclusions of
our work.
\section{Methods}
\subsection{Hamiltonian}
We investigate the simplest version of the Hubbard Hamiltonian
used to describe the dynamics of electrons in CuO$_2$ layers, namely,
\begin{mathletters}
\begin{equation}
H = T + C\;,
\end{equation}
\begin{equation}
T = \sum_{\sigma}T^{\sigma} = -\sum_{\langle ij\rangle}
t_{ij} c^{\dagger}_{i\sigma}c_{j\sigma}\;,
\end{equation}
\begin{equation}
C=\sum_iU_in_{i\uparrow}n_{i\downarrow}\;.
\end{equation}
\end{mathletters}
\noindent The Hamiltonian includes a single atomic orbital
per lattice site with energy $E_i$=0. The sums are over all
lattice sites $i=1,N_s$ of the chosen cluster of the square lattice
and/or the $z$ component of the spin ($\sigma =\uparrow, \downarrow$).
The operator $c_{j\sigma}$ destroys an
electron of spin $\sigma$ at site $j$, and $n_{i\sigma}=
c^{\dagger}_{i\sigma}c_{i\sigma}$ is the local density operator.
$t_{ij}$ is the hopping matrix element between sites $i$ and $j$ (the symbol
$\langle ij\rangle$ denotes that the sum is restricted to all nearest
neighbors pairs) and $U_i$ is the intrasite Coulomb repulsion.
Here we take $t_{ij}=t$ and $U_i = U$, and the lattice constant as
the unit of length.
\subsection{Unrestricted Hartree--Fock (UHF) solutions}
As we shall only consider UHF solutions having a local magnetization
pointing in the same direction everywhere in the cluster, we shall
use the simplest version of the UHF approximation \cite{VL91}.
Within this approximation the effective mean field Hamiltonian
that accounts for the Hubbard term is written as,
\begin{mathletters}
\begin{equation}
C^{\rm eff} = \sum_{\sigma}X^{\sigma} -U\sum_i \langle n_{i\uparrow}
\rangle\langle n_{i\downarrow}\rangle\;,
\end{equation}
\begin{equation}
X^{\sigma}=U\sum_in_{i\sigma}\langle n_{i\sigma}\rangle\;.
\end{equation}
\end{mathletters}
\noindent The full UHF Hamiltonian is then written as,
\begin{equation}
H^{\rm UHF} = T + C^{\rm eff}\;.
\end{equation}
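The self-consistent solution of $H^{\rm UHF}$ can be sketched concretely. The following is a minimal illustrative loop for a periodic $L \times L$ cluster at half filling with collinear spins; the cluster size, Néel seed, mixing, and tolerances are choices of this sketch, not prescriptions from the paper:

```python
import numpy as np

# Minimal unrestricted Hartree-Fock loop for the Hubbard model on an
# L x L periodic cluster at half filling (collinear spins only).
L, t, U = 4, 1.0, 8.0
N = L * L
n_up = N // 2
n_dn = N - n_up

# Hopping matrix with periodic boundary conditions.
T = np.zeros((N, N))
for x in range(L):
    for y in range(L):
        i = x * L + y
        for j in (((x + 1) % L) * L + y, x * L + (y + 1) % L):
            T[i, j] = T[j, i] = -t

# Neel-ordered density guess to bias the iteration toward the AF solution.
dens = {"up": np.zeros(N), "dn": np.zeros(N)}
for x in range(L):
    for y in range(L):
        i = x * L + y
        up = 0.9 if (x + y) % 2 == 0 else 0.1
        dens["up"][i], dens["dn"][i] = up, 1.0 - up

for it in range(200):
    new = {}
    for s, sbar, ne in (("up", "dn", n_up), ("dn", "up", n_dn)):
        H = T + np.diag(U * dens[sbar])          # H^UHF for spin s
        eps, C = np.linalg.eigh(H)
        new[s] = (C[:, :ne] ** 2).sum(axis=1)    # occupied orbitals only
    if max(np.abs(new[s] - dens[s]).max() for s in new) < 1e-10:
        break
    dens = {s: 0.5 * dens[s] + 0.5 * new[s] for s in new}   # simple mixing

# Total UHF energy: occupied eigenvalues minus the double-counting term.
E = 0.0
for s, sbar, ne in (("up", "dn", n_up), ("dn", "up", n_dn)):
    eps = np.linalg.eigvalsh(T + np.diag(U * dens[sbar]))
    E += eps[:ne].sum()
E -= U * (dens["up"] * dens["dn"]).sum()
print(f"UHF energy per site: {E / N:.4f} t")
```

At convergence the local moments alternate in sign between sublattices, which is the AF mean-field solution discussed in the text.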
Use of the Unrestricted Hartree Fock (UHF) approximation in finite clusters
provides a first order approximation to the spin polaron near half
filling. As discussed elsewhere, the UHF approximation
describes well the undoped, insulating
state at half filling \cite{VL91} (see also next Section).
A realistic picture of the spin wave excitations is obtained by adding
harmonic fluctuations by means of the time dependent Hartree Fock
approximation (RPA)\cite{GL92}. At intermediate and large
values of $U/t$, the most stable HF solution with a single hole is
a spin polaron\cite{VL91,LC93a}.
In this solution, approximately half of the charge of the hole
is located at a given site.
The spin at that site is small and it is reversed with
respect to the antiferromagnetic background. The remaining charge
is concentrated in the four neighboring sites.
A number of alternative derivations lead to a similar picture
of this small spin bag\cite{Hi87,KS90,DS90,Au94}.
A similar solution is expected to exist in the $t-J$ model.
A schematic picture of the initial one hole and two holes Hartree Fock
wavefunctions used in this work is shown in Fig. \ref{spins}.
They represent the solutions observed at large
values of $U/t$ for the isolated polaron and two spin polarons
on neighboring sites. The electronic spectrum of these configurations
show localized states which split from the top of the valence band.
As usual in mean field theories, the UHF solutions for an arbitrary
number of holes \cite{VL91}, such as the spin polaron solution described
above, break symmetries which must be restored by quantum fluctuations.
In particular, it breaks spin symmetry and translational
invariance (see Fig. \ref{spins}).
Spin isotropy must exist in finite clusters. However, it
is spontaneously broken in the thermodynamic limit, due to the
presence of the antiferromagnetic background. Hence, we do not expect
that the lack of spin invariance is a serious drawback of the
Hartree Fock solutions (this point is analyzed,
in some detail in\cite{GL92}).
Results obtained for small clusters \cite{LC93b,LG92} show a slight
improvement of the energy,
which goes to zero as the cluster size is increased.
On the other hand, translational invariance is expected
to be present in the exact solution of clusters of any size.
The way we restore translational invariance is discussed in the
following subsection. Finally we know how to estimate the effects
due to zero point fluctuations around the UHF ground state \cite{GL92}.
For spin polarons these corrections do not change appreciably
the results, although they are necessary to describe the long
range magnon cloud around the spin polaron \cite{RH97}.
\subsection{Configuration Interaction (CI) method}
We have improved the mean field results by following the procedure
suggested years ago by some of us \cite{LC93a}.
We hybridize a given spin UHF solution with all
wavefunctions obtained from it by lattice translations.
In the case of two or more holes point symmetry has also to be restored.
This is accomplished by applying rotations to the chosen configuration.
Configurations generated from a given one through this procedure
are degenerate in energy and interact strongly. Here we have also
investigated the effect of extending the basis by including other
configurations having different energies.
In all cases we include sets of wavefunctions with the lattice symmetry
restored as mentioned.
In a path integral formulation, this procedure would be equivalent
to calculating the contribution from instantons which visit
different minima.
On the other hand, it is equivalent to the Configuration Interaction
(CI) method used in quantum chemistry.
The CI wavefunction for a solution
corresponding to $N_e$ electrons is then written as
\begin{equation}
\Psi(N_e) = \sum_i a_i \Phi^i(N_e)\;,
\end{equation}
\noindent where the set $\Phi^i(N_e)$ is formed by some chosen UHF
wavefunctions (Slater determinants) plus those obtained from them by all
lattice translations and rotations. The coefficients $a_i$ are obtained
through diagonalization of the exact Hamiltonian.
The same method, using homogeneous paramagnetic solutions
as starting point, has been used in \cite{FL97}.
The wavefunctions forming this basis set are not, in principle,
orthogonal. Thus,
both the overlaps between wavefunctions and the non--diagonal matrix
elements of the Hamiltonian need to be taken into account when mixing
between configurations is considered.
If only configurations having the same energy and corresponding, thus,
to the same UHF Hamiltonian, are included,
a physically sound decomposition of the exact Hamiltonian
is the following \cite{LC93b},
\begin{equation}
H=H^{\rm UHF} + C -\sum_{\sigma} X^{\sigma} +U\sum_i \langle n_{i\uparrow}
\rangle\langle n_{i\downarrow}\rangle\;,
\end{equation}
\noindent In writing the matrix elements of this Hamiltonian
we should note that the basis formed by the wavefunctions $\Phi_i$
is not orthogonal. Then, we obtain,
\begin{equation}
H_{ij} =\left(E^{\rm UHF}+U\sum_i\langle n_{i\uparrow}
\rangle\langle n_{i\downarrow}\rangle\right)S_{ij}+C_{ij}-
\sum_{\sigma}X_{ij}^{\sigma}S_{ij}^{\bar \sigma}
\end{equation}
\noindent where $E^{\rm UHF}$ is the UHF energy of a given mean field
solution, and the matrix elements of the overlap $S$ are given by
\begin{equation}
S_{ij}=\langle\Phi^i(N_e)|\Phi^j(N_e)\rangle=S_{ij}^
{\uparrow}S_{ij}^{\downarrow}
\end{equation}
This factorization is a consequence of the characteristics
of the mean field solutions considered in this work (only one
component of the spin different from zero). The specific expression for
the matrix elements of the overlap is,
\begin{equation}
S_{ij}^{\sigma}=\left |
\begin{array}{ccc}
<\phi_1^{i\sigma}|\phi_1^{j\sigma}> & ... & <\phi_1^{i\sigma}|
\phi_{N_{\sigma}}^{j\sigma}> \\
... & ... & ... \\
<\phi_{N_{\sigma}}^{i\sigma}|\phi_1^{j\sigma}> & ... &
<\phi_{N_{\sigma}}^{i\sigma}|\phi_{N_{\sigma}}^{j\sigma}>
\end{array}
\right |
\end{equation}
\noindent where the number of particles for each component of the spin is
determined from the usual conditions, $N_{\uparrow}+N_{\downarrow}=N_e$
and $N_{\uparrow}-N_{\downarrow}=2S_z$. The $\phi^{i\sigma}_{n}$
are the monoelectronic wavefunctions corresponding to the Slater
determinant $i$,
\begin{equation}
|\phi^{i\sigma}_{n}\rangle=\sum_k\alpha^{i\sigma}_{nk}c^{\dagger}_
{k\sigma}|0\rangle,
\end{equation}
\noindent $\alpha^{i\sigma}_{nk}$ being real coefficients obtained through
diagonalization of the $H^{\rm UHF}$ Hamiltonian. The matrix element
of the exchange operator between Slater determinants $i$ and $j$ is,
\begin{equation}
X_{ij}^{\sigma}=\left |
\begin{array}{ccc}
<\phi_1^{i\sigma}|X^{\sigma}\phi_1^{j\sigma}> & ... & <\phi_1^{i\sigma}|
\phi_{N_{\sigma}}^{j\sigma}> \\
... & ... & ... \\
<\phi_{N_{\sigma}}^{i\sigma}|X^{\sigma}\phi_1^{j\sigma}> & ... &
<\phi_{N_{\sigma}}^{i\sigma}|\phi_{N_{\sigma}}^{j\sigma}>
\end{array}
\right |+ ...+\left|
\begin{array}{ccc}
<\phi_1^{i\sigma}|\phi_1^{j\sigma}> & ... & <\phi_1^{i\sigma}|X^{\sigma}
\phi_{N_{\sigma}}^{j\sigma}> \\
... & ... & ... \\
<\phi_{N_{\sigma}}^{i\sigma}|\phi_1^{j\sigma}> & ... &
<\phi_{N_{\sigma}}^{i\sigma}|X^{\sigma}\phi_{N_{\sigma}}^{j\sigma}>
\end{array}
\right |
\end{equation}
\noindent where the matrix elements of $X^{\sigma}$
between monoelectronic wavefunctions are given by,
\begin{equation}
<\phi_n^{i\sigma}|X^{\sigma}\phi_m^{j\sigma}>=U\sum_k
\alpha^{i\sigma}_{nk}\alpha^{j\sigma}_{mk}\langle n_{k{\bar \sigma}}
\rangle
\end{equation}
On the other hand the matrix elements of $C$ are,
\begin{equation}
C_{ij} = U\sum_k \left(n_{k\uparrow}\right)_{ij}
\left(n_{k\downarrow}\right)_{ij}
\end{equation}
where each $\left(n_{k\sigma}\right)_{ij}$ is given by an equation
similar to Eq. (10). The matrix elements of the density operator
between monoelectronic wavefunctions are,
\begin{equation}
<\phi_n^{i\sigma}|n_{k\sigma}\phi_m^{j\sigma}>=
\alpha^{i\sigma}_{nk}\alpha^{j\sigma}_{mk}
\end{equation}
If the CI basis includes wavefunctions having different UHF energies,
the above procedure is not valid and one should calculate the matrix
elements of the original exact Hamiltonian. Although this in fact
reduces to calculating the matrix elements of the kinetic energy
operator $T$, the procedure is slightly more costly (in terms of computer
time) than the one described above. The matrix elements of the
exact Hamiltonian in the basis of Slater determinants are,
\begin{equation}
H_{ij}=\sum_{\sigma}T_{ij}^{\sigma}S_{ij}^{\bar \sigma}+C_{ij}
\end{equation}
\noindent where $T_{ij}^{\sigma}$ are given by an equation similar
to Eq. (10), and the matrix elements of the kinetic energy operator
between monoelectronic wavefunctions are
\begin{equation}
<\phi_n^{i\sigma}|T^{\sigma}\phi_m^{j\sigma}\rangle=
-t\sum_{<kl>}\alpha^{i\sigma}_{nk}\alpha^{j\sigma}_{ml}
\end{equation}
The matrix elements involved in the calculation of hole--hole
correlations for a given CI wavefunction are similar to those
that appeared in the computation of the Hubbard term.
In particular the following expectation value has to be computed,
\begin{equation}
\langle\Psi|(1-n_k)(1-n_l)|\Psi\rangle=
\sum_{ij}a_ia_j\langle\Phi_i|(1-n_k)(1-n_l)|\Phi_j\rangle
\end{equation}
\noindent where $n_k=n_{k\uparrow}+n_{k\downarrow}$. Terms of four
operators in this expectation value are similar to Eq. (12). Those
requiring more computer time involve
$n_{k\uparrow}n_{l\uparrow}$ with $k\ne l$,
\begin{equation}
\left( n_{k\uparrow}n_{l\uparrow} \right)_{ij} = \left |
\begin{array}{cccc}
<\phi_1^{i\sigma}|n_{k\uparrow}\phi_1^{j\sigma}> &
<\phi_1^{i\sigma}|n_{l\uparrow}\phi_2^{j\sigma}> &
... & <\phi_1^{i\sigma}|\phi_{N_{\sigma}}^{j\sigma}> \\
... & ... & ... & ... \\
<\phi_{N_{\sigma}}^{i\sigma}|n_{k\uparrow}\phi_1^{j\sigma}> &
<\phi_{N_{\sigma}}^{i\sigma}|n_{l\uparrow}\phi_2^{j\sigma}> &
... & <\phi_{N_{\sigma}}^{i\sigma}|\phi_{N_{\sigma}}^{j\sigma}>
\end{array}
\right |+ {\rm permutations}
\end{equation}
\subsection{Numerical Calculations}
Calculations have been carried out on $L \times L$ clusters with
periodic boundary conditions ($L \leq 12$) and $U = 8-5000t$. Some results for
lower values of $U$ are also presented.
Note that $U=8t$ is widely accepted as the most physically meaningful
value of Coulomb repulsion in these systems (see for instance \cite{WS97}).
Although larger clusters can be easily reached, no improvement
of the results is achieved due to the short--range character
of the interactions (see below).
The numerical procedure runs as follows. Localized UHF solutions are first
obtained and the Slater determinants for a given filling constructed.
The full CI basis set is obtained by applying all lattice translations
to the chosen localized UHF Slater determinants, all having the same $z$
component of the spin, $S_z$.
Then we calculate the matrix elements of the overlap and of the Hamiltonian
in that basis set. This is by far the most time consuming part of the
whole calculation. Diagonalization is carried out by means of standard
subroutines for non--orthogonal bases. The state of lowest energy
corresponds to the CI ground state of the system for a given $S_z$.
The desired expectation values are calculated by means of this ground state
wavefunction. The procedure is variational and, thus, successive
enlargements of the basis set do always improve the description
of the ground state.
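The core numerical ingredient of the procedure above is the overlap between non-orthogonal Slater determinants, computed as a determinant of one-electron overlaps as in Eqs. (8)-(9). A minimal sketch, using random orthonormal orbitals purely for illustration:

```python
import numpy as np

# Overlap between two Slater determinants built from orthonormal
# one-electron orbitals: S_ij = det(A_i^T A_j) per spin channel.
rng = np.random.default_rng(0)

def orbitals(n_sites, n_el):
    """Random orthonormal set of n_el orbitals on n_sites sites."""
    q, _ = np.linalg.qr(rng.standard_normal((n_sites, n_el)))
    return q

def overlap(a, b):
    """<Phi_a|Phi_b> for one spin channel (columns = occupied orbitals)."""
    return np.linalg.det(a.T @ b)

a = orbitals(8, 3)
b = orbitals(8, 3)
print(overlap(a, a))   # self-overlap of an orthonormal determinant is 1
print(overlap(a, b))   # generic overlap, magnitude at most 1
```

The matrix elements $X_{ij}^\sigma$, $T_{ij}^\sigma$ and $(n_{k\sigma})_{ij}$ of Eqs. (10)-(17) follow the same determinant structure, with one column at a time replaced by the operator-acted orbital.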
\section{The limit of large $U$ in the undoped case.}
The Hartree Fock scheme for the undoped Hubbard model on a square lattice
gives an antiferromagnetic ground state with a charge gap.
At large values of $U/t$, the gap is of order $U$. The simplest
correction beyond Hartree-Fock, the RPA approximation, leads to
a continuum of spin waves at low energies\cite{GL92}.
Thus, the qualitative features
of the solution are in good agreement with the expected properties
of an antiferromagnetic insulator.
There is, however, a great deal
of controversy regarding the adequacy of mean field techniques
in describing a Mott insulator\cite{La97,AB97}.
In principle, the Hubbard model, in the large $U$ limit,
should describe well such a system. At half filling and large $U$,
the only low energy degrees of freedom of the Hubbard model
are the localized spins, which interact antiferromagnetically,
with coupling $J = \frac{4 t^2}{U}$. It has been argued that,
as long range magnetic order is not relevant for the existence
of the Mott insulator, spin systems with a spin gap are the most
generic realization of this phase. A spin gap is often associated
with the formation of an RVB like state,
which cannot be adiabatically connected to the Hartree Fock
solution of the Hubbard model. So far, the best
examples showing these features are two leg spin 1/2 ladders\cite{DR96}.
Recent work\cite{CK98} indicates that, in the presence of
magnetic order, the metal insulator transition is of the Slater
type, that is, coherent quasiparticles can always be defined
in the metallic side. These results seem to favor the scenario
suggested in\cite{La97}, and lend support to our mean field
plus corrections approach.
Without entering into the full polemic outlined above, we now show
that the method used here gives, in full detail, the results which
can be obtained from the Heisenberg model by expanding around the
antiferromagnetic mean field solution\cite{CK98}. Such an expansion
gives a consistent picture of the physics of the Heisenberg model
in a square lattice.
The ground state energy of the Hartree Fock solution in a $4 \times 4$
cluster is compared to the exact value\cite{FO90} at large values of $U$
in Table I. The corresponding Heisenberg model is:
\begin{equation}
H_{\rm Heis} = \frac{4 t^2}{U} \sum_{ij} {\bf \vec{S}}_i {\bf \vec{S}}_j
- \frac{t^2}{U} \sum_{ij} n_i n_j
\label{Heisenberg}
\end{equation}
In a $4 \times 4$ cluster, the exact ground state energy is
\begin{equation}
E_{\rm Heis} = - 16 ( c + 0.5 )\frac{4 t^2}{U}
\end{equation}
\noindent where $c = 0.702$ \cite{Sa97}, in good
agreement with the results for the Hubbard model. The mean field energy
can be parametrized in the same way, except that $c = 0.5$. This is the
result that one expects for the mean field solution of the
Heisenberg model, which is given by a staggered configuration of static
spins. This solution can be viewed as the ground state of an anisotropic
Heisenberg model with $J_z = J$ and $J_{\pm} = 0$.
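The parametrization $E = -16\,(c + 0.5)\,4t^2/U$ is easy to check numerically (taking $t = 1$, $U = 50$ as in the comparison below):

```python
# Ground-state energy of the 4x4 Heisenberg cluster,
# E = -16 (c + 0.5) * 4 t^2 / U, with c = 0.702 (exact, Ref. [Sa97])
# and c = 0.5 (static-spin mean field).
t, U = 1.0, 50.0
E_exact = -16 * (0.702 + 0.5) * 4 * t**2 / U
E_mf    = -16 * (0.5   + 0.5) * 4 * t**2 / U
print(f"exact: {E_exact:.3f} t,  mean field: {E_mf:.3f} t")
```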
We now analyze corrections to the Hartree Fock solution by hybridizing it
with mean field wavefunctions obtained from it by flipping two
neighboring spins (hereafter referred to as sf). These solutions are
local extrema of the mean field solutions in the large $U$ limit.
In Table I we show the energy difference between
these states and the antiferromagnetic (AF) Hartree Fock ground state,
and their
overlap and matrix element also with the ground state. We have checked
that these are the only wavefunctions with a non negligible mixing with
the ground state. The overlap goes rapidly to zero, and the energy difference
and matrix elements adjust well to the expressions
\begin{mathletters}
\begin{equation}
\Delta E_{\rm AF,sf} = E_{\rm AF}-E_{\rm sf} = \frac{12 t^2}{U}
\end{equation}
\begin{equation}
t_{\rm AF,sf} = \frac{2 t^2}{U}\;.
\end{equation}
\end{mathletters}
These are the results that one obtains when proceeding from
the Heisenberg model. These values, inserted in a perturbative analysis of
quantum corrections to the ground state energy of the Heisenberg
model\cite{CK98}, lead to excellent agreement with exact results
(see also below).
As already pointed out, in the CI calculation of the ground state energy
we only include the mean field
wavefunctions with two neighboring spins flipped.
Restoring point symmetry gives
a total of 4 configurations, while applying lattice translations
leads to a set of $4L^2/2$ configurations (remember that
configurations on different sublattices do not interact) to which
the AF wavefunction has to be added. In the case
of the $4 \times 4$ cluster the set has a total of 33 configurations. The
CI energy for this cluster is given in Table I along with the exact
and the UHF energies. It is noted that the CI calculation reduces
the difference between the exact and the mean field result by 50\%.
Improving this result would require including a very large basis set,
as other configurations decrease the ground state energy only
very slightly.
In the large $U$ limit, the largest interaction is $t_{\rm AF,sf}$. Then,
neglecting the overlap between the AF and the sf mean field solutions,
the CI energy of the ground state can be approximated by,
\begin{equation}
E_{\rm CI}=\frac{1}{2}\left[ E_{\rm AF}+E_{\rm sf} - \Delta E_{\rm AF,sf}
\sqrt{1+\frac{8L^2t_{\rm AF,sf}^2}{(\Delta E_{\rm AF,sf})^2}}\right]
\end{equation}
\noindent For $U=50$ this expression gives $E_{\rm CI}$=-1.421,
in excellent agreement with the CI result given in Table I.
Note that a perturbative calculation of the corrections of the
ground state energy in finite clusters is somewhat tricky,
as the matrix element
scales with $\sqrt{N_s}$, where $N_s=L^2$ is the number of sites in the cluster,
while the energy difference is independent of $N_s$. The first term
in a perturbative expansion (pe) coincides with the first term in the
expansion of the square root in Eq. (21),
\begin{equation}
E_{\rm pe}=E_{\rm AF}-\frac{2L^2t_{\rm AF,sf}^2}{\Delta E_{\rm AF,sf}}
\end{equation}
\noindent in agreement with the result reported in \cite{CK98}. Although
this correction to the AF energy has the expected size dependence for
an extensive magnitude (it is
proportional to the number of sites $N_s$) and gives an energy
already very similar to the exact, it was obtained by inconsistently
expanding in terms of a parameter that can be quite large. For instance
in the $4 \times 4$ cluster with $U$=50, $E_{\rm pe}\approx -1.51$, close
to the exact result (Table I), while
$(8L^2t_{\rm AF,sf}^2)/(\Delta E_{\rm AF,sf})^2 \approx 3.9$ is
much larger than 1. Thus, perturbation theory
is doomed to fail even for rather small clusters.
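These magnitudes can be checked numerically. The following sketch (Python; our own illustration, in units of $t=1$, using the asymptotic Heisenberg estimates $\Delta E_{\rm AF,sf}=12t^2/U$ and $t_{\rm AF,sf}=2t^2/U$ rather than the actual numerical matrix elements) evaluates the CI and perturbative corrections and the expansion parameter for the $4\times 4$ cluster at $U=50$:

```python
import math

def ci_correction(dE, tm, L):
    """E_CI - E_AF from the two-level CI expression
    (AF state coupled to 2 L^2 degenerate sf configurations)."""
    return 0.5 * dE * (1.0 - math.sqrt(1.0 + 8.0 * L**2 * tm**2 / dE**2))

def pe_correction(dE, tm, L):
    """E_pe - E_AF, the second-order perturbative estimate."""
    return -2.0 * L**2 * tm**2 / dE

# Asymptotic Heisenberg estimates quoted above (units of t = 1)
U, L = 50.0, 4
dE, tm = 12.0 / U, 2.0 / U

x = 8 * L**2 * tm**2 / dE**2   # expansion parameter
print(x)                        # ~3.6 for the 4x4 cluster: larger than 1
print(ci_correction(dE, tm, L), pe_correction(dE, tm, L))
```

With these asymptotic estimates the expansion parameter reduces to $2L^2/9$ (of the same size as the value quoted above, which uses the actual matrix elements): it is independent of $U$ and grows with the cluster size, which is precisely why the perturbative expansion breaks down for large clusters.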
On the other hand, the CI calculation described above introduces a
correction to the AF energy which does not have the correct size dependence.
This can be easily checked in the large cluster limit in which
the CI energy can be approximated by
\begin{equation}
E_{\rm CI}\approx \frac{1}{2}\left( E_{\rm AF}+E_{\rm sf}\right) -
\sqrt{2}L t_{\rm AF,sf}
\end{equation}
\noindent while the correct expression should scale as $N_s$,
because, in large clusters, the difference between the exact and the
Hartree-Fock ground state energies must be proportional to $N_s$, irrespective
of the adequacy of the Hartree-Fock approximation.
Thus, one obtains a better approximation to the ground state
energy in the thermodynamic limit by using
perturbative calculations in small clusters and extrapolating
them to large clusters, as in the related $t-J$ model\cite{CK98}.
In any case, the problem outlined here does not appear when
calculating corrections to localized spin textures, such as the
one and two spin polarons analyzed in the next sections.
The relevant properties are associated with the size of the texture,
and do not scale with the size of the cluster they are embedded in.
From the previous analysis, we can safely conclude that our scheme
gives a reliable approximation to the undoped Hubbard model in a square
lattice in the strong coupling regime, $U/t \gg 1$. We cannot conclude
whether the method is adequate or not for the study of models which exhibit
a spin gap. It should be noted, however, that a spin gap need not
be related only to RVB-like ground states. A spin system modelled
by the non linear sigma model can also exhibit a gap in the
ground state, if quantum fluctuations, due to dimensionality or
frustration, are sufficiently large. In this case, a mean field approach
plus leading quantum corrections should be qualitatively correct.
\section{Comparison with Exact Results for $4 \times 4$ clusters
with two holes}
In order to evaluate the performance of our approach we have calculated
the ground state energy of two holes in the $4 \times 4$ cluster
and compared the results with those obtained by means of the Lanczos method
\cite{FO90}. The results are reported in Tables II--IV, where the energies
for one hole are also given for the sake of
completeness (a full discussion of this case can be found
in \cite{LC93a}; see also below).
In the case of one hole the standard spin polaron solution (Fig. \ref{spins})
plus those derived from it through lattice translations form the basis
set. For two holes we consider solutions with $S_z=0$ or 1. In the first
case we include either the
configuration having the two holes at the shortest distance, i.e.,
separated by a (1,0) vector and/or at the largest distance possible, that
is separated by a (2,1) vector, and those obtained from them
through rotations. The basis used for the two polarons at the shortest
distance is shown in Fig. \ref{bipolaronCI}. The set of these four
configurations has the proper point symmetry.
Again, lattice translations are applied to these configurations to
produce the basis set with full translational symmetry.
On the other hand, wavefunctions with $S_z=1$ can be constructed by including
configurations with the two holes separated by vectors (1,1) and/or (2,2).
The results for the energies of wavefunctions with $S_z=0$ for several
values of the interaction parameter $U$ are reported in Tables II and III.
As found for a single hole, the kinetic energy included by restoring
the lattice symmetry improves the wavefunction energies \cite{LC93a}.
The improvement in the energy is larger for intermediate $U$.
For instance for $U=32$ a 10\% gain is noted.
Within UHF, the solution with the holes at the largest distance
is more favorable for $U > 8t$. In contrast, restoring the translational
and point symmetries favors the solution with the holes at neighboring
sites for all $U$ shown in the Tables. The results also indicate that
the correction introduced by this procedure does not vanish with $U$.
A more detailed discussion
of the physical basis of this result along with results for larger values
of $U$ and larger clusters will be presented in the following Section.
Moreover,
the energies get closer to the exact ones (see Table II).
A further improvement in the energy is obtained by including
both UHF configurations, namely, \{1,0\} and \{2,1\}. This
improvement is larger for
intermediate $U$ and vanishes as $U$ increases (Table III).
Other configurations, such as that proposed in \cite{RD97} in which the two
holes lie on neighboring sites along a diagonal and a neighboring spin
is flipped, may contribute to further improve the CI energy of
the ground state.
It is interesting to compare these results with those corresponding
to wavefunctions with $S_z=1$ also reported in Table IV. It is noted that
for $U=6-16$ the energy of the solution including all configurations
from the set \{1,1\} is smaller than those obtained with all configurations
from either the set \{1,0\} or the set \{2,1\}. However, the wavefunction
constructed with all configurations from the last two sets is
more favorable than the best wavefunction with $S_z=1$. The latter is
in agreement with exact calculations \cite{FO90} which obtained
a ground state wavefunction with $S_z=0$.
\section{Results}
\subsection{Single Polaron}
Here we only consider the quasiparticle band structure associated
with the single polaron; the energy gain induced through the restoration
of translational symmetry has been considered elsewhere \cite{LC93a}.
The calculated dispersion band of a single polaron is shown in
Fig. \ref{polaron}. Because of the antiferromagnetic background,
the band has twice the lattice periodicity.
Exact calculations in finite clusters do not show
this periodicity, as the solutions have a well defined spin
and mix different background textures. As cluster sizes are increased,
however, exact solutions tend to show the extra periodicity
of our results. We interpret this as a manifestation
that spin invariance is broken in the thermodynamic
limit, because of the antiferromagnetic background.
Hence, the lack of this symmetry in our calculations
should not induce spurious effects.
Fig. \ref{polaron} shows the polaron bandwidth
as a function of $U$. It behaves as $t^2/U$, the fitted law being
\begin{equation}
E_{\rm BW}=-0.022 t + 11.11 \frac{t^2}{U}
\end{equation}
\noindent This result indicates that the bandwidth tends to zero
as $U$ approaches infinity, as observed in the results
for the energy gain reported in \cite{LC93a}.
Our scheme allows a straightforward explanation of
this scaling. Without reversing the spin of the whole background,
the polaron can only hop within a given sublattice.
This implies an intermediate virtual hop into a site with
an almost fully localized electron of the opposite spin.
The amplitude of finding a
reversed spin in this new site decays as $t^2/U$ at large $U$.
On the other hand, we find that the dispersion relation
can be satisfactorily fitted by the expression:
\begin{eqnarray}
\epsilon_{\bf k} = \epsilon_0 + 4 t_{11} \cos (k_x ) \cos ( k_y )
+ 2 t_{20} [ \cos ( 2 k_x ) + \cos ( 2 k_y ) ]
+ 4t_{22} \cos ( 2 k_x ) \cos ( 2 k_y ) +\nonumber \\
4 t_{31}[ \cos ( 3 k_x) \cos ( k_y )+ \cos ( k_x ) \cos ( 3 k_y )].
\end{eqnarray}
\noindent For $U = 8 t$, we
get $t_{11} = 0.1899 t$ , $t_{20} = 0.0873 t$, $t_{22} = -0.0136 t$,
and $t_{31} = -0.0087 t$. All hopping integrals vanish as $t^2/U$
in the large $U$ limit for the reason given above.
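As a quick consistency check, one can verify numerically that the fitted dispersion has the doubled (magnetic) periodicity and that its minimum falls at $(\pi/2,\pi/2)$, as expected for a hole in an antiferromagnetic background. The sketch below (Python; our own illustration, with $t=1$ and the constant $\epsilon_0$ dropped) uses the hopping integrals quoted above for $U=8t$:

```python
import math

# Fitted hopping integrals at U = 8t (t = 1); the constant e0 is dropped
t11, t20, t22, t31 = 0.1899, 0.0873, -0.0136, -0.0087

def eps(kx, ky):
    return (4*t11*math.cos(kx)*math.cos(ky)
            + 2*t20*(math.cos(2*kx) + math.cos(2*ky))
            + 4*t22*math.cos(2*kx)*math.cos(2*ky)
            + 4*t31*(math.cos(3*kx)*math.cos(ky)
                     + math.cos(kx)*math.cos(3*ky)))

pi = math.pi
# doubled periodicity: shifting k by (pi, pi) leaves eps unchanged
print(abs(eps(0.3, 1.1) - eps(0.3 + pi, 1.1 + pi)))  # ~0
# with these values the band minimum lies at (pi/2, pi/2)
print(eps(pi/2, pi/2), eps(0.0, 0.0), eps(pi, 0.0))
```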
Also the energy gain with respect to UHF \cite{LC93b}
behaves in this way. All
these features are in good agreement with known
results\cite{BS94,PL95,DN94,LG95} for both
the Hubbard and the $t-J$ models.
\subsection{Two Holes}
We now consider solutions with two spin polarons.
The relevant UHF solutions are those with $S_z = 0$ (solutions with
$S_z=1$ will also be briefly considered).
In order for the coupling to be finite, the centers of the
two spin polarons must be located in different sublattices.
The mean field energy increases as the two polarons
are brought closer, although, for intermediate and large
values of $U$, a locally stable Hartree Fock solution can be
found with two polarons at arbitrary distances. We have not attempted
to do a full CI analysis of all possible combinations of two holes in a finite
cluster. Instead, we have chosen a given mean field solution (UHF)
and hybridized it with all others obtained by all lattice translations
and rotations. Some results of calculations in which more than one
UHF solution is included will also be presented. Clusters of sizes up to
$10 \times 10$ were studied,
which, as in the case of the polaron, are large enough due to the
short-range interactions between different configurations.
The basis used for the two polarons at the shortest distance
is shown in Fig. \ref{bipolaronCI}.
This procedure leads to a set of bands, whose number depends on
the number of configurations included in the CI calculation.
For instance if the basis set of Fig. \ref{bipolaronCI} is used four
bands are obtained (see also below).
As in the single polaron case,
we obtain a gain in energy (with respect to UHF), due to the delocalization
of the pair. The numerical results for $L$=6, 8 and 10 and $U$
in the range $8t-5000t$ are shown in the inset of Fig. \ref{difference}.
They can be fitted by the following straight lines,
\begin{mathletters}
\begin{equation}
E_{\rm G}^{{1,0}}=0.495 t + 1.53 \frac{t^2}{U}
\end{equation}
\begin{equation}
E_{\rm G}=-0.002 t + 3.78 \frac{t^2}{U}
\end{equation}
\end{mathletters}
\noindent where the first expression corresponds to holes at the shortest
distance and the second to holes at the largest distance.
Note that, whereas in the case of the holes at the largest distance,
the gain goes to zero
in the large $U$ limit, as for the isolated polaron, when
the holes are separated by a $\{1,0\}$ vector the gain goes to a
finite value. This result is not surprising, as the following arguments
suggest. The hopping terms in the bipolaron calculation,
that are proportional to $t$ at large $U$,
describe the rotation of a pair around the position of one of
the two holes. Each hole is spread between four sites.
In order for a rotation to take place, one hole has to jump
from one of these sites into one of the rotated positions.
This process can always take place without a hole moving into a
fully polarized site with the wrong spin. There is a gain
in energy, even when $U/t \rightarrow \infty$. In the single
polaron case, the motion of a hole involves the inversion of, at least, one
spin, which is fully polarized in the large $U$ limit. Because of this,
hybridization gives a vanishing contribution to the energy as $U/t
\rightarrow \infty$.
The results discussed above are in line with those for the width of the
quasiparticle band. The numerical results can be fitted by,
\begin{mathletters}
\begin{equation}
E_{\rm BW}^{{1,0}}=3.965 t + 14.47 \frac{t^2}{U}
\end{equation}
\begin{equation}
E_{\rm BW}=-0.007 t + 10.1 \frac{t^2}{U}
\end{equation}
\end{mathletters}
Thus, the total bandwidth of the two bands obtained for holes in neighboring
sites does not vanish in the infinite $U$ limit (as the energy gain reported
in Fig. \ref{bipolaronCI}). The internal consistency of
our calculations is shown by comparing the large $U$ behavior
of the two holes at the largest distance possible with the corresponding
results obtained for the isolated polaron (compare this fit with
that given in Eq. (24)).
The hole--hole interaction, i.e., the difference
between the energy of a state built up by all configurations with the
two holes at the shortest distance (separated by a vector of the set
\{1,0\}) and the energy of the state having the holes at the largest
distance possible in a given cluster is depicted in Fig. \ref{difference}.
Two holes bind for intermediate values of $U$ \cite{ferro}.
This happens because the delocalization energy tends to be higher
than the repulsive contribution obtained within mean field.
The local character of the interactions is
illustrated by the almost null dependence of the results
shown in Fig. \ref{difference} on the cluster size.
The only numerical calculations which discuss the binding of holes
in the Hubbard model are those reported in \cite{FO90}. Energetically,
it is favorable to pair holes in a $4 \times 4$ cluster for
values of $U/t$ greater than 50. The analysis of the correlation functions
suggests that pairing in real space ceases at $U/t \sim 16$,
leading to the suspicion of large finite size effects.
Our results indicate that pairing disappears at $U/t \approx 40$,
which is consistent with the analysis in \cite{FO90}.
Similar calculations for the t-J model give binding between
holes for $J/t \ge 0.1$\cite{PR94}. Taking $J = \frac{4 t^2}{U}$,
this threshold agrees well with our results.
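This correspondence is a one-line check; a minimal sketch (Python, $t=1$):

```python
# Translating the t-J binding threshold J_c/t = 0.1 of the cited work
# into a Hubbard coupling through J = 4 t^2 / U
t = 1.0
J_c = 0.1 * t
U_c = 4 * t**2 / J_c
print(U_c)  # -> 40, close to the U/t ~ 40 where pairing disappears here
```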
In order to clarify some aspects of the method, we have
carried out a more detailed analysis of two hole solutions in
$6 \times 6$ clusters. The results are presented in Table V.
Within UHF the most favorable solution is that with the two
holes at the largest distance (2,3), except for the smallest $U$
shown in the Table. The solution with the holes at the shortest
distance (1,0) is favored only at small $U$, while for $U\geq 8$
even the solution with $S_z=1$ has a lower energy.
In contrast, when the lattice symmetry is restored the solution
with the holes at the shortest distance is the best for all $U$
except $U=200$. For
such a large $U$ the wavefunction constructed with all configurations
from \{2,3\} has the lowest energy. The solution with $S_z=1$ is
unfavorable for all $U$ shown in Table V, in contrast with
the results found in the $4 \times 4$ cluster, indicating that
size effects were decisive in the results for the smaller cluster.
Including all configurations with $S_z$ either 1 or 0 does not change
this trend. The small difference between the results
for \{1,0\} and those with
all configurations with $S_z=0$ for large $U$ is misleading. In fact,
the weight of the configuration with the holes at the largest
distance (2,3) in the final CI wavefunction increases with $U$.
This will be apparent in the hole--hole correlations
discussed in the following paragraph.
We have analysed the symmetry of the ground state wavefunction
$|\Psi\rangle$ obtained with all configurations having the holes
at the shortest distance. The numerical results for all $U$ show
that $\langle\Phi^1|\Psi\rangle=-\langle\Phi^2|\Psi\rangle=
\langle\Phi^3|\Psi\rangle=-\langle\Phi^4|\Psi\rangle$,
where the $|\Phi^i\rangle$ are the four configurations shown in Fig. 2.
This symmetry corresponds to the $d_{x^2-y^2}$ symmetry, in agreement with
previous theoretical studies of the Hubbard and $t-J$ models
\cite{Da94,CP92}.
The quasiparticle band structure for two holes has also been investigated.
The main interactions $t_i$ and the overlaps $s_i$
between the configurations are given in Table VI (the meaning of
the symbols is specified in Fig. \ref{bipolaronCI}).
The results correspond to a $6 \times 6$ cluster with the two holes
at the shortest distance. At finite $U$ many interactions
contribute to the band; in Table VI we only show the largest ones.
Of particular significance is the $t_3$ interaction, which accounts
for the simultaneous hopping of the two holes. This term, which clearly
favors pairing, vanishes in the infinite $U$ limit, in line
with the results for the hole--hole
interaction (see above) which indicate that pairing is not favored at
large $U$. Also in this limit $t_1=t_2$.
Including only the interactions given in Table VI, the bands can
be easily obtained from,
\begin{equation}
\left |
\begin{array}{cc}
E+2(s_1E-t_1)\cos k_x + 2(s_3E-t_3)\cos k_y &
(s_2E-t_2)(1+{\rm e}^{{\rm i}k_x})(1+{\rm e}^{-{\rm i}k_y}) \\
(s_2E-t_2)(1+{\rm e}^{-{\rm i}k_x})(1+{\rm e}^{{\rm i}k_y}) &
E+2(s_1E-t_1)\cos k_y + 2(s_3E-t_3)\cos k_x
\end{array}
\right |=0
\end{equation}
Neglecting the overlap, the bands are given by,
\begin{equation}
E({\bf k})=(t_1+t_3)(\cos k_x + \cos k_y) \pm
\sqrt{[(t_1-t_3)(\cos k_x - \cos k_y)]^2+4t_2^2
(1+\cos k_x)(1+\cos k_y)}
\end{equation}
In the infinite $U$ limit ($t_3=0$ and $|t_1|=|t_2|$) the bands are simply,
\begin{mathletters}
\begin{equation}
E_1({\bf k})=-2t_1
\end{equation}
\begin{equation}
E_2({\bf k})=2t_1(1+\cos k_x+ \cos k_y)
\end{equation}
\end{mathletters}
\noindent Note that, as in the single hole case and due to the
antiferromagnetic background, the bands have twice the lattice periodicity.
The dispersionless band has also been reported in \cite{CK98}
and, in our case, it is a consequence of the absence of two hole hopping
in the infinite $U$ limit ($t_3=0$).
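The band equation above can also be handled numerically as a $2\times 2$ generalized eigenvalue problem, which makes it easy to check the infinite-$U$ limit. The sketch below (Python/NumPy; our own illustration, where the $t_i$ and $s_i$ are free inputs rather than the Table VI values) reproduces the flat band at $-2t_1$:

```python
import numpy as np

def bands(kx, ky, t1, t2, t3, s1=0.0, s2=0.0, s3=0.0):
    """Solve det|E S(k) - H(k)| = 0 for the two configurations of the
    bipolaron basis; t_i and s_i are hopping integrals and overlaps."""
    g = (1 + np.exp(1j * kx)) * (1 + np.exp(-1j * ky))
    S = np.array([[1 + 2*s1*np.cos(kx) + 2*s3*np.cos(ky), s2 * g],
                  [np.conj(s2 * g), 1 + 2*s1*np.cos(ky) + 2*s3*np.cos(kx)]])
    H = np.array([[2*t1*np.cos(kx) + 2*t3*np.cos(ky), t2 * g],
                  [np.conj(t2 * g), 2*t1*np.cos(ky) + 2*t3*np.cos(kx)]])
    e = np.linalg.eigvals(np.linalg.solve(S, H))
    return np.sort(e.real)

# Infinite-U limit: t3 = 0 and |t1| = |t2| gives a flat band at -2 t1
t1 = 0.1
for kx, ky in [(0.0, 0.0), (0.7, 1.3), (np.pi, 0.2)]:
    print(bands(kx, ky, t1=t1, t2=t1, t3=0.0))

# Hypothetical finite-U parameters (illustrative only, not Table VI values)
print(bands(0.5, 0.9, t1=0.2, t2=0.15, t3=0.05, s1=0.02, s2=0.01, s3=0.01))
```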
Our results, however, disagree with the conclusions reached in \cite{CK98}
concerning the absence of hole attraction.
We find a finite attraction for holes at intermediate $U$'s. It is interesting
to note that our effective hopping is of order $t$, and not of order $t^2/U$
as in \cite{CK98}.
This effect is due to the delocalized nature of the single polaron texture
(5 sites, at least), and it does not correspond to a formally similar term
which can be derived from the mapping from the Hubbard to the
t-J model\cite{Tr88}.
The results for the hole--hole correlation,
$\langle ( 1 - n_i ) ( 1 - n_j ) \rangle$, as function of
the hole--hole distance $r_{ij}=|{\bf r}_i-{\bf r}_j|$ are
reported in Tables VII and VIII. The normalization
$\sum_j\langle ( 1 - n_i ) ( 1 - n_j ) \rangle = 1$ has been used. The results
correspond to CI wavefunctions with $S_z=0$ and were obtained including
all configurations from either the set \{1,0\} or from the sets
\{1,0\}, \{1,2\}, \{3,0\} and \{2,3\}. The results are in qualitative
agreement with those in \cite{FO90}. When comparing with
results obtained for the t-J model, one must take into account
that, in the Hubbard model, the hole--hole correlation, as defined
above, can take negative values (see Appendix). This is due to the
appearance
of configurations with double occupied sites, which are counted as
negative holes. Aside from this effect, our results describe
well a somewhat puzzling result found in the t-J model
(see for instance \cite{GM98,WS97,RD97}):
the maximum hole--hole correlation occurs when the two holes
are in the same sublattice, at a distance equal to $\sqrt{2}$
times the lattice spacing \cite{diagonal}.
This result follows directly from the
delocalized nature of the spin polarons, as seen in Fig. \ref{spins}.
The center of each spin polaron propagates through one sublattice only,
but the electron cloud has a finite weight in the other one,
even when $U/t \rightarrow \infty$. This effect is noted in all
cases except for $U=200$ with all configurations. In that case there is
no clear maximum and the correlations are appreciable even at rather
large distances. The reason for this behavior is that for
large $U$ the configuration with the holes at the largest distance, namely,
\{2,3\}, has the lowest energy and, thus, a large weight in the
CI wavefunction. This is consistent with the fact that no
attraction was observed at large $U$ (see Fig. \ref{difference}).
Finally we note that the slower decrease with distance of hole--hole
correlations, obtained for $U=8$ including configurations from the
four sets (Table VIII) may be a consequence of the decrease in the
difference between UHF and CI energies as $U$ diminishes (see Fig. 4).
\subsection{Four Holes}
An interesting question is whether the holes would tend to segregate
when more holes are added to the cluster. In order to investigate this
point, we have calculated total energies for four holes
on $10 \times 10$ clusters with the holes either centered on a square,
or located on two bipolarons separated by a (5,5) vector and with the holes
at the shortest distance. Two (four)
configurations (plus translations) were included in each case.
In the case of two bipolarons only configurations
in which the two bipolarons are rotated simultaneously are included.
Other possible configurations have different energies and contribute to a
lesser extent to the wavefunction. In any case, increasing the size of the basis
set would not have changed the essential conclusion of our analysis (see below).
The results for several values of $U$ are shown in Table IX.
We note that already at the UHF level the solution with two
separated bipolarons has a lower energy. The Coulomb repulsion term in
the Hamiltonian does not favor the configuration with the aggregated
holes except for very small $U$. Restoring the lattice symmetry decreases
the energy in both cases by an amount which in neither case vanishes
in the infinite $U$ limit. The decrease is slightly larger in the case
of the four holes on a square. This result can be understood by noting that
the holes move more freely (producing the smallest distortion to the AF
background) when the
charge is confined to the smallest region possible. In any case, this
is not enough to compensate the rather important difference in energy
between the two cases at the UHF level. These results indicate that for large
and intermediate $U$ no hole segregation takes place and
that the most likely configuration is that of separated bipolarons.
\subsection{Effective Hamiltonian for Hole Pairing}
As discussed above, in the large $U$ limit the bipolaron moves over the
whole cluster due to the interactions among the four mean field
wavefunctions of Fig.
\ref{bipolaronCI} (interactions $t_1$ and $t_2$ in Fig. \ref{bipolaronIN}).
This mechanism can be viewed as another manifestation of hole assisted
hopping.
The possibility of hole assisted hopping has been already considered
in \cite{Hi93}, although in a different context. It always leads
to superconductivity. In our case, we find a contribution,
in the large $U$ limit, of the type:
\begin{eqnarray}
{\cal H}_{hop} &= &\sum \Delta t {c^{\dag}}_{i,j;s} c_{i,j;s} (
{c^{\dag}}_{i+1,j;{\bar s}} c_{i,j+1;{\bar s}} +
\nonumber \\ &+ &{c^{\dag}}_{i-1,j;\bar{s}} c_{i+1,j;\bar{s}} + h. c. +{\rm
perm})
\label{hopping}
\end{eqnarray}
This term admits the BCS decoupling
$\Delta t \langle c^{\dag}_{i,j;s}
c^{\dag}_{i+1,j;{\bar s}} \rangle c_{i,j;s} c_{i,j+1;{\bar s}} +
h. c. + ...$.
It favors superconductivity with
either $s$ or $d$ wave symmetry, depending on the sign of $\Delta t$.
Since we find $\Delta t > 0$, $d$--wave symmetry follows.
\section{Concluding Remarks}
We have analyzed the leading corrections to the Hartree Fock solution
of the Hubbard model, with zero, one and two holes. We show that
a mean field approach gives a reasonable picture of the undoped system
for the entire range of values of $U/t$.
The main drawback of mean field solutions in doped systems is their
lack of translational invariance. We overcome this problem by using
the Configuration Interaction method. In the case of one hole, the
localized spin polaron is replaced by delocalized wavefunctions
with a well defined dispersion relation. The bandwidth, in the large $U$
limit, scales as $\frac{t^2}{U}$, and the solutions correspond to
spin polarons delocalized in a given sublattice only.
As in the undoped case, these results are in good agreement with
other numerical calculations, for the Hubbard and t-J models.
The same approach is used to analyze the interactions between
pairs of holes. We first obtain Hartree Fock solutions with two holes
at different separations. From comparing their respective energies, and
also with single hole solutions, we find a short range repulsive interaction
between holes. This picture is significantly changed
when building delocalized solutions.
The energy gain from the delocalization is enough to compensate the
static, mean field, repulsion. There is a net attractive interaction
for $8 \le U/t \le 50$, approximately. The correlations between
the holes which form this bound state are in good agreement with
exact calculations, when available. The state has $d_{x^2-y^2}$
symmetry. In this range of parameters, we find no evidence
of hole clustering into larger structures.
A further proof of the efficiency of the present CI approach results
from a comparison with the CI approach of \cite{FL97}, which is
based upon an extended basis (${\bf k}$--space). For a $6 \times 6$ cluster
and $U=4$ the UHF localized solution has an energy of -31.747.
A CI calculation including 36 localized configurations lowers the energy
down to -31.972. This has to be compared with the result reported in
\cite{FL97} obtained by means of 2027860 extended configurations,
namely, -30.471. The difference between the two approaches should
further increase for larger $U$.
We have not applied the same technique to other
Hartree Fock solutions which have been found extensively in the
Hubbard model: domain walls separating
antiferromagnetic regions \cite{VL91,ZG89,PR89,Sc90}.
The breakdown of translational
symmetry associated with these solutions is probably real and not
just an artifact of the Hartree Fock solution, as in the previous cases.
Hybridization of equivalent solutions can, however, stabilize
domain walls with a finite filling, which are not the
mean field solutions with the lowest energy.
Because of the qualitative differences between spin
polarons and domain walls, we expect a sharp transition between
the two at low values of $U/t$.
Note, however, that the
scheme presented here, based on mean field solutions plus
corrections, is equally valid in both cases.
\acknowledgments
Financial support from the CICYT, Spain, through grants PB96-0875,
PB96-0085, PB95-0069, is gratefully acknowledged.
\section{Appendix: Hole-hole Correlations in the Hydrogen Molecule}
Here we explicitly calculate the hole--hole correlations in the hydrogen
molecule described by means of the Hubbard model. Let us
call $a^{\dagger}_{\sigma}$ and $b^{\dagger}_{\sigma}$ the operators
that create a particle with spin $\sigma$ at sites $a$ and $b$
respectively. The ground state wavefunction has $S_z=0$ and is given by,
\begin{equation}
|\psi> =(2+\alpha^2)^{-1/2}\left(|\phi_1>+\alpha|\phi_2>+|\phi_3>\right )
\end{equation}
\noindent where,
\begin{mathletters}
\begin{equation}
|\phi_1> = a^{\dagger}_{\uparrow}b^{\dagger}_{\downarrow}|0>
\end{equation}
\begin{equation}
|\phi_2> = \frac{1}{\sqrt{2}}\left(
a^{\dagger}_{\uparrow}a^{\dagger}_{\downarrow}+
b^{\dagger}_{\uparrow}b^{\dagger}_{\downarrow}\right)|0>
\end{equation}
\begin{equation}
|\phi_3> = b^{\dagger}_{\uparrow}a^{\dagger}_{\downarrow}|0>
\end{equation}
\label{H2}
\end{mathletters}
\noindent with
\begin{equation}
\alpha=\frac{E}{\sqrt{2}},\;\;\;
E=\frac{U}{2}-\left(\frac{U^2}{4}+4\right)^{1/2}
\end{equation}
\noindent The wavefunctions in Eq. (\ref{H2}) are orthonormalized, namely,
$<\phi_i|\phi_j>= \delta_{ij}$. The result for the hole--hole correlations
on different sites is,
\begin{equation}
<\psi|(1-n_a)(1-n_b)|\psi>=-\frac{\alpha^2}{2+\alpha^2}
\end{equation}
\noindent As $\alpha=-\sqrt{2},0$ when $U=0,\infty$, this expectation
value varies from -0.5 to 0.0. Thus it can take negative values as found
in the case of clusters of the square lattice. Instead, the hole--hole
correlation on the same site is given by,
\begin{equation}
<\psi|(1-n_a)(1-n_a)|\psi>=\frac{\alpha^2}{2+\alpha^2}
\end{equation}
\noindent which is positive for all values of $U$. Particle--particle
correlations are obtained by adding 1 to these results.
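The expressions above are easily evaluated numerically. A minimal sketch (Python; the appendix sets the hopping to 1, which we keep as a parameter $t$):

```python
import math

def h2_correlations(U, t=1.0):
    """Ground state of the two-site Hubbard model (the appendix sets t = 1).
    Returns <(1-n_a)(1-n_b)> and <(1-n_a)^2>."""
    E = U / 2 - math.sqrt(U**2 / 4 + 4 * t**2)
    alpha = E / math.sqrt(2)
    w = alpha**2 / (2 + alpha**2)   # weight of doubly occupied configs
    return -w, w

print(h2_correlations(0.0))    # (-0.5, 0.5): maximal double occupancy
print(h2_correlations(100.0))  # both correlations approach 0 at large U
```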
\section{Introduction}
\label{sec0}
Our understanding of critical phenomena has been significantly advanced
with the development of the renormalization-group (RG) theory\cite{Wilson}.
The RG theory predicts relationships among critical exponents and the existence of
universal behavior. In a second order phase transition, the correlation length $\xi$
diverges as the critical point is approached, and so
the details of the microscopic Hamiltonian are unimportant for the
critical behavior.
All members of a given universality class have identical critical behavior and
critical exponents.
The three-dimensional classical XY model is relevant to the critical
behavior of many physical systems, such as superfluid
$^{4}$He, magnetic materials and the high-$T_c$ superconductors.
In the pseudospin notation, this model is defined by the Hamiltonian
\begin{equation}
H=-J\sum_{<ij>}(S_{i}^{x}S_{j}^{x}+S_{i}^{y}S_{j}^{y}),
\label{XY}
\end{equation}
where the summation is over all nearest neighbor
pairs of sites $i$ and $j$ on a simple cubic lattice. In this model one considers that the
spin has two components, $\vec S_{i}= (S_{i}^x,S_{i}^y)$, with $(S_i^{x})^2+(S_i^{y})^2=1$.
In this paper we wish to consider
a three component local spin $\vec S_{i}= (S^x_i,S^y_i,S^z_i)$ and the same Hamiltonian as given by
Eq. (\ref{XY}) (namely, with no coupling between the z-components of the spins) in
three dimensions. Even though the Hamiltonian is the same,
namely, there is no coupling between the z-component of the spins, the constrain for
each spin is $(S_i^{x})^2 + (S_i^{y})^2+(S_i^{z})^2=1$, which implies that
the quantity $(S_i^{x})^2+(S_i^{y})^2$ is
fluctuating. In order to be distinguished from the usual XY model,
the name {\it planar magnet model} will be adopted for this model.
The reason for our desire
to study this model is that it is related directly to the so-called model-F\cite{Hohenberg}
used to study non-equilibrium phenomena in systems, such as superfluids,
with a two-component order parameter and a conserved current.
In the planar magnet model, the order parameter is not a constant of the motion,
whereas the $z$ component of the
magnetization is. There is thus an important
relationship between the order parameter and the $z$ component of the magnetization,
which is expressed by a Poisson-bracket relation\cite{Hohenberg}.
This equation is crucial for the hydrodynamics and the critical dynamics of the system.
One therefore needs to find out the critical properties of this model in order
to study non-equilibrium properties of superfluids or other systems described
by the model F. In future work, we shall use model-F
to describe the dynamical critical phenomena of superfluid helium.
Before such a project is undertaken, the static critical properties of the
planar magnet model should be investigated accurately.
Although the static properties of the $XY$ model with $\vec{S}_{i}=
(S^{x}_{i},S^{y}_{i})$ have been investigated by a variety of
statistical-mechanical
methods\cite{Guillou,Albert,high,Li,MC,S2,S3,Janke,Hasenbusch},
the system with $\vec{S}_{i}=
(S^{x}_{i},S^{y}_{i},S^{z}_{i})$ has been given much less attention.
So far the critical behavior of this model has been studied
by high-temperature expansion\cite{Ferer} and by Monte Carlo (MC) simulations\cite{Costa,Oh}.
The high-temperature expansion provides values for the critical temperature and the critical
exponents. The recent MC calculations\cite{Costa,Oh} determine only the critical
temperature; they were carried out on small systems, so only rough estimates are available.
In this paper we study the three-dimensional planar magnet model
using a hybrid Monte Carlo method
(a combination of the cluster algorithm
with over-relaxation and Metropolis spin re-orientation algorithms) in conjunction
with the single-histogram re-weighting technique and finite-size scaling.
We calculate the fourth order cumulant,
the magnetization, and the susceptibility (on cubic lattices $L\times L \times L$ with
$L$ up to $42$) and from
their finite-size scaling behavior
we determine the critical properties of the
planar magnet model accurately.
\section{Physical Quantities and Monte Carlo Method}
\label{sec1}
Let us first summarize the
definitions of the observables that are calculated in our simulation.
The energy density of our model is given by
\begin{equation}
<e>=E/V=-\frac{J}{V}\sum_{<ij>}<S_{i}^{x}S_{j}^{x}+S_{i}^{y}S_{j}^{y}>,
\end{equation}
where $V=L^{3}$ and the angular brackets denote the thermal average.
The fourth-order cumulant $U_{L}(K)$\cite{Binder} can be written as
\begin{equation}
U_{L}(K)=1-\frac{<m^{4}>}{3<m^{2}>^{2}},
\end{equation}
where $m=\frac{1}{V}(M_{x}^{2}+M_{y}^{2}+M_{z}^{2})^{1/2}$ is the magnetization per spin,
$\vec M = \sum_{i} \vec S_i$
and $K=J/(k_{B}T)$ is the coupling, or the reduced inverse temperature
in units of $J$. The fourth-order cumulant $U_{L}(K)$ is an important quantity which we use to
determine the critical coupling constant $K_{c}$.
In the scaling region close to the critical coupling, the curves of the fourth-order
cumulant $U_{L}(K)$ as a function of $K$ for different values of $L$
pass through (approximately) a common point.
The magnetic susceptibility per spin $\chi$ is given by
\begin{equation}
\chi = VK(<m^{2}>-<\vec{m}>^{2}),
\end{equation}
where $\vec{m}$ is the magnetization vector per spin.
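For concreteness, the estimators defined above can be evaluated from a set of sampled configurations as follows; this is a sketch in which random unit spins stand in for genuine Monte Carlo samples, and the lattice size and coupling are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
L, K = 4, 0.6444                 # illustrative lattice size and coupling K = J/(k_B T)
V = L ** 3
n_samples = 500

# Stand-in for Monte Carlo output: random three-component unit spins.
spins = rng.normal(size=(n_samples, V, 3))
spins /= np.linalg.norm(spins, axis=-1, keepdims=True)

m_vec = spins.mean(axis=1)               # magnetization vector per spin, shape (n_samples, 3)
m = np.linalg.norm(m_vec, axis=-1)       # m = |M| / V

m2, m4 = np.mean(m ** 2), np.mean(m ** 4)
U_L = 1.0 - m4 / (3.0 * m2 ** 2)         # fourth-order cumulant
chi = V * K * (m2 - np.sum(m_vec.mean(axis=0) ** 2))   # susceptibility per spin
print(U_L, chi)
```

Note that $U_L \le 2/3$ and $\chi \ge 0$ hold for any sample, which provides a quick sanity check of an implementation.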
The three-dimensional planar magnet model with ferromagnetic interactions $J>0$ has a second-order phase transition.
In simulations of systems near a second-order phase transition, a major difficulty
arises which is known as critical slowing down. Critical slowing down can be reduced
by several techniques; for our case we found it optimal
to use the hybrid Monte Carlo algorithm described in Ref. \cite{Landau}.
Equilibrium configurations were created using a hybrid Monte Carlo algorithm
which combines cluster updates of in-plane spin components\cite{Wolff} with Metropolis
and over-relaxation\cite{Brown} of spin re-orientations. After each single-cluster update,
two Metropolis and eight over-relaxation sweeps were performed\cite{Landau}.
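A minimal sketch of two ingredients of the hybrid sweep, assuming $J=1$: the Metropolis re-orientation, and the over-relaxation move, which exploits the fact that only the in-plane spin components enter the energy, so reflecting a spin about its in-plane local field is exactly microcanonical. The Wolff cluster update of the in-plane components is omitted here, and the lattice size and parameters are illustrative, not those used in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
L, K = 4, 0.6444            # illustrative lattice size; K = J/(k_B T) with J = 1
S = rng.normal(size=(L, L, L, 3))
S /= np.linalg.norm(S, axis=-1, keepdims=True)

def energy(S):
    """E = -sum_<ij> (S_i^x S_j^x + S_i^y S_j^y), bonds along +x, +y, +z."""
    return -sum(np.sum(S[..., :2] * np.roll(S, -1, axis=ax)[..., :2])
                for ax in range(3))

def inplane_field(S):
    """In-plane field h_i: sum over the 6 nearest neighbors of (S^x, S^y)."""
    h = np.zeros(S.shape[:3] + (2,))
    for ax in range(3):
        h += np.roll(S, 1, axis=ax)[..., :2] + np.roll(S, -1, axis=ax)[..., :2]
    return h

def overrelax_sweep(S):
    """Reflect every spin about its in-plane local field: S' = 2(S.h^)h^ - S.
    Checkerboard order: each sublattice of the bipartite cubic lattice sees
    a frozen field, so the reflection conserves the energy exactly."""
    parity_grid = np.indices((L, L, L)).sum(axis=0) % 2
    for parity in (0, 1):
        h = inplane_field(S)
        hn = np.linalg.norm(h, axis=-1, keepdims=True)
        hhat = h / np.where(hn == 0.0, 1.0, hn)
        refl = np.empty_like(S)
        proj = np.sum(S[..., :2] * hhat, axis=-1, keepdims=True)
        refl[..., :2] = 2.0 * proj * hhat - S[..., :2]
        refl[..., 2] = -S[..., 2]          # z component just flips sign
        mask = (parity_grid == parity)[..., None]
        S = np.where(mask, refl, S)
    return S

def metropolis_sweep(S):
    """Single-site Metropolis spin re-orientation with random new unit spins."""
    for _ in range(L ** 3):
        i, j, k = rng.integers(0, L, size=3)
        new = rng.normal(size=3)
        new /= np.linalg.norm(new)
        nbrs = [((i + 1) % L, j, k), ((i - 1) % L, j, k),
                (i, (j + 1) % L, k), (i, (j - 1) % L, k),
                (i, j, (k + 1) % L), (i, j, (k - 1) % L)]
        hxy = np.sum([S[n][:2] for n in nbrs], axis=0)
        dE = -np.dot(new[:2] - S[i, j, k, :2], hxy)
        if dE <= 0.0 or rng.random() < np.exp(-K * dE):
            S[i, j, k] = new
    return S

E_before = energy(S)
S = overrelax_sweep(S)
E_after = energy(S)    # unchanged: over-relaxation is energy preserving
S = metropolis_sweep(S)
```

The energy conservation of the over-relaxation sweep (up to floating-point error) is the key property that makes it a valid, rejection-free move.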
The $K$ dependence of the fourth-order cumulant $U_{L}(K)$ was determined
using the single-histogram re-weighting method\cite{Ferr}.
This method enables us to obtain accurate thermodynamic information over
the entire scaling region using Monte Carlo simulations performed at only a few different values of
$K$. We performed Monte Carlo simulations on simple
cubic lattices of size $L\times L\times L$ with $6\leq L\leq 42$, using periodic boundary
conditions in all directions and $10^{6}$ MC steps.
We carried out on the order of 10000 thermalization steps and on
the order of 20000 measurements.
After we estimated the critical coupling $K_{c}$, we computed the magnetization
and the magnetic susceptibility at the critical coupling $K_{c}$.
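The single-histogram method is based on reweighting samples recorded at the simulated coupling $K_0$ to a nearby $K$: with the Boltzmann weight $e^{K E}$, where $E=\sum_{<ij>}(S_i^xS_j^x+S_i^yS_j^y)$ for a configuration, one has $\langle O\rangle_K=\sum_t O_t\, e^{(K-K_0)E_t}/\sum_t e^{(K-K_0)E_t}$. Below is a toy Python check on a discrete energy spectrum; the energy levels and degeneracies are hypothetical, chosen only to make the reweighting identity testable against an exact sum.

```python
import math
import random

random.seed(0)

# Toy system: energy levels E with degeneracies g (hypothetical values).
levels = [0.0, 1.0, 2.0, 3.0]
degens = [1, 3, 3, 1]

def exact_mean_E(K):
    w = [g * math.exp(K * E) for g, E in zip(degens, levels)]
    return sum(E * wi for E, wi in zip(levels, w)) / sum(w)

# Draw samples from the Boltzmann distribution at the simulated coupling K0.
K0 = 0.3
w0 = [g * math.exp(K0 * E) for g, E in zip(degens, levels)]
samples = random.choices(levels, weights=w0, k=200_000)

def reweight_mean_E(samples, K0, K):
    # <E>_K = sum_t E_t exp((K-K0) E_t) / sum_t exp((K-K0) E_t)
    logw = [(K - K0) * E for E in samples]
    wmax = max(logw)                      # subtract the max for stability
    w = [math.exp(lw - wmax) for lw in logw]
    return sum(E * wi for E, wi in zip(samples, w)) / sum(w)

K = 0.5
est = reweight_mean_E(samples, K0, K)
print(est, exact_mean_E(K))
```

Reweighting to $K=K_0$ reproduces the plain sample mean exactly, and for $K$ near $K_0$ the estimate agrees with the exact average to within statistical error; far from $K_0$ the effective sample size collapses, which is why simulations at a few couplings are still needed.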
\section{Results and Discussion}
\label{sec2}
In this section, we first determine the critical coupling $K_{c}$ and
then examine the static behavior around $K_{c}$. Binder's fourth-order
cumulant\cite{Binder} $U_{L}(K)$ is a convenient quantity for estimating the critical
coupling $K_{c}$ and the correlation-length exponent $\nu$.
Near the critical coupling $K_{c}$, the cumulant can be expanded as
\begin{equation}
U_{L}=U^{\ast}+U_{1}L^{1/\nu}\left(1-\frac{T}{T_{c}}\right)+\cdots.
\end{equation}
Therefore, if we plot $U_{L}(K)$ versus the coupling $K$ for several
different sizes $L$, it is expected
that the curves for different values of $L$ cross at the critical coupling $K_{c}$.
In order to find the $K$ dependence of the fourth-order cumulant $U_{L}(K)$,
we performed simulations for each lattice size from $L=6$ to $L=42$
at $K=0.6450$, which is close to previous estimates of the critical inverse temperature\cite{Ferer,Oh}.
The $U_{L}(K)$ curves were calculated from the histograms
and are shown in Fig. \ref{fi-1} for $L$=12, 24, and 32.
\begin{figure}[htp]
\epsfxsize=\figwidth\centerline{\epsffile{fig1.ps}}
\caption{
Fourth-order cumulant $U_{L}(K)$ versus coupling $K$ for
lattice sizes $L$=12, 24, and 32.
}
\label{fi-1}
\end{figure}
To obtain higher accuracy, one must examine Fig. 1 more
carefully: the points where each pair of curves cross differ slightly
from pair to pair; in fact, the crossing points move slowly to
lower couplings as the system size increases.
For the pair corresponding to our largest lattice sizes, $L$=32 and 42, the crossing point is
$K_{c}\approx 0.64455$.
In order to extract a more precise critical coupling $K_{c}$ from our data,
we compare the curves of $U_{L}$ for two different lattice sizes $L$ and $L'=bL$ and
find the location of the intersection of the two curves $U_{L}$ and $U_{L'}$.
As a result of residual corrections to finite-size scaling \cite{Binder}, these locations
depend on the scale factor $b=L'/L$. We used the crossing points of the $L$=12, 14, and 16
curves with all the curves of higher $L'$, respectively.
Hence we need to extrapolate the results of this method
to $(\ln b)^{-1} \longrightarrow 0$ using $(U_{bL}/U_{L})_{T=T_{c}}=1$.
In Fig. \ref{fi-2} we show the estimate for the critical temperature $T_{c}$. Our final estimate
for $T_{c}$ is
\begin{equation}
T_{c}=1.5518(2), K_{c}=0.6444(1).
\end{equation}
For comparison, the previous estimates are $T_{c}$=1.54(1)\cite{Costa,Oh},
obtained using Monte Carlo simulations, and $T_{c}$=1.552(3)\cite{Ferer},
obtained from a high-temperature series expansion. The latter result is
surprisingly close to ours.
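The crossing-and-extrapolation procedure can be illustrated numerically. In the sketch below the cumulant curves are a synthetic model $U_L(K)=U^{\ast}+u_1 L^{1/\nu}(K-K_c)+cL^{-\omega}$ with invented amplitudes $u_1$, $c$ and correction exponent $\omega$ (chosen only for illustration, not fitted to our data); crossings are located by bisection and extrapolated linearly in $(\ln b)^{-1}$.

```python
import math

# Synthetic cumulant curves (illustrative parameters only).
U_star, u1, c, omega, nu = 0.6, 1.0, 0.05, 1.0, 0.670
K_c_true = 0.6444

def U(L, K):
    return U_star + u1 * L ** (1.0 / nu) * (K - K_c_true) + c * L ** (-omega)

def crossing(L1, L2, lo=0.63, hi=0.66, tol=1e-12):
    """Bisection for the K at which U_{L1} and U_{L2} intersect."""
    f = lambda K: U(L1, K) - U(L2, K)
    assert f(lo) * f(hi) < 0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(lo) * f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

# Crossings of the L = 12 curve with larger sizes L' = b L.
pairs = [(12, 16), (12, 20), (12, 24), (12, 32), (12, 42)]
x = [1.0 / math.log(L2 / L1) for L1, L2 in pairs]    # (ln b)^{-1}
y = [crossing(L1, L2) for L1, L2 in pairs]

# Linear extrapolation of the crossing couplings to (ln b)^{-1} -> 0.
n = len(x)
xb, yb = sum(x) / n, sum(y) / n
slope = sum((xi - xb) * (yi - yb) for xi, yi in zip(x, y)) / \
        sum((xi - xb) ** 2 for xi in x)
K_c_extrap = yb - slope * xb
print(K_c_extrap)
```

In this synthetic setup the scaling correction pushes every pairwise crossing slightly above $K_c$, and the extrapolation recovers the input value, mirroring the behavior seen in Fig. \ref{fi-2}.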
\begin{figure}[htp]
\epsfxsize=\figwidth\centerline{\epsffile{fig2.ps}}
\caption{
Estimates for $T_{c}$ plotted versus inverse logarithm of the scale factor
b=$L'/L$. The extrapolation leads to an estimate of $T_{c}$=1.5518(2).
}
\label{fi-2}
\end{figure}
In order to extract the critical exponent $\nu$, we performed finite-size scaling
analysis of the slopes of $U_{L}$ versus $L$ near our estimated critical point $K_{c}$.
In the finite-size scaling region, the slope of the cumulant at $K_{c}$ varies
with system size like $L^{1/\nu}$,
\begin{equation}
\frac{dU_{L}}{dK} \sim L^{1/\nu}.
\end{equation}
In Fig. \ref{fi-3} we show results of a finite-size scaling analysis for the slope of the cumulant.
We obtained the value of the static exponent $\nu$,
\begin{equation}
\nu = 0.670(7).
\end{equation}
For comparison, the field theoretical estimate\cite{Guillou} is $\nu$=0.669(2) and
a recent experimental measurement gives $\nu$=0.6705(6)\cite{Goldner}.
\begin{figure}[htp]
\epsfxsize=\figwidth\centerline{\epsffile{fig3.ps}}
\caption{
Log-log plot of the slopes of $U$ near the crossing point versus $L$.
The slope gives an estimate for the critical exponent $\nu$=0.670(7).
}
\label{fi-3}
\end{figure}
In order to obtain the value of the exponent ratio $\gamma/\nu$, we calculated the magnetic
susceptibility per spin $\chi$ at the critical coupling $K_{c}$.
The finite-size behavior for $\chi$ at the critical point is
\begin{equation}
\chi \sim L^{\gamma/\nu}.
\end{equation}
Fig. \ref{fi-4} displays the finite-size scaling of the susceptibility
$\chi$ calculated at $K_{c}$=0.6444.
From the log-log plot we obtained the value of the exponent
ratio $\gamma/\nu$,
\begin{equation}
\gamma/\nu=1.9696(37).
\end{equation}
From the hyperscaling relation, $d\nu=\gamma+2\beta$, we get
the exponent ratio $\beta/\nu$,
\begin{equation}
\beta/\nu=0.515(2).
\label{betanu}
\end{equation}
\begin{figure}[htp]
\epsfxsize=\figwidth\centerline{\epsffile{fig4.ps}}
\caption{
Log-log plot of the susceptibility versus the lattice size $L$ at the critical coupling
$K_{c}$=0.6444. The slope gives an estimate for the critical exponent $\gamma/\nu$=1.9696(37).
}
\label{fi-4}
\end{figure}
The equilibrium magnetization $m$ at $K_{c}$
should obey the relation
\begin{equation}
m \sim L^{-\beta/\nu}
\end{equation}
for sufficiently large $L$.
In Fig. \ref{fi-5} we show the results of a finite-size scaling analysis for the magnetization $m$.
We obtain the value of the exponent ratio $\beta/\nu$,
\begin{equation}
\beta/\nu=0.515(2).
\end{equation}
This result agrees very well with that of Eq. (\ref{betanu}) obtained from the susceptibility
and the fourth-order cumulant.
\begin{figure}[htp]
\epsfxsize=\figwidth\centerline{\epsffile{fig5.ps}}
\caption{
Log-log plot of the magnetization versus the lattice size $L$ at the critical coupling
$K_{c}$=0.6444. The slope gives an estimate for the critical exponent $\beta/\nu$=0.515(3).
}
\label{fi-5}
\end{figure}
\begin{table}[htp] \centering
\begin{tabular}{|l|l|l|}
\hline
$L$ & \multicolumn{1}{c|}{$\chi$} & \multicolumn{1}{c|}{$m$} \\ \hline\hline
12 & 82.39(28) & 0.26195(55) \\
14 & 111.88(36) & 0.24219(43) \\
16 & 145.12(59) & 0.22567(55) \\
18 & 182.91(52) & 0.21241(35) \\
20 & 224.08(85) & 0.20072(49) \\
22 & 272.23(60) & 0.19163(23) \\
24 & 322.35(98) & 0.18308(32) \\
32 & 571.0(4.0) & 0.15833(66) \\
42 & 972.0(4.8) & 0.13749(40) \\
\hline
\end{tabular}
\caption{\label{t1} Results for the susceptibility $\chi$ and the magnetization $m$ at the critical coupling $K_{c}=0.6444$.}
\end{table}
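As a consistency check, the exponent ratios can be recovered directly from the data in Table \ref{t1} by least-squares fits of $\ln\chi$ and $\ln m$ versus $\ln L$; a short Python sketch:

```python
import math

# Data from Table 1: L, susceptibility chi(K_c), magnetization m(K_c).
L   = [12, 14, 16, 18, 20, 22, 24, 32, 42]
chi = [82.39, 111.88, 145.12, 182.91, 224.08, 272.23, 322.35, 571.0, 972.0]
m   = [0.26195, 0.24219, 0.22567, 0.21241, 0.20072, 0.19163, 0.18308,
       0.15833, 0.13749]

def lsq_slope(x, y):
    """Ordinary least-squares slope of y against x."""
    n = len(x)
    xb, yb = sum(x) / n, sum(y) / n
    return sum((xi - xb) * (yi - yb) for xi, yi in zip(x, y)) / \
           sum((xi - xb) ** 2 for xi in x)

lnL = [math.log(v) for v in L]
gamma_over_nu = lsq_slope(lnL, [math.log(v) for v in chi])  # chi ~ L^{gamma/nu}
beta_over_nu = -lsq_slope(lnL, [math.log(v) for v in m])    # m ~ L^{-beta/nu}
print(gamma_over_nu, beta_over_nu)   # close to 1.97 and 0.515
```

This unweighted fit (it ignores the error bars in the table) reproduces the quoted ratios $\gamma/\nu\approx 1.97$ and $\beta/\nu\approx 0.515$.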
In conclusion, we determined the critical temperature and the critical exponents of the planar magnet model with
three-component spins using a high-precision hybrid MC method, the single-histogram method, and
finite-size scaling theory. Our simulation results for the critical coupling and for the critical exponents
are $K_{c}$=0.6444(1), $\nu$=0.670(7), $\gamma/\nu$=1.9696(37), and $\beta/\nu$=0.515(2).
Our calculated values for the critical temperature
and the critical exponents are significantly more accurate than those previously available.
Comparison of our results with results of MC studies of the 3$D$ $XY$ model with two-component spins\cite{MC,S3,Janke,Hasenbusch}
shows that both the system with $\vec{S}_{i}=
(S^{x}_{i},S^{y}_{i})$ and the planar magnet system with $\vec{S}_{i}=(S^{x}_{i},S^{y}_{i},S^{z}_{i})$
belong to the same universality class.
\section{acknowledgements}
This work was supported by the National Aeronautics and Space
Administration under grant no. NAG3-1841.
\section{Introduction}
When $q$ is a root of unity ($q^{N}=1$), the quantized enveloping algebra
$U_{q}sl(2,\mbox{\rm C}\hskip-5.5pt \mbox{l} \;)$ possesses interesting quotients that are finite dimensional
Hopf algebras. The structure of the left regular representation of such an
algebra was investigated in \cite{Alekseev} and the pairing with its dual in
\cite{Gluschenkov}. We call ${\mathcal H}$ the Hopf algebra quotient of
$U_{q}sl(2,\mbox{\rm C}\hskip-5.5pt \mbox{l} \;)$ defined by the relations $K^N=\mbox{\rm 1}\hskip-2.8pt \mbox{\rm l}$, $X_{\pm}^N=0$
(we shall define the generators $K, X_{\pm}$ in a later section), and
${\mathcal F}$ its dual. It was shown\footnote{
Warning: the authors of \cite{Alekseev} actually consider a Hopf algebra
quotient defined by $K^{2N} = 1$, $X_{\pm}^N=0$, so that their algebra is,
in a sense, twice bigger than ours.
}
in \cite{Alekseev} that the {\underline{non}} semi-simple algebra
${\mathcal H}$ is isomorphic with the direct sum of a complex matrix
algebra and of several copies of suitably defined matrix algebras with
coefficients in the ring $Gr(2)$ of Grassmann numbers with two generators.
The explicit structure (for all values of $N$) of those algebras, including
the expression of generators themselves, in terms of matrices with
coefficients in $\mbox{\rm C}\hskip-5.5pt \mbox{l} \;$ or $Gr(2)$ was obtained by \cite{Ogievetsky}. Using
these results, the representation theory of ${\mathcal H}$, for the case
$N=3$, was presented in \cite{Coquereaux}. Following this work, the authors
of \cite{Dabrowski} studied the action of ${\mathcal H}$ (case $N=3$) on the
algebra of complex matrices ${\mathcal M} \equiv M_3(\mbox{\rm C}\hskip-5.5pt \mbox{l} \;)$. In the letter
\cite{CoGaTr} a reduced Wess-Zumino complex $\Omega_{WZ}({\mathcal M})$ was
introduced, thus providing a differential calculus bicovariant with respect
to the action of the quantum group ${\mathcal H}$ on the algebra $M_3(\mbox{\rm C}\hskip-5.5pt \mbox{l} \;)$
of complex matrices. This differential algebra (that could be used to
generalize gauge field theory models on an auxiliary smooth manifold) was
also analysed in terms of representation theory of ${\mathcal H}$ in the
same work. In particular, it was shown that $M_3(\mbox{\rm C}\hskip-5.5pt \mbox{l} \;)$ itself can be
reduced into the direct sum of three indecomposable representations of
${\mathcal H}$. This result was generalized in \cite{Coquereaux-Schieber}.
A general discussion of several other properties of the dually paired Hopf
algebras ${\mathcal F}$ and ${\mathcal H}$ (scalar products, star
structures, twisted derivations, {\rm etc.\/}\ ) can also be found there, as well as
in the article \cite{CoGaTr-E}. In the present contribution we
present a summary of results already discussed in the papers
\cite{CoGaTr, CoGaTr-E, Coquereaux-Schieber}. The original purpose
of our work was to define generalized differential forms and
generalized connections on a smooth manifold, with values in the
Lie algebra of the linear group
$GL(N)$, in such a way that there would be a non trivial global
action of some Hopf algebra on the so obtained differential
complex, extending somehow the usual action of the isometry group
of the manifold (when it exists) by some internal quantum symmetry.
This construction will be recalled in the text.
\section{The space $\mathcal M$ of $N \times N$ complex matrices as a
reduced quantum plane}
\label{sec:red-q-plane}
The algebra of $N \times N$ matrices can be generated by two elements
$x$ and $y$ with relations:
\begin{equation}
xy = q yx \qquad \text{and} \qquad x^N = y^N = \mbox{\rm 1}\hskip-2.8pt \mbox{\rm l} \ ,
\label{M-relations}
\end{equation}
where $q$ denotes an $N$-th root of unity ($q \neq 1$) and $\mbox{\rm 1}\hskip-2.8pt \mbox{\rm l}$ denotes
the unit matrix. Explicitly, $x$ and $y$ can be taken as the following
matrices:
\begin{equation}
x = \begin{pmatrix}
1 & & & & \\
 & q^{-1} & & & \\
 & & q^{-2} & & \\
 & & & \ddots & \\
 & & & & q^{-(N-1)}
\end{pmatrix}
\qquad
y = \begin{pmatrix}
0 & & & \\
\vdots & & \mbox{\rm 1}\hskip-2.8pt \mbox{\rm l}_{N-1} & \\
0 & & & \\
1 & 0 & \cdots & 0
\end{pmatrix}
\label{xymatrix}
\end{equation}
This result can
be found in \cite{Weyl}. \\
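These relations are easy to verify numerically; a quick sketch for the (odd) value $N=5$ with $q$ a primitive $N$-th root of unity, which also confirms that the monomials $x^a y^b$ span the full matrix algebra:

```python
import numpy as np

N = 5                                    # odd, as assumed in the text
q = np.exp(2j * np.pi / N)               # primitive N-th root of unity

x = np.diag([q ** (-k) for k in range(N)])
y = np.zeros((N, N), dtype=complex)
for i in range(N - 1):
    y[i, i + 1] = 1.0                    # shifted identity block
y[N - 1, 0] = 1.0                        # bottom-left 1

I = np.eye(N)
assert np.allclose(x @ y, q * (y @ x))                 # xy = q yx
assert np.allclose(np.linalg.matrix_power(x, N), I)    # x^N = 1
assert np.allclose(np.linalg.matrix_power(y, N), I)    # y^N = 1

# The N^2 monomials x^a y^b are linearly independent, hence span M_N(C).
basis = np.array([(np.linalg.matrix_power(x, a)
                   @ np.linalg.matrix_power(y, b)).ravel()
                  for a in range(N) for b in range(N)])
assert np.linalg.matrix_rank(basis) == N * N
```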
{\bf Warning}: for technical reasons, we shall assume in all this paper
that $N$ is odd and that $q$ is a {\sl primitive} root of unity.
Here and below, we shall simply denote the algebra $M_{N}(\mbox{\rm C}\hskip-5.5pt \mbox{l} \;)$ by the
symbol ${\mathcal M}$, and call it ``the reduced quantum plane''.
\section{The finite dimensional quantum group $\mathcal F$ and its
coaction on $\mathcal M$}
\label{sec:q-group-F}
One introduces {\sl non-commuting symbols\/} $a,b,c$ and $d$ generating an
algebra ${\mathcal F}$ and imposes that the quantities $x',y'$ (and
$\tilde x, \tilde y$) obtained by the following matrix equalities should
satisfy the same relations as $x$ and $y$.
\begin{equation}
\Delta_L \pmatrix{x \cr y} = \pmatrix{a & b \cr c & d}
\otimes \pmatrix{x \cr y} \doteq
\pmatrix{x' \cr y'} \qquad \text{left coaction} \ ,
\label{3.1}
\end{equation}
and
\begin{equation}
\Delta_R \pmatrix{x & y} = \pmatrix{x & y} \otimes
\pmatrix{a & b \cr c & d} \doteq
\pmatrix{\tilde x & \tilde y} \qquad \text{right coaction} \ .
\label{3.2}
\end{equation}
These equations also define left and right coactions, that extend to the
whole of $\mathcal M$ using the homomorphism property
$\Delta_{L,R}(fg) = \Delta_{L,R}(f) \Delta_{L,R}(g) \;\;$
$(f,g \in {\mathcal M})$. Here one should not confuse $\Delta$ (the
coproduct on a quantum group that we shall introduce later) with
$\Delta_{R,L}$ (the $R,L$-coaction on $\mathcal M$)!
The elements $a,b,c,d$ should therefore satisfy an algebra such that
\begin{eqnarray}
\Delta_L (xy - q yx) &=& 0 \\
\Delta_L (x^{N} -1) = \Delta_L (y^{N} -1) &=& 0 \ ,
\end{eqnarray}
and the same for $\Delta_R$. This leads to the usual relations defining
the algebra of ``functions over the quantum plane'' \cite{Manin}:
\begin{equation}
\begin{array}{ll}
qba = ab \qquad & qdb = bd \\
qca = ac & qdc = cd \\
cb = bc & ad-da = (q-q^{-1})bc \ , \\
\end{array}
\end{equation}
but, we also have non quadratic relations:
\begin{equation}
\begin{tabular}{ll}
$a^N = \mbox{\rm 1}\hskip-2.8pt \mbox{\rm l} \ , $ & $ b^N = 0 \ , $ \\
$c^N = 0 \ , $ & $ d^N = \mbox{\rm 1}\hskip-2.8pt \mbox{\rm l} \ . $
\end{tabular}
\label{F-products-quotient}
\end{equation}
The element ${\mathcal D} \doteq da -q^{-1}bc = ad - qbc $ is central
(it commutes with all the elements of $Fun(GL_q(2))$); it is called the
$q$-determinant and we set it equal to $\mbox{\rm 1}\hskip-2.8pt \mbox{\rm l}$. Since $a^N = \mbox{\rm 1}\hskip-2.8pt \mbox{\rm l}$,
multiplying the relation $ad = \mbox{\rm 1}\hskip-2.8pt \mbox{\rm l} + qbc$ from the left by $a^{(N-1)}$
leads to
\begin{equation}
d = a^{(N-1)} (\mbox{\rm 1}\hskip-2.8pt \mbox{\rm l} + qbc) \ ,
\label{d-element-in-F}
\end{equation}
so $d$ is not needed and can be eliminated. The algebra $\mathcal F$ can
therefore be {\sl linearly \/} generated ---as a vector space--- by the
elements $a^\alpha b^\beta c^\gamma $ where indices $\alpha, \beta, \gamma$
run in the set $\{0,1,\ldots ,N-1\}$. We see that $\mathcal F$ is a
{\sl finite dimensional\/} associative algebra, whose dimension is
$$
\dim ({\mathcal F}) = N^{3} \ .
$$
${\mathcal F}$ is not only an associative algebra but a Hopf algebra, with
the corresponding maps defined on the generators as follows:
\begin{description}
\item[Coproduct:]
$\Delta a = a \otimes a + b \otimes c$,
$\Delta b = a \otimes b + b \otimes d$,
$\Delta c = c \otimes a + d \otimes c$,
$\Delta d = c \otimes b + d \otimes d$.
\item[Antipode:] $Sa = d$, $Sb = -q^{-1} b$, $Sc = -q c$, $Sd = a$.
\item[Counit:] $\epsilon(a)=1$, $\epsilon(b)=0$, $\epsilon(c)=0$,
$\epsilon(d)=1$.
\end{description}
We call $\mathcal F$ the {\sl reduced quantum unimodular group\/} associated
with an $N$-th root of unity. It is, by construction, an associative
algebra. However, it is not semi-simple. Therefore, $\mathcal F$ is not a
matrix quantum group in the sense of Woronowicz \cite{Woronowicz}.
The coaction of $\mathcal F$ on $\mathcal M$ was given above.
Actually, $\mathcal M$ endowed with the two coactions $\Delta_L$
and $\Delta_R$ is a left and right comodule {\em algebra} over $\mathcal F$,
{\rm i.e.,\/}\ a corepresentation space of the quantum group $\mathcal F$ such that
\begin{eqnarray}
\Delta_{L,R}(zw) &=& \Delta_{L,R}(z) \, \Delta_{L,R}(w) \nonumber \\
\Delta_{L,R}(\mbox{\rm 1}\hskip-2.8pt \mbox{\rm l}) &=& \mbox{\rm 1}\hskip-2.8pt \mbox{\rm l} {\otimes} \mbox{\rm 1}\hskip-2.8pt \mbox{\rm l} \ .
\label{comodule-algebra-condition}
\end{eqnarray}
\section{The dual $\mathcal H$ of $\mathcal F$, and its action
on $\mathcal M$}
\label{sec:q-group-H}
Since $\mathcal F$ is a quantum group (a Hopf algebra), its dual
${\mathcal H} \doteq {\mathcal F}^*$ is a quantum group as well. Let
$u_i \in \mathcal F$ and $X_i \in \mathcal H$. We denote by $< X_i, u_j >$
the evaluation of $X_i$ on $u_j$ (a complex number).
\begin{itemize}
\item
Starting with the coproduct $\Delta$ on $\mathcal F$, one defines a
product on $\mathcal H$, by
$<X_1 X_2, u>\doteq <X_1 \otimes X_2, \Delta u >$.
\item
Using the product in $\mathcal F$, one defines a coproduct (that we
again denote $\Delta$) in $\mathcal H$ by
$<\Delta X, u_1 \otimes u_2 > \doteq < X, u_1 u_2>$.
\item
The interplay between unit and counit is given by:
$<\mbox{\rm 1}\hskip-2.8pt \mbox{\rm l}_{\mathcal H}, u> = \epsilon_{\mathcal F}(u)$
and $< X,\mbox{\rm 1}\hskip-2.8pt \mbox{\rm l}_{\mathcal F}> = \epsilon_{\mathcal H}(X)$.
\end{itemize}
\noindent
The two structures of algebra and coalgebra are clearly interchanged by
duality.
It is clear that $\mathcal H$ is a vector space of dimension $N^{3}$. It
can be generated, as a complex algebra, by elements $X_\pm$, $K$ dual to
the generators of $\mathcal F$:
$$
<K,a> = q \qquad <X_+,b> = 1 \qquad <X_-,c> = 1 \ ,
$$
all other pairings between generators being zero. In this way we get:
\begin{description}
\item[Multiplication:]
\begin{eqnarray}
K X_{\pm} &=& q^{\pm 2} X_{\pm} K \nonumber \\
\left[X_+ , X_- \right]
&=& {1 \over (q - q^{-1})} (K - K^{-1}) \\
K^N &=& \mbox{\rm 1}\hskip-2.8pt \mbox{\rm l} \nonumber \\
X_+^N = X_-^N &=& 0 \ . \nonumber
\label{H-products}
\end{eqnarray}
\item[Comultiplication:]
\begin{eqnarray}
\Delta X_+ & = & X_+ \otimes \mbox{\rm 1}\hskip-2.8pt \mbox{\rm l} + K \otimes X_+ \nonumber \\
\Delta X_- & = & X_- \otimes K^{-1} + \mbox{\rm 1}\hskip-2.8pt \mbox{\rm l} \otimes X_- \\
\Delta K & = & K \otimes K \nonumber \\
\Delta K^{-1} & = & K^{-1} \otimes K^{-1} \ . \nonumber
\label{H-coproducts}
\end{eqnarray}
It extends to the whole $\mathcal H$ as an algebra morphism, {\rm i.e.,\/}\
$\Delta(XY) = \Delta X \, \Delta Y$.
\item[Antipode:]
It is defined by:
$ S \mbox{\rm 1}\hskip-2.8pt \mbox{\rm l} = \mbox{\rm 1}\hskip-2.8pt \mbox{\rm l} $,
$ S K = K^{-1} $,
$ S K^{-1} = K $,
$ S X_+ = - K^{-1} X_+ $,
$ S X_- = - X_- K $,
and it extends as an anti-automorphism, {\rm i.e.,\/}\ $S(XY) = SY \, SX$. As
usual, the square of the antipode is an automorphism, given by
$S^2 u = K^{-1} u K$.
\item[Counit:]
The counit $\epsilon$ is defined by
$ \epsilon \mbox{\rm 1}\hskip-2.8pt \mbox{\rm l} = \epsilon K = \epsilon K^{-1} = 1 $,
$ \epsilon X_+ = \epsilon X_- = 0 $.
\end{description}
Warning: When $q^N=1$, one can also factorize the universal algebra over the
relations $K^{2N} = \mbox{\rm 1}\hskip-2.8pt \mbox{\rm l}$, $X_{\pm}^N = 0$, rather than $K^N = \mbox{\rm 1}\hskip-2.8pt \mbox{\rm l}$,
$X_{\pm}^N = 0$. These relations also define a Hopf ideal but the obtained
Hopf algebra is twice as big as ours ($K^{N}$ is then a central element
but is not equal to $\mbox{\rm 1}\hskip-2.8pt \mbox{\rm l}$).
\subsection{Action of $\mathcal H$ on $\mathcal M$}
\label{subsec:actions-of-H}
Using the fact that $\mathcal F$ coacts on $\mathcal M$ in two possible
ways, and that elements of $\mathcal H$ can be interpreted as distributions
on $\mathcal F$, we obtain two commuting actions of $\mathcal H$ on the
quantum space $\mathcal M$. We shall describe the left action for arbitrary
elements and give explicit results for the generators.
Let $z \in \mathcal M$, $X \in \mathcal H$, and
$\Delta_R z = z_\alpha \otimes v_\alpha$ with $z_\alpha \in \mathcal M$
and $v_\alpha \in \mathcal F$ (implied summation). The operation
\begin{equation}
X^L[z] \doteq (\mbox{\it id\,} \otimes <X,\cdot>) \Delta_R z
= <X, v_\alpha> z_\alpha \ .
\label{L-H-action-on-M}
\end{equation}
is a {\em left} action of $\mathcal H$ on $\mathcal M$ (dual to the
{\em right}-coaction of $\mathcal F$). With this $L$-action we can check
that $\mathcal M$ is indeed a left-$\mathcal H$-module algebra.
For the case $N=3$, complete tables are given in \cite{CoGaTr, CoGaTr-E},
and ---with other conventions--- in \cite{Dabrowski}. The results can be
summarized as follows:
\begin{eqnarray}
K^L \cro{x^{r}y^{s}} &=& q^{(r-s)}x^{r}y^{s} \nonumber \\
X_+^L \cro{x^{r}y^{s}} &=& q^{r}(\frac{1-q^{-2s}}{1-q^{-2}})x^{r+1}y^{s-1} \\
X_-^L \cro{x^{r}y^{s}} &=& q^{s}(\frac{1-q^{-2r}}{1-q^{-2}})x^{r-1}y^{s+1}
\nonumber
\label{actionofHonM}
\end{eqnarray}
with $1 \le r,s \le N$.
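These formulas define a representation of $\mathcal H$ on the $N^2$-dimensional space $\mathcal M$, and the defining relations of $\mathcal H$ can be verified numerically. A sketch for $N=3$, with basis vectors labeled by the exponents $(r,s)$ taken mod $3$:

```python
import numpy as np

N = 3
q = np.exp(2j * np.pi / N)
dim = N * N
idx = lambda r, s: (r % N) * N + (s % N)       # basis label for x^r y^s
qnum = lambda n: (1 - q ** (-2 * n)) / (1 - q ** (-2))

K = np.zeros((dim, dim), complex)
Xp = np.zeros((dim, dim), complex)
Xm = np.zeros((dim, dim), complex)
for r in range(N):
    for s in range(N):
        K[idx(r, s), idx(r, s)] = q ** (r - s)
        Xp[idx(r + 1, s - 1), idx(r, s)] = q ** r * qnum(s)
        Xm[idx(r - 1, s + 1), idx(r, s)] = q ** s * qnum(r)

Kin = np.linalg.inv(K)
assert np.allclose(K @ Xp, q ** 2 * Xp @ K)                  # K X+ = q^2 X+ K
assert np.allclose(K @ Xm, q ** (-2) * Xm @ K)               # K X- = q^-2 X- K
assert np.allclose(Xp @ Xm - Xm @ Xp, (K - Kin) / (q - 1 / q))
assert np.allclose(np.linalg.matrix_power(K, N), np.eye(dim))  # K^N = 1
assert np.allclose(np.linalg.matrix_power(Xp, N), 0)           # X+^N = 0
assert np.allclose(np.linalg.matrix_power(Xm, N), 0)           # X-^N = 0
```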
As ${\mathcal M}$ is a module (a representation space) for the quantum group
${\mathcal H}$, we can reduce it into indecomposable modules. To this end it
is necessary to know at least part of the representation theory of
${\mathcal H}$, but note that for the $N=3$ case we have (up to
multiplicative factors)
\vspace{0.5cm}
\begin{equation}
\begin{diagram}
y^2 & \rDotsto 0 \\
\uDotsto{X_-^L} \dTo{X_+^L} \\
xy & \\
\uDotsto{X_-^L} \dTo{X_+^L} \\
x^2 & \rTo 0
\end{diagram}
\quad\hbox{,}\quad
\begin{diagram}
y & \rDotsto & 0 \\
& \luTo & \\
\uDotsto{X_-^L} \dTo{X_+^L}
& & x^2 y^2 \\
& \ldDotsto & \\
x & \rTo & 0
\end{diagram}
\quad\hbox{and}\quad
\begin{diagram}
xy^2 & & \\
& \rdDotsto & \\
\uDotsto{X_-^L} \dTo{X_+^L}
& & \mbox{\rm 1}\hskip-2.8pt \mbox{\rm l} \\
& \ruTo & \dDotsto \dTo \\
x^2 y & & 0
\end{diagram}
\label{graph:left-H-action-on-M}
\end{equation}
\vspace{0.3cm}
\noindent We see clearly on these diagrams that the algebra of $3\times 3$
matrices can be written as a sum of three inequivalent, 3-dimensional,
indecomposable representations of ${\mathcal H}$: an irreducible one, and
two indecomposable but reducible modules (each of these two contains a non
trivial invariant subspace).
\subsection{The structure of the non semisimple algebra $\mathcal H$}
\label{subsec:structure-of-H}
Using a result by \cite{Alekseev}, the explicit structure (for all values
of $N$) of those algebras, including the expression of generators
$X_{\pm}, K$ themselves, in terms of matrices with coefficients both in
$\mbox{\rm C}\hskip-5.5pt \mbox{l} \;$ and in the Grassmann\footnote{
Remember that $\theta_1^{2} = \theta_2^{2} = 0$ and that
$\theta_1 \theta_2 = -\theta_2 \theta_1$.
}
algebra $Gr(2)$ with two generators $\theta_1,\theta_2$, was obtained by
\cite{Ogievetsky} (it was explicitly described for $N = 3$ by
\cite{Coquereaux}, see also \cite{Coquereaux-Schieber} for a general $N$).
We shall not need the general theory but only the following fact:
when $N$ is odd, ${\mathcal H}$ is isomorphic with the direct sum
\begin{equation}
\mathcal{H} = M_N \oplus \Mlo{N-1|1} \oplus \Mlo{N-2|2} \oplus \cdots
\cdots \oplus \Mlo{\frac{N+1}{2}|\frac{N-1}{2}}
\label{isomorphism}
\end{equation}
where:
\begin{itemize}
\item[-] $M_N$ is an $N\times N$ complex matrix
\item[-] An element of the $\Mlo{N-p|p}$ part (space that we shall just
call $M_{N-p|p}$) is an $(N-p,p)$ block matrix of the following form:
\begin{equation}
\begin{pmatrix}
\bullet & \cdots & \bullet & \circ & \cdots & \circ \\
\vdots & \scriptscriptstyle{(N-p)\times(N-p)}
& \vdots & \vdots & & \vdots \\
\bullet & \cdots & \bullet & \circ & \cdots & \circ \\
\circ & \cdots & \circ & \bullet & \cdots & \bullet \\
\vdots & & \vdots & \vdots &
\scriptscriptstyle{p\times p} & \vdots \\
\circ & \cdots & \circ & \bullet & \cdots & \bullet
\end{pmatrix}
\end{equation}
We have introduced the following notation: \\
$\bullet$ is an even element of the ring $Gr(2)$ of Grassmann numbers
with two generators, {\rm i.e.,\/}\ of the kind
$\bullet = \alpha + \beta \theta_1 \theta_2$, $\alpha,\beta \in \mbox{\rm C}\hskip-5.5pt \mbox{l} \;$. \\
$\circ$ is an odd element of the ring $Gr(2)$ of Grassmann numbers with
two generators, {\rm i.e.,\/}\ $\circ = \gamma\theta_1 + \delta \theta_2$,
$\gamma, \delta \in \mbox{\rm C}\hskip-5.5pt \mbox{l} \;$.
\end{itemize}
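The even/odd grading used above can be made concrete with a small matrix realization of $Gr(2)$; the Jordan-Wigner-type matrices below are our own choice, used only to check the relations and the fact that the product of two odd elements is even, consistently with the closure of the block pattern under multiplication.

```python
import numpy as np

# 4x4 matrix realization of Gr(2): theta_i^2 = 0 and
# theta_1 theta_2 = -theta_2 theta_1 (Jordan-Wigner construction).
sp = np.array([[0.0, 1.0], [0.0, 0.0]])      # nilpotent 2x2 block
sz = np.diag([1.0, -1.0])
I2 = np.eye(2)
t1 = np.kron(sp, I2)
t2 = np.kron(sz, sp)

Z = np.zeros((4, 4))
assert np.allclose(t1 @ t1, Z) and np.allclose(t2 @ t2, Z)
assert np.allclose(t1 @ t2, -(t2 @ t1))

# An odd element is g*t1 + d*t2; the product of two odd elements is
# purely even: (g t1 + d t2)(g' t1 + d' t2) = (g d' - d g') t1 t2.
g, d, gp, dp = 2.0, -1.0, 0.5, 3.0
odd1, odd2 = g * t1 + d * t2, gp * t1 + dp * t2
assert np.allclose(odd1 @ odd2, (g * dp - d * gp) * (t1 @ t2))
```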
When $N$ is even, the discussion depends upon the parity of $N/2$ and
we shall not discuss this here.
Notice that ${\mathcal H}$ is \underline{not} a semi-simple algebra:
its Jacobson radical ${\mathcal J}$ is obtained by selecting in equation
(\ref{isomorphism}) the matrices with elements proportional to Grassmann
variables. The quotient ${\mathcal H} / {\mathcal J}$ is then
semi-simple\ldots but no longer Hopf!
Projective indecomposable modules (PIM's, also called principal modules) for
${\mathcal H}$ are directly given by the columns of the previous matrices.
\begin{itemize}
\item[-] From the $M_{N}$ block, one obtains $N$ equivalent irreducible
representations of dimension $N$ that we shall denote $N_{irr}$.
\item[-] From the $M_{{N-p}\vert p}$ block (\underline{assume} $p<N-p$),
one obtains
\begin{itemize}
\item $(N-p)$ equivalent indecomposable projective modules of
dimension $2N$ that we shall denote $P_{N-p}$ with
elements of the kind
$$
(\underbrace{\bullet \bullet \cdots \bullet}_{N-p}
\underbrace{\circ \circ \cdots \circ}_{p})
$$
\item $p$ equivalent indecomposable projective modules (also
of dimension $2N$) that we shall denote $P_{p}$ with
elements of the kind
$$
(\underbrace{\circ \circ \cdots \circ}_{N-p}
\underbrace{\bullet \bullet \cdots \bullet}_{p})
$$
\end{itemize}
\end{itemize}
Other submodules can be found by restricting the range of parameters
appearing in the columns defining the PIM's and imposing stability under
multiplication by elements of ${\mathcal H}$. In this way one can
determine, for each PIM, the lattice of its submodules. For a given PIM of
dimension $2N$ (with the exception of $N_{irr}$), one finds totally ordered
sublattices (displayed below) with exactly three non trivial terms: the
radical (here, it is the biggest non trivial submodule of a given PIM),
the socle (here it is the smallest non trivial submodule), and one
``intermediate'' submodule of dimension exactly equal to $N$.However the
definition of this last submodule (up to equivalence) depends on the choice
of an arbitrary complex parameter $\lambda$, so that we have a chain of
inclusions for every such parameter.
\subsection{Decomposition of ${\mathcal M} = M_N(\mbox{\rm C}\hskip-5.5pt \mbox{l} \;)$ in representations
of $\mathcal H$}
\label{subsec:M-in-reps-H}
Since there is an action of $\mathcal H$ on $\mathcal M$, it is clear
that $\mathcal M$, as a vector space, can be analysed in terms of
representations of $\mathcal H$. The following result was shown in
\cite{Coquereaux-Schieber}:
Under the left action of $\mathcal{H}$, the algebra of $N\times N$
matrices can be decomposed into a direct sum of invariant subspaces of
dimension $N$, according to
\begin{itemize}
\item $N_N = N_{irr}$: irreducible
\item $N_{N-1}$: reducible indecomposable, with an invariant subspace
of dimension $N-1$.
\item $N_{N-2}$: reducible indecomposable, with an invariant subspace
of dimension $N-2$.
\item $\vdots$
\item $N_{1}$: reducible indecomposable, with an invariant subspace
of dimension $1$.
\end{itemize}
The elements of the module called $N_{p}$ (of dimension $N$) are of the
kind:
$$
{\underline{N}}_{p} = \begin{pmatrix}
\gamma_1\theta_{\lambda_1} & \gamma_2\theta_{\lambda_2} & \cdots &
\gamma_{N-p}\theta_{\lambda_{N-p}} & \beta_1\theta_1\theta_2 &
\beta_2\theta_1\theta_2 & \cdots & \beta_{p}\theta_1\theta_2
\end{pmatrix}
$$
\noindent This submodule is the direct sum of an invariant submodule of
dimension $p$ and a vector subspace of dimension $N-p$:
$$
{\underline{N}}_{p} = \underline{p} \ensuremath{\,\,\subset\!\!\!\!\!\!+\,\,\,\,} (N-p)
$$
with
$$
\underline{p} = \begin{pmatrix} 0 & 0 & \cdots & 0 &
\beta_1\theta_1\theta_2 & \beta_2\theta_1\theta_2 & \cdots &
\beta_{p}\theta_1\theta_2
\end{pmatrix}
$$
Using these notations, the algebra of complex matrices $N \times N$ can be
written
$$
\mathcal{M} = N_N \oplus N_1 \oplus N_2 \oplus \cdots \oplus N_{N-1}
$$
In the particular case $N=3$, $3_{odd}$ is an abelian subalgebra of
$\mathcal M$, actually isomorphic with the group algebra $\mbox{\rm C}\hskip-5.5pt \mbox{l} \;[\mbox{\rm Z}\hskip-5pt \mbox{\rm Z}_3]$ of the
abelian group $\mbox{\rm Z}\hskip-5pt \mbox{\rm Z}_3$. Hence, we may write
$$
{\mathcal M} = \mbox{\rm C}\hskip-5.5pt \mbox{l} \;[\mbox{\rm Z}\hskip-5pt \mbox{\rm Z}_3] \oplus x \, \mbox{\rm C}\hskip-5.5pt \mbox{l} \;[\mbox{\rm Z}\hskip-5pt \mbox{\rm Z}_3] \oplus x^2 \, \mbox{\rm C}\hskip-5.5pt \mbox{l} \;[\mbox{\rm Z}\hskip-5pt \mbox{\rm Z}_3]
= 3_{odd} \oplus 3_{eve} \oplus 3_{irr} \ .
$$
Moreover, it can be shown that
$$
Inverse(3_{eve}) \subset 3_{irr} \ , \quad
Inverse(3_{irr}) \subset 3_{eve} \ , \quad \text{but} \quad
Inverse(3_{odd}) \subset 3_{odd} \ .
$$
\subsection{The universal $R$-matrix}
The finite dimensional Hopf algebra $\mathcal H$ we have been studying is
actually braided and quasi-triangular (as is well known, the quantum
enveloping algebra of $SL(2)$ does {\sl not} possess these properties
when $q$ is specialized to a root of unity). The $R$-matrix of $\mathcal H$
can be obtained directly from a general formula given in \cite{Rosso}, but
we can also get it in a pedestrian way by starting from a reasonable ansatz
and imposing certain conditions. Here we take $N=3$. We start from
$$
R = R_K R_X
$$
where
\begin{eqnarray*}
R_K &=& \sum_{i,j = 0,1,2} c_{ij} K^i \otimes K^j \\
R_X &=& \left[ \mbox{\rm 1}\hskip-2.8pt \mbox{\rm l} \otimes \mbox{\rm 1}\hskip-2.8pt \mbox{\rm l} + \alpha X_- \otimes X_+
+ \beta X_-^2 \otimes X_+^2 \right]
\end{eqnarray*}
Here $\alpha$, $\beta$ and the $c_{ij}$ are complex numbers (symmetric in
$i,j$).
A quasi-triangular $R$-matrix should satisfy $(\epsilon \otimes \mbox{\it id\,})R = \mbox{\rm 1}\hskip-2.8pt \mbox{\rm l}$.
As $\epsilon(K)=1$, this condition implies that
\begin{eqnarray*}
R_K &=& (1-c_{01}-c_{02}) \mbox{\rm 1}\hskip-2.8pt \mbox{\rm l} \otimes \mbox{\rm 1}\hskip-2.8pt \mbox{\rm l}
+ c_{01} (\mbox{\rm 1}\hskip-2.8pt \mbox{\rm l} \otimes K + K \otimes \mbox{\rm 1}\hskip-2.8pt \mbox{\rm l})
+ c_{02} (\mbox{\rm 1}\hskip-2.8pt \mbox{\rm l} \otimes K^2 + K^2 \otimes \mbox{\rm 1}\hskip-2.8pt \mbox{\rm l}) \\
& & + c_{12} (K \otimes K^2 + K^2 \otimes K)
- (c_{01}+c_{12}) K \otimes K
- (c_{02}+c_{12}) K^2 \otimes K^2 \ .
\end{eqnarray*}
Also, we should have $(S \otimes S)R = R$. Comparing terms with zero
$X_\pm$ grading ({\rm i.e.,\/}\ with no $X_\pm$ terms) we find
$(S \otimes S)R_K = R_K$, and thus $c_{01} = c_{02}$. Making use of the
terms in $X_- \otimes X_+$ we get $c_{01} = 1/3$, $c_{12} = q^2/3$. In
the same way, one finds $\alpha = q-q^{2} = q-q^{-1}$ and $\beta = 3q$.
The universal $R$-matrix now reads explicitly
\begin{eqnarray}
R &=& \frac{1}{3} \left[
\mbox{\rm 1}\hskip-2.8pt \mbox{\rm l} \otimes \mbox{\rm 1}\hskip-2.8pt \mbox{\rm l} + (\mbox{\rm 1}\hskip-2.8pt \mbox{\rm l} \otimes K + K \otimes \mbox{\rm 1}\hskip-2.8pt \mbox{\rm l})
+ (\mbox{\rm 1}\hskip-2.8pt \mbox{\rm l} \otimes K^2 + K^2 \otimes \mbox{\rm 1}\hskip-2.8pt \mbox{\rm l}) \right. \nonumber \\
& & \quad \left. + q^2 (K \otimes K^2 + K^2 \otimes K)
+ q K \otimes K + q K^2 \otimes K^2 \right] \\
& & \times \left[ \mbox{\rm 1}\hskip-2.8pt \mbox{\rm l} \otimes \mbox{\rm 1}\hskip-2.8pt \mbox{\rm l} + (q-q^{-1}) X_- \otimes X_+
+ 3q X_-^2 \otimes X_+^2 \right] \nonumber
\label{R-matrix}
\end{eqnarray}
Using the explicit numerical matrices $X_\pm, K$ given in Appendix~E of
\cite{CoGaTr-E}, one can obtain the numerical $R$ matrices in various
representations of interest (irreducible or indecomposable ones).
Note that $R^{-1} = R_X^{-1} R_K^{-1}$, where
\begin{eqnarray*}
R_X^{-1} = \left[ \mbox{\rm 1}\hskip-2.8pt \mbox{\rm l} \otimes \mbox{\rm 1}\hskip-2.8pt \mbox{\rm l} - \alpha X_- \otimes X_+
+ (\alpha^2 - \beta) X_-^2 \otimes X_+^2 \right]
\end{eqnarray*}
and $R_K^{-1}$ is given by the same expression as $R_K$ but with $q$ and
$q^2$ interchanged. Here we already see that our algebra is {\em not}
triangular: $R_{21}$ (the flipped $R$) has terms of the form
$X_+ \otimes X_-$, whereas $R^{-1}$ only contains terms of the form
$X_- \otimes X_+$. It can be straightforwardly verified (but it is
cumbersome) that the requirements of almost-cocommutativity and
quasi-triangularity hold, namely
\begin{eqnarray}
\Delta^{op}(h) = R \Delta(h) R^{-1} \qquad h \in \mathcal H \ ,
\end{eqnarray}
and
\begin{eqnarray}
(\Delta \otimes \mbox{\it id\,})(R) &=& R_{13} R_{23} \nonumber \\
(\mbox{\it id\,} \otimes \Delta)(R) &=& R_{13} R_{12}
\end{eqnarray}
\noindent For a generic (odd) $N$ the universal $R$-matrix reads
\begin{eqnarray*}
R &=& \frac{1}{N} \left[
\sum_{0 \le m,n < N} q^{mn} \: K^m \otimes K^n \right]
\left[ \sum_{0 \le k < N} {1\over [k]_q !}
(1-q^{-2})^k q^{k(k+1)/2} \: X_-^k \otimes X_+^k \right]
\end{eqnarray*}
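As a consistency check, the $N=3$ coefficients follow from the generic
series ($\alpha = q - q^{-1}$ from the $k=1$ term, $\beta = 3q$ from the
$k=2$ term), and the relation $R R^{-1} = \mbox{\rm 1}\hskip-2.8pt \mbox{\rm l}$ can be tested numerically.
The sketch below uses an assumed standard three-dimensional representation
(not necessarily the matrices of Appendix~E of \cite{CoGaTr-E}, which may
differ by conventions); the check only relies on $X_\pm^3 = 0$ and on $K$
being diagonal with eigenvalues that are powers of $q$:

```python
import numpy as np

q = np.exp(2j * np.pi / 3)          # primitive cube root of unity, N = 3
alpha, beta = q - 1/q, 3*q

# k = 1 and k = 2 terms of the generic series reproduce alpha and beta
assert np.isclose((1 - q**-2) * q, alpha)
assert np.isclose((1 - q**-2)**2 * q**3 / (q + 1/q), beta)  # [2]_q! = q + 1/q

# an assumed 3-dim representation: K X+ K^{-1} = q^2 X+, X(+/-)^3 = 0
two = q + 1/q                        # the q-number [2]
K  = np.diag([q**2, 1 + 0j, q**-2])
Xp = np.array([[0, 1, 0], [0, 0, two], [0, 0, 0]])
Xm = np.array([[0, 0, 0], [two, 0, 0], [0, 1, 0]])
assert np.allclose(K @ Xp @ np.linalg.inv(K), q**2 * Xp)

Kp  = [np.linalg.matrix_power(K, m) for m in range(3)]
RK  = sum(q**( m*n) * np.kron(Kp[m], Kp[n]) for m in range(3) for n in range(3)) / 3
RKi = sum(q**(-m*n) * np.kron(Kp[m], Kp[n]) for m in range(3) for n in range(3)) / 3
A   = np.kron(Xm, Xp)                # nilpotent: A^3 = 0 since X(+/-)^3 = 0
RX  = np.eye(9) + alpha * A + beta * A @ A
RXi = np.eye(9) - alpha * A + (alpha**2 - beta) * A @ A

R, Ri = RK @ RX, RXi @ RKi
assert np.allclose(R @ Ri, np.eye(9))   # R R^{-1} = identity
```

The factorized form makes the inverse cheap: $R_X R_X^{-1} = \mbox{\rm 1}\hskip-2.8pt \mbox{\rm l}$ holds
identically because all terms of order three and higher in
$X_- \otimes X_+$ vanish.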
In the case $N=3$, the algebra $\mathcal H$ has three projective
indecomposable modules, denoted $3_{irr}$, $6_{odd}$ and $6_{eve}$, of
respective dimensions $3$, $6$ and $6$. The first one is irreducible
whereas the last two are not. The quotient of $6_{odd}$ (respectively
$6_{eve}$) by its radical, of dimension $5$ (respectively $4$), gives an
irreducible representation of dimension $1$ (respectively $2$). Moreover,
the tensor products between
irreducible representations and projective indecomposable ones can be
reduced as follows:
$$
\begin{tabular}{llllll}
$\underline{2} \times \underline{2}$ & $\equiv$ &
$\underline{1} + \underline{3}_{irr}$ &
$\underline{6}_{eve} \times \underline{3}_{irr}$ & $\equiv$ &
$\underline{6}_{eve} + \underline{6}_{eve} + \underline{3}_{irr} +
\underline{3}_{irr}$ \\
$\underline{2} \times \underline{3}_{irr}$ & $\equiv$ &
$\underline{6}_{eve}$ &
$\underline{6}_{odd} \times \underline{3}_{irr}$ & $\equiv$ &
$\underline{6}_{eve} + \underline{6}_{eve} + \underline{3}_{irr} +
\underline{3}_{irr}$ \\
$\underline{3}_{irr} \times \underline{3}_{irr}$ & $\equiv$ &
$\underline{6}_{odd} + \underline{3}_{irr}$ &
$\underline{6}_{eve} \times \underline{6}_{eve}$ & $\equiv$ &
$4(\underline{6}_{eve}) + 4(\underline{3}_{irr})$ \\
$\underline{6}_{eve} \times \underline{2}$ & $\equiv$ &
$\underline{6}_{odd} + \underline{3}_{irr} + \underline{3}_{irr}$ &
$\underline{6}_{eve} \times \underline{6}_{odd}$ & $\equiv$ &
$4(\underline{6}_{eve}) + 4(\underline{3}_{irr})$ \\
$\underline{6}_{odd} \times \underline{2}$ & $\equiv$ &
$\underline{6}_{eve} + \underline{3}_{irr} + \underline{3}_{irr}$ &
$\underline{6}_{odd} \times \underline{6}_{odd}$ & $\equiv$ &
$2(\underline{6}_{odd}) + 2(\underline{6}_{eve}) +
4(\underline{3}_{irr})$
\end{tabular}
$$
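As a quick sanity check, ordinary dimensions match on both sides of each
line of this table. A minimal bookkeeping sketch (the string labels are
just tags for the modules listed above):

```python
# ordinary (vector space) dimensions of the modules in the table
dim = {'1': 1, '2': 2, '3irr': 3, '6eve': 6, '6odd': 6}
table = {
    ('2', '2'):        ['1', '3irr'],
    ('2', '3irr'):     ['6eve'],
    ('3irr', '3irr'):  ['6odd', '3irr'],
    ('6eve', '2'):     ['6odd', '3irr', '3irr'],
    ('6odd', '2'):     ['6eve', '3irr', '3irr'],
    ('6eve', '3irr'):  ['6eve', '6eve', '3irr', '3irr'],
    ('6odd', '3irr'):  ['6eve', '6eve', '3irr', '3irr'],
    ('6eve', '6eve'):  4 * ['6eve'] + 4 * ['3irr'],
    ('6eve', '6odd'):  4 * ['6eve'] + 4 * ['3irr'],
    ('6odd', '6odd'):  2 * ['6odd'] + 2 * ['6eve'] + 4 * ['3irr'],
}
for (a, b), rhs in table.items():
    # dimension of a tensor product is the product of the dimensions
    assert dim[a] * dim[b] == sum(dim[r] for r in rhs)
```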
Notice that products of irreducible representations are not always direct
sums of irreducibles (this is not a modular category). One can define a
concept of truncated (or fusion) tensor product by using the notion of
quantum trace and discarding those representations of $q$-dimension zero.
The algebra $\mathcal H$ is indeed a Ribbon Hopf algebra and the notion of
quantum trace (and of quantum dimension of representations) makes sense.
This quantum dimension $Tr_q$ has the property of being multiplicative with
respect to tensor products. It can be seen that $Tr_q(X) = Tr(KX)$ so that
the $q$-dimension of the projective indecomposable representations vanishes,
whereas it is equal to the $q$-number $[n]$ for the irreducible
representations of (usual) dimensions $n$. Notice that for each value of
$q$ being a primitive $N$-th root of unity ($N$ odd), there exists a
particular projective indecomposable representation of (usual) dimension
$N$ which is, at the same time, irreducible; the $q$-dimension of this
particular irreducible representation vanishes. For $N = 3$, for instance,
one can check, using Appendix~E of \cite{CoGaTr-E}, that the
$q$-dimension of the projective indecomposable representations (the
$3_{irr}$, $6_{eve}$ and $6_{odd}$) vanishes, whereas it is equal
respectively to $1$ and $-1$ for the irreducible representations of
(usual) dimensions $1$ and $2$.
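Numerically, with $q = e^{2i\pi/3}$, the values quoted above ($0$ for the
projectives, $1$ and $-1$ for the irreducibles of dimensions $1$ and $2$)
can be read off from $Tr(K)$. The diagonal weight matrices below are an
assumed standard convention, not a transcription of \cite{CoGaTr-E}:

```python
import numpy as np

q = np.exp(2j * np.pi / 3)
# K in the representations of dimensions 1, 2 and 3 (assumed weights)
K1 = np.array([[1.0 + 0j]])
K2 = np.diag([q, 1/q])
K3 = np.diag([q**2, 1 + 0j, q**-2])   # the projective-and-irreducible one

qdim = lambda K: np.trace(K)          # q-dimension = Tr_q(1) = Tr(K)
assert np.isclose(qdim(K1), 1)        # [1] = 1
assert np.isclose(qdim(K2), -1)       # [2] = q + 1/q = -1 at q^3 = 1
assert np.isclose(qdim(K3), 0)        # vanishing q-dimension
```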
The previous table of results for tensor products of representations of
$\mathcal H$ was obtained in \cite{CoGaTr-E} without using knowledge of
the $R$-matrix and without using the concept of $q$-dimension (or truncated
tensor product).
\section{Reality structures}
\label{sec:stars}
\subsection{Real forms and stars on quantum groups}
A $*$-Hopf algebra $\mathcal F$ is an associative algebra that satisfies the
following properties (for all elements $a, b$ in $\mathcal F$):
\begin{enumerate}
\item
$\mathcal F$ is a Hopf algebra (a quantum group), with coproduct
$\Delta$, antipode $S$ and counit $\epsilon$.
\item
$\mathcal F$ is an involutive algebra, {\rm i.e.,\/}\ it has an involution $*$
(a `star' operation). This operation is involutive ($**a = a$),
antilinear ($*(\lambda a) = \overline{\lambda} *a$, where $\lambda$ is
a complex number), and anti-multiplicative ($*(ab) = (*b)(*a)$).
\item
The involution is compatible with the coproduct, in the following sense:
if $\Delta a = a_1 \otimes a_2$, then $\Delta *a = *a_1 \otimes *a_2$.
\item
The involution is also compatible with the counit:
$\epsilon(*a) = \overline{\epsilon(a)}$.
\item
The following relation with the antipode holds: $S * S * a = a$.
\end{enumerate}
\noindent Actually, the last relation is a consequence of the others. It
can also be written $ S \, * = * \, S^{-1}$. It may happen that $S^2 = \mbox{\it id\,}$,
in which case $S \, * = * \, S$, but it is not generally so (usually the
square of the antipode is only a conjugation).
If one wishes, using the $*$ on $\mathcal F$, one can define a star
operation on the tensor product ${\mathcal F} \otimes {\mathcal F}$, by
$*(a \otimes b) = *a \otimes *b$. The third condition reads then
$\Delta \, * = * \, \Delta$, so one can say $\Delta$ is a $*$-homomorphism
between $\mathcal F$ and ${\mathcal F} \otimes {\mathcal F}$ (each with its
respective star). It can also be said that $\epsilon$ is a $*$-homomorphism
between $\mathcal F$ and $\mbox{\rm C}\hskip-5.5pt \mbox{l} \;$ with the star given by complex conjugation.
A star operation as above, making the Hopf-algebra a $*$-Hopf algebra,
is also called a {\sl real form\/} for the given algebra. An element
$u$ that is such that $* u = u$ is called a real element.
\subsubsection{Twisted stars on quantum groups}
It may happen that one finds an involution $*$ on a Hopf algebra for
which the third axiom fails in a special way, namely, the case where
$\Delta a = a_1 \otimes a_2$ but where $\Delta *a = *a_2 \otimes *a_1$.
In this case $S \, * = * \, S$. Such an involution is called a
{\sl twisted star\/} operation. Remember that, whenever one has a coproduct
$\Delta$ on an algebra, it is possible to construct another coproduct
$\Delta^{op}$ by composing the first one with the tensorial flip. If
one defines a star operation on the tensor product (as before) by
$*(a \otimes b) \doteq *a \otimes *b$, the property defining a twisted
star reads
$$
\Delta \, * = * \, \Delta^{op} \ .
$$
One should be cautious: some authors introduce a different star operation
on the tensor product, namely $*'(a \otimes b) \doteq *b \otimes *a$. In
that case, a twisted star operation obeys $\Delta \, * = *' \, \Delta$!
Twisted star operations are often used in conformal field theory
(\cite{Mack}).
\subsubsection{Remark about superstars on differential algebras}
On a real manifold, the star operation has a ``strange property''.
Indeed, it is natural to take $*x = x$, $*y = y$ (on the algebra of
functions) and to extend this to the algebra of differential forms by
requiring $*dx = dx$, so that $dx$ is ``real'' in the sense of not being
complex. However, antimultiplicativity of the star leads
immediately to $*(dx \, dy) = (*dy) \, (*dx) = dy \, dx = - dx \, dy$, so
that $dx \, dy$ cannot be a ``real element'' for this reasonable star!
This strange feature does not arise when we stop at the level of the
algebra of functions but it shows up as soon as we want to promote a given
star to the $\mbox{\rm Z}\hskip-5pt \mbox{\rm Z}_{2}$-graded algebra of differential forms, something that
we shall do later in our context. In order to solve this ``problem'',
which already appears on a usual manifold, it is always possible ---but
not necessary--- to introduce a superstar (nothing to do with the twist
described in the previous subsection), {\rm i.e.,\/}\ a $\mbox{\rm Z}\hskip-5pt \mbox{\rm Z}_{2}$-graded star, with
the constraint:
$$
*(a \, b) = (-1)^{(\#a \#b)} \, *b \, *a
$$
where $\#a$ is the $\mbox{\rm Z}\hskip-5pt \mbox{\rm Z}_{2}$-parity of $a$. Using such superstars allows one
to identify ``real elements'' (in the usual sense of not being complex) with
real elements for the $*$ ({\rm i.e.,\/}\ such that $*u=u$).
\subsection{Real forms on $\mathcal F$}
As can be easily found in the literature, one has three possibilities
for the ---not twisted--- star operations on $Fun(SL_q(2,\mbox{\rm C}\hskip-5.5pt \mbox{l} \;))$ (up to
Hopf automorphisms):
\begin{enumerate}
\item
The real form $Fun(SU_q(2))$. The matrix elements obey
$a^* = d$, $b^* = -qc$, $c^* = -q^{-1} b$ and $d^* = a$. Moreover,
$q$ should be real.
\item
The real form $Fun(SU_q(1,1))$. The matrix elements obey
$a^* = d$, $b^* = qc$, $c^* = q^{-1} b$ and $d^* = a$. Moreover,
$q$ should be real.
\item
The real form $Fun(SL_q(2,\mbox{\rm I}\hskip-2pt \mbox{\rm R}))$. The matrix elements obey
$a^* = a$, $b^* = b$, $c^* = c$ and $d^* = d$.
Here $q$ can be complex but it should be a phase.
\end{enumerate}
\noindent Therefore, taking $q$ a root of unity is incompatible with the
$SU_q$ real forms, and the only possibility is to choose the star
corresponding to $Fun(SL_q(2,\mbox{\rm I}\hskip-2pt \mbox{\rm R}))$. This already tells us that there
is at most one real form on its quotient $\mathcal F$. We only have to
check that the star operation preserves the ideal and coideal defined by
$a^3 = d^3 = \mbox{\rm 1}\hskip-2.8pt \mbox{\rm l}$, $b^3 = c^3 = 0$. This is trivial because $a^* = a$,
$b^* = b$, $c^* = c$ and $d^* = d$. Hence, this star operation can be
restricted to $\mathcal F$.
This real form can be considered as a reduced quantum group associated
with the real form $Fun(SL_q(2,\mbox{\rm I}\hskip-2pt \mbox{\rm R}))$ of $Fun(SL_q(2,\mbox{\rm C}\hskip-5.5pt \mbox{l} \;))$.
Of course, one can also discuss {\sl twisted} star operations: see,
in particular the comment at the end of the next subsection.
\subsection{Real structures and star operations on $\mathcal M$
and $\mathcal H$}
Now we want to introduce an involution (a star operation) on the
comodule algebra $\mathcal M$. This involution should be compatible with
the coaction of $\mathcal F$. That is, we are asking for covariance of
the star under the (right,left) $\mathcal F$-coaction,
\begin{equation}
(\Delta_{R,L} \, z)^* = \Delta_{R,L} (z^*) \ , \quad
\text{for any } z \in {\mathcal M} \ .
\label{*-delta-condition}
\end{equation}
\noindent Here we have used the same notation $*$ for the star on the
tensor products, defined as $(A \otimes B)^* = A^* \otimes B^*$.
Using, for instance, the left coaction in (\ref{*-delta-condition}),
we see immediately that the real form $Fun(SL_q(2,\mbox{\rm I}\hskip-2pt \mbox{\rm R}))$ corresponds
to choosing on ${\mathcal M} = M_N(\mbox{\rm C}\hskip-5.5pt \mbox{l} \;)$ the star given by
\begin{equation}
x^* = x \ , \qquad y^* = y \ .
\label{*-on-M}
\end{equation}
We now want to find a compatible $*$ on the algebra $\mathcal H$.
As $\mathcal H$ is dual to $\mathcal F$ (or $U_q(sl(2))$ dual to
$Fun(SL_q(2,\mbox{\rm C}\hskip-5.5pt \mbox{l} \;))$), we should have dual $*$-structures. This means
the relation
\begin{equation}
<h^*, u> = \overline{<h, (Su)^*>} \ , \quad
h \in{\mathcal H}, u \in{\mathcal F}
\label{dual-*-structures}
\end{equation}
holds. In this way we obtain:
\begin{equation}
X_+^* = -q^{-1} X_+ \ , \quad X_-^* = -q X_- \ , \quad K^* = K \ .
\label{*-on-H}
\end{equation}
Moreover, the covariance condition for the star, equation
(\ref{*-delta-condition}), may also be written dually as a condition for
$*$ under the action of $\mathcal H$. This can be done by pairing the
$\mathcal F$ component of equation (\ref{*-delta-condition}) with some
$h \in {\mathcal H}$. One gets finally the constraint on
$*_{\mathcal H}$ to be $\mathcal H$ covariant,
\begin{equation}
h(z^*) = \left[ (S h )^* z \right]^* \ , \quad
h \in{\mathcal H}, z \in{\mathcal M} \ .
\label{*-action-condition}
\end{equation}
Adding the non quadratic relations $x^N = \mbox{\rm 1}\hskip-2.8pt \mbox{\rm l}$, $y^N = \mbox{\rm 1}\hskip-2.8pt \mbox{\rm l}$ in
$\mathcal M$, and the corresponding ones in the algebra $\mathcal H$,
does not change the determination of the star structures.
This is because the (co)ideals are preserved by the involutions, and
thus the quotients can still be done.
Remark that the set of $N \times N$ matrices is endowed with a usual
star operation, the hermitian conjugation $\dag$. It is clear that $x$
and $y$ are unitary elements with respect to $\dag$: $x^\dag = x^{-1}$
($=x^{2}$ if $N=3$) and $y^\dag = y^{-1}$ ($=y^{2}$ if $N=3$). But
this is {\sl not} the star operation that we are using now, the one that
is compatible with the quantum group action, at least when $x$ and $y$
are represented by the $3 \times 3$ matrices given in
Section~\ref{sec:red-q-plane}.
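For concreteness, one may realize $x$ and $y$ by the familiar clock and
shift matrices, a hypothetical choice consistent with $xy = qyx$ and
$x^3 = y^3 = \mbox{\rm 1}\hskip-2.8pt \mbox{\rm l}$ (the matrices of Section~\ref{sec:red-q-plane} may
differ by conventions), and verify the statement about $\dag$:

```python
import numpy as np

q = np.exp(2j * np.pi / 3)
x = np.diag([1 + 0j, q, q**2])                     # clock matrix
y = np.roll(np.eye(3, dtype=complex), 1, axis=0)   # shift: y e_j = e_{j+1 mod 3}
I = np.eye(3)

assert np.allclose(x @ y, q * y @ x)                  # x y = q y x
assert np.allclose(np.linalg.matrix_power(x, 3), I)   # x^3 = 1
assert np.allclose(np.linalg.matrix_power(y, 3), I)   # y^3 = 1
# hermitian conjugation: x and y are unitary, x^dag = x^{-1} = x^2 for N = 3
assert np.allclose(x.conj().T, x @ x)
assert np.allclose(y.conj().T, y @ y)
```

In contrast, the covariant star (\ref{*-on-M}) fixes $x$ and $y$, which
are clearly not hermitian matrices in this realization.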
Note finally that one could be tempted to choose the involution defined
by $K^\star = K^{-1}$, $X_+^\star = \pm X_-$ and $X_-^\star = \pm X_+$.
However, this is a {\em twisted} star operation. It is the one that
would be needed to interpret the unitary group of $\mathcal H$,
in the case $N=3$, as $U(3) \times U(2) \times U(1)$, which could be
related to the gauge group of the Standard Model \cite{Connes-2}.
\subsection {A quantum group invariant scalar product on the space of
$N \times N$ matrices}
\label{subsec:scalar-product-on-M}
\subsubsection{Identification between star operation and adjoint}
A possible scalar product on the space ${\mathcal M} = M_N(\mbox{\rm C}\hskip-5.5pt \mbox{l} \;)$ of
$N \times N$ matrices is the usual one, namely, $m_1, m_2 \rightarrow
Tr(m_1^\dag m_2)$. For every linear operator $\ell$ acting on $\mathcal M$
we can define the usual adjoint $\ell^\dag$; however, this adjoint does not
coincide with the star operation introduced previously. Our aim in this
section is to find another scalar product better suited for our purpose.
We take $z, z' \in {\mathcal M}$, and $h \in \mathcal H$. We know that the
first ones act on $\mathcal M$ like multiplication operators and that $h$
acts on $\mathcal M$ by twisted derivations or automorphisms. We also
know the action of our star operation $*$ on these linear operators. We
shall now obtain our scalar product by imposing that $*$ coincides with the
adjoint associated with this scalar product (compatibility condition). That
is, we are asking for an inner product on $\mathcal M$ such that the actions
of $\mathcal M$ and $\mathcal H$ (each with its respective star) on that
vector space may be thought of as $*$-representations. Due to the fact that
$(z, z') = (\mbox{\rm 1}\hskip-2.8pt \mbox{\rm l}, z^* z')$, it is enough to compute $(\mbox{\rm 1}\hskip-2.8pt \mbox{\rm l}, z)$ for all the
$z$ belonging to $\mathcal M$. The above compatibility condition leads to a
single solution \cite{CoGaTr-E}: the only non vanishing scalar product
between elements of the type $(\mbox{\rm 1}\hskip-2.8pt \mbox{\rm l}, z)$ is $(\mbox{\rm 1}\hskip-2.8pt \mbox{\rm l}, x^{N-1} y^{N-1})$.
From this quantity we deduce $N^{2}-1$ other non-zero values for the scalar
products $(z,z')$ where $z$ and $z'$ are basis elements $x^r y^s$. For
instance,
$(x, x^{N-2}y^{N-1}) = (\mbox{\rm 1}\hskip-2.8pt \mbox{\rm l}, x^* x^{N-2} y^{N-1}) = (\mbox{\rm 1}\hskip-2.8pt \mbox{\rm l}, x^{N-1} y^{N-1})$.
Hermiticity with respect to $*$ implies that $(xy,xy)$ should be real,
so we set $(xy,xy) = 1$.
\subsubsection{Quantum group invariance of the scalar product}
We should now justify why the above scalar product was called a quantum
group invariant one. Remember we only said the scalar product was such that
the stars coincide with the adjoint operators, or such that the actions are
given by $*$-representations.
We refer the reader to \cite{CoGaTr}, where it is shown that the
$*$-representation condition on the scalar product,
\begin{equation}
(h z,w) = (z,h^* w) \ , \quad h \in {\mathcal H} \ ,
\label{star_rep}
\end{equation}
automatically fulfills one of the two alternative invariance conditions
that can be imposed on the scalar product, namely
\begin{equation}
((Sh_1)^* z,h_2 w) = \epsilon(h) (z,w) \ , \quad
\text{with } \Delta h = h_1 \otimes h_2 \ .
\label{cond_for_scalar_prod_inv}
\end{equation}
The relations dual to (\ref{cond_for_scalar_prod_inv}) and
(\ref{star_rep}) are those that apply to the coaction of $\mathcal F$
instead of the action of $\mathcal H$. These are
\begin{equation}
(\Delta_R \, z, \Delta_R \, w) = (z,w) \mbox{\rm 1}\hskip-2.8pt \mbox{\rm l}_{\mathcal F} \ ,
\label{dual_cond_for_scalar_prod_inv}
\end{equation}
where $(\Delta_R \, z, \Delta_R \, w)$ should be understood as
$(z_i, w_j) \, T_i^* \, T_j$ if $\Delta_R \, z = z_i \otimes T_i$, and
\begin{equation}
(z,\Delta_R w) = ((1\otimes S)\Delta_R z,w) \ ,
\label{dual_star_rep}
\end{equation}
respectively. We have used here the right-coaction, but the formulas for
the left coaction can be trivially deduced from the above ones.
It is worth noting that these equations for the coaction of $\mathcal F$
{\em imply} the previous ones for the action of $\mathcal H$, and are
completely equivalent assuming non-degeneracy of the pairing
$<\cdot,\cdot>$. Moreover, (\ref{dual_cond_for_scalar_prod_inv}) is a
requirement analogous to the condition of classical invariance by a group
element action.
Using the unique Hopf compatible star operation $*$ on $\mathcal H$, we
can calculate the most general metric on the vector spaces of each of the
indecomposable representations of $\mathcal H$. Obviously, one should
restrict the inner product to be a quantum group invariant one. This is done
in \cite{CoGaTr-E}, where we refer for details. Here it suffices to say that
one obtains for the projective indecomposable modules nondegenerate
but indefinite metrics, and the submodules carry a metric which is,
moreover, degenerate.
\section{The Manin dual ${\mathcal M}^!$ of $\mathcal M$}
\label{sec:manin-dual}
Our algebra $\mathcal M$ is not quadratic since we impose the
relations $x^N = y^N = \mbox{\rm 1}\hskip-2.8pt \mbox{\rm l}$. Nevertheless, we define its Manin dual
${\mathcal M}^!$ as the algebra generated {\sl over the complex
numbers\/} by $\xi^1, \xi^2$, satisfying the relations
$$
(\xi^1)^2 = 0 \: , \quad
q\xi^1 \xi^2 + \xi^2 \xi^1 = 0 \: , \quad
(\xi^2)^2 = 0 \ ,
$$
as in the unreduced case. We shall write $dx \doteq \xi^1$ and
$dy \doteq \xi^2$, so ${\mathcal M}^!$ is defined by these two
generators and the relations
\begin{equation}
dx^2 = 0 \: , \quad
dy^2 = 0 \: , \quad
q dx \, dy + dy \, dx = 0 \ .
\label{M!-relations}
\end{equation}
Once the coaction of $\mathcal F$ on $\mathcal M$ has been defined as
in Section~\ref{sec:q-group-F}, it is easy to check that its coaction
on ${\mathcal M}^!$ is given by the same formulae. Namely, writing
$$
\begin{pmatrix} dx' \\ dy' \end{pmatrix} =
\begin{pmatrix} a & b \\ c & d \end{pmatrix} \otimes
\begin{pmatrix} dx \\ dy \end{pmatrix}
$$
and
$$
\begin{pmatrix} \widetilde{dx} & \widetilde{dy} \end{pmatrix} =
\begin{pmatrix} dx & dy \end{pmatrix} \otimes
\begin{pmatrix} a & b \\ c & d \end{pmatrix}
$$
ensures that $q \, dx' \, dy' + dy' \, dx' = 0$ and
$q \, \widetilde{dx} \, \widetilde{dy} + \widetilde{dy} \, \widetilde{dx} = 0$,
once the relation $q \, dx \, dy + dy \, dx = 0$ is satisfied.
The left and right coactions can be read from those formulae, for
instance $\Delta_R (dx) = dx \otimes a + dy \otimes c$.
Since the formulae for the coactions on the generators and on their
differentials are the same, the formulae for the actions of $\mathcal H$
on ${\mathcal M}^!$ must also coincide. For instance, using $X_-^L[x] = y$
we find immediately $X_-^L[dx] = dy$. This corresponds to an irreducible
two dimensional representation of $\mathcal H$. We shall return to this
problem in the next section, since we are going to analyse the
decomposition in representations of a differential algebra
$\Omega_{WZ}({\mathcal M})$ built using $\mathcal M^!$.
\section{Covariant differential calculus on $\mathcal M$}
\label{sec:diff-calculus}
Given an algebra $\mathcal A$, there is a universal construction that
allows one to build the so-called {\sl algebra of universal differential
forms\/} $\Omega({\mathcal A}) = \sum_{p=0}^\infty \Omega^p({\mathcal A})$
over $\mathcal A$. This differential algebra is universal, in the sense that
any other differential algebra with $\Omega^0({\mathcal A}) = {\mathcal A}$
will be a quotient of the universal one. For practical purposes, it is often
not very convenient to work with the algebra of universal forms. First of
all, it is very ``big''. Next, it does not remember anything of the coaction
of ${\mathcal F}$ on the algebra ${\mathcal M}$ (the $0$-forms).
Starting from a given algebra, there are several constructions that allow
one to build ``smaller'' differential calculi. As already mentioned, they
will all be quotients of the algebra of universal forms by some
(differential) ideal. One possibility for such a construction was
described by \cite{Connes}, another one by \cite{Dubois-Violette},
and yet another one by \cite{Coquereaux-Haussling-Scheck}. In the present
case, however, we use something else, namely the differential calculus
$\Omega_{WZ}$ introduced by Wess and Zumino \cite{Wess-Zumino} in the case
of the quantum $2$-plane. Its main interest is that it is covariant with
respect to the action (or coaction) of a quantum group. Its construction
was recalled in \cite{CoGaTr} where it was also shown that one can
further take another quotient by a differential ideal associated with the
constraints $x^{N}=y^{N}=\mbox{\rm 1}\hskip-2.8pt \mbox{\rm l}$ (so that, indeed $d(x^{N})=d(y^{N}) = 0$
automatically).
\subsection{The reduced Wess-Zumino complex}
The algebra $\Omega_{WZ}$ is a differential algebra first
introduced by \cite{Wess-Zumino} for the ``full'' quantum plane.
First of all
$\Omega_{WZ} = \Omega_{WZ}^0 \oplus \Omega_{WZ}^1 \oplus \Omega_{WZ}^2$
is a graded vector space.
\begin{itemize}
\item
Forms of grade $0$ are just functions on the quantum plane,
{\rm i.e.,\/}\ elements of $\mathcal M$.
\item
Forms of grade $1$ are of the type
$a_{rs} x^r y^s dx + b_{rs} x^r y^s dy$, where $dx$ and $dy$ are the
generators of the Manin dual ${\mathcal M}^!$.
\item
Forms of grade $2$ are of the type $c_{rs} x^r y^s dx \, dy$.
\end{itemize}
\noindent Next, $\Omega_{WZ}$ is an algebra. The relations between $x$, $y$,
$dx$ and $dy$ are determined by covariance under the quantum group action:
\begin{equation}
\begin{tabular}{ll}
$xy = qyx$ & \\
$x\,dx = q^2 dx\,x$ \qquad & $x\,dy = q \, dy\,x + (q^2 - 1) dx\,y$ \\
$y\,dx = q \, dx\,y$ & $y\,dy = q^2 dy\,y$ \\
$dx^2 = 0$ & $dy^2 = 0$ \\
$dx\,dy + q^2 dy\,dx = 0$ & \\
\end{tabular}
\label{WZ-relations}
\end{equation}
Moreover, we want $\Omega_{WZ}$ to be a differential algebra, so we
introduce an operator $d$ and set $d(x) = dx$, $d(y) = dy$; the Leibniz rule
(for $d$) should also hold. Finally, we impose $d \mbox{\rm 1}\hskip-2.8pt \mbox{\rm l} = 0$ and $d^2 = 0$.
In the case $q^N = 1$, we add to $\Omega_{WZ}$ the extra defining relation
(coming from the definition of the reduced quantum plane):
$x^N = y^N = \mbox{\rm 1}\hskip-2.8pt \mbox{\rm l}$. The fact that $\Omega_{WZ}$ is still well defined as a
differential algebra is not totally obvious and requires some checking (see
\cite{CoGaTr-E}). Note that $\dim(\Omega_{WZ}^0) = N^{2}$,
$\dim(\Omega_{WZ}^1) = 2N^{2}$ and $\dim(\Omega_{WZ}^2) = N^{2}$.
\subsection{The action of $\mathcal H$ on $\Omega_{WZ}({\mathcal M})$}
\label{subsec:H-action-on-Omega}
Since $\mathcal H$ acts on ${\mathcal M}$ (and we know how this module
decomposes under the action of $\mathcal H$), it is clear that we can also
decompose $\Omega_{WZ}$ in representations of $\mathcal H$. This was done
explicitly (for the case $N=3$) in \cite{CoGaTr}, and the action of
$\mathcal H$ on $\Omega_{WZ}^1$ (for an arbitrary $N$) was described in
\cite{Coquereaux-Schieber}. The cohomology of $d$ is actually non trivial
and was also studied in \cite{CoGaTr-E}.
\subsection{The space of differential operators on $\mathcal M$}
\label{app:diff-operators-on-M}
We now summarize the structure of the space of differential operators on
the reduced quantum plane $\mathcal M$, {\rm i.e.,\/}\ the algebra of $N \times N$
matrices.
\subsubsection*{The space $\mathcal D$ of differential operators on
$\mathcal M$}
We already know what the operator
$d: \Omega_{WZ}^0 = {\mathcal M} \rightarrow \Omega_{WZ}^1$ is.
{\it A priori\/} $d f = dx \, \partial_x( f) + dy \, \partial_y (f)\:$,
where $\partial_x f, \partial_y f \in {\mathcal M}$, and this defines
$\partial_x$ and $\partial_y$ as (linear) operators on ${\mathcal M}$.
Generally speaking, operators of the type $f(x,y) \partial_x$ or
$f(x,y) \partial_y$ are called differential operators of order $1$.
Composition of such operators gives rise to differential operators of
order higher than $1$. Multiplication by elements of $\mathcal M$ is
considered as a differential operator of degree $0$. The space of all
these operators is a vector space
${\mathcal D} = \oplus_{i=0}^{i=4}{\mathcal D}_i$.
\subsubsection*{The twisting automorphisms $\sigma$\footnote{
Some properties of these automorphisms are discussed
in \cite{Manin-2}.
}}
Since we know how to commute $x,y$ with $dx,dy$, we can write, for any
element $f \in {\mathcal M}$
\begin{eqnarray*}
f dx &=& dx \, \sigma_x^x(f) + dy \, \sigma_y^x(f) \cr
f dy &=& dx \, \sigma_x^y(f) + dy \, \sigma_y^y(f)
\end{eqnarray*}
where each $\sigma_{j}^{i}$ is an element of $End({\mathcal M})$ to be
determined (just take $f=x$ and $f=y$ in the above to get the results that
one is looking for).
Let $f$ and $g$ be elements of $\mathcal M$. From the associativity
property $(fg) dz = f(g dz)$ we find
$$
\sigma_i^j(fg) = \sigma_i^k(f) \, \sigma_k^j(g)
$$
with a summation over the repeated indices. The map
$\sigma: f \in {\mathcal M} \rightarrow \sigma(f) \in M_2({\mathcal M})$
is an algebra homomorphism from the algebra ${\mathcal M}$ to the algebra
$M_2({\mathcal M})$ of $2 \times 2$ matrices, with elements in $\mathcal M$.
The usual Leibniz rule for $d$, namely $d(fg) = d(f)g+fd(g)$, implies
that
$$
\partial_i(fg) = \partial_i(f) \, g + \sigma_i^j (f) \, \partial_j(g) \ .
$$
This shows that $\partial_x$ and $\partial_y$ are derivations twisted
by an automorphism.
\subsubsection*{Relations in $\mathcal D$}
For calculational purposes, it is useful to know the commutation relations
between $x,y$ and $\partial_x, \partial_y$, those between
$\partial_x, \partial_y$ and $\sigma_i^j$ and the relations between the
$\sigma_i^j$. Here are some results (see also \cite{Wess-Zumino,Manin}).
\begin{eqnarray*}
\partial_x\, x &=& 1 + q^2 x\, \partial_x + (q^2-1) y \, \partial_y \cr
\partial_x\, y &=& q y \, \partial_x \cr
{} &{}& \cr
\partial_y\, x &=& q x \, \partial_y \cr
\partial_y\, y &=& 1 + q^2 y \, \partial_y
\end{eqnarray*}
Moreover,
$$
\partial_y \, \partial_x = q \partial_x \, \partial_y
$$
Also, the relations $x^N = y^N = \mbox{\rm 1}\hskip-2.8pt \mbox{\rm l}$ lead to other constraints on the
powers of the derivations. For example, for $N=3$ these imply the
constraint:
$$
\partial_x\partial_x\partial_x = \partial_y\partial_y\partial_y = 0
$$
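These relations can be checked on the $9$-dimensional space spanned by the
monomials $x^r y^s$, $0 \le r,s < 3$. The sketch below assumes the
monomial action $\partial_x(x^r y^s) = [r]_{q^2}\, q^{2s}\, x^{r-1} y^s$
and $\partial_y(x^r y^s) = q^{r} [s]_{q^2}\, x^r y^{s-1}$, which one
derives from $d(x^r y^s)$ using the commutation rules
(\ref{WZ-relations}):

```python
import numpy as np

q, N = np.exp(2j * np.pi / 3), 3
qn = lambda r: (q**(2*r) - 1) / (q**2 - 1)      # the q-number [r]_{q^2}
idx = lambda r, s: (r % N) * N + (s % N)        # monomial x^r y^s -> 3r + s

X, Y = np.zeros((9, 9), complex), np.zeros((9, 9), complex)
Dx, Dy = np.zeros((9, 9), complex), np.zeros((9, 9), complex)
for r in range(N):
    for s in range(N):
        X[idx(r+1, s), idx(r, s)] = 1           # x . x^r y^s = x^{r+1} y^s
        Y[idx(r, s+1), idx(r, s)] = q**(-r)     # y . x^r y^s = q^{-r} x^r y^{s+1}
        if r > 0: Dx[idx(r-1, s), idx(r, s)] = qn(r) * q**(2*s)
        if s > 0: Dy[idx(r, s-1), idx(r, s)] = q**r * qn(s)

I9 = np.eye(9)
assert np.allclose(Dx @ X, I9 + q**2 * X @ Dx + (q**2 - 1) * Y @ Dy)
assert np.allclose(Dx @ Y, q * Y @ Dx)
assert np.allclose(Dy @ X, q * X @ Dy)
assert np.allclose(Dy @ Y, I9 + q**2 * Y @ Dy)
assert np.allclose(Dy @ Dx, q * Dx @ Dy)
assert np.allclose(np.linalg.matrix_power(Dx, 3), 0)  # partial_x^3 = 0
assert np.allclose(np.linalg.matrix_power(Dy, 3), 0)  # partial_y^3 = 0
```

The reduction $x^3 = y^3 = \mbox{\rm 1}\hskip-2.8pt \mbox{\rm l}$ is encoded in the index map; it is
consistent with $d$ because $[3]_{q^2} = 0$ at $q^3 = 1$.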
Finally, the commutation relations between the $\sigma$'s can be calculated
from the values of the $\sigma^i_j(x)$.
\subsubsection*{Differential operators on $\mathcal M$ associated with the
$\mathcal H$ action}
The twisted derivations $\partial_x, \partial_y$ considered previously
constitute a $q$-analogue of the notion of vector fields. Their powers
build up arbitrary differential operators. Elements of $\mathcal H$ act
also like powers of generalized vector fields (consider, for instance, the
left action generated by $X_\pm^L,K^L$). Of course, they are differential
operators of a special kind. One can say that elements of $\mathcal H$ act
on $\mathcal M$ as {\sl invariant\/} differential operators since they are
associated with the action of a (quantum) group on a (quantum) space.
A priori, the generators $X_\pm^L,K^L$ can be written in terms of
$x,y, \partial_x, \partial_y$. The coefficients of such a combination
can be determined by imposing that equations (\ref{H-products}),
(\ref{actionofHonM}) are satisfied. A rather cumbersome calculation
leads to a unique solution ({\rm cf.\/}\ \cite{CoGaTr-E}) that can be
written simply in terms of the scaling operators \cite{Ogievetsky-2}\
$\mu_x \equiv \mbox{\rm 1}\hskip-2.8pt \mbox{\rm l} + (q^2-1)(x \partial_x + y \partial_y)$\ , \
$\mu_y \equiv \mbox{\rm 1}\hskip-2.8pt \mbox{\rm l} + (q^2-1)(y \partial_y)$:
\begin{eqnarray*}
K^L_- &=& \mu_x \mu_y \\
K^L &=& \mu_x \mu_x \mu_y \mu_y \\
X^L_+ &=& q^{-1} x \partial_y \mu_y \mu_y \\
X^L_- &=& q y \partial_x \mu_x
\end{eqnarray*}
Notice that elements of $\mathcal M$ acting by multiplication on itself can
be considered as differential operators of order zero. It makes therefore
perfect sense to study the commutation relations between the generators
$x,y$ of the quantum plane and $X_{\pm},K$. This is also done in
\cite{CoGaTr-E}.
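As a quick orientation (again under the assumption that the reduced plane obeys $xy = q\,yx$, so monomials can be normal-ordered as $x^a y^b$), the scaling operators act diagonally:

```latex
% with [k]_{q^2} = (q^{2k}-1)/(q^2-1) and y\partial_y\,(x^a y^b) = [b]_{q^2}\, x^a y^b:
\mu_y \, (x^a y^b) = \bigl(1 + (q^2-1)\,[b]_{q^2}\bigr)\, x^a y^b = q^{2b}\, x^a y^b \ ,
\qquad
\mu_x \, (x^a y^b) = q^{2(a+b)}\, x^a y^b \ .
```

Thus $\mu_x$ and $\mu_y$ implement the finite rescalings $(x,y) \mapsto (q^2 x, q^2 y)$ and $y \mapsto q^2 y$ respectively, which is why the generators above can be written so compactly as products of them.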
\subsection{Star operations on the differential calculus
$\Omega_{WZ}({\mathcal M})$}
\label{subsec:*-on-Omega}
Given a $*$ operation on the algebra $\mathcal M$, we want to extend it
to the differential algebra $\Omega_{WZ}({\mathcal M})$. This can be
done in two ways, either by using a usual star operation, or by using
a superstar operation (see the subsection on superstar operations). Here we use
the ``usual'' star operation formalism, so that the star has to be
involutive, complex sesquilinear, and anti-multiplicative for the algebra
structure in $\Omega_{WZ}({\mathcal M})$. We impose moreover that it
should be compatible with the coaction of $\mathcal F$. The quantum
group covariance condition is, again, just the commutativity of
the $*$, $\Delta_{R,L}$ diagram, or, algebraically,
\begin{equation}
(\Delta_{R,L} \omega)^* = \Delta_{R,L} (\omega^*) \ .
\label{*-coaction-on-omega-condition}
\end{equation}
However, there is no reason {\it a priori\/} to impose that $*$ should
commute with $d$. In any case, it is enough to determine the action of $*$
on the generators $dx$ and $dy$, since we already determined the $*$
operation on $\mathcal M$ ($* x = x$, $* y = y$).
Taking $\Delta_{R,L} dx = a \otimes dx + b \otimes dy$, we get
$(\Delta_{R,L} dx)^* = a \otimes dx^* + b \otimes dy^*$, to be compared
with $\Delta_{R,L} (dx^*)$. Expanding $dx^*$ as a generic element of
$\Omega^1_{WZ}({\mathcal M})$ (we want a grade-preserving $*$), it can be
seen that the solution $dx^* = dx$ is the only possible one, up to complex
phases. Doing the same with $dy$ we get:
\begin{equation}
dx^* = dx \ , \qquad dy^* = dy \ .
\label{star-in-Omega}
\end{equation}
\noindent The star being now defined on $\Omega_{WZ}^0 = {\mathcal M}$ and
on the $d$ of the generators of ${\mathcal M}$, it is extended to the whole
of the differential algebra $\Omega_{WZ}$ by imposing the
anti-multiplicative property
$*(\omega_1 \omega_2) = (* \omega_2) (* \omega_1)$.
With this result, it can be checked that
\begin{equation}
d * \omega = (-1)^p * d \omega \quad \text{when} \quad
\omega \in \Omega_{WZ}^p \ .
\label{d-*-relation}
\end{equation}
The above involution is not the only one that one can define on the
Wess-Zumino complex. However, any other involution would not be compatible
with the coaction of $\mathcal F$. Losing the compatibility with the
quantum group is clearly unacceptable, since the main interest of the
Wess-Zumino differential complex rests on the fact that it is compatible
with the coaction.
\section{Non commutative generalized connections on $\mathcal M$ and their
curvature}
\label{sec:connections}
Let $\Omega$ be a differential calculus over a unital associative algebra
$\mathcal M$, {\rm i.e.,\/}\ a graded differential algebra with
$\Omega^0 = \mathcal M$. Let $\mathcal M$ be a right module over
$\mathcal M$. A covariant differential $\nabla$ on $\mathcal M$ is a map
${\mathcal M} \otimes_{\mathcal M} \Omega^p \mapsto
{\mathcal M} \otimes _{\mathcal M} \Omega^{p+1}$, such that
$$
\nabla( \psi \lambda) = (\nabla \psi) \lambda + (-1)^s \psi \, d \lambda
$$
whenever $\psi \in {\mathcal M} \otimes_{\mathcal M} \Omega^s$
and $\lambda \in \Omega^t$. $\nabla$ is clearly not linear with respect to
the algebra $\mathcal M$ but it is easy to check that the curvature
$\nabla^2$ is a linear operator with respect to $\mathcal M$.
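The check is short enough to record: for $\psi \in {\mathcal M} \otimes_{\mathcal M} \Omega^s$ and $f \in \Omega^0 = {\mathcal M}$, applying the defining rule twice (the $\psi\, d^2 f$ term drops out since $d^2 = 0$) gives

```latex
\nabla^2(\psi f)
  = \nabla\bigl( (\nabla\psi) f + (-1)^s \psi\, df \bigr)
  = (\nabla^2\psi) f + \bigl[ (-1)^{s+1} + (-1)^s \bigr] (\nabla\psi)\, df
  = (\nabla^2\psi)\, f \ .
```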
In the particular case where the module $\mathcal M$ is taken as the
algebra $\mathcal M$ itself, any one-form $\omega$ (any element of
$\Omega^1$) defines a covariant differential. One sets simply
$\nabla \mbox{\rm 1}\hskip-2.8pt \mbox{\rm l} = \omega$, where $\mbox{\rm 1}\hskip-2.8pt \mbox{\rm l}$ is the unit of the algebra
$\mathcal M$ and we call curvature the quantity $\rho = \nabla^{2}\mbox{\rm 1}\hskip-2.8pt \mbox{\rm l}$,
$$
\rho \doteq \nabla \omega = \nabla \mbox{\rm 1}\hskip-2.8pt \mbox{\rm l} \omega = (\nabla \mbox{\rm 1}\hskip-2.8pt \mbox{\rm l}) \omega +
\mbox{\rm 1}\hskip-2.8pt \mbox{\rm l} d \omega = d \omega + \omega^2 \ .
$$
\subsection{Connections on $\mathcal M$ and their curvature}
We now return to the specific case where $\mathcal M$ is the algebra of
functions over the quantum plane at a $N$-th root of unity.
The most general connection is defined by an element $\phi$ of
$\Omega_{WZ}^1(\mathcal M)$. Since we have a quantum group action of
$\mathcal H$ on $\Omega_{WZ}$, it is convenient to decompose $\phi$ into
representations of this algebra as obtained in
Section~\ref{subsec:H-action-on-Omega}.
The exact expression of the curvature $\rho = d \phi + \phi \phi$ is not
very illuminating but it can be simplified in several particular
cases (see \cite{CoGaTr-E}).
As we know, the only Hopf star operation compatible with the quantum group
action of $\mathcal H$ on the differential algebra $\Omega_{WZ}$, when $q^N
= 1$, is the one described in Section~\ref{subsec:*-on-Omega} ($dx^* = dx$,
$dy^* = dy$). Imposing the hermiticity property $\phi = \phi^*$ on the
connection leads to constraints on the coefficients. Again we refer to
\cite{CoGaTr-E} for a discussion of the results.
\section{Incorporation of Space-Time}
\label{sec:space-time}
\subsection{Algebras of differential forms over $C^\infty(M) \otimes
{\mathcal M}$}
Let $\Lambda$ be the algebra of usual differential forms over a space-time
manifold $M$ (the De Rham complex) and
$\Omega_{WZ} \doteq \Omega_{WZ}({\mathcal M})$
the differential algebra over the reduced quantum plane introduced in
Section~\ref{sec:diff-calculus}. Remember that
$\Omega_{WZ}^0 = {\mathcal M}$,
$\Omega_{WZ}^1 = {\mathcal M} \: dx + {\mathcal M} \: dy$, and
$\Omega_{WZ}^2 = {\mathcal M} \: dx \, dy$.
We call $\Xi$ the graded tensor product of these two differential algebras:
$$
\Xi \doteq \Lambda \otimes \Omega_{WZ}
$$
\begin{itemize}
\item
A generic element of $\Xi^0 = \Lambda^0 \otimes \Omega_{WZ}^0$ is a
$3 \times 3$ matrix with elements in $C^\infty(M)$. It can be thought of
as a scalar field valued in $M_3(\mbox{\rm C}\hskip-5.5pt \mbox{l} \;)$.
\item
A generic element of
$\Xi^1 = \Lambda^0 \otimes \Omega_{WZ}^1 \oplus
\Lambda^1 \otimes \Omega_{WZ}^0$
is given by a triplet $\omega = ( A_\mu, \phi_x, \phi_y )$, where
$A_\mu$ determines a one-form (a vector field) on the manifold $M$
with values in $M_3(\mbox{\rm C}\hskip-5.5pt \mbox{l} \;)$ (that we can consider as the Lie algebra of the
Lie group $GL(3,\mbox{\rm C}\hskip-5.5pt \mbox{l} \;)$), and where $\phi_x, \phi_y$ are $M_3(\mbox{\rm C}\hskip-5.5pt \mbox{l} \;)$-valued
scalar fields. Indeed
$\phi_x (x^{\mu}) \: dx + \phi_y (x^{\mu}) \: dy
\in \Lambda^0 \otimes \Omega_{WZ}^1$.
\item
A generic element of
$\Xi^2 = \Lambda^0 \otimes \Omega_{WZ}^2 \oplus
\Lambda^1 \otimes \Omega_{WZ}^1 \oplus
\Lambda^2 \otimes \Omega_{WZ}^0$
consists of
\begin{itemize}
\item
a matrix-valued $2$-form $F_{\mu \nu} dx^\mu dx^\nu$ on the
manifold $M$, {\rm i.e.,\/}\ an element of $\Lambda^2 \otimes \Omega_{WZ}^0$
\item
a matrix-valued scalar field on $M$, {\rm i.e.,\/}\ an element of
$\Lambda^0 \otimes \Omega_{WZ}^2$
\item
two matrix-valued vector fields on $M$, {\rm i.e.,\/}\ an element of
$\Lambda^1 \otimes \Omega_{WZ}^1$
\end{itemize}
\end{itemize}
The algebra $\Xi$ is endowed with a differential (of square zero, of course,
and obeying the Leibniz rule) defined by
$d \doteq d \otimes \mbox{\it id\,} \pm \mbox{\it id\,} \otimes d$. Here the sign $\pm$ is given by the (differential)
parity of the first factor of the tensor product upon which $d$ is applied,
and the two $d$'s appearing on the right hand side are the usual De Rham
differential on antisymmetric tensor fields and the differential of the
reduced Wess-Zumino complex, respectively.
If $G$ is a Lie group acting on the manifold $M$, it acts also (by
pull-back) on the functions on $M$ and, more generally, on the differential
algebra $\Lambda$. For instance, we may assume that $M$ is Minkowski space
and $G$ is the Lorentz group. The Lie algebra of $G$ and its enveloping
algebra $\mathcal U$ also act on $\Lambda$, by differential operators.
Intuitively, elements of $\Xi$ have an ``external'' part ({\rm i.e.,\/}\ functions on
$M$) on which $\mathcal U$ acts, and an ``internal'' part ({\rm i.e.,\/}\ elements
belonging to $\mathcal M$) on which $\mathcal H$ acts. We saw that
$\mathcal H$ is a Hopf algebra (neither commutative nor cocommutative)
whereas $\mathcal U$, as it is well known, is a non commutative but
cocommutative Hopf algebra. To conclude, we have an action of the Hopf
algebra ${\mathcal U} \otimes {\mathcal H}$ on the differential algebra
$\Xi$.
\subsection{Generalized gauge fields}
\label{subsec:generalized-gauge-fields}
Since we have a differential algebra $\Xi$ associated with the associative
algebra $C^\infty(M) \otimes {\mathcal M}$, we can define, as usual,
``abelian''-like connections by choosing a module which is equal to the
associative algebra itself. A Yang-Mills potential $\omega$ is an arbitrary
element of $\Xi^1$ and the corresponding curvature, $d \omega + \omega^2$,
is an element of $\Xi^2$. Using the results of the previous subsection, we
see that $\omega = (A_\mu, \phi_x, \phi_y)$ consists of a usual Yang-Mills
field $A_\mu$ valued in $M_3(\mbox{\rm C}\hskip-5.5pt \mbox{l} \;)$ and a pair $\phi_x, \phi_y$ of scalar
fields also valued in the space of $3 \times 3$ matrices. We have
$\omega = A + \phi$, where $A = A_\mu dx^\mu$ and
$\phi = \phi_x dx + \phi_y dy \in \Lambda^0 \otimes \Omega_{WZ}^1
\subset \Xi^1$.
We can also decompose $A = A^\alpha \lambda_\alpha$, with $\lambda_\alpha$
denoting the usual Gell-Mann matrices (together with the unit matrix) and
$A^\alpha$ a set of complex valued one-forms on the manifold $M$. Let us
call $\delta$ the differential on $\Xi$, $\underline{d}$ the differential
on $\Lambda$ and $d$ the differential on $\Omega_{WZ}$ (as before).
The curvature is then $\delta \omega + \omega^2$. Explicitly,
$$
\delta A = (\underline{d} A^\alpha) \lambda_\alpha -
A^\alpha d\lambda_\alpha
$$
and
$$
\delta\phi = (\underline{d}\phi_x) dx + (\underline{d}\phi_y) dy +
(d \phi_x) dx + (d \phi_y) dy \ .
$$
\noindent It is therefore clear that the corresponding curvature will have
several pieces:
\begin{itemize}
\item
The Yang-Mills strength $F$ of $A$
$$
F \doteq (\underline{d} A^\alpha) \lambda_\alpha + A^2 \quad
\in \Lambda^2 \otimes \Omega_{WZ}^0
$$
\item
A kinetic term ${\mathcal D \phi}$ for the scalars, consisting of three
parts: a purely derivative term, a covariant coupling to the gauge
field and a mass term for the Yang-Mills field (linear in the $A_\mu$'s).
$$
{\mathcal D \phi} \doteq (\underline{d}\phi_x) dx +
(\underline{d}\phi_y) dy +
A \phi + A^\alpha d\lambda_\alpha
\quad \in \Lambda^1 \otimes \Omega_{WZ}^1
$$
\item
Finally, a self interaction term for the scalars
$$
(d \phi_x) dx + (d \phi_y) dy + \phi^2 \quad
\in \Lambda^0 \otimes \Omega_{WZ}^2
$$
\end{itemize}
\noindent Hence we recover the usual ingredients of a Yang-Mills-Higgs model
(the mass term for the gauge field, linear in $A$, is usually obtained
from the ``$A \phi$ interaction'' after shifting $\phi$ by a constant).
By choosing an appropriate scalar product on the space $\Xi^2$, one obtains a
quantity that is quadratic in the curvatures (quartic in the fields) and
that could be a candidate for the Lagrangian of a theory of Yang-Mills-Higgs
type. However, if we do not make specific choices for the connection
(for instance by imposing reality constraints or by selecting one or
another representation of $\mathcal H$), the results are a bit too general
and, in any case, difficult to interpret physically. Actually, many choices
are possible and we do not know, at the present level of our analysis,
which kind of constraint could give rise to interesting physics.
\section{Concluding remarks}
Physical models of the gauge type will involve the consideration of
one-forms. If we restrict ourselves to the ``internal space'' part of these
one-forms, we have to consider objects of the form
$\Phi = \sum \varphi_i \omega_i $. Here $\{ \omega_i \}$ is a basis of some
non-trivial indecomposable representation of $\mathcal H$ (or any other
non-cocommutative quantum group) on the space of $1$-forms, and $\varphi_i$
are functions over some space-time manifold. What about the transformation
properties of the fields $\varphi_i$? This is a question of central
importance, since, ultimately, we will integrate out the internal space
(whatever this means), and the only relic of the quantum group action on the
theory will be the transformations of the fields $\varphi_i$'s. There are
several possibilities: one of them, as suggested from the results of
Section~\ref{sec:space-time} is to consider $\mathcal H$ as a discrete
analogue of the Lorentz group (actually, of the enveloping algebra
$\mathcal U$ of its Lie algebra). In such a case, ``geometrical
quantities'', like $\Phi$ should be $\mathcal H$-invariant (and
$\mathcal U$-invariant). This requirement obviously forces the $\varphi_i$
to transform. Another possibility would be to assume that $\Phi$ itself
transforms according to some representation of this quantum group (in the
same spirit one can study, classically, the invariance of particularly
chosen connections under the action of a group acting also on the base
manifold). In any case, the $\varphi_i$ are going to span some non-trivial
representation space of $\mathcal H$.
Usually, the components $\varphi_i$ are real (or complex) numbers and are,
therefore, commuting quantities. However, this observation leads to the
following problem: If the components of the fields commute, then we
get $h.(\varphi_i \varphi_j) = h.(\varphi_j \varphi_i)$, for any
$h \in {\mathcal H}$. This would imply (here $\Delta h = h_1 \otimes h_2$)
\begin{eqnarray*}
(h_1.\varphi_i)(h_2.\varphi_j) &=& (h_1.\varphi_j)(h_2.\varphi_i) \\
&=& (h_2.\varphi_i)(h_1.\varphi_j) \ .
\end{eqnarray*}
This equality cannot be true in general for a non-cocommutative
coproduct. Hence we should generally have a nonabelian product for the
fields. In our specific case, there is only one abelian $\mathcal H$-module
algebra, the $3_{odd}$ one. Only fields transforming according to this
representation could have an abelian product. However, covariance strongly
restricts the allowable scalar products on each of the representation spaces
(for instance, in the case of $\mathcal H$ we get both indefinite and
degenerate metrics). This fact is particularly important as one should have
a positive definite metric on the physical degrees of freedom. To this end
one should disregard the non-physical (gauge) ones, and look for
representations such that only positive definite states survive. Thus we
see that the selection of the representation space upon which to build the
physical model is not simple.
The fact of having noncommuting fields has a certain resemblance with the
case of supersymmetry. As the superspace algebra is noncommutative, the
scalar superfield must have noncommuting component fields in order to
match its transformation properties. As a consequence, instead of having
---on each space-time point--- just the Grassmann algebra over the complex
numbers, we see the appearance of an enlarged algebra generated by both the
variables and the fields. It is reasonable to expect that the addition of a
non-trivial quantum group as a symmetry of space forces a more
constrained algebra.
We should point out that the above reasoning is very general, and
is independent of the details of the fields. That is, it relies only on the
existence of a non-cocommutative Hopf algebra acting in a nontrivial way on
the fields.
\section{Introduction}
The astrophysical properties of the electron scattering atmosphere
have been obtained through studies of the polarized radiative transfer
by many researchers \markcite{cha60, ang69, phi86}(Chandrasekhar 1960,
Angel 1969, Phillips \& M\'esz\'aros 1986, etc).
One of the main results from these studies
is that the emergent flux is polarized with a high degree of linear
polarization up to 11.7 \%
from a very thick plane-parallel atmosphere in the
direction of the plane. A representative application of this study
has been found in active galactic nuclei (AGN), where the accretion disk
presumed to form the central engine with the supermassive black hole
has a corona component that may be regarded as
a plane-parallel electron scattering atmosphere \markcite{ebs92, rdb90}
(Emmering, Blandford \& Shlosman 1992, Blandford 1990).
Thus far the study of polarization of the Thomson scattered radiation
has been concentrated on the continuum radiation due to the independence
of the Thomson scattering cross section on the frequency. However,
the upper part of the accretion disk is believed to be exposed to the
broad emission line region
and therefore the Thomson reflected component is expected to
be present in the broad emission line.
A line photon incident upon an electron scattering atmosphere will
get a wavelength shift due to the thermal motion of a scatterer.
Assuming that the atmosphere is governed by
a Maxwell-Boltzmann distribution, a typical wavelength shift per scattering
is given by the Doppler width. Therefore, the transfer process is
approximated by a diffusive one both in real space and in frequency space.
It is naturally expected from the random walk nature that the average
scattering number before escape is smaller in the line center part
than in the wing part. It is also well known that the polarization of
the scattered flux is sensitively dependent on the scattering number before
escape. Therefore, in an electron scattering atmosphere illuminated by
a monochromatic light source, the polarization of the emergent radiation
will be dependent on the wavelength shift from the line center and may form
a ``polarization structure'', which is expected to be characteristic of
the Thomson
optical depth of the scattering medium. From this consideration, we may
expect that the polarized flux will have a different profile from
that of the Thomson-scattered flux.
In this paper, we investigate the polarization of the radiation
both reflected by and transmitted through an electron scattering
slab illuminated by a line-emitting source using a Monte Carlo method
and discuss possible applications to astrophysical sources containing
emission line regions with an electron-scattering atmosphere such as AGN.
\section{ Polarized Radiative Transfer of Thomson-Scattered Lines}
The polarized radiative transfer in an electron-scattering atmosphere
can be simply treated by a Monte Carlo
method (e.g. \markcite{ang69} Angel 1969). The polarization state associated
with an ensemble of photons can be described by a density operator
represented by a $2\times 2$ hermitian density matrix $\rho$
\markcite{lbw94}(e.g. Lee, Blandford \& Western 1994).
A Monte Carlo code can be made by recording the density
matrix along with the Doppler shift obtained in each scattering.
In the absence of circular polarization, the density matrix associated with
the scattered radiation is related
with that of the incident radiation explicitly by
\begin{eqnarray}
\rho'_{11} &=& \cos^2\Delta\phi \ \rho_{11}
-\cos\theta\sin2\Delta\phi \ \rho_{12} +
\cos^2\theta\sin^2\Delta\phi \ \rho_{22} \nonumber \\
\rho'_{12} &=&{1\over 2}\cos\theta'\sin2\Delta\phi \ \rho_{11}
+(\cos\theta\cos\theta'\cos2\Delta\phi +\sin\theta\sin\theta'\cos\Delta\phi)
\ \rho_{12} \nonumber \\
&& -\cos\theta(\sin\theta\sin\theta'\sin\Delta\phi+
{1\over2}\cos\theta\cos\theta'\sin2\Delta\phi)\ \rho_{22} \\
\rho'_{22} &=& \cos^2\theta' \sin^2\Delta\phi \ \rho_{11}
+ \cos\theta'(2 \sin\theta\sin\theta'\sin\Delta\phi +
\cos\theta\cos\theta'\sin2\Delta\phi) \ \rho_{12} \nonumber \\
&& +(\cos\theta\cos\theta'\cos\Delta\phi + \sin\theta\sin\theta')^2
\ \rho_{22} \nonumber
\end{eqnarray}
where the incident radiation is characterized by the wavevector
$\hat{\bf k}_i =(\sin\theta\cos\phi, \sin\theta\sin\phi, \cos\theta)$
and the outgoing wavevector $\hat{\bf k}_f$ is correspondingly given with
angles $\theta'$ and $\phi'$ with $\Delta\phi = \phi'-\phi$.
The circular polarization is represented by the imaginary part of the
off-diagonal element, which is zero in a plane-parallel system and is shown
to be decoupled from the other matrix elements. The
angular distribution of the radiation field is described by the trace
part, from which the scattered wavevector is naturally chosen in the Monte
Carlo code.
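For concreteness, here is a minimal Python sketch of a single scattering update implementing the transformation above (the function name is ours; the overall cross-section normalization and the Monte Carlo sampling of the outgoing direction from the trace are omitted):

```python
import math

def scatter_density_matrix(rho11, rho12, rho22, theta, theta_p, dphi):
    # One Thomson scattering: update the (real) density matrix elements for an
    # incident direction with polar angle theta and an outgoing direction with
    # polar angle theta_p and azimuth offset dphi = phi' - phi, following the
    # transformation above.  Normalization by the trace (the differential
    # cross-section) is left out.
    ct, st = math.cos(theta), math.sin(theta)
    ctp, stp = math.cos(theta_p), math.sin(theta_p)
    c1, s1 = math.cos(dphi), math.sin(dphi)
    c2, s2 = math.cos(2.0 * dphi), math.sin(2.0 * dphi)
    r11 = c1**2 * rho11 - ct * s2 * rho12 + ct**2 * s1**2 * rho22
    r12 = (0.5 * ctp * s2 * rho11
           + (ct * ctp * c2 + st * stp * c1) * rho12
           - ct * (st * stp * s1 + 0.5 * ct * ctp * s2) * rho22)
    r22 = (ctp**2 * s1**2 * rho11
           + ctp * (2.0 * st * stp * s1 + ct * ctp * s2) * rho12
           + (ct * ctp * c1 + st * stp)**2 * rho22)
    return r11, r12, r22
```

Two limiting cases serve as sanity checks: forward scattering ($\theta' = \theta$, $\Delta\phi = 0$) leaves the density matrix unchanged, while $90^\circ$ scattering of an unpolarized beam incident along the $z$ axis yields completely polarized radiation, the classical Thomson result.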
\section{ Result}
\subsection{Profile formation and polarization of the transmitted component}
The radiative transfer in an electron-scattering atmosphere as a diffusive
process was studied by \markcite{wey70}Weymann (1970). He considered
a plane-parallel atmosphere with Thomson optical depth $\tau_T$ that embeds
a monochromatic source in the midplane and computed the line profile of
the emergent flux by adopting the Eddington approximation.
According to his result, the mean intensity $J$ satisfies the diffusion equation
given by
\begin{equation}
{\partial^2 \over \partial \tau^2} J +{3\over 8}\ {\partial^2\over
\partial x^2} J = -3\delta(x),
\end{equation}
where the wavelength shift $x\equiv (\lambda-\lambda_0)/\Delta\lambda_D$.
Here, $\Delta\lambda_D =\lambda_0 v_{th}/c$ is the Doppler width and
$\lambda_0$ is the wavelength of the monochromatic source.
With the two-stream type boundary conditions, \markcite{wey70}Weymann(1970)
proposed an approximate solution given by
\begin{equation}
J(x, \tau) = \sum_{n=1}^{\infty} A_n \cos[a_n(\tau-\tau_T/2)] \exp
[-2\sqrt{6}\ a_n |x|/3 ],
\end{equation}
where $a_n$ is determined from the relation
\begin{equation}
(a_n \tau_T) \tan(a_n \tau_T /2)=\sqrt{3}\ \tau_T ,
\end{equation}
and $A_n$ is obtained from
\begin{equation}
A_n = 4\sqrt{6}\ \tau_T^{-1}\ \sin(a_n\tau_T /2)\
[(a_n\tau_T)^2 +a_n\tau_T\sin(a_n \tau_T)]^{-1} .
\end{equation}
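The eigenvalues $a_n$ and the approximate profile are easy to evaluate numerically; the following Python sketch (function names ours, plain bisection for the root finding) illustrates one way to do it:

```python
import math

def weymann_roots(tau_T, n_roots=40):
    # Solve (a tau) tan(a tau/2) = sqrt(3) tau by bisection;
    # there is one root per branch a*tau/2 in (n*pi, n*pi + pi/2).
    f = lambda a: a * tau_T * math.tan(a * tau_T / 2.0) - math.sqrt(3.0) * tau_T
    roots = []
    for n in range(n_roots):
        lo = 2.0 * n * math.pi / tau_T + 1e-12     # f < 0 here
        hi = (2.0 * n + 1.0) * math.pi / tau_T - 1e-12  # f > 0 here
        for _ in range(200):
            mid = 0.5 * (lo + hi)
            if f(mid) > 0.0:
                hi = mid
            else:
                lo = mid
        roots.append(0.5 * (lo + hi))
    return roots

def J(x, tau, tau_T, roots):
    # Approximate mean intensity from the series above; x in Doppler widths.
    total = 0.0
    for a in roots:
        A = (4.0 * math.sqrt(6.0) / tau_T) * math.sin(a * tau_T / 2.0) / (
            (a * tau_T) ** 2 + a * tau_T * math.sin(a * tau_T))
        total += (A * math.cos(a * (tau - tau_T / 2.0))
                  * math.exp(-2.0 * math.sqrt(6.0) * a * abs(x) / 3.0))
    return total
```

For $\tau_T = 8$ this yields a profile that is symmetric in $x$ and decays monotonically into the wings, as in Fig. 1.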
\placefigure{fig1}
In Fig. 1, by the solid line we plot the profile obtained
for $\tau_T=8$ investigated by \markcite{wey70}
Weymann (1970), and by the dotted line we show the corresponding
Monte Carlo results. Here, the radiation source is isotropic located in
the midplane. The agreement between these two results is good within
1-$\sigma$ except near the center.
The thermal broadening of the profile depends sensitively
on the Thomson optical depth $\tau_T$. The scattering number contributing
to a given wavelength shift is plotted by the dashed line in the lower panel
and it generally increases monotonically from the line center to the wings.
\placefigure{fig2}
In Fig.~2 we show the profile and the polarization of the reflected
and the transmitted radiation for an
electron-scattering slab illuminated by an anisotropic monochromatic source
outside the slab, viewed at $\mu=0.5$.
The horizontal axis represents the wavelength
shift in units of the Doppler width
$\lambda_0 v_{th}/c$ associated with the electronic thermal motion.
Here, the scattering medium is assumed to be governed by a Maxwell-Boltzmann
distribution with temperature $T$ and $\lambda_0$ is the wavelength
of the monochromatic incident radiation.
The Thomson scattering optical depth $\tau_T$ of the slab
is assumed to take values of 0.5, 3, 5.
We first discuss the transmitted radiation.
Because of the random walk nature of the Thomson scattering process, the
average number of scattering increases as the wavelength shift increases
toward the wing regime. When $\tau_T\le 1$, near the line center, the emergent
photons are scattered mostly only once. This implies that these
singly scattered photons are mainly contributed from those
incident nearly normally. The photons propagating in the grazing
direction (with small $\mu_i = \hat{\bf k}_i \cdot \hat{\bf z}$) tend to
be scattered more than once due to the large Thomson optical
depth $\tau_T/\mu_i$ in this direction. However, with small $\tau_T$
there is a non-negligible contribution from photons with grazing directions.
The resultant polarization is determined from the competition of
the parallel polarization from photons with initially
nearly normal incidence and the perpendicular polarization from
grazingly incident photons. Thus, when $\tau_T\le 1$ a weak
perpendicular polarization is obtained near the center and as
$\tau_T$ increases the polarization flips to the parallel direction, which
is shown in the case $\tau_T=3$.
On the other hand, the multiply scattered photons are mostly
those with grazing incidence. These photons mainly contribute to the wing
part. Since the scattering optical depth is small, the scattering plane must
coincide approximately with the slab plane in a thin atmosphere.
The polarization develops in the perpendicular
direction to the scattering plane and therefore, the emergent
photons are polarized in the perpendicular direction to the slab plane,
when the Thomson depth is small.
When $\tau_T\ge 5$, the dependence of the polarization on the
wavelength shift decreases and the overall polarization tends
to lie in the parallel direction to the slab plane. Therefore, the degree
of polarization shows a maximum at the line center, as is shown in the figure.
The overall parallel polarization is obtained because
the contribution of the singly scattered flux decreases and the
increased mean scattering number before escape leads to an anisotropic
radiation field dominantly in the slab plane direction throughout the
wavelength shifts of the emergent radiation. According to
the Monte Carlo result for a continuum source obtained by \markcite
{ang69} Angel (1969),
when $\tau_T\sim 6$, the polarization reaches the limit of semi-infinite
slab which \markcite{cha60}Chandrasekhar (1960) investigated (see also
\markcite{la98}Lee \& Ahn 1998).
\subsection{ Polarization of the Reflected Component}
We next discuss the properties of the reflected component.
Firstly, all the reflected components are polarized in the perpendicular
direction with respect to the slab plane for all the scattering optical
depths. Here, one of the most important points to note
is that the linear degree of polarization shows a local minimum
at the line center. The contribution from the singly-scattered photons
also plays an important role in
determining the polarization behavior of the reflected component around the
line center. In the line center, the mean scattering number is also smaller
than in the wing part. Because the light source is assumed to be
isotropic, the singly-scattered photons constituting the line center
part are contributed almost equally from all the initial directions. Therefore,
the integrated polarization becomes small.
On the other hand, in the wing part, the mean scattering number increases
due to diffusion. The main contribution to the reflected component
is provided by the multiply-scattered photons near the bottom of the slab.
Therefore, the scattering planes just before reflection are mostly
coincident with the slab plane, and hence the reflected radiation
becomes strongly polarized in the perpendicular direction to the slab plane.
As $\tau_T$ increases, the contribution from singly-scattered photons
decreases and the polarization dip in the center part becomes negligible.
\subsection{Application to an Ionized Galactic Halo}
\markcite{lob98}Loeb (1998) proposed to measure the virial temperature of
galactic halos using the scattered flux of quasar emission lines. With this
in mind and for a simple application, we consider a hemispherical halo with the
Thomson optical depth $\tau_T = 0.1$ with an emission line
source located at the center. It is assumed that the emission source
is anisotropic and illuminate the halo uniformly in the range
$\mu\equiv \cos\theta \ge 0.6$, where $\theta$ is the polar angle.
The observer's line of sight is assumed to have the polar angle
$\theta_l =\cos^{-1} 0.5$, so that the direct flux from the emission source
does not reach the observer. Here, the incident line profile
is chosen to be triangular superimposed with a flat continuum. The half width
of the triangular profile at the bottom is set to be 1 Doppler width
$\Delta\lambda_D$.
This choice is rather arbitrary; nevertheless, considering
the complex profiles and widths that typical quasar emission lines
exhibit, our choice can be regarded as a tolerable approximation.
The line strength is normalized so that the
equivalent width $EW_l = 10 \Delta\lambda_D$.
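This normalization is simple to state in code (names ours): a triangle of peak height $h$ and half-base $\Delta\lambda_D$ has area $h\,\Delta\lambda_D$, so $EW_l = 10\,\Delta\lambda_D$ fixes $h = 10$ in continuum units:

```python
def incident_spectrum(x, ew_line=10.0, half_base=1.0):
    # Flux in continuum units at wavelength shift x = (lambda - lambda_0)/dlambda_D:
    # unit continuum plus a triangular line of half-base `half_base` (Doppler
    # widths) whose peak is fixed by the equivalent width ew_line.
    peak = ew_line / half_base          # triangle area = peak * half_base
    return 1.0 + peak * max(0.0, 1.0 - abs(x) / half_base)
```

Incident photon wavelengths for the Monte Carlo run are then sampled in proportion to this flux.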
\placefigure{fig3}
Fig. 3 shows the result, where the linear degree of polarization is
shown with 1-$\sigma$ error bars, the scattered flux
by the solid line and the polarized flux by the dotted line. Two local
maxima in the polarization are obtained in the wing parts, and accordingly
the polarized flux possesses larger width than the scattered flux does.
The locations of the polarization maxima are nearly equal to the
Doppler width and therefore, they can be a good measure for the electron
temperature. The slight difference of the widths shown in the
scattered flux and the polarized flux may not be useful to put observational
constraints on the physical properties of the Thomson-scattering medium.
However, the dependence of polarization on the wavelength in
the Thomson-scattered emission lines is quite notable and can provide
complementary information in addition to that possibly obtainable from
the scattered flux profile.
\section{Observational Implications : AGN Spectropolarimetry}
Spectropolarimetry has been successfully used toward a unified picture
of AGN, according to which, narrow line AGN such as Seyfert 2 galaxies
and narow line radio galaxies are expected to exhibit
the broad emission lines in the polarized flux spectra
\markcite{am85, ogl97} (Antonucci \& Miller 1985, Ogle et al. 1997).
The ionized component located at high latitude that is responsible for
the polarized broad emission lines is also proposed to give rise to
absorption/reflection features both in X-ray ranges \markcite{kk95}
(Krolik \& Kriss 1995).
The scattering geometry considered in Fig. 3 may be also applicable to
Seyfert 2 galaxies. However, the typical widths of the broad lines
of order $10000\ {\rm km\ s^{-1}}$ requires very hot scattering
gas of $T\sim 10^7\ {\rm K}$ for a possible polarization
structure considered in this work. The strong
narrow emission lines provide a polarization-diluting component,
which dominates the center part of the broad line features.
Especially
in the case of hydrogen Balmer lines, atomic effects may also leave
a similar polarization dip in the center part \markcite{ly98}(Lee \& Yun 1998).
Another application of the Thomson scattering process is found
in the upper part of the accretion disk of AGN.
Little or negligible polarization is obtained from a large number of
polarimetric observations of AGN, which is inconsistent with the
expectation that the emergent radiation from a thick electron-scattering
plane-parallel atmosphere can be highly polarized up to 11.7 percent.
There have been various suggestions including the ideas of corrugated disk
geometry, magnetic field effects, atomic absorptions \markcite{lnp90, kor98,
ab96}(e.g. Laor, Netzer \& Piran 1990, Koratkar et al. 1998, Agol \& Blaes
1996). Negligible polarization is also obtained
when the atmosphere has a small Thomson optical depth $\tau_T \le 3$
\markcite{cht97}(e.g. Chen, Halpern \& Titarchuk 1997).
It remains
an interesting possibility that the upper part of the disk is illuminated
by the broad emission line sources and therefore the broad lines
may include a Thomson-reflected component from the disk.
The polarization in general will be sensitively dependent on the relative
location of the emission region with respect to the accretion disk.
The origin of the broad emission line region is still controversial
ranging from models invoking a large number of clumpy clouds confined
magnetically or thermally to an accretion disk wind model \markcite{mc95}
(Murray \& Chiang 1995). The existence and nature
of the outflowing wind around an accretion disk also constitute the main
questions of the unified view of broad absorption line quasars,
and the polarization of the
broad lines reflects the importance of resonant scattering
and electron scattering \markcite{lb97}(e.g. Lee \& Blandford 1997).
Several polarimetric observations reveal some hints of
polarized broad emission lines \markcite{gm95, coh95}(e.g. Goodrich \&
Miller 1995, Cohen et al. 1995).
\acknowledgments
This work was supported by the Post-Doc program at Kyungpook National
University. The author is very grateful to the referee Dr. Patrick Ogle
for many helpful suggestions, which greatly improved the presentation
of this paper.
\section{Introduction}
A search for stable and long-lived\footnote{Throughout the paper
stable particles include long-lived particles decaying outside the detector.}
heavy charged particles
in {\it all} final states is reported using the data taken by the DELPHI
experiment at energies from 130 to 183 GeV. These results
extend those reported in \cite{ref:hdelphi} by including the
130-136 and 183 GeV data taken in 1997. The other LEP experiments have searched
for stable and long-lived heavy charged particles in low multiplicity final
states \cite{ref:hlep}.
In most models of Supersymmetry (SUSY) the supersymmetric partners of standard
particles are unstable and have short lifetimes, except the lightest supersymmetric
particle (LSP) which could be neutral and stable.
In most of the searches it is therefore assumed that the
supersymmetric particles decay promptly.
However, it is possible that a stable or
long-lived heavy charged SUSY-particle exists.
In the Minimal Supersymmetric Standard Model (MSSM)
with the neutralino as the LSP\cite{ref:susy},
if the mass difference between the chargino and neutralino is small
the chargino can have a
sufficiently long lifetime to be observed as stable in the detector.
In the MSSM with a very small amount of R-parity violation
the LSP can be a charged slepton or squark and decay with a long lifetime
into Standard Model particles \cite{ref:rpar}.
In gauge mediated supersymmetric models the gravitino
is the LSP and the next to lightest supersymmetric particle
(NLSP) could have a long lifetime in a very natural way
for large values of the SUSY-breaking scale \cite{ref:slepton}.
This is possible for sleptons, for example when the stau is the NLSP.
In certain
variations of the minimal model the squark
can be the NLSP and become long-lived \cite{ref:giu}.
Other SUSY and non-SUSY models predict
stable and long-lived heavy charged leptons, quarks
and hadrons not present in the Standard Model.
Free (s)quarks might even exist \cite{ref:free}.
The published analyses from DELPHI \cite{ref:hdelphi} and the other LEP
experiments \cite{ref:hlep} covered masses, $m$, above 45 GeV/c$^2$.
The present analysis has been further optimized for squarks
and extended down to masses of 2 GeV/c$^2$.
This extension is important for the stable and long-lived squark search.
Stable long-lived free squarks of charge $\pm \frac{2}{3}e$ were excluded
by the data taken at the Z$^0$ peak \cite{ref:sdelphi}. However, the upper limits
on the production cross-section of squarks, where the squark dresses up and
becomes a charged or neutral shadron in a hadronization or
fragmentation process,
are worse than those of free squarks. In particular,
hadronizing stop and sbottom quarks with
so-called typical mixing and down-type right-handed squarks
are not ruled out in the mass region from $\sim$15 to 45 GeV/c$^2$
due to the small production cross-section at Z$^0$ energies.
Limits on the production cross-section and masses will be given
for stable and long-lived sleptons, charginos,
free (not hadronizing) squarks of charge q = $\pm \frac{2}{3}e$ and hadronizing squarks
(q = $\pm \frac{1}{3}e$ or $\pm \frac{2}{3}e$) forming shadrons. No search
is made for free squarks of charge q = $\pm \frac{1}{3}e$, because
the tracking system is not sensitive enough to record the ionization of
these particles.
A dedicated simulation
program was used for the hadronization of squarks.
It is assumed that the sleptons, charginos, free squarks and shadrons
decay outside the tracking volume of the detector,
which extends to a typical radius of 1.5 m. It is further assumed
that these particles
do not interact more strongly
than ordinary matter particles (protons or electrons)
and reach the main tracking device.
Heavy stable particles are selected by looking for high momentum
charged particles with either anomalous ionization loss dE/dx
measured in the Time Projection Chamber (TPC), or the absence of
Cherenkov light in the gas and liquid radiators of
the Barrel Ring Imaging CHerenkov (RICH).
The combination of the data from the TPC and RICH detectors and kinematic
cuts provide an efficient detection of new heavy particles with
a small background for masses from 2 GeV/c$^2$ to the kinematic limit.
The data taken during the period from 1995 to 1997 correspond to integrated luminosities
of 11.9 pb$^{-1}$ at an energy of 130-136 GeV (including 6 pb$^{-1}$ taken in
1997), 9.8 pb$^{-1}$ at an energy of 161 GeV, 9.9 pb$^{-1}$ at an energy of 172 GeV,
and 54.0 pb$^{-1}$ at an energy of 183 GeV.
\section{Event selection}
A description of the DELPHI apparatus and its performance can be
found in ref.\cite{ref:delp}, with more details on the Barrel RICH in
ref. \cite{ref:brich} and
particle identification using the RICH in ref. \cite{ref:ribmean}.
Charged particles were selected if their impact parameter with respect to the
mean beam-spot
was less than 5 cm in the
$xy$ plane (perpendicular to the beam), and less than 10 cm in $z$ (the
beam direction),
and their polar angle ($\theta$) was between 20 and 160 degrees.
The relative error on the measured momentum was required to be less than 1
and the track length larger than 30 cm. The energy of a charged particle was
evaluated from its momentum\footnote{In the following, `momentum' means the apparent momentum, defined as
the momentum divided by the charge $|q|$, because this is the physical quantity
measured from the track curvature in the 1.23 T magnetic field.}
assuming the pion mass. Neutral particles were selected
if their deposited energy was larger than 0.5 GeV and their polar angle
was between 2 and 178 degrees.
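The particle selection cuts just described can be summarized as a simple filter. The sketch below is illustrative only; the field names are our own invention, not identifiers from the DELPHI software.

```python
def select_charged(track):
    """Charged-particle selection cuts as stated in the text:
    impact parameter < 5 cm in xy and < 10 cm in z, polar angle
    between 20 and 160 degrees, relative momentum error < 1,
    track length > 30 cm.  `track` is a dict with illustrative keys."""
    return (abs(track["impact_xy_cm"]) < 5.0
            and abs(track["impact_z_cm"]) < 10.0
            and 20.0 < track["theta_deg"] < 160.0
            and track["dp_over_p"] < 1.0
            and track["length_cm"] > 30.0)

def select_neutral(cluster):
    """Neutral-particle cuts: deposited energy > 0.5 GeV and polar
    angle between 2 and 178 degrees."""
    return (cluster["energy_gev"] > 0.5
            and 2.0 < cluster["theta_deg"] < 178.0)
```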
The event was divided into two hemispheres using the thrust axis.
The total energy in one hemisphere was required to be larger than 10 GeV
and the total energy of the charged particles in the other
hemisphere to be larger than 10 GeV.
The event must have at least
two reconstructed charged particle tracks including
at least one charged particle with momentum
above 5 GeV/c reconstructed by the TPC and also
inside the acceptance of the Barrel RICH, $|\cos\theta|<0.68$.
Cosmic muons were removed by putting tighter cuts on the impact parameter
with respect to the mean beam-spot position.
When the event had two charged particles with at least one
identified muon in the muon chambers, the impact parameter
in the $xy$ plane was required to be less than 0.15 cm, and below
1.5 cm in $z$.
The highest momentum (leading) charged particle in a given
hemisphere was selected and identified
using a combination of the following signals
(where the typical sensitive mass range for pair produced
sleptons at an energy of 183 GeV is
shown in brackets):\\
(1) the Gas Veto: no photons were observed in the Gas RICH
($m>$1 GeV/c$^2$) \\
(2) the Liquid Veto: four or less photons were observed in
the Liquid RICH ($m>$65 GeV/c$^2$)\\
(3) high ionization loss in the TPC: measured ionization
was above 2 units i.e. twice the energy loss for a minimum ionizing particle
($m>$70 GeV/c$^2$)\\
(4) low ionization loss in the TPC: measured ionization
was below that expected for protons ($m$=1-50 GeV/c$^2$)\\
Selections (1) to (3) are identical to those used in our
previous publication \cite{ref:hdelphi}.
For the Gas and Liquid Vetoes it was required that the RICH was
fully operational and that for a selected track
photons from other tracks or ionization hits were detected
inside the drift tube crossed by the track.
Due to tracking problems electrons often passed a Gas or Liquid Veto.
Therefore it was required that particles depositing more than 5 GeV
in the electromagnetic calorimeter had either hits included in
the outer tracking detector or associated RICH ionization hits.
At least 80 out of a maximum of 160 wires were required for the measurement
of the ionization in the TPC.
Two sets of cuts selected sleptons
or squarks. One set was defined for `leptonic topologies' for which the number
of charged particles is less than four and another set for `hadronic topologies'
for all other events.
The cuts were optimized using slepton and squark events
generated with SUSYGEN \cite{ref:stavros} and passed through
the detector simulation program \cite{ref:delp}.
Samples with different masses for smuons, free squarks with a charge of
$\pm \frac{2}{3}e$ and hadronizing
sbottom and stop squarks were studied in detail.
The hadronization of squarks was implemented in the following way.
The initial squark four-momenta including initial state
radiation were generated by SUSYGEN. The JETSET parton shower model was
used to fragment the squark-anti-squark string \cite{ref:jetset}.
In the fragmentation process
the Peterson fragmentation function was used with a value for $\epsilon=
0.003\:(5/m)^2$, where $m$ is the mass of the squarks in GeV/c$^2$
\cite{ref:been}.
A shadron was given the mass of the squark plus
150 MeV/c$^2$ for a smeson or plus 300 MeV/c$^2$ for a sbaryon.
In the fragmentation process, approximately 9\% sbaryons were formed and 40\%
of the shadrons were
charged and 60\% neutral. In the detector simulation program a charged shadron
was given the properties of a heavy muon, a neutral shadron those of
a K$^0_L$\footnote{It was only required that a charged shadron leaves a track as for a
particle with unit charge, and that a neutral shadron deposits most of
its energy in the hadron calorimeter.}. Due to the hard fragmentation function the charged multiplicity
decreases as a function of the mass of the squark. At very high masses
a squark-antisquark pair often produces a low multiplicity final state.
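The Peterson function referred to above is $f(z)\propto\left[z\,(1-1/z-\epsilon/(1-z))^2\right]^{-1}$. The following sketch is our own illustration, not the simulation code; it evaluates the mean momentum fraction numerically with $\epsilon=0.003\,(5/m)^2$ and shows why heavier squarks yield lower charged multiplicities: the fragmentation hardens with mass, leaving less energy for light hadrons.

```python
def peterson(z, eps):
    """Unnormalized Peterson fragmentation function f(z)."""
    return 1.0 / (z * (1.0 - 1.0/z - eps/(1.0 - z))**2)

def mean_z(mass_gev, n=20000):
    """Mean momentum fraction <z> for a squark of mass `mass_gev`
    (GeV/c^2), with eps = 0.003*(5/m)^2 as quoted in the text.
    Simple Riemann sum on a grid avoiding the endpoints z=0,1."""
    eps = 0.003 * (5.0 / mass_gev)**2
    dz = 1.0 / (n + 1)
    num = den = 0.0
    for i in range(1, n + 1):
        z = i * dz
        w = peterson(z, eps)
        num += z * w
        den += w
    return num / den

# <z> approaches 1 for heavy squarks, so a high-mass squark pair
# often produces a low-multiplicity final state.
print(mean_z(5.0), mean_z(80.0))
```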
For leptonic topologies an event was selected if the momentum of the charged
particle was above 15 GeV/c and the Gas Veto (1) was confirmed by a Liquid Veto (2) or
a low ionization loss (4) (in boolean notation (1)$\cdot$(2)+(1)$\cdot$(4)) or if
the momentum of the charged particle
was above 5 GeV/c and the Gas Veto was confirmed by a high ionization loss
((1)$\cdot$(3)). The event was also accepted if both hemispheres had
charged particles with momenta
above 15 GeV/c and both leading charged particles had a Gas Veto or a high ionization
loss or both a low ionization loss (((1)+(3))$\cdot$((1')+(3'))+(4)$\cdot$(4')),
where the primed selections refer to the opposite hemisphere.
For hadronic topologies the following kinematic quantities were
used to select events where a large fraction of the energy is taken by
a heavy particle. The energy fraction, $F_{c}$, is
defined as the momentum of the identified
charged particle
divided by the total energy in a given hemisphere, and $F_{n}$ as the ratio of the
neutral energy to the total energy in a hemisphere.
The energy fraction $F$ is the maximum of $F_{c}$ and $F_{n}$.
The background from normal $q\bar{q}$ events
was greatly reduced by requiring a minimum energy fraction $F$,
because heavy shadrons take most of the energy.
An event in a hadronic topology was selected if the momentum of the
leading charged particle
was above 15 GeV/c, the energy fraction $F$ was above 0.6 in one
hemisphere and above 0.9 in the other.
The selected charged particle had to be identified by a Gas Veto or
a high or a low ionization loss ((1)+(3)+(4)).
An event was also selected if the energy fraction $F$
in one hemisphere was above 0.6. In this case
the momenta of the charged particles in both hemispheres had to be above 15 GeV/c and
both leading charged particles had a Gas Veto, or both had high ionization,
or both low ionization ((1)$\cdot$(1')+(3)$\cdot$(3')+(4)$\cdot$(4')).
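The boolean combinations above can be transcribed directly. The function below is a sketch of the leptonic-topology logic only; the flag dictionary and its keys are ours (momenta in GeV/c), not DELPHI analysis code.

```python
def leptonic_select(p, p_opp, s):
    """Leptonic-topology selection as given in the text.
    `s` maps selection flags: '1' Gas Veto, '2' Liquid Veto,
    '3' high dE/dx, '4' low dE/dx; primed keys ("1'", "3'", "4'")
    refer to the leading particle in the opposite hemisphere.
    `p`, `p_opp` are the leading momenta in the two hemispheres."""
    # (1)·(2) + (1)·(4) at p > 15 GeV/c, or (1)·(3) at p > 5 GeV/c:
    single = ((p > 15 and (s["1"] and s["2"] or s["1"] and s["4"]))
              or (p > 5 and s["1"] and s["3"]))
    # ((1)+(3))·((1')+(3')) + (4)·(4') with both momenta > 15 GeV/c:
    both = (p > 15 and p_opp > 15 and
            ((s["1"] or s["3"]) and (s["1'"] or s["3'"])
             or (s["4"] and s["4'"])))
    return single or both
```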
\section{Analysis results}
No event was selected in the leptonic topology. The expected background
was evaluated from the data and estimated to be 0.7 $\pm$ 0.3 events.
In Figure 1 the data taken at 183 GeV are shown for leptonic topologies.
The measured ionization and the measured Cherenkov angle
in the liquid radiator are shown after applying the Gas Veto.
Three events were selected in the hadronic topology:
one at 130 GeV, one at 161 GeV and one at 183 GeV.
The expected background was estimated to be 3.5 $\pm$ 1.5 events
using the real data and assuming that the
background comes from Standard Model processes in which the
RICH or TPC misidentifies an ordinary particle (pion, electron, muon, kaon or proton) as
a heavy particle. The misidentification probability was evaluated from the
data and used to estimate the expected background. The procedure
was cross-checked by simulation studies.
The three candidate events have total charged multiplicities
of 6, 4 and 5. The masses of the hypothetical squarks were estimated from a
constrained fit using energy and momentum conservation
and found to be 48, 21 and 30 GeV/c$^2$ with typical uncertainties
of about $\pm$10 GeV/c$^2$. The mass is also correlated to
the charged multiplicity. The most likely squark masses,
based both on these fitted masses and the observed charged multiplicities, are
41, 30 and 42 GeV/c$^2$. The resulting probability density
distribution is far from Gaussian. The characteristics of the candidate events
are compatible with the background expectation.
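A rough version of the constrained fit can be stated in one line: for pair production, energy and momentum conservation fix each heavy particle's energy to the beam energy $\sqrt{s}/2$ (neglecting initial state radiation), so the mass follows from the measured momentum. The sketch below makes this simplifying assumption and is not the fit actually used in the paper.

```python
import math

def pair_mass(sqrt_s_gev, p_gev):
    """Mass (GeV/c^2) of a pair-produced heavy particle, assuming it
    carries the full beam energy sqrt(s)/2: m = sqrt(s/4 - p^2).
    ISR and detector resolution are neglected; unphysical momenta
    are clamped to m = 0."""
    e_beam = sqrt_s_gev / 2.0
    return math.sqrt(max(e_beam**2 - p_gev**2, 0.0))

# e.g. a leading particle of 40 GeV/c at sqrt(s) = 130 GeV would
# correspond to a ~51 GeV/c^2 squark under these assumptions.
print(pair_mass(130.0, 40.0))
```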
Figure 2 shows the data taken at 183 GeV for hadronic
topologies. The data are shown after the kinematic cut (see section 2)
requiring that the energy fraction $F$ was above 60\% in both
hemispheres and in one of the hemispheres above 90\%.
One candidate event passes the Gas Veto (Fig. 2b).
The efficiency for selecting an event
was evaluated as a function of the mass at different energies
for right-handed smuons, mixed free stop quarks of charge
q=$\pm \frac{2}{3}e$, mixed hadronizing stop quarks and
mixed hadronizing sbottom quarks.
The term `mixed' refers to a typical mixing angle between left- and right-handed
particles for which the cross-section is minimal. The angle
is $\sim$60 degrees for stop quarks and $\sim$70 degrees for sbottom quarks.
The efficiency curves for a centre-of-mass energy of 183 GeV are shown
in Figures 3a to 6a. The efficiency approaches zero at masses below
1 GeV/c$^2$, where the Gas Veto becomes inefficient. Therefore the lower
bound of the quoted mass limits is set at 2 GeV/c$^2$.
The efficiency curves for left- and right-handed squarks are slightly
different due to the different kinematical distributions,
but this difference can be neglected because it has no
influence on the quoted upper limits.
The efficiency curves have an overall systematic error
of $\pm$5\% coming from the modelling of the detector.
For the hadronization of squarks the following effects
were studied using the simulation:
a change in the fraction of neutral shadrons,
the response of the calorimeter to a neutral shadron and
the fragmentation function.
In the simulation the fraction of neutral shadrons is
60\%. This was changed to 50\% and an efficiency increase of
15\% was found. In the simulation it was assumed that a neutral
shadron behaves like a $K^0_L$. If one assumes that a neutral shadron
deposits only 20\% of the energy of a $K^0_L$ and the rest escapes,
the efficiency is only reduced by 10\%. Finally the fragmentation function
was softened assuming that $\epsilon$ is inversely proportional to the squark mass
with $\epsilon=0.003\:(5/m)$. The efficiency at a centre-of-mass
energy of 183 GeV increased by 20\% around a
squark mass of 45 GeV/c$^2$ and decreased by 15\% around 70 GeV/c$^2$.
From these studies it was concluded that the efficiencies
for squarks are sufficiently stable under these large changes.
The observed numbers of events in the leptonic
and hadronic topologies are compatible with the expected background.
Experimental upper limits at 95\% confidence level
are obtained on the cross-section in the leptonic
and hadronic topologies.
In the leptonic topology the 95\% confidence level upper limit corresponds to 3 events.
In the hadronic topology it corresponds to 5.4 events in
the case of
3 observed events with 3 expected background events.
The masses and charged particle multiplicity distributions of
the candidates are included in the experimental upper limit.
From the simulation, the probability distribution
as a function of the squark mass is obtained for each candidate and
the sum of these 3 probability distributions is shown
in Figure 7.
The upper limit on the number of events
at 95\% confidence level
is derived from this distribution by scaling it and adding it to 3.
Zero probability in this figure would thus correspond to an upper limit of 3 events.
The scale factor is adjusted such that
3 observed events with a flat probability distribution
would correspond to an upper limit of 5.4 events.
It was checked that this procedure is sufficiently precise
for the present analysis.
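The quoted limits of 3 events (zero observed) and 5.4 events (3 observed on an expected background of 3) are consistent with the standard background-subtracted Poisson upper limit, often called the Helene formula. The sketch below is our reconstruction of that standard formula, not the procedure actually used in the analysis; it reproduces both numbers.

```python
import math

def pois_cdf(n, mu):
    """P(N <= n) for a Poisson-distributed N with mean mu."""
    return sum(math.exp(-mu) * mu**k / math.factorial(k)
               for k in range(n + 1))

def upper_limit(n_obs, bkg, cl=0.95):
    """Upper limit on the signal mean s at confidence level `cl`,
    given n_obs observed events on expected background `bkg`:
    solve  P(N <= n_obs; bkg + s) / P(N <= n_obs; bkg) = 1 - cl
    for s by bisection (the Helene prescription)."""
    target = 1.0 - cl
    lo, hi = 0.0, 50.0
    for _ in range(60):
        s = 0.5 * (lo + hi)
        if pois_cdf(n_obs, bkg + s) / pois_cdf(n_obs, bkg) > target:
            lo = s          # ratio still too large: need more signal
        else:
            hi = s
    return 0.5 * (lo + hi)

print(upper_limit(0, 0.0))  # ~3.0 events, as in the leptonic topology
print(upper_limit(3, 3.0))  # ~5.4 events, as in the hadronic topology
```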
The experimental upper limit on the cross-section was derived
from the upper limit on the number of events,
the signal efficiencies, integrated luminosities and cross-section ratios
at different energies as explained in footnote 6 of ref. \cite{ref:hdelphi}.
Figures 3 and 4 summarize the results for the leptonic topology for
stable and long-lived sleptons, charginos and free squarks while Figures 5 and
6 summarize the results for the hadronic and leptonic
topologies for stable and long-lived squarks.
Figure 3b shows the expected production cross-section for right- and left-handed
smuons (staus) as a function of the mass at a centre-of-mass energy
of 183 GeV. The combined experimental upper limit at 95\% confidence level
on the cross-section varies between 0.06 and 0.5 pb in the mass range from 2 to
90 GeV/c$^2$.
Right(left)-handed smuons or
staus are excluded in the mass range from 2 to 80 (81) GeV/c$^2$.
From the same data, stable and long-lived charginos are excluded
in the mass region from 2 to 87.5 GeV/c$^2$ for
sneutrino masses above 41 GeV/c$^2$. For sneutrino masses
above 200 GeV/c$^2$ the excluded mass goes up to 89.5 GeV/c$^2$.
Figure 4b shows the expected production cross-section for free
mixed (right, left-handed) stop quarks as a function of the mass at an energy
of 183 GeV. The combined experimental upper limit at
95\% confidence level
varies between 0.06 and 0.5 pb in the mass range from 2 to
90 GeV/c$^2$.
Free mixed (right, left-handed) stop quarks
are excluded in the mass range from 2 to 84 (84, 86) GeV/c$^2$.
Similarly, free right(left)-handed
up-type squarks of charge $\pm \frac{2}{3}e$
are excluded in the range from 2 to 84 (86) GeV/c$^2$.
Figure 5b shows the expected production cross-section for
mixed (right, left-handed) stop quarks as a function of the mass
at an energy of 183 GeV.
The combined experimental upper limit at 95\% confidence level on the
cross-section varies between 0.1 and 0.5 pb in the mass range from 5 to
90 GeV/c$^2$.
Hadronizing mixed (right, left-handed) stop quarks are excluded in
the mass range from 2 to 80 (81, 85) GeV/c$^2$.
Similarly, hadronizing right(left)-handed
up-type squarks are excluded in the range from 2 to 81 (85) GeV/c$^2$.
Figure 6b shows the expected production cross-section for
mixed (right, left-handed) sbottom quarks as a function of the mass
at an energy of 183 GeV.
The combined experimental upper limit at 95\% confidence level on the
cross-section is also shown. It varies between 0.15 and 0.5 pb in the mass range from 5 to
90 GeV/c$^2$.
Hadronizing mixed (right, left-handed) sbottom quarks
are excluded in the mass range from 5 (5, 2) to 38 (40, 83) GeV/c$^2$.
Similarly, right(left)-handed down-type squarks
are excluded in the range from 5 (2) to 40 (83) GeV/c$^2$.
These results supersede those previously published \cite{ref:hdelphi}.
\section{Conclusions}
A search is made for stable and long-lived heavy charged particles
in leptonic and hadronic final states
at energies from 130 to 183 GeV, using particles identified
by the Cherenkov light in the RICH and the
ionization loss in the TPC.
No event is observed in the leptonic topology with an expected background of
0.7 $\pm$ 0.3 events. In the hadronic topology 3 events were observed with an expected
background of 3.5 $\pm$ 1.5 events.
The upper limit at 95\% confidence level
on the cross-section at a centre-of-mass energy of
183 GeV for sleptons
and free squarks of charge $\pm \frac{2}{3}e$ varies between 0.06
and 0.5 pb in the mass range from 2 to 90 GeV/c$^2$.
The upper limit for hadronizing
squarks varies between 0.15 and 0.5 pb in the mass range
from 5 to 90 GeV/c$^2$.
Table 1 summarizes the excluded mass region at 95\% confidence level
for different stable and long-lived supersymmetric particles.
\begin{table}[ht]
\begin{center}
\begin{tabular}{|c|c|}\hline
particle & excluded mass range \\
& $GeV/c^2$ \\ \hline \hline
leptonic topologies & \\ \hline
$\tilde{\mu}_{R}$ or $\tilde{\tau}_{R}$ & 2-80 \\ \hline
$\tilde{\mu}_{L}$ or $\tilde{\tau}_{L}$ & 2-81 \\ \hline
$\tilde{\chi}^{\pm}$ ($m_{\tilde{\nu}}>41$ GeV/c$^2$) & 2-87.5 \\ \hline
$\tilde{\chi}^{\pm}$ ($m_{\tilde{\nu}}>200$ GeV/c$^2$) & 2-89.5 \\ \hline
free squarks & \\ \hline
$\tilde{t}$ mixed & 2-84 \\ \hline
$\tilde{t}_{R}$ or up-type $\tilde{q}_{R}$ & 2-84 \\ \hline
$\tilde{t}_{L}$ or up-type $\tilde{q}_{L}$ & 2-86 \\ \hline\hline
hadronic and leptonic topologies & \\ \hline
hadronizing squarks & \\ \hline
$\tilde{t}$ mixed & 2-80 \\ \hline
$\tilde{t}_{R}$ or up-type $\tilde{q}_{R}$ & 2-81 \\ \hline
$\tilde{t}_{L}$ or up-type $\tilde{q}_{L}$ & 2-85 \\ \hline
$\tilde{b}$ mixed & 5-38 \\ \hline
$\tilde{b}_{R}$ or down-type $\tilde{q}_{R}$ & 5-40 \\ \hline
$\tilde{b}_{L}$ or down-type $\tilde{q}_{L}$ & 2-83 \\ \hline\hline
\end{tabular}
\caption{Excluded mass range at 95\% confidence level for stable and long-lived particles}
\end{center}
\end{table}
\pagebreak
\subsection*{Acknowledgements}
\vskip 3 mm
We are greatly indebted to our technical
collaborators, to the members of the CERN-SL Division for the excellent
performance of the LEP collider, and to the funding agencies for their
support in building and operating the DELPHI detector.\\
We acknowledge in particular the support of \\
Austrian Federal Ministry of Science and Traffics, GZ 616.364/2-III/2a/98, \\
FNRS--FWO, Belgium, \\
FINEP, CNPq, CAPES, FUJB and FAPERJ, Brazil, \\
Czech Ministry of Industry and Trade, GA CR 202/96/0450 and GA AVCR A1010521,\\
Danish Natural Research Council, \\
Commission of the European Communities (DG XII), \\
Direction des Sciences de la Mati$\grave{\mbox{\rm e}}$re, CEA, France, \\
Bundesministerium f$\ddot{\mbox{\rm u}}$r Bildung, Wissenschaft, Forschung
und Technologie, Germany,\\
General Secretariat for Research and Technology, Greece, \\
National Science Foundation (NWO) and Foundation for Research on Matter (FOM),
The Netherlands, \\
Norwegian Research Council, \\
State Committee for Scientific Research, Poland, 2P03B06015, 2P03B03311 and
SPUB/P03/178/98, \\
JNICT--Junta Nacional de Investiga\c{c}\~{a}o Cient\'{\i}fica
e Tecnol$\acute{\mbox{\rm o}}$gica, Portugal, \\
Vedecka grantova agentura MS SR, Slovakia, Nr. 95/5195/134, \\
Ministry of Science and Technology of the Republic of Slovenia, \\
CICYT, Spain, AEN96--1661 and AEN96-1681, \\
The Swedish Natural Science Research Council, \\
Particle Physics and Astronomy Research Council, UK, \\
Department of Energy, USA, DE--FG02--94ER40817. \\
\newpage
\section{Underdetermination within a frictionless pack}
\label{counting}
In this section we consider how the constraints inherent in the packing of
impenetrable, frictionless beads determine the contact forces between the
beads. Forces in frictionless packs have been studied by
simulation\cite{Lacasse.Grest.Levine,Langer.Liu,Luding.frictionless,Farr}.
Theoretical properties of the forces have been established for simplified
systems
\cite{Ball.Edwards.counting,Moukarzel.PRL,Oron.Herrmann}. We begin with a
summary of the well-recognized enumeration of equations and unknowns, as
discussed, \textit{e.g.,} by Alexander\cite{floppy.networks} and by the Cambridge
group\cite{Ball.Edwards.counting}. We then consider the role of external
forces acting on a subregion of the pack, and show that a subset of these
suffices to determine the others.
For definiteness we consider a system of rigid spherical beads whose sizes are
chosen randomly from some continuous distribution. We suppose that the
diameters of all the spheres are specified and the topology of their packing
is given. That is, all the pairs of contacting beads are specified. The
topology can be characterized by the average coordination number, \textit{i.e.,} the
average number of nearest neighbors, $\overline Z =2N_c/M$ (here $N_c$ is the
total number of contacts, and $M$ is the number of beads). The necessary
condition for the packing of a given topology to be realizable in a space of
dimensionality $d$ is that the coordinates of the bead centers, ${\bf
x}_\alpha$ satisfy the following equations, one for each pair of contacting
beads $\alpha$ and $\beta$:
\begin{equation} \label{beadaverage}
\left({\bf x}_\alpha-{\bf x}_\beta\right)^2=\left(R_\alpha+R_\beta\right)^2
\end{equation}
Here $R_\alpha$, $R_\beta$ are the radii of the beads. There are $N_c$ such
equations (one for each contact), and $Md$ variables ($d$ for each bead).
The number of equations should not exceed the number of variables;
otherwise, the co-ordinates $\vector x_\alpha$ are overdetermined. Thus,
the coordination number of a geometrically-realizable packing should not
exceed the critical value of $2d$: $\overline{Z}\leq 2d$. We
assume all the equations imposed by the topological constraints to be
independent. If they were not independent, they would become so upon
infinitesimal variation of the bead sizes. For instance, the hexagonal
packing in two dimensions has coordination number 6, which is higher
than the critical value $2d=4$; but the extra contacts are
eliminated by an infinitesimal variation of the bead diameters. In other
words, the creation of a contact network with coordination number higher
than $2d$ occurs with probability zero in an ensemble of spheres with a
continuous distribution of diameters. We shall ignore such zero-measure
situations henceforth.
\par
The above consideration gives the upper limit on the average coordination
number, $\overline{Z}$. The lower limit can be obtained from an analysis
of the mechanical stability of the packing, which gives the complementary
inequality $\overline{Z} \geq 2d$.
We will consider a packing to be mechanically stable if there is a non-zero
measure set of external forces which can be balanced by inter-bead ones.
The packing of frictionless spheres is always characterized by
$\overline{Z} = 2d$, as we now show.
Stability requires that the net force on each bead be zero; there are $Md$
such equations. The forces in these $Md$ equations are the $N_c$ contact
forces. The $Md$ equilibrium conditions determine the magnitudes of the
$N_c$ contact forces. (Their directions are determined by the geometry of
the packing.) The number of equilibrium equations $Md$ should not exceed
the number of force variables $N_c$; otherwise these forces would be
overdetermined. Thus $Md \leq N_c$, or
$\overline Z \geq 2d$. To avoid both overdetermined
co-ordinates
and overdetermined forces, we must thus have $\overline Z = 2d$.
\par
Similar counting arguments have been discussed
previously\cite{floppy.networks,Moukarzel.PRL}. A subset of them has been
applied to granular packs with friction\cite{Ball.Edwards.counting}. Here we
emphasize a further feature of a frictionless bead pack that has not been well
appreciated: the co-ordinates and forces within a subregion of a large bead
pack are necessarily
{\it underdetermined}. Quantifying this indeterminacy will play an important
role in our reasoning below. To exhibit the indeterminacy, we consider some
compact region within the packing, containing $M'$ beads. This unbiased
selection of beads must have the same average co-ordination number $\overline
Z$ as the system as a whole: $\overline Z' = 2d$. Let $N_{\rm ext}$ be the
number of contacts of this sub-system with external beads, and $N_{\rm int}$ be
the number of the internal contacts. The average coordination number
$\overline{Z}'$ can be expressed as $\overline{Z}'=(N_{\rm ext}+2N_{\rm int})/M'$
(each internal contact is counted twice). Since there are $M' d$
equations of force balance for these beads, one is able to determine all
$N_{\rm ext}+N_{\rm int}$ contact forces in the system, whenever $M'd = N_{\rm
ext} + N_{\rm int}$.
Evidently, if the forces on the $N_{\rm ext} $ contacts are not
specified, the internal forces cannot be computed: the system is
underdetermined. The number of external forces $N_0$ required is given by
$N_0=M'd - N_{\rm int}$. This $N_0$ may be related to the average
co-ordination number $\overline{Z}'$:
\begin{equation}
N_0 = M'\left[ d - {\overline{Z}' \over 2} \right] + \frac{N_{\rm ext}}{2}
\end{equation}
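As a quick check (ours, not part of the original text), the two expressions for $N_0$ agree identically once $\overline{Z}'=(N_{\rm ext}+2N_{\rm int})/M'$ is substituted, and at the critical coordination $\overline{Z}'=2d$ the bracket vanishes, leaving $N_0=N_{\rm ext}/2$.

```python
def n_incoming(M, d, n_ext, n_int):
    """Number of external forces that must be specified before force
    balance determines the rest: N0 = M*d - N_int."""
    return M * d - n_int

def n_incoming_via_z(M, d, n_ext, n_int):
    """The same quantity rewritten via the subregion coordination
    number Zbar' = (N_ext + 2*N_int)/M, as in the displayed equation:
    N0 = M*(d - Zbar'/2) + N_ext/2."""
    zbar = (n_ext + 2 * n_int) / M
    return M * (d - zbar / 2) + n_ext / 2

# At the critical coordination Zbar' = 2d the bracket vanishes and
# N0 = N_ext/2: half the external forces "incoming", half "outgoing".
M, d, n_ext = 100, 3, 60
n_int = (M * 2 * d - n_ext) // 2   # chosen so that Zbar' = 2d exactly
print(n_incoming(M, d, n_ext, n_int), n_ext / 2)
```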
\par
We now observe that the quantity in $[ ... ]$ vanishes on average. This
is because the average of $\overline{Z}'$ for any subset of particles is
the same as the overall average. There is no systematic change of
$\overline{Z}'$ with $M'$. Thus if one half (on average) of
mutually-independent external forces is known (let us call them ``incoming''
ones), the analysis of force balance in the region enables one to determine
all the remaining forces, including the other half of the external ones
(``outgoing''). We are free to choose the incoming contacts at will,
provided these give independent constraint equations.
\par
This observation supports the unidirectional, propagating stress picture,
discussed in the Introduction. Indeed, one can apply the above arguments
to the slabs of the packing created by cutting it with horizontal surfaces.
In a given slab of material, we choose the forces from the slab above as
the incoming forces. According to the preceding argument, these should
determine the outgoing forces transmitted to the slab beneath. This must
be true provided that the constraints from the upper slab are independent.
Such force transmission contrasts with that of a solid body, as emphasized
in the Introduction. If a given set of forces is applied to the top of a
slab of elastic solid, the forces on the bottom are free to vary, provided
the total force and torque on the slab are zero. Yet in our bead pack,
which appears equally solid, we have just concluded that stability
conditions determine all the bottom forces individually.
In deducing this peculiar behavior, we did not exclude tensile forces; we
may replace all the contacts by stiff springs that can exert strong positive or
negative force, without altering our reasoning. In this sense our result
is different from the recent similar result of Moukarzel\cite{Moukarzel.PRL}.
The
origin of the peculiar behavior lies in the minimal connectivity of the beads.
\par
In a subregion of the minimal network, the constraints can be satisfied with
no internal forces. Moreover, numerous (roughly
$N_{\rm ext}/2$) small external displacements can be applied to the subregion
without generating proportional restoring forces. We call these motions with
no restoring force ``soft modes". If we replace these external
displacements with external
forces and require no motion, compensating forces must be applied elsewhere to
prevent motion of the soft modes. If the applied forces perturb all the soft
modes, there must be one compensating force for each applied force to prevent
them from moving---on average $N_{\rm ext}/2$ of them. The subregion is
``transparent" to external forces, allowing them to propagate individually
through the region.
\par
This transparent behavior would be lost if further springs were added to
the minimal network, increasing $\overline{Z}$. Then the forces on a
subregion would be determined even without external contacts. The addition
of external displacements would deform the springs, and produce
proportional restoring forces. There would be no soft modes, and no
transparency to external forces.
A simple square lattice of springs provides a concrete example of the
multiple soft modes predicted above. Its elastic energy has the form
\begin{equation}
H = K \int {\rm d}x{\rm d}y \left[ (u^{xx})^2 + (u^{yy})^2 \right]
\end{equation}
This functional does not depend on $u^{xy}$, thus there are shear
deformations ($u^{xx}=u^{yy}=0$) which cost no elastic energy. This means
that the stress field should be orthogonal to any such mode, \textit{i.e.,}
\begin{equation}\sigma^{ij}{u_o}^{ij}=0
\end{equation}
where ${u_o}^{xx}= {u_o}^{yy}=0$, and ${u_o}^{xy}$ is an arbitrary function
of $(x,y)$. The above equation implies that $\sigma^{xy}=0$, \textit{i.e.,} the
principal axes of the stress tensor are fixed and directed along $x$ and
$y$. This provides a necessary closure for the standard macroscopic
equation of force balance,
\begin{equation}
\partial^i \sigma^{ij}=f^j_{\rm ext}
\label{balance}
\end{equation}
where ${\bf f}_{\rm ext}$ is the external force density. Since $\sigma^{xy}=0$, the two
unknown components of the stress field, $\sigma^{xx}$ and $\sigma^{yy}$
propagate independently along the corresponding characteristics, $x=const$
and $y=const$:
\begin{equation}
\partial^x \sigma^{xx}=f^x_{\rm ext}\end{equation}
\begin{equation}
\partial^y \sigma^{yy}=f^y_{\rm ext}\end{equation}
\par
The propagation of the solution along characteristics is a property of
hyperbolic problems such as the wave equation. The above equations without
external force imply that each component of the stress tensor $\hat \sigma$
satisfies a wave equation of the form
\begin{equation}
\left ({\partial^2 \over \partial t^2} - {\partial^2 \over \partial s^2}
\right )\hat \sigma = 0
\end{equation}
where $t \equiv x+y$ and $s \equiv x-y$.
Thus, the fact that the original elastic energy has soft modes results
in hyperbolic, rather than elliptic, equations for the stress field. One
now has to specify the surface forces (or displacements) on a single
non-characteristic surface---a line not parallel to $x$ or $y$---in order
to determine the stress field in the whole sample.
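The reduction to a wave equation can be checked symbolically: a stress component obeying $\partial^x \sigma^{xx}=0$ depends on $y$ only, and any such field is annihilated by $\partial_t^2-\partial_s^2$. A sketch, assuming sympy:

```python
# A stress component obeying d(sigma^xx)/dx = 0 depends on y only; check that
# any such field satisfies the wave equation in t = x + y, s = x - y.
import sympy as sp

t, s = sp.symbols("t s")
g = sp.Function("g")

sigma_xx = g((t - s) / 2)          # y = (t - s)/2, so this is an arbitrary g(y)
wave = sp.diff(sigma_xx, t, 2) - sp.diff(sigma_xx, s, 2)

assert sp.simplify(wave) == 0
print("wave equation satisfied")
```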
\par
A frictionless granular packing behaves like this example: both are
minimally connected; both have soft modes; both have unidirectional
propagation. In both cases only the surface of the sample stabilizes the
soft modes.
The above consideration of the regular lattice can easily be extended to the
case of an arbitrary angle between the characteristic directions, $x$ and $y$.
Instead of starting with a square lattice, we could have applied a uniform
$x-y$ shear, altering the angle between the horizontal and vertical
springs. The reasoning above holds for this lattice just as for the
original square one.
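The count of soft modes in such a lattice can also be checked numerically. The sketch below (numpy assumed; open boundaries and unit spacing, our own construction rather than anything from the text) builds the rigidity matrix of an $L\times L$ lattice with nearest-neighbor springs only and counts its zero modes; each row can slide rigidly in $x$ and each column in $y$, giving $2L$ soft modes:

```python
# Count the soft (zero-energy) modes of an L x L square lattice with
# nearest-neighbor springs only (open boundaries, unit spacing).
import numpy as np

L = 4
N = L * L                       # number of beads
idx = lambda i, j: i * L + j    # bead (row i, column j) -> flat index

rows = []
for i in range(L):
    for j in range(L):
        if j + 1 < L:           # horizontal bond: stretched by x-displacement difference
            r = np.zeros(2 * N)
            r[2 * idx(i, j)], r[2 * idx(i, j + 1)] = 1.0, -1.0
            rows.append(r)
        if i + 1 < L:           # vertical bond: stretched by y-displacement difference
            r = np.zeros(2 * N)
            r[2 * idx(i, j) + 1], r[2 * idx(i + 1, j) + 1] = 1.0, -1.0
            rows.append(r)

R = np.array(rows)              # rigidity matrix: bond extensions = R @ displacements
zero_modes = 2 * N - np.linalg.matrix_rank(R)
print(zero_modes)               # 8 = 2L: each row slides in x, each column in y
```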
\par
The nature of the soft modes in a disordered bead pack is less obvious than
in this lattice example. We have not proven, for instance, that all the
forces acting on the top of a slab correspond to independent soft modes,
which determine the forces at the bottom. Otherwise stated, we have not
shown that the soft modes seen in the microscopic displacements have
continuum counterparts in the displacement field of the region. However,
the following construction, like the lattice example above, suggests that
the soft modes survive in the continuum limit.
\par
To construct the pack, we place circular disks one at a time into a
two-dimensional vertical channel of width $L$. (Such sequential packings
will figure prominently in the next section.) Since the disks are of
different sizes, the packing will be disordered. We place each successive
disk at the lowest available point until the packed disks have reached a
height of order $L$, as shown in Figure \ref{channel}. We now construct a
second packing, starting from a channel of slightly greater width $L +
\delta$. We reproduce the packing constructed in the first channel as far
as possible. We use an identical sequence of disks and place each at the
lowest point, as before. There must be a nonvanishing range of $\delta$
for which the contact topology is identical. The motion of the wall over
this range is thus a soft mode. As the side wall moves, the top surface
will move by some amount $\epsilon$, proportional to $\delta$. Now,
instead of holding the side wall fixed, we exert a given force $f^x$ on it.
Likewise, we place a lid on the top, remove gravity, and exert another
force $f^y$. Evidently unless $f^x/f^y = \epsilon/\delta$, a motion of the
soft mode would result in work, and the system would move. Thus $f^y$ plus
the condition of no motion determines $f^x$. This condition translates
into an imposed proportionality between the stresses $\sigma^{yy}$ and
$\sigma^{xx}$, as in the lattice example above. The soft modes have
continuum consequences.
\section{Sequential packing under gravity}\label{sequential}
In the previous
section we have shown that a packing of frictionless spherical beads is an
anomalous solid from the point of view of classical elastic theory. The fact
that the average coordination number in such a packing is exactly $2d$ for the
infinite system supports unidirectional, propagating stress. Now we elaborate
this concept in more detail, by deriving particular laws for microscopic and
macroscopic force transfer adopting a particular packing procedure. We
suppose that the beads are deposited one by one in the presence of gravity. The
weight of any new sphere added to the existing packing must be balanced by
the reactions of the supporting beads. This is possible only if the number of
such supporting contacts is equal to the dimensionality $d$. Any larger number
of contacts requires a specific relationship between the sizes and coordinates
of the supporting beads, and thus occurs with vanishing probability. As a
result, the eventual packing has an average coordination number $2d$, like any
stable, frictionless pack. In addition, it has a further property: a partial
time-like ordering. Namely, of any two contacting beads there is always one
that found its place earlier than the other (the supporting one), and any
bead has exactly $d$ such supporting neighbors. Note that the supporting bead
is not necessarily situated below the supported one in the geometrical sense.
The discussed ordering is topological rather than spatial.
\par
One could expect that although any bead has exactly $d$ supporters at the
moment of deposition, this may
change later. Specifically, adding an extra bead to the packing may
result in the
violation of positivity of some contact force in the
bulk\cite{Claudin.PRL.1997}.
This will lead to a rearrangement of the network. For the moment we
assume that the
topology of the sequential packing is preserved in the final state of the
system, and
return to the effect of rearrangements in Section \ref{reality}.
\par
The partial ordering of the sequential packing considerably simplifies
the calculation of the force distribution. Indeed, any force applied to a
bead can be uniquely decomposed into the $d$ forces on the supporting
contacts. This means that the force balance enables us to determine all the
``outcoming" (downward) forces if the ``incoming" ones are known.
Therefore, there is a simple unidirectional procedure of determination of
all the forces in the system. Below, we use this observation to construct a
theory of stress propagation on the macroscopic scale.
\section{Mean-field stress}\label{macroscopic}
We will characterize any inter-bead contact in a sequential packing with a
unit vector directed from the center of supported bead $\alpha$ toward the
supporting one $\beta$,
\begin{equation}
{\bf n}_{\alpha \beta}= \frac{{\bf x}_\beta - {\bf x}_\alpha }{\left|{\bf
x}_\beta - {\bf x}_\alpha\right|}
\end{equation}
The stress distribution in the frictionless packing is specified by the
non-negative magnitudes of the forces acting along these contact unit
vectors. We denote this scalar contact force by $f_{\alpha\beta}$.
\par
The total force to be transmitted from some bead $\alpha$ to its
supporting neighbors is the sum of all the incoming and external (e.g.
gravitational) forces:
\begin{equation}{\bf F}_\alpha=({\bf f}_{\rm ext})_\alpha+\sum
_{\beta(\rightarrow\alpha)} {\bf n}_{\beta\alpha} f_
{\beta\alpha}\end{equation}
\par
Here $\beta(\rightarrow
\alpha)$ denotes all the beads supported by $\alpha$. Since there are
exactly $d$ supporting contacts for any bead in a sequential packing, the
above force can be uniquely decomposed onto the corresponding $d$
components, directed along the outgoing vectors ${\bf n}_{\alpha\gamma}$.
This gives the values of the outgoing forces. The $f$'s
may be compactly expressed in terms of a generalized scalar product
$\expectation{...|...}_\alpha$:
\begin{equation}
f_ {\alpha\gamma}= \expectation{{\bf F_{\alpha}}|{\bf n}_
{\alpha\gamma}}_\alpha
\end{equation}
The scalar product $\expectation{...|...}_\alpha$ is defined such that
$\expectation{{\bf n}_{\alpha \gamma}|{\bf n}_
{\alpha\gamma'}}_\alpha=\delta_{\gamma \gamma'}$ (all Greek indices
count beads, not spatial dimensions). In general, it does not coincide with
the conventional scalar product. If a force is now applied to a certain
bead in the packing, the above projective procedure allows one to determine
the response of the system, i.e. the change of the contact forces between
all the beads below the given one. In other words one can follow how the
perturbation propagates downward. Since the equations of mechanical
equilibrium are linear, and beads are assumed to be rigid enough to
preserve their sizes, the response of the system to the applied force is
also linear. This linearity can be destroyed only by violating the
condition of positivity of the contact forces, which implies the
rearrangement of the packing. While the topology (and geometry) of the
network is preserved, one can introduce the Green function to describe the
response of the system to the applied forces. Namely, a force ${\bf
f}_\lambda$ applied to a certain bead $\lambda$ results in the following
additional force acting on another bead $\mu$ (lying below $\lambda$):
\begin{equation}
\vector f_\mu = \widehat{\bf G}_{\mu \lambda} \cdot \vector f_\lambda
\end{equation}
\par
Here $\widehat{\bf G}_{\mu\lambda}$ is a tensor Green function, which can
be calculated as the superposition of all the projection sequences (i.e.
trajectories) which lead from $\lambda$ to $\mu$.
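The projection $\expectation{{\bf F}|{\bf n}_{\alpha\gamma}}_\alpha$ amounts to expanding ${\bf F}$ in the generally non-orthogonal basis of supporting directions. A numerical sketch for $d=2$ (numpy assumed; the contact directions and force below are illustrative choices, not data from the text):

```python
# Decompose the total force on a bead onto its d = 2 supporting contact
# directions; the directions below are illustrative, not data from the text.
import numpy as np

n1 = np.array([np.sin(0.4), -np.cos(0.4)])    # unit vectors toward the two
n2 = np.array([-np.sin(0.3), -np.cos(0.3)])   # supporting beads (pointing downward)
Nmat = np.column_stack([n1, n2])

F = np.array([0.2, -1.0])                     # total incoming plus external force

f = np.linalg.solve(Nmat, F)                  # f_gamma = <F | n_gamma>_alpha
assert np.allclose(f[0] * n1 + f[1] * n2, F)  # decomposition reproduces F
assert np.all(f > 0)                          # compressive for this geometry

# Dual vectors m_gamma realize the generalized scalar product explicitly:
Mdual = np.linalg.inv(Nmat).T                 # columns satisfy m_g . n_g' = delta_gg'
assert np.allclose(Mdual.T @ Nmat, np.eye(2))
print("decomposition consistent")
```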
\par
The stress field $\sigma^{ij}$ in the system of frictionless spherical
beads can be introduced in the following way \cite{stress}:
\begin{equation}
\sigma^{ij}({\bf x})=\sum_{\alpha}\sum_{\beta(\leftarrow
\alpha)}f_{\alpha\beta} n_{\alpha\beta}^i n_{\alpha\beta}^j
R_{\alpha\beta}\delta({\bf x}_\alpha-{\bf x})\end{equation}
\par
Here $R_{\alpha\beta}=\left|{\bf x}_\alpha - {\bf x}_\beta
\right|$. As we have just shown, the magnitude of the force
$f_{\alpha\beta}$ transmitted along the contact unit vector ${\bf
n}_{\alpha\beta}$ can be expressed as an appropriate projection of the
total force ${\bf F}_\alpha$ acting on the bead $\alpha$ from above. This allows
one to express the stress tensor in terms of the vector field ${\bf F}_\alpha$:
\begin{equation} \label{sigma.of.n}
\sigma^{ij}({\bf x})=\sum_{\alpha}\sum_{\beta(\leftarrow
\alpha)} \expectation{ F_{\alpha}| {\bf n}_{\alpha\beta}}_\alpha
n_{\alpha\beta}^i n_{\alpha\beta}^j R_{\alpha\beta}\delta({\bf
x}_\alpha-{\bf x})
\end{equation}
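A minimal numerical illustration of this sum (numpy assumed; the contact geometry and force magnitudes are invented for illustration, and the coarse-graining delta function is omitted): the single-bead contribution is symmetric and, for compressive contacts, positive definite:

```python
# Single-bead contribution sum_beta f n^i n^j R to the stress tensor; contact
# geometry and force magnitudes are invented, and the delta function is omitted.
import numpy as np

n = [np.array([0.6, -0.8]), np.array([-0.6, -0.8])]  # contact unit vectors
f = [0.70, 0.55]                                     # scalar contact forces (>= 0)
R = [1.0, 1.0]                                       # center-to-center distances

sigma = sum(fi * np.outer(ni, ni) * Ri for fi, ni, Ri in zip(f, n, R))

assert np.allclose(sigma, sigma.T)                   # symmetric by construction
assert np.all(np.linalg.eigvalsh(sigma) > 0)         # compressive: positive definite
print(sigma)
```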
\par
In order to obtain the continuous macroscopic description of the system,
one has to perform the averaging of the stress field over a region much
larger than a bead. At this stage we make a mean-field approximation for
the force $\vector F_\alpha$ acting on a given bead from above: we replace
$\vector F_\alpha$ by its average $\overline{\vector F}$ over the region.
To be valid, this assumption requires that
\begin{eqnarray}
\sum_{\alpha \beta} \expectation{(\vector F_\alpha - \overline{\vector F}) |
{\bf n}_{\alpha \beta} }_\alpha
{\bf n}^i_{\alpha \beta} {\bf n}^j_{\alpha \beta} R_{\alpha \beta}
\nonumber\\
\ll
\sum_{\alpha \beta} \expectation{\overline{\vector F} | {\bf n}_{\alpha
\beta} }_\alpha
{\bf n}^i_{\alpha \beta} {\bf n}^j_{\alpha \beta} R_{\alpha \beta}
\end{eqnarray}
\par
For certain simple geometries, the mean-field approximation is exact. One
example is the simple square lattice treated in Section \ref{counting}. In
any regular lattice with one bead per unit cell, all the $\vector
F_\alpha$'s must be equal under any uniform applied stress. Thus replacing
$\vector F_\alpha$ by its average changes nothing. If this lattice is
distorted by displacing its soft modes, the $\vector F_\alpha$ are no
longer equal and the validity of the mean-field approximation can be
tested. Figure \ref{rhombuses} shows a periodic distortion with four beads
per unit cell. For example, under an applied vertical force, the bottom
forces oscillate to the left and right. Nevertheless, the stress crossing
the bottom row, like that crossing the row above it, is the average force
times the length. One may verify that the $\vector F_\alpha$ may also be
replaced by its average when the applied force is horizontal. Though the
mean-field approximation is exact in these cases, it is clearly not exact
in all. In the lattice of Figure \ref{rhombuses} the mean field
approximation may be inexact if one considers a region not equal to a whole
number of unit cells.
A disordered packing may be viewed as a superposition of periodic soft
modes like those of Figure \ref{rhombuses}. Each such mode produces
fluctuating forces, like those of the example. But after averaging over an
integer number of unit cells, the stress may depend on only the average
force $\overline{\vector F}$. A disordered packing need not have a fixed
co-ordination number as our example does. This is another possible source
of departure from the mean-field result.
\par
Now it is straightforward to perform a local averaging of Eq.
(\ref{sigma.of.n}) for the stress field in the vicinity of a given point
${\bf x}$, replacing $\vector F_\alpha$ by its average:
\begin{equation}
\overline{\sigma^{ij}}({\bf x})=\rho \overline{F^k}({\bf x})
\tau^{kij}({\bf x})
\label{constitutive}
\end{equation}
Here $\rho$ is the bead density, $\overline{\bf F}({\bf x})$ is the force
${\bf F}_\alpha$ averaged over the beads $\alpha$ in the vicinity of the
point ${\bf x}$, and the third-order tensor $\hat\tau$ characterizes the
local geometry of the packing:
\begin{equation}\tau^{kij}({\bf x})= \overline{\ket{{\bf n}_{\alpha\beta}
}^k_\alpha n_{\alpha\beta}^i n_{\alpha\beta}^j R_{\alpha\beta}}
\label{tau}
\end{equation}
This equation is similar in spirit to one derived by Edwards for the case
of a $d+1$ co-ordinated packing of spheres with
friction\cite{Edwards.les.Houches}. Our geometric tensor $\tau$ plays a
role analogous to that of the fabric tensor in that treatment.
\par
The stress field satisfies the force balance equation, Eq.
(\ref{balance}). Since this is a vector equation, it normally fails to
give a complete description of the tensor stress field. In our case,
however, the stress field has been expressed in terms of the vector field
$\vector F$. This creates a necessary closure for the force balance
equation. It is important to note that the proposed macroscopic formalism
is complete for a system of arbitrary dimensionality: there is a single
vector equation and a single vector variable. We now discuss the
application of the above macroscopic formalism in two special cases. First
we consider the equations of stress propagation in two dimensions. Then we
discuss a packing of arbitrary dimensionality but with uniaxial symmetry.
It is assumed to have no preferred direction other than that of gravity.
\par
\subsection {Two-dimensional packing.}
In two dimensions, according to Eq. (\ref{constitutive}), the stress tensor
$\hat \sigma$ can be written as a linear combination of two $\tau$ tensors.
\begin{equation}
\hat \sigma = F_{1}\hat \sigma_{1} + F_{2}\hat \sigma_{2},
\label{ATs20}
\end{equation} where $[\hat \sigma_{1}]^{ij} = \tau^{1ij}$ and $[\hat
\sigma_{2}]^{ij} = \tau^{2ij}$. Since the $\hat \sigma_{1}$ and $\hat
\sigma_{2}$ are properties of the medium and are presumed known, the
problem of finding the stress profile $\hat \sigma(x)$ becomes that of
finding $F_{1}$ and $F_{2}$ under a given external load. Rather than
determining these $F$'s directly, we may view Eq. (\ref{ATs20}) as a
constraint on $\hat
\sigma$. The form (\ref{ATs20}) constrains $\hat\sigma$ to lie in a
subspace of the three-dimensional space of stress components $\vec \sigma
\equiv (\sigma^{xx}, \sigma^{yy}, \sigma^{xy})$. It must lie in the
two-dimensional subspace spanned by $\vec \sigma_{1}$ and $\vec
\sigma_{2}$. This constraint amounts to one linear constraint on the
components of $\hat\sigma$, of the form
\begin{equation}
\sigma^{ij}u^{ij}=0
\label{null}
\end{equation} where the $\hat u$ tensor is determined by
$\hat\sigma_{1}$ and $\hat\sigma_{2}$. Specifically, $\hat u$ may be
found by observing that the determinant of the vectors $\vec\sigma$,
$\vec\sigma_{1}$, $\vec\sigma_{2}$ must vanish. Expanding the determinant
by minors to obtain the coefficients of the $\sigma^{ij}$ (the off-diagonal
entries carry a factor of $1/2$ because $\sigma^{xy}$ appears twice in the
contraction $\sigma^{ij}u^{ij}$), one finds
\begin{equation}\hat{u}=
\left( \begin {array}{cc} \left|
\begin {array}{cc}
\sigma_{1}^{yy} & \sigma_{2}^{yy}\\
\sigma_{1}^{xy} & \sigma_{2}^{xy} \end {array}
\right| & \frac{1}{2}\left|
\begin {array}{cc} \sigma_{1}^{xx} &
\sigma_{2}^{xx}\\ \sigma_{1}^{yy} &
\sigma_{2}^{yy} \end {array} \right|
\\ & \\ \frac{1}{2}\left|
\begin {array}{cc} \sigma_{1}^{xx} & \sigma_{2}^{xx}\\
\sigma_{1}^{yy} & \sigma_{2}^{yy} \end {array}
\right| & \left| \begin {array}{cc}
\sigma_{1}^{xy} & \sigma_{2}^{xy}\\ \sigma_{1}^{xx} &
\sigma_{2}^{xx} \end {array} \right|
\end{array}
\right)
\end{equation}
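That a constraint of this type must exist can be confirmed symbolically: for any stress of the form (\ref{ATs20}) the three component vectors are linearly dependent, so their determinant vanishes identically. A sketch assuming sympy:

```python
# Any sigma = F1*sigma1 + F2*sigma2 makes the vectors (s^xx, s^yy, s^xy) of
# sigma, sigma1, sigma2 linearly dependent, so their determinant vanishes.
import sympy as sp

F1, F2 = sp.symbols("F1 F2")
s1 = sp.symbols("s1xx s1yy s1xy")         # components of sigma_1
s2 = sp.symbols("s2xx s2yy s2xy")         # components of sigma_2
sigma = [F1 * a + F2 * b for a, b in zip(s1, s2)]

D = sp.Matrix([sigma, list(s1), list(s2)]).det()
assert sp.expand(D) == 0
print("null-stress constraint holds")
```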
\par
Eq. \ref{null} has the same ``null-stress" form as that introduced by
Wittmer \textit{et al.}\ \cite{Wittmer.Nature}, whose original arguments were based
on a qualitative analysis of the problem. By an appropriate choice of the
local co-ordinates ($\xi$, $\eta$), the $\hat u$ tensor can be transformed
into co-ordinates such that $u^{\xi\xi} = u^{\eta\eta} = 0$. Then the
null stress condition becomes $\sigma^{\xi\eta} = \sigma^{\eta\xi} = 0$.
This implies that, according to force balance equation (\ref{balance}),
the non-zero diagonal components of the stress tensor ``propagate"
independently along the corresponding characteristics, $\xi=const$ and
$\eta=const$:
\begin{equation}
\partial^\xi \sigma^{\xi\xi}=f_{\rm ext}^\xi \quad \quad
\partial^\eta \sigma^{\eta\eta}=f_{\rm ext}^\eta
\end{equation}
Our microscopic approach gives an alternative foundation for the
null-stress condition, Eq. (\ref{null}), and allows one to relate the
tensor $\hat {u}$ in this equation to the local geometry of the packing.
Our general formalism is not limited to the two-dimensional case, and in
this sense, is a generalization of the null-stress approach.
\par
\subsection {Axially-symmetric packing.}
Generally, there are two preferred directions in the sequential packing:
that of the gravitational force ${\bf g}$, and the normal ${\bf n}$ of the
growth surface. When these two directions coincide, the form
of the third-order tensor $\hat{\tau}$, Eq. (\ref{tau}), should be
consistent with the axial symmetry associated with the single preferred
direction, ${\bf n}$. Since $\tau^{kij}$ is symmetric with respect to
$i\leftrightarrow j$ permutation, it can be only a linear combination of
three tensors: $n^kn^in^j$, $n^k\delta^{ij}$ and
$\delta^{ki}n^j+\delta^{kj}n^i$, for general spatial dimension $d$.
Let $\sigma^{ij}$ be the stress tensor in the $d$-dimensional space
($i,j=0...d-1$, with index '0' corresponding to the vertical direction). From
the point of view of rotations around the vertical axis, the stress splits
into a scalar $\sigma^{00}$, a $(d-1)$-dimensional vector $\sigma^{0 a}$
($a=1...d-1$), and a $(d-1)$-dimensional tensor $\sigma^{a b}$. According to our
constitutive equation (\ref{constitutive}), the stress should be linear in
vector $\bf F$, which itself splits into a scalar $F^0$ and a vector $F^a$
with respect to horizontal rotations. Since the material tensor $\tau$ is
by hypothesis axially symmetric, the only way that the ``scalar''
$\sigma^{00}$ may depend on $\bf F$ is to be proportional to ``scalar''
$F^0$. Likewise, the only way ``tensor'' $\sigma^{a b}$ can be linear in
$\bf F$ is to be proportional to $\delta^{a b} F^0$. Therefore, in the
axially-symmetric case
\begin{equation}
\sigma^{a b}= \lambda \delta^{a b}\sigma^{00},
\end{equation}
where the constant $\lambda$ is, \textit{e.g.}, $\tau^{0 1 1}/\tau^{0 0 0}$. This
constitutive equation allows one to convert the force balance equation
(\ref{balance}) to the following form:
\begin{equation}
\partial^0 \sigma^{00}+\partial^a \sigma^{a 0}=f^0_{\rm ext}; \qquad
\partial^0 \sigma^{a 0}+\lambda\partial^a \sigma^{00}=f^{a}_{\rm ext}
\end{equation}
In the case of no external force, we may take $\partial^0$ of the first
equation and combine with the second to yield a wave equation for
$\sigma^{0 0}$. Evidently $\sigma^{a b}$, being a fixed multiple of
$\sigma^{0 0}$, obeys the same equation. Similar manipulation yields the
same wave equation for $\sigma^{0 a}$ and $\sigma^{a 0}$. Thus every
component of stress satisfies the wave equation with vertical direction
playing the role of time and $\sqrt \lambda$ being the propagation
velocity.
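For $d=2$ (a single horizontal coordinate) this can be verified directly: a profile advected at speed $\sqrt{\lambda}$ solves the force-free balance equations. A symbolic sketch, assuming sympy:

```python
# d = 2 case (one horizontal coordinate x, vertical coordinate z playing the
# role of time): a profile advected at speed sqrt(lambda) solves the
# force-free balance equations.
import sympy as sp

z, x = sp.symbols("z x")
lam = sp.symbols("lam", positive=True)
h = sp.Function("h")

s00 = h(x - sp.sqrt(lam) * z)          # sigma^{00}
s10 = sp.sqrt(lam) * s00               # sigma^{10}, a fixed multiple of sigma^{00}

eq1 = sp.diff(s00, z) + sp.diff(s10, x)          # d0 s00 + da s_a0 = 0
eq2 = sp.diff(s10, z) + lam * sp.diff(s00, x)    # d0 s_a0 + lam da s00 = 0

assert sp.simplify(eq1) == 0 and sp.simplify(eq2) == 0
print("propagating solution verified")
```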
\section{Discussion}\label{reality}
In this section we consider how well our model should describe real systems
of rigid, packed units. As stated above, our model is most relevant for
emulsions or dense colloidal suspensions, whose elementary units are well
described as frictionless spheres. Under very weak compression the forces
between such units match our model assumptions. However, our artificial
procedure of sequential packing bears no obvious resemblance to the
arrangements in real suspensions. We argue below that our model may well
have value even when the packing is not sequential. More broadly we may
consider the connection between our frictionless model and real granular
materials with friction. The qualitative effect of adding friction to our
sequential packing is to add constraints so that the network of contacts is
no longer minimally connected. Thus the basis for a null-stress
description of the force transmission is compromised. We argue below that
friction should cause forces to propagate as in an elastic medium, not via
null-stress behavior.
\subsection{Sequential packing}
We first consider the consequences of our sequential packing assumption.
One consequence is that each bead has exactly $d$ supporting contacts.
These lead successively to earlier particles, forming a treelike
connectivity from supported beads to supporters. Although the counting
arguments of Section \ref{counting} show that the propagating stress
approach should be applicable to a wide class of frictionless systems, the
continuum description of Section \ref{macroscopic} depends strongly on the
assumed sequential order. Now, most packings are not sequential, and even
when beads are deposited in sequence, they may undergo rearrangements that
alter the network of connections. However, it is possible to modify our
arguments to take account of such re-arrangements. Our reasoning depends
on the existence of $d$ supporting contacts for each bead. Further, every
sequence of supporting contacts starting at a given bead must reach the
boundary of the system without passing through the starting bead: there
must be no closed loops in the sequence.
\par
Even in a non-sequential packing we may define a network of supporting
contacts. First we define a downward direction. Then, for any given bead
in the pack, we {\it define} the supporting contacts to be the $d$ lowest
contacts. With probability 1, each bead has at least $d$ contacts.
Otherwise it is not stable. Typically a supporting bead lies lower than
the given bead. Thus the typical sequence of supporting contacts leads
predominantly downward, away from the given bead, and returns only rarely
to the original height. A return to the original {\it bead} must be even
more rare.
One may estimate the probability that there is a loop path of supporting
contacts under simple assumptions about the packing. As an example we
suppose the contacts on a given bead to be randomly distributed amongst the
12 sites of a randomly-oriented close-packed lattice. We further imagine
that these sites are chosen independently for each bead, with at least one
below the horizontal. Then the paths are biased random walks with a mean
steplength of 0.51 diameters and a root-mean-square
steplength of about 1.2 times the mean. The probability of a net upward
displacement of one diameter or more is about one percent. It appears that
our neglect of loop paths is not unreasonable.
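This estimate can be reproduced with a rough calculation (Python, standard library only). We replace the lattice-site model by a Gaussian distribution for the vertical drop of a single supporting step, with the quoted mean of 0.51 diameters and spread of 1.2 times the mean interpreted as a standard deviation; both modeling choices are our own simplifications:

```python
# Rough check of the loop-path estimate: Gaussian tail probability that a
# single supporting-contact step rises by a full diameter. The Gaussian step
# model (mean drop 0.51 diameters, spread 1.2 * 0.51) is our simplification
# of the lattice-site model described in the text.
import math

mean_drop, spread = 0.51, 1.2 * 0.51
z = (1.0 + mean_drop) / spread               # a one-diameter rise, in standard deviations
p_up = 0.5 * math.erfc(z / math.sqrt(2.0))   # upper Gaussian tail

print(f"P(one step rises >= 1 diameter) ~ {p_up:.4f}")  # ~ 0.007: about one percent
```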
\subsection{Friction}
The introduction of friction strongly affects most of our arguments.
Friction creates transverse as well as normal forces at the contacts.
The problem is to determine positions and orientations of the beads that
lead to balanced forces and torques on each. If the contact network is
minimally connected, the forces can be determined without reference to
deformations of the particle. But if the network has additional
constraints, it is impossible to satisfy these without considering
deformation. This is no less true if the beads are presumed very rigid.
We first give an example to show that in a generic packing the
deformability alters the force distribution substantially. We then give a
prescription for defining the deformation and hence the contact forces
unambiguously.
\par
In our example we imagine a two-dimensional sequential packing and focus on
a particular bead, labeled 0, as pictured in Figure \ref{threebeads}. We
presume that the beads are deposited gently, so that each contact forms
without tangential force. Thus when the bead is deposited, it is minimally
connected: its weight determines the two (normal) supporting forces,
labeled 1 and 2. Thenceforth no slipping is allowed at the contact. Later
during the construction of the pack bead 0 must support the force from some
subsequent bead. This new force is normal, since it too arises from an
initial contact. But the new force creates tangential forces on the
supporting contacts 1 and 2. To gauge their magnitude, we first suppose
that there is no friction at contacts 1 and 2, while the supporting beads
remain immobile. Then the added force $F$ leads to a compression. We
denote the compression of the contact 1 as $\delta$. With no friction, the
contact 2 would undergo a slipping displacement by an amount of order
$\delta$. Friction forbids this slipping and decrees deformation of the
contact instead. The original displacement there would create an elastic
restoring force of the same order as the original $F$. Thus the imposition
of friction creates new forces whose strength is comparable to those
without friction. The frictional forces are not negligible, even if the
beads are rigid. Increasing the rigidity lessens the displacements $\delta$
associated with the given force $F$, but it does not alter the ratio of
frictional to normal forces.
Neither are the frictional forces large compared to the normal forces.
Thus a coefficient of friction $\mu$ of order unity should be sufficient to
generate enough frictional force to prevent slipping of a substantial
fraction of the contacts.
\par
The contact forces $T_1$ and $T_2$ cannot be determined by force balance
alone, as they could in the frictionless case. Now the actual contact
forces are those which minimize the elastic energy of deformation near the
two contacts. This argument holds not just for spheres but for general
rounded objects.
\par
Though the new tangential forces complicate the determination of the
forces, the determination need not be ambiguous. We illustrate this point
for a sequential packing on a bumpy surface with perfect friction. We
choose a placement of the successive beads so that no contact
re-arrangements occur. If only a few beads have been deposited in the
container, the forces are clearly well determined. Further, if the forces
are presumed well-determined up to the $M$th bead, they remain so upon
addition of the $(M+1)$st bead. We presume as before that the new bead
exerts only normal forces on its immediate supporters. Each supporter thus
experiences a well-defined force, as shown in Section \ref{counting}. But
by hypothesis, these supporting beads are part of a well-connected, solid
object, whose contacts may be regarded as fastened together. Thus the
displacements and rotations of each bead are a well-defined function of any small
applied load. Once the $(M+1)$st bead has been added, its supporting
contacts also support tangential force, so that it responds to future loads
as part of the elastic body.
\par We conclude that a sequential packing with perfect friction, under
conditions that prevent contact rearrangements, transmits forces like a
solid body. Small changes in stress $\delta \sigma^{ij}$ in a region give
rise to proportional changes in the strain $\delta \gamma^{k \ell}$. This
proportionality is summarized by an elasticity tensor $K^{ijk\ell}$:
$\delta \sigma^{ij} = K^{ijk\ell} \delta \gamma^{k\ell}$. The elastic
tensor $K$ should depend in general on how the pack was formed; thus it may
well be anisotropic.
\par
This elastic picture is compromised when the limitations of friction are
taken into account. As new beads are added, underlying contacts such as
contacts 1 and 2 of Figure \ref{threebeads} may slip if the tangential
force becomes too great. Each slipping contact relaxes so as to satisfy a
fixed relationship between its normal force $N$ and its tangential force
$T$: \textit{viz.} $|T| = \mu |N|$. If $\mu$ were very small, virtually all the
contacts would slip until their tangential force were nearly zero. Then
the amount of stress associated with the redundant constraints must become
small and the corresponding elastic moduli must become weak. Moreover, as
$\mu$ approaches 0, the material on any given scale must become difficult
to distinguish from a frictionless material with unidirectional stress
propagation. Still, redundant constraints remain on the average and thus
the ultimate behavior at large length scales (for a given $\mu$) must be
elastic, provided the material remains
homogeneous.
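The slip rule $|T| = \mu |N|$ amounts to projecting each over-stressed contact back onto the friction cone. A minimal sketch, with made-up contact forces purely for illustration:

```python
# Coulomb criterion at a set of contacts: any contact whose tangential force T
# exceeds mu*|N| slips and relaxes onto the cone |T| = mu*|N|.
def relax_contacts(contacts, mu):
    relaxed = []
    for N, T in contacts:
        limit = mu * abs(N)
        if abs(T) > limit:               # this contact slips
            T = limit if T > 0 else -limit
        relaxed.append((N, T))
    return relaxed

contacts = [(1.0, 0.05), (2.0, 1.5), (0.5, -0.4)]   # (N, T), illustrative numbers
out = relax_contacts(contacts, mu=0.3)
# the second and third contacts slip; the first is below the friction cone
```

For small $\mu$ the cone collapses toward $T = 0$, which is how the frictionless limit is approached.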
\par
\subsection{Force-generated contacts}
Throughout the discussion of frictionless packs we have ignored geometric
configurations with probability zero, such as beads with redundant
contacts. Such contacts occur in a close-packed lattice of identical disks,
for example. Though such configurations are arbitrarily rare in principle,
they may nevertheless play a role in real bead packs. Real bead packs have
a finite compressibility; compression of beads can create redundant
contacts. Thus for example a close-packed lattice of identical disks has
six contacts per bead, but if there is a slight variability in size, the
number of contacts drops to four. The remaining two beads adjacent to a
given bead do not quite touch. These remaining beads can be made to touch
again if sufficient compressive stress is applied. Such stress-induced
redundant contacts must occur in a real bead pack with some nonzero density
under any nonzero load. These extra contacts serve to stabilize the pack,
removing the indeterminate forces discussed in Section \ref{counting}.
To estimate the importance of this effect, we consider a large bead pack
compressed by a small factor $\gamma$. This overall strain compresses a
typical contact by a factor of order $\gamma$ as well. The number of new
contacts may be inferred from the pair correlation function $g(r)$. Data
on this $g(r)$ is available for some computer-generated sequential packings
of identical spheres of radius $R$\cite{Pavlovitch}. These data show that
$g(r)$ has a finite value near 1 at $r=2R$. Thus the number of additional
contacts per bead that form under a slight compression by an amount $\delta
r$ is given by $\delta \overline{Z} = 6\phi g(2R)\delta r/R \simeq 4
\gamma$. Here $\phi\simeq .6$ is the volume fraction of the beads.
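The estimate $\delta \overline{Z} \simeq 4\gamma$ is simple arithmetic; a small numeric version, using the quoted values $\phi \simeq 0.6$ and $g(2R) \simeq 1$:

```python
# Extra contacts per bead from a small overall strain gamma:
# dZ = 6 * phi * g(2R) * (dr/R), with dr/R of order gamma.
phi, g_contact = 0.6, 1.0          # values quoted in the text

def extra_contacts_per_bead(gamma):
    return 6 * phi * g_contact * gamma

dZ = extra_contacts_per_bead(1e-4)
# 6*0.6*1e-4 = 3.6e-4, i.e. roughly 4*gamma as stated
```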
These extra contacts impose constraints that reduce the number of
undetermined boundary forces in a compact region containing $M'$ beads and
roughly $M'^{2/3}$ surface beads. The remaining number of undetermined
boundary forces now averages $\frac{1}{2} N_{\rm ext} - M'\delta \overline{Z}$.
The first term is of order $M'^{2/3}$, and thus becomes smaller than the
second term once $M'^{1/3} \gtrsim (\delta\overline{Z})^{-1}$. For $M'$
larger than this amount, there are no further undetermined forces and the
region becomes mechanically stable.
Moukarzel\cite{Moukarzel.PRL} reaches a similar conclusion by a somewhat
different route.
\par
If the pack is compressed by a factor of $\gamma$, stability occurs for
$M'^{1/3} \gtrsim 1/\gamma$---a region roughly $1/\gamma$ beads
across. In a typical experiment \cite{Nagel.Liu} the contact compression
$\gamma$ is $10^{-4}$ or less, and the system size is far smaller than
$10^4$ beads.
Thus compression-induced stability should be a minor effect here.
Still, this compression-induced stability might well play a significant
role for large and highly compressed bead packs such as compressed
emulsions\cite{Langer.Liu}. In some of the large packs of Ref.
\cite{load}, compression-induced stability may also be important.
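The counting argument above can be summarized in a few lines. In the sketch below the $O(1)$ prefactor \texttt{a} of the surface term is an assumption (set to 1), and $\delta\overline{Z} = 4\times 10^{-4}$ corresponds to the experimental compression $\gamma = 10^{-4}$:

```python
# Undetermined boundary forces in a region of M beads:
# roughly a*M**(2/3) surface unknowns minus M*dZ stress-induced constraints.
def undetermined(M, dZ, a=1.0):
    return a * M ** (2 / 3) - M * dZ

dZ = 4e-4            # dZ ~ 4*gamma with gamma = 1e-4
M_star = dZ ** -3    # crossover size: M**(1/3) ~ 1/dZ
# below M_star some boundary forces remain undetermined; above it they do not
```

With these numbers the stable region is $\sim 2500$ beads across, which is why compression-induced stability is negligible for the experiments cited.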
\subsection{Experimental evidence}
We have argued above that undeformed, frictionless beads should show
unidirectional, propagating forces while beads with friction should show
elastic spreading of forces. The most direct test of these contrasting
behaviors is to measure the response to a local perturbing force
\cite{Claudin.PRL.1997}.
Thus, \textit{e.g.}, if the pile of Figure \ref{elastic.vs} is a null-stress medium,
the local perturbing force should propagate along a light cone and should
thus be concentrated in a ring-like region at the
bottom\cite{Bouchaud.Cates.PRL}. By contrast, if the pile is an elastic
medium the perturbing force should be spread in a broad pattern at the
bottom, with a maximum directly under the applied force.
Existing experimental information seems inadequate to test either
prediction, but experiments to measure such responses are in progress
\cite{response.experiments}.
As noted above, emulsions and colloids are good realizations of the
frictionless case. The contacts in such systems are typically organized by
hydrostatic pressure or by flow, rather than by gravity. Still, our
general arguments for unidirectional propagation should apply. Extensive
mechanical measurements of these systems have been made
\cite{Weitz.Mason,VanderWerff}. The shear modulus study of Weitz and Mason
\cite{Weitz.Mason} illustrates the issues. The study spans the range from
liquid-like behavior at low volume fractions to solid-like behavior at high
volume fractions. In between these two regimes should lie a state where
the emulsion droplets are well connected but little deformed. The emulsion
in this state should show unidirectional force transmission. It is not
clear how this should affect the measured apparent moduli.
Other indirect information about force propagation comes from the load
distribution of a granular pack on its container, such as the celebrated
central dip under a conical heap of sand \cite{load}. These data amply
show that the mechanical properties of a pack depend on how it was
constructed. Theories postulating null-stress behavior have successfully
explained these data \cite{Wittmer.Nature}. But conventional
elasto-plastic theories have also proved capable of producing a central dip
\cite{Goddard}. An anisotropic elastic tensor may also be capable of
explaining the central dip.
Another source of information is the statistical distribution of individual
contact forces within the pack or at its boundaries. The measured forces
become exponentially rare for strong
forces\cite{Coppersmith.etal,Mueth}. Such exponential falloff is predicted by
Coppersmith's ``q model" \cite{Coppersmith.etal}, which postulates
unidirectional force propagation. Still, it is not clear whether this
exponential falloff is a distinguishing feature of unidirectional propagation.
A disordered elastic material might well show a similar exponential
distribution.
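The exponential tail predicted by the q model is easy to reproduce in a toy Monte-Carlo. The lattice width, depth, and uniform $q$-distribution below are illustrative choices, not parameters taken from Ref.~\cite{Coppersmith.etal}:

```python
import random

# q-model sketch: each bead passes its total load to its two supporters in the
# layer below with random fractions q and 1-q; each bead also adds unit weight.
def q_model_forces(width=200, depth=50, seed=0):
    rng = random.Random(seed)          # fixed seed for reproducibility
    forces = [1.0] * width
    for _ in range(depth):
        below = [1.0] * width          # next layer's own weight
        for i, f in enumerate(forces):
            q = rng.random()
            below[i] += q * f
            below[(i + 1) % width] += (1 - q) * f
        forces = below
    mean = sum(forces) / width
    return [x / mean for x in forces]  # normalize by the layer mean

f = q_model_forces()
# large normalized forces are exponentially rare in the deep layers
```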
Computer simulations should also be able to test our predictions. Recent
simulations\cite{Thornton,Langer.Liu} have focussed on stress-induced
restructuring of the force-bearing contact network. We are not aware of a
simulation study of the transmission of a local perturbing force. Such a
perturbation study seems quite feasible and would be a valuable test. We have
performed a simple simulation to test the mean-field description of stress in
frictionless packs. Preliminary results agree well with the predictions. An
account of our simulations will be published separately.
\section{Conclusion}
In this study we have aimed to understand how force is transmitted in
granular media, whether via elastic response or via unidirectional
propagation. We have identified a class of disordered systems that ought
to show unidirectional propagation. Namely, we have shown that in a general
case a system of frictionless rigid particles must be isostatic, or
minimally connected. That is, all the
inter-particle forces can in principle be determined from the force balance
equations. This contrasts with statically undetermined, elastic
systems, in which the forces cannot be determined without
self-consistently finding the displacements induced by those forces. Our
general equation-counting arguments suggest that the isostatic property of the
frictionless packing results in the unidirectional propagation of the
macroscopic stress.
We were able to demonstrate this unidirectional propagation explicitly by
specializing to the case of sequential packing. Here the stress obeys a
condition of the previously postulated null-stress form\cite{Wittmer.Nature};
our system provides a microscopic justification for the null-stress postulate.
Further, we could determine the numerical coefficients entering the null-stress
law from statistical averages of the contact angles by using a mean field
hypothesis (decoupling Ansatz). We have devised a numerical simulation to test
the adequacy of the sequential packing assumption and the mean-field
hypothesis. The results will be reported elsewhere.
If we add friction in order to describe macroscopic granular packs more
accurately, the packing of rigid particles no longer needs to be isostatic, and
the system is expected to revert to elastic behavior. This elasticity
does not
arise from softness of the beads or from a peculiar choice of contact network.
It arises because contacts that provide only minimal constraints when created
can provide redundant constraints upon further loading.
We expect our formalism to be useful in understanding experimental granular
systems. It is most applicable to dense colloidal suspensions, where static
friction is normally negligible. Here we expect null-stress behavior to emerge
at scales large enough that the suspension may be considered uniform. We
further expect that our mean-field methods will be useful in estimating the
coefficients in the null-stress laws. In macroscopic granular packs our
formalism is less applicable because these packs have friction. Still, this
friction may be small enough in many situations that our picture remains
useful. Then our microscopic justification may account for the practical
success of the null-stress postulate\cite{Wittmer.Nature} for these systems.
\section*{Acknowledgement}
The authors are grateful to the Institute for Theoretical Physics in Santa
Barbara, for hospitality and support that enabled this research to be
initiated. The authors thank the participants in the Institute's program on
Jamming and Rheology for many valuable discussions. Many of the ideas reported
here had their roots in the floppy network theory of Shlomo Alexander. The
authors dedicate this paper to his memory. This work was supported in part by
the National Science Foundation under Award numbers PHY-94 07194, DMR-9528957
and DMR 94--00379.
\section{Introduction}
Critical phenomena in gravitational collapse have attracted much
attention \cite{Gu1997} since the pioneering work of Choptuik
\cite{Ch1993}. From the known results the following
emerges \cite{WO1997}: In general critical collapse of a
fixed matter field can be divided into three different classes
according to the self-similarities that the critical solution possesses.
If the critical solution has no self-similarity, continuous or discrete,
the formation of black holes always starts with a mass gap
(Type I collapse), otherwise it will start with zero mass
(Type II collapse), and the mass of black holes takes
the scaling form
$
M_{BH} \propto (P - P^{*})^{\gamma}$,
where $P$ characterizes the strength
of the initial data.
In the latter case, the collapse can be further
divided into two subclasses according to whether
the critical solution has continuous self-similarity (CSS)
or discrete self-similarity (DSS). Because of this difference,
the exponent $\gamma$ is usually
also different. Whether the critical solution is CSS, DSS, or neither
depends on both the matter field
and the region of the initial data space
\cite{Gu1997}. The co-existence of Type I and Type II collapse
was first found in the SU(2) Einstein-Yang-Mills case \cite{CCB1996},
and later extended to both the Einstein-scalar case \cite{CLH1997}
and the Einstein-Skyrme case \cite{BC1998},
while the co-existence of CSS and DSS critical solutions was
found in the Brans-Dicke theory \cite{LC1996}. The uniqueness of the
exponent in Type II collapse is well understood in terms of perturbations
\cite{HKA1996}, and is closely related to the fact that the critical
solution has only one unstable mode. This property now is considered as
the main criterion for a solution to be critical \cite{Gu1997}.
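The scaling relation above is what one fits in practice: $\gamma$ is the slope of $\ln M_{BH}$ versus $\ln(P - P^{*})$. A sketch with synthetic data; the input value $\gamma = 0.37$ (familiar from scalar-field collapse) is purely illustrative and is recovered by the fit:

```python
import math

# Synthetic Type II scaling data M_BH = (P - P*)**gamma, then a least-squares
# fit of the slope of log(M) vs log(P - P*).
P_star, gamma_true = 1.0, 0.37
Ps = [1.0 + 10 ** (-k) for k in range(1, 8)]
masses = [(P - P_star) ** gamma_true for P in Ps]

xs = [math.log(P - P_star) for P in Ps]
ys = [math.log(M) for M in masses]
n, sx, sy = len(xs), sum(xs), sum(ys)
gamma_fit = (n * sum(x * y for x, y in zip(xs, ys)) - sx * sy) \
            / (n * sum(x * x for x in xs) - sx ** 2)
# gamma_fit recovers gamma_true since the data are an exact power law
```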
While the uniqueness of the exponent $\gamma$ crucially
depends on the number of unstable
modes of the critical solution, whether or not the formation
of black holes starts with a mass gap
seemingly depends only on whether
the spacetime has self-similarity. Thus, even if the
collapse is not critical, if a spacetime has CSS or DSS
the formation of black holes may still turn on
with zero mass. If this speculation is correct, it may have
profound physical implications. For example, if Nature forbids
the formation of zero-mass black holes, which are essentially naked
singularities \cite{Gu1997}, it means that Nature forbids solutions
with self-similarity \cite{Si1996}. To study this problem
in its generality term, it is found difficult. In \cite{WRS1997},
which will be referred as Paper I, gravitational
collapse of massless scalar field and radiation fluid is studied, and
it was found that when solutions have CSS, the formation of black holes
indeed starts with zero-mass, while when solutions have no
self-similarity it starts with a mass gap.
In this Letter, we shall generalize the studies given in Paper I
to the case of perfect fluid with the equation of state $p = k
\rho$, where $\rho$ is the energy density of the fluid, $p$ the
pressure, and $k$ an arbitrary constant, subjected to $0 \le k \le 1$.
We shall show that the emerging results are consistent with the ones
obtained in Paper I. Specifically, we shall present two
classes of exact solutions to the
Einstein field equations that represent spherical gravitational
collapse of perfect fluid, one has CSS, and the other has neither
CSS nor DSS. It is found that the
black holes formed in this way usually do not have finite masses. To remedy
this shortcoming, we shall cut the spacetime along a time-like
hypersurface, say, $r = r_{0}(t)$, and then join the internal region
$r \le r_{0}(t)$ with an asymptotically flat out-going Vaidya
radiation fluid, using Israel's method \cite{Is1965}. It turns out that
in general such a junction is possible only when
a thin matter shell is present
on the joining hypersurface \cite{BOS1989}. Thus, the finally
resulting spacetimes
will represent the collapse of a compact packet of
perfect fluid plus a thin matter shell. The effects of the thin shell
on the collapse are also studied. It should be noted that
by properly choosing the
solution in the region $r \ge r_{0}(t)$, in principle one
can make the thin shell disappear, although in this Letter we shall not
consider such possibilities. The notations will closely follow
the ones used in Paper I.
\section{Exact Solutions Representing Gravitational Collapse of
perfect Fluid}
In this section, we shall present two classes
of solutions to the Einstein
field equations,
$$
R_{\mu\nu} - \frac{1}{2}g_{\mu\nu}R =
(\rho + p)u_{\mu}u_{\nu} - p g_{\mu\nu},
$$
where $u_{\mu}$ is the four-velocity of the perfect fluid considered.
The general metric of spherically symmetric spacetimes that are conformally
flat is given by \cite{WRS1997}
\begin{equation}
\label{eq2}
ds^{2} = G(t, r) \left[ dt^{2} - h^{2}(t, r)\;\left(dr^{2}
+ r^{2}d\Omega^{2}\right)\right],
\end{equation}
where $d\Omega^{2} \equiv d\theta^{2} + \sin^{2}\theta d\varphi^{2}$,
$\{x^{\mu}\}\equiv \{t, r, \theta, \varphi\}\; (\mu = 0, 1, 2, 3)$ are
the usual spherical coordinates, and
\begin{equation}
\label{eq3}
h(t, r) = \left\{
\begin{array}{c}
1, \\
{\left(f_{1}(t) + r^{2}\right)}^{-1},
\end{array}
\right.
\end{equation}
with $f_{1}(t) \not= 0$. The Friedmann-Robertson-Walker (FRW) metric
corresponds to $G(t, r) = G(t)$ and $f_{1}(t) = Const.$
The corresponding Einstein field equations
are given by Eqs.(2.20) - (2.23) in Paper I. Integrating those equations,
we find two classes of solutions. In the following, we shall
present them separately.
{\bf $\alpha$) Class A Solutions:} The first class of the solutions
is given by
\begin{eqnarray}
\label{eq4}
G(t, r) &=& (1 - Pt)^{2\xi}, \;\;\; \;\;\; \;\;\; h(r) = 1,\nonumber\\
p = k \rho &=& 3 k \xi^2 P^2\left( 1 - Pt
\right)^{-2(\xi+1)},\;\;
u_{\mu} = \left(1 - Pt \right)^{-\xi} \delta^{t}_{\mu},
\end{eqnarray}
where $P$ is a constant and characterizes the strength of the solutions
(See the discussions given below),
and $\xi \equiv 2/(1 + 3k)$. This class of solutions is actually the
FRW solutions and has CSS symmetry \cite{PZ1996}. However, in this Letter
we shall study them in the context of gravitational collapse.
To study the physical properties of these
solutions, following Paper I
we consider the following physical quantities,
\begin{eqnarray}
\label{eq9}
m^{f}(t, r) &\equiv& \frac{R}{2}(1 +
R_{,\alpha}R_{,\beta} g^{\alpha \beta})
= \frac{\xi^{2}P^{2}r^{3}}{2(1 - Pt)^{2 - \xi}},\nonumber\\
{\cal{R}} &\equiv& R_{\alpha \beta\gamma\delta}
R^{\alpha \beta\gamma\delta} = \frac{18\xi^{2}(1+\xi^{2})P^{4}}
{(1 - Pt)^{4(1+\xi)}},
\end{eqnarray}
where $R$ is the physical radius of the two sphere $t, r = Const.$, and
$m^{f}(t, r)$ is the local mass function \cite{PI1990}. From Eq.(\ref{eq9})
we can see that the spacetime is singular on the space-like hypersurface
$t = P^{-1}$. The nature of the singularity depends on the signature of the
parameter $P$. When $P < 0$, it is naked, and the corresponding solutions
represent white holes \cite{WRS1997}. When $P = 0$, the
singularity disappears and the corresponding spacetime is Minkowski.
When $P > 0$, the singularity hides behind the apparent horizon, which
locates on the hypersurface,
\begin{equation}
\label{eq10}
r = r_{AH} \equiv \frac{1 - P t}{\xi P},
\end{equation}
with $r_{AH}$ being a solution of the equation $R_{,\alpha}R_{,\beta}
g^{\alpha \beta} = 0$. Thus, in the latter case the solutions represent the
formation of black holes due to the gravitational collapse of the fluid.
The corresponding Penrose diagram is similar to that given by Fig.1(a)
in Paper I. Note that, although the spacetime
singularity is always space-like, the nature of the apparent horizon
depends on the choice of the parameter $k$. In fact,
when $1/3 < k \le 1$, it is space-like; when $ k = 1/3$, it is
null; and when $ 0 \le k < 1/3$, it is time-like.
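The location (\ref{eq10}) follows from a short computation. With $h = 1$ the
physical radius is $R = r\sqrt{G}$, so that
\begin{displaymath}
R_{,\alpha}R_{,\beta}g^{\alpha \beta}
= \frac{1}{G}\left(R_{,t}^{2} - R_{,r}^{2}\right)
= \frac{r^{2}G_{,t}^{2}}{4G^{2}} - 1,
\end{displaymath}
which vanishes when $r G_{,t}/(2G) = \pm 1$. For $G = (1 - Pt)^{2\xi}$ one has
$G_{,t}/(2G) = -\xi P/(1 - Pt)$, which yields Eq.(\ref{eq10}).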
Substituting Eq.(\ref{eq10}) into Eq.(\ref{eq9}), we find that $m^{f}_{AH}(t,
r_{AH}) = (P\xi)^{\xi}r_{AH}^{1 + \xi}/2$. Thus, as $r_{AH} \rightarrow +
\infty$, we have $m^{f}_{AH} \rightarrow + \infty$. That is, the total mass
of the black hole is infinitely large. To get a physically reasonable
model, one way is to cut the spacetime along a
time-like hypersurface, say, $r = r_{0}(t)$, and then join the part $r \le
r_{0}(t)$ with one that is asymptotically flat \cite{WO1997}.
We shall consider such junctions in the next section.
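The horizon mass can be verified numerically from Eqs.(\ref{eq9}) and (\ref{eq10}). In the check below, $k = 1/3$ (radiation, $\xi = 1$) and $P = 0.2$ are arbitrary illustrative values:

```python
# Evaluate the mass function m^f of eq9 on the apparent horizon r_AH of eq10
# and compare with the closed form (xi*P)**xi * r_AH**(1+xi) / 2.
k = 1 / 3
xi = 2 / (1 + 3 * k)
P, t = 0.2, 1.0                                  # illustrative values
r_AH = (1 - P * t) / (xi * P)
m_f = xi**2 * P**2 * r_AH**3 / (2 * (1 - P * t) ** (2 - xi))
closed_form = (xi * P) ** xi * r_AH ** (1 + xi) / 2
# the two expressions agree; here both equal 1.6
```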
{\bf $\beta$) Class B Solutions:} The second class of solutions
are given by
\begin{equation}
\label{eq11}
G(t,r) = \sinh^{2\xi}\left[2\alpha{\xi^{-1}}
(t_{0} - \epsilon t)\right], \;\;\;
h(r) = (r^{2} - \alpha^{2})^{-1},
\end{equation}
where $\epsilon = \pm 1$, $\xi$ is defined as in Eq.(\ref{eq4}),
$t_{0}$ and $\alpha (\equiv \sqrt{-f_{1}})$ are
constants. Introducing
a new radial coordinate $\bar{r}$ by $d\bar{r} = h(r)dr$,
the corresponding metric can be written in the form
\begin{equation}
\label{eq13}
ds^{2} = \sinh^{2\xi}[2\xi^{-1}(t_{0} - \epsilon t)]\left\{
dt^{2} - d{r}^{2} - \frac{\sinh^{2}(2
{r})}{4}
d^{2}\Omega\right\}.
\end{equation}
Note that in writing the above equation we have, without
loss of generality, chosen $\alpha = 1$, and dropped the bar
from $\bar{r}$. The energy density and
four-velocity of the fluid are given, respectively, by
\begin{equation}
\label{eq14}
p = k \rho = 12k
\sinh^{-2(\xi+1)}[2(t_{0} - \epsilon t)/\xi],\;\;
u_{\mu} = \sinh^{-\xi}[2(t_{0} -
\epsilon t)/\xi]\delta^{t}_{\mu},
\end{equation}
while the relevant physical quantities are given by
\begin{eqnarray}
\label{eq15}
m^{f}(r,t) &=& {1 \over 4 } \sinh^3(2 {r} )
\sinh^{\xi -2} {\left[ {2\xi^{-1}} {( t_0 - \epsilon t)}
\right]}, \nonumber\\
{\cal{R}} &=& {288\left(1+ \xi^2 \right) \xi^{-2}}
\sinh^{-4 (\xi +1)}
{\left[ {2 \xi^{-1}} {\left( t_0 - \epsilon t\right)}
\right]}.
\end{eqnarray}
The apparent horizon now is located at
\begin{equation}
\label{eq16}
r = r_{AH} \equiv { \xi^{-1} } (t_0 - \epsilon t).
\end{equation}
From Eq.(\ref{eq15}) we can see that the solutions are singular on the
hypersurface $t = \epsilon t_{0}$. When $\epsilon = - 1$ it can be
shown that the corresponding solutions represent cosmological models
with a naked singularity at the initial time $t = - t_{0}$, while when
$\epsilon = + 1$ the singularity is hidden behind the apparent horizon
given by Eq.(\ref{eq16}), and the solutions represent
the formation of black holes due to the collapse of the fluid.
In the latter case the total mass of black holes is also infinitely large.
To remedy this shortage, in the next section we
shall make ``surgery" to this spacetime, so that
the resulting black holes have finite masses.
\section{Matching the Solutions with Outgoing Vaidya Solution}
In order to have the black hole mass finite, we shall first cut the
spacetimes represented by the solutions given by Eqs.(\ref{eq4}), and
(\ref{eq13}) along a time-like hypersurface, and then join the internal
part with the out-going Vaidya radiation fluid.
In the present two cases since the perfect fluid is
comoving, the hypersurface can be chosen as $r = r_{0} = Const.$
Thus, the metric in the whole spacetime can be written in the
form
\begin{equation}
\label{eq17}
ds^2 = \left\{
\begin{array}{c}
A(t,r)^2 dt^2 - B(t,r)^2dr^{2} -
C(t, r)^2d\Omega^2, \;(r \le r_{0}), \\
\left( 1- \frac{2m(v)}{R}\right)dv^2 + 2dvdR
- R^2d\Omega^2, \; (r \ge r_{0}),
\end{array}
\right.
\end{equation}
where the functions $A(t, r),\; B(t, r)$ and $C(t, r)$ can be
read off from Eqs.(\ref{eq2}), (\ref{eq4}) and (\ref{eq13}). On
the hypersurface $r= r_{0}$ the metric reduces to
\begin{equation}
\label{eq18}
ds^2 \left|_{r = r_{0}} = \right. g_{ab}d\xi^{a}d\xi^{b} =
d\tau^2 - R(\tau)^2d\Omega^2,
\end{equation}
where $\xi^{a} = \{\tau, \theta, \varphi\}$ are the intrinsic
coordinates of the surface, and $\tau$ is defined by
\begin{equation}
\label{eq19}
d\tau^{2} = A^{2}(t, r_{0})dt^{2}
= \left(1 - \frac{2M(\tau)}{R}\right)dv^{2}
+ 2dvdR,
\end{equation}
where $v$ and $R$ are functions of $\tau$ on the surface,
and $R(\tau) \equiv C(t, r_{0}),\; M(\tau) \equiv m(v(\tau))$.
The extrinsic curvature on the two sides
of the surface defined by
\begin{equation}
\label{eq20}
K^{\pm}_{ab} = - n^{\pm}_{\alpha}\left[
\frac{\partial^{2}x^{\alpha}_{\pm}}{\partial \xi^{a}
\partial \xi^{b}}
- \Gamma^{\pm \alpha}_{\beta\delta}\frac{\partial
x^{\beta}_{\pm}}
{\partial \xi^{a}}\frac{\partial x^{\delta}_{\pm}}
{\partial \xi^{b}}\right],
\end{equation}
has the following non-vanishing components \cite{Ch1997}
\begin{eqnarray}
\label{eq21}
K_{\tau\tau}^{-} &=& - \frac{{\dot t}^2 A_{,r} A}{B},\;\;
K_{\theta\theta}^{-} = \sin^{- 2}\theta K_{\varphi\varphi}^{-}=
\frac{C_{,r} C}{B},\nonumber\\
K_{\tau\tau}^+ &=& \frac{\ddot v}{\dot v} -
\frac{{\dot v} M(\tau)}{R^2},\;\;
K_{\theta\theta}^+ = \sin^{-2}\theta K_{\varphi\varphi}^+
= R\left\{
{\dot v}\left( 1- \frac{2M(\tau)}{R}\right) + \dot{R} \right\},
\end{eqnarray}
where $\dot{t} \equiv dt/d\tau,\; (\;)_{,\mu} \equiv
\partial(\;)/\partial x^{\mu}$ and $n_{\alpha}^{\pm}$ are the normal
vectors defined in the two faces of the surface. Using the expression
\cite{Is1965}
\begin{equation}
\label{eq22}
\left[K_{ab}\right]^{-} - g_{ab}\left[K\right]^{-} = - 8\pi \tau_{ab}
\end{equation}
we can calculate the surface energy-momentum tensor $\tau_{ab}$, where
$\left[K_{ab}\right]^{-} = K_{ab}^{+} - K_{ab}^{-},\; [K]^{-} = g^{ab}
\left[K_{ab}\right]^{-}$, and $g_{ab}$ can be read off from
Eq.(\ref{eq18}). Inserting Eq.(\ref{eq21})
into the above equation, we find that $\tau_{ab}$
can be written in the form
\begin{equation}
\label{25}
\tau_{ab} = \sigma w_a w_b +
\eta \left(\theta_a \theta_b + \phi_a \phi_b\right),
\end{equation}
where $w_{a}, \; \theta_{a}$ and $\phi_{a}$ are unit vectors
defined on the surface, given respectively by
$
w_a = \delta^{\tau}_a,\;
\theta_a = R\delta^\theta_a,\;
\phi_a = R \sin\theta\delta^\varphi_a$,
and $\sigma$ can be interpreted as the surface energy density,
$\eta$ the tangential pressure, provided that they satisfy
certain energy conditions \cite{HE1973}. In the present case $\sigma$
and $\eta$ are given by
\begin{eqnarray}
\label{eq27}
\sigma &=& \frac{1}{4\pi R}
\left\{\dot{R} - \frac{1}{\dot{v}} + J'(r_{0})\right\},
\nonumber\\
\eta & = & \frac{1}{16\pi R \dot{v}}
\left\{ \dot{v}^{2} - 2\ddot{v}R - 2\dot{v}J'(r_{0}) + 1
\right\},
\end{eqnarray}
where $J(r) = r$ for Class A solutions, and $J(r) = \sinh(2r)/2$ for
Class B solutions, and a prime denotes the ordinary differentiation
with respect to the indicated argument. Note that
in writing Eq.(\ref{eq27}) we have used Eq.(\ref{eq19}), from which
it is found that the total mass of the collapsing ball,
which includes the contribution
from both the fluid and the shell, is given by
\begin{equation}
\label{eq28}
M(\tau) = \frac{R}{2\dot{v}^{2}}
\left(\dot{v}^{2} + 2\dot{v}\dot R - 1\right).
\end{equation}
To fix the spacetime outside
the shell we need to give the equation of state of the shell. In
order to minimize the effects of the shell on the collapse, in the
following we shall consider the case $\eta = 0$, which reads
\begin{equation}
\label{eq29}
{\dot v}^2 - 2 {\ddot v } R - 2 J'(r_{0}){\dot v} + 1 = 0.
\end{equation}
To solve the above equation, let us consider the two classes
of solutions separately.
\subsection{Class A Solutions}
In this case, it can be shown that Eq.(\ref{eq29}) has the first
integral,
\begin{equation}
\label{eq30}
\dot{v}(\tau) = \frac{x - 2(v_{0} - 1)R_{0}}{x - 2v_{0}R_{0}},
\end{equation}
where
$R(\tau) \equiv R_{0}x^{\xi},\;
R_{0} \equiv r_{0}P^{\frac{\xi}{\xi + 1}},\;
x \equiv \left[(\xi + 1)(\tau_{0} - \tau)\right]^{\frac{1}{\xi + 1}}
$, and $v_{0}$ and $\tau_{0}$ are integration constants.
Substituting the above expressions into Eq.(\ref{eq28}), we find that
\begin{equation}
\label{eq31}
M(x) = \frac{R_{0}^{2}x^{\xi -1}}{[x -2(v_{0} - 1)R_{0}]^{2}}
\left\{(2 - \xi)x^{2} + 2(\xi - 1)(2v_{0} - 1)R_{0}x
+ 4 \xi v_{0}(1 - v_{0})R_{0}^{2}\right\}.
\end{equation}
At the moment $\tau = \tau_{AH}\; $(or $x = x_{AH} = \xi R_{0}$),
the shell collapses inside the apparent horizon. Consequently, the
total mass of the formed black hole is given by
\begin{equation}
\label{eq32}
M_{BH} \equiv M(x_{AH}) =
\frac{\xi^{\xi}r_{0}^{1+\xi}P^{\xi}}
{[\xi - 2(v_{0} - 1)]^{2}}\left\{\xi(2-\xi) + 2(\xi -1)(2v_{0} - 1)
+ 4 v_{0}(1 - v_{0})\right\},
\end{equation}
which is finite and can be positive
by properly choosing the parameter $v_{0}$ for any given $\xi$.
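A quick scan of Eq.(\ref{eq32}) confirms this. In the sketch below $\xi = 1$ (radiation) and $r_{0} = P = 1$ are illustrative values; only the sign of the braced factor is at issue:

```python
# Black-hole mass of eq32 as a function of the junction parameter v0.
def M_BH(v0, xi=1.0, r0=1.0, P=1.0):
    brace = xi * (2 - xi) + 2 * (xi - 1) * (2 * v0 - 1) + 4 * v0 * (1 - v0)
    return xi**xi * r0 ** (1 + xi) * P**xi * brace / (xi - 2 * (v0 - 1)) ** 2

masses = {v0: M_BH(v0) for v0 in (0.25, 0.5, 0.75)}
# e.g. for xi = 1, v0 = 0.5: brace = 2, denominator = 4, so M_BH = 0.5
```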
The contribution of the fluid and the thin shell to the black
hole mass is given, respectively, by \footnote{
While a unique definition of the total mass of a
thin shell is still absent,
here we simply define it as $m^{shell}_{BH} \equiv 4 \pi R^{2} \sigma$.
Certainly we can
equally use other definitions, such as $m^{shell}_{BH}
\equiv M_{BH} - m^{f}_{BH} $,
but our final conclusions will not depend on them.
}
\begin{eqnarray}
\label{eq33}
m^{f}_{BH} &\equiv& m^{f}_{AH}(\tau_{AH}) =
\frac{\xi^{\xi}r_{0}^{1+\xi}}{2}P^{\xi},\nonumber\\
m^{shell}_{BH} &\equiv& 4\pi R^{2}(\tau_{AH})\sigma(\tau_{AH})
= \frac{\xi^{\xi}r_{0}^{1+\xi}(2v_{0} - \xi)}
{\xi - 2(v_{0} - 1)}P^{\xi}.
\end{eqnarray}
From the above equations we can see that all the three masses are
proportional to $P$, the parameter that characterizes the strength
of the initial data of
the collapsing ball. Thus, when the initial data is very weak
($P \rightarrow 0$), the mass of the formed black hole is very small
($M_{BH} \rightarrow 0$). In principle, by properly tuning the parameter
$P$ we can make it as small as wanted. Recall that now the solutions
have CSS. It should be noted that
due to the gravitational interaction between the collapsing fluid
and the thin shell, we have $M_{BH} \not= m^{f}_{BH} + m^{shell}_{BH}$,
unless $\xi = 2$, which corresponds to null dust. In the latter case,
it can be shown that by choosing $v_{0} = 1$ we can make the thin shell
disappear, and the collapse is purely due to the null fluid. Like the cases
with thin shell, by properly tuning the parameter $P$ we can make black holes
with infinitesimal mass. When $\xi = 1/2$ or $1$, which corresponds,
respectively, to the massless scalar field or to radiation fluid, the
solutions reduce to the ones considered in \cite{WRS1997}.
Note that although the mass of black holes takes
a scaling form in terms of $P$, the exponent $\gamma$
is not uniquely defined. This is because
in the present case the solution with $P = P^* = 0$
separates black holes from white holes,
and the latter is not the result
of gravitational collapse. Thus, the solutions considered here
do not really represent the critical collapse. As a result,
we can replace $P$ by any function $P(\bar P)$, and for each
of such replacements, we will have a different $\gamma$ \cite{Gundlach1996}.
However, such replacements do not change the fact that by properly
tuning the parameter we can make black holes with masses as small
as wanted.
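The reparameterization ambiguity is easy to exhibit numerically: the same family of masses $M_{BH} \propto P^{\xi}$ yields a doubled apparent exponent when parametrized by $\bar{P}$ with $P = \bar{P}^{2}$. The numbers below are synthetic:

```python
import math

# M ~ P**xi, but measured against Pbar (with P = Pbar**2) the log-log slope
# doubles, illustrating that gamma is not unique for a non-critical family.
xi = 1.0
Pbars = [10 ** (-k) for k in range(1, 6)]
masses = [(p ** 2) ** xi for p in Pbars]
slope = (math.log(masses[0]) - math.log(masses[-1])) \
        / (math.log(Pbars[0]) - math.log(Pbars[-1]))
# slope = 2*xi, not xi
```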
\subsection{Class B Solutions}
In this case, the first integral of Eq.(\ref{eq29}) yields
\begin{equation}
\label{eq34}
\dot{v} = \cosh(2r_{0}) - \sinh(2r_{0})\tanh(t + t_{1}),
\end{equation}
where $t_{1}$ is an integration constant. At the moment $ t = t_{AH}$
the whole ball collapses inside the apparent horizon, and the
contributions of the fluid and the shell to the total mass of the just-formed
black hole are given, respectively, by
\begin{eqnarray}
\label{eq35}
m^{f}_{BH} &\equiv& m^{f}_{AH}(\tau_{AH}) =
\frac{1}{4}\sinh^{\xi +1}(2r_{0}),\nonumber\\
m^{shell}_{BH} &\equiv& 4\pi R^{2}(\tau_{AH})\sigma(\tau_{AH})
= - \frac{1}{2}\sinh^{\xi+1}(2r_0)
\frac{\cosh[t_1-t_0+\xi r_0]}{\cosh[t_1-t_0- (2-\xi)r_0]}
\end{eqnarray}
From the above expressions we can see that for any given $r_{0},\;
m^{f}_{BH}$ and $ m^{shell}_{BH}$ are always finite and non-zero. Thus,
in the present case black holes start to form with a mass gap.
It should be noted that although $m^{f}_{BH}$ is positive, $m^{shell}_{BH}$
is negative.
This undesirable feature makes the model very unphysical. One may look
for other possible junctions. However, since the fluid is co-moving,
one can always make such junctions across the hypersurface
$r = r_{0} = Const.$. Then, the contribution of the collapsing
fluid to the total mass of black holes will be still given by
Eq.(\ref{eq35}), and the total mass of the formed black hole
then can be written in the form,
$$
M_{BH} = \frac{1}{4}\sinh^{\xi +1}(2r_{0}) + M_{BH}^{rest},
$$
where $M_{BH}^{rest}$ denotes the mass contribution
of the rest part of the spacetime, which is non-negative for
any physically reasonable junction. Therefore, in this case for any
physical junction, the total mass of black holes will be finite and
non-zero.
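The mass gap can be made concrete with the fluid term of Eq.(\ref{eq35}). In this sketch $r_{0} = 0.5$ and $k = 1/3$ are arbitrary illustrative choices:

```python
import math

# Class B fluid mass at horizon formation: m^f = sinh(2*r0)**(xi+1) / 4 (eq35).
# For any fixed r0 > 0 this is bounded away from zero, i.e. a mass gap.
def m_fluid(r0, k=1 / 3):
    xi = 2 / (1 + 3 * k)
    return math.sinh(2 * r0) ** (xi + 1) / 4

gap = m_fluid(0.5)
# unlike Class A, no tunable parameter sends this mass to zero
```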
\section{CONCLUSION}
In this Letter, we have studied two classes of solutions to the Einstein
field equations, which represent the spherical gravitational collapse of
a packet of perfect fluid, accompanied usually by a thin matter shell.
The first class of solutions has CSS, and black holes
always start to form with a zero-mass, while the second class has
neither CSS nor DSS, and the formation of black holes always starts with a
mass gap. The existence of the matter shell does not affect our
main conclusions. These solutions provide further evidence to support our
speculation that {\em the formation of black holes always starts
with zero-mass for the collapse with self-similarities, CSS or DSS}.
It should be noted that none of these two classes of solutions given
above represent critical collapse. Thus, whether the formation
of black holes starts with zero mass or not is closely related to the
symmetries of the collapse (CSS or DSS), rather than to whether the
collapse is critical or not.
\section*{Acknowledgment}
The financial assistance from CAPES (JFVR), CNPq (AW, NOS), and FAPERJ (AW)
is gratefully acknowledged.
\section{\bf Introduction}
The Friedmann-Robertson-Walker (FRW) model may describe a nonflat
Universe, motivated by the observational evidence that the total matter
density of the universe is different from its critical amount.
In fact, the measured matter density of baryonic and nonbaryonic
components is less than its critical value.
However, theoretical arguments derived from inflationary
models\cite{Gu} and from some current microwave anisotropy measurements
favor a flat universe where the total energy
density equals the critical density.
In this respect, other forms of matter (components) are added,
which contribute to the total energy density, so it
becomes possible to fulfill the prediction of inflation. Examples of
this kind are the cosmological constant, $\Lambda$ (or vacuum
energy density), together with the cold dark matter (CDM) component,
which together form the famous $\Lambda$CDM model, the model that
seems to best fit the existing observational data\cite{OsSt}.
Recent cosmological observations, including
those related to the relation between
the magnitude and the redshift \cite{Peetal},
constrain the cosmological parameters. The test
of the standard model, which includes spacetime geometry, galaxy
peculiar velocities, structure formation, and early universe
physics, favors in many of these cases a flat
universe model with the presence of a cosmological
constant\cite{Pe}. In fact, the luminosity distance-redshift
relation (the Hubble diagram) for type Ia supernovae seems to
indicate that the ratio of the matter content to its critical
value, $\Omega_0$, and the cosmological constant are best fit by the
values $\Omega_0 = 0.25$ and $\Lambda = 0.75$.
From a theoretical point of view, another possibility has arisen,
which was to consider a closed universe. It seems that quantum field
theory is more consistent on compact spatial surfaces
than in hyperbolic spaces\cite{WhSc}. Also, in quantum cosmology,
the "birth" of a closed universe from nothing
is considered, which is characterized by having a vanishing total energy,
momentum, and charge \cite{Vi}.
Motivated mainly by inflationary universe models, on the one hand, and
by quantum cosmology, on the other hand, we describe in this paper
the conditions under which a closed universe model may look flat
at low redshift. This kind of situation has been considered
in the literature \cite{KaTo}. There, a closed model was considered
together with a nonrelativistic-matter
density with $\Omega_0<1$, and the openness is obtained by adding a
matter density whose equation of state is $p\,=\,-\rho /3$.
Texture or tangled strings represent this kind of equation of state
\cite{Da}. In a universe with texture, the additional energy density
is redshifted as $a^{-2}$, where $a$ is the scale factor. Thus it
mimics a negative-curvature term in Einstein's equations. As a
result, the kinematics of the model is the same as in an open
universe in which $\Omega_0\,<\,1$. The first person to study a
universe filled with a matter content with an equation of state
given by $p = - \rho/3$ seems to have been Kolb\cite{Ko}. He found that
a closed universe may expand eternally at constant velocity
(coasting cosmology). He also distinguished a model universe
with multiple images of the same object at different redshifts and
a closed universe with a radius smaller than $H_0^{-1}$, among
other interesting consequences.
Very recently, there has been quite a lot of work including in the
CDM model a new component called the "quintessence" component,
with the effective equation of state given by $p\,=\,w \rho$ with
$-1\,<\,w\,<\,0$. This is the so called QCDM
model\cite{St}. The differences between this model and the
$\Lambda$ model are, first, that the $\Lambda$ model has an equation of state
with $w=-1$, whereas the $Q$ model has a greater value, and,
second, that the energy density associated with the $Q$ field in
general varies with time, unlike in the $\Lambda$ model.
The final, and perhaps the most important difference, is that the
$Q$ model is spatially inhomogeneous and can cluster
gravitationally, whereas the $\Lambda$ model is totally
spatially uniform. This latter difference is relevant in the
sense that the fluctuations of the $Q$ field could have an
important effect on the observed cosmic microwave background
radiation and large scale structure\cite{CaDaSt}. However, it has
been noticed that a degeneracy problem for the CMB anisotropy
arises\cite{HuDaCaSt}, since any given $\Lambda$ model seems to
be indistinguishable from the subset of quintessence models when
CMB anisotropy power spectra are compared. However, the models become
distinguishable when the $w$ parameter varies rapidly or becomes a
constant restricted to $w > - \Omega_Q /2$.
From the observational point of view, there have been
attempts to restrict the value of this parameter. Astronomical
observations of type Ia supernovae have indicated that,
for a flat universe, the ratio of the pressure of the $Q$-component
to its density is restricted to $w < -0.6$, and if the
model considered is open, then $w < - 0.5$ \cite{Gaetal}.
Certainly, improvements either in the study of the CMB anisotropy
or of type Ia supernovae will help us to elucidate the
exact amount of the $Q$ component in the matter content of the universe.
In this paper we discuss cosmological FRW models with a $Q$
field in both Einstein's theory of general relativity
and Brans-Dicke (BD) theories \cite{BrDi}. We shall restrict
ourselves to the case in which the $w$ parameter remains constant.
We obtain the potential $V(Q)$ associated with the $Q$ field, and
also determine the angular size as a function of the redshift.
\section{\bf {Einstein theory}}
In this section we review the situation in which the quintessence
component of the matter density, whose equation of state is $p=w\rho$, with
$w$ a negative constant, contributes to the effective Einstein action
which is given by
\begin{equation}
\displaystyle S\,=\,\int{d^{4}x\,\sqrt{-g}\,\left [\,\frac{1}{16\pi\,G}\,R\,
+\,\frac{1}{2}\,(\partial_{\mu}Q)^2\,-\,V(Q)\,+\,L_{M} \right ] }.
\label{s1}
\end{equation}
Here, $G$ is Newton's gravitational constant,
$R$ the scalar curvature, $Q$ the quintessence
scalar field with associated potential $V(Q)$, and $L_{M}$ represents
the matter contributions other than the $Q$ component.
Considering the FRW metric for a closed universe
\begin{equation}
\displaystyle d\,{s}^{2}\,=\,\,d\,{t}^{2}\,-\,
\,a(\,{t}\,)^{2}\,d\Omega_{k=1}^2,
\end{equation}
with $d\Omega_{k=1}^2$ representing the
spatial line element associated with the hypersurfaces of homogeneity,
corresponding to a three-sphere, and where $a(t)$ represents
the scale factor. Together with the assumption that the
$Q$ scalar field is homogeneous, i.e., $Q=Q({t})$, we obtain
the following Einstein field equations:
\begin{equation}
\displaystyle H\,^{2}\,=\,\frac{8\pi\,G}{3}\,
\left (\,\rho_{M}\,+\,\rho_{Q}\,\right )\,
-\,\frac{1}{a^{2}}
\label{E1}
\end{equation}
and
\begin{equation}
\displaystyle \ddot{Q}\, +\,3\,H\,\dot{Q}\,=-
\,\frac{\partial{V(Q)}}{\partial{Q}},
\label{E2}
\end{equation}
where the overdots denote derivatives with respect to ${t}$,
$H\,=\,\dot{{a}}/{a}$ defines the Hubble expansion rate, $\rho_{M}$
is the average energy density of nonrelativistic matter, and $\rho_{Q}$
is the average energy density associated with the quintessence field, defined by
$\displaystyle \rho_{Q}\,=\,\frac{1}{2}\dot{Q}^2\,+\,V(Q)\,,$
and average pressure $ \displaystyle p_{Q}\,=\,\frac{1}{2}\dot{Q}^2\,-\,V(Q)\,$.
As was mentioned in the introduction we shall consider a model
where the Q-component has an equation of state defined by
$\,p_{Q}\,=w\rho_{Q}$, where $w$ is considered to lie in the
range $-1<w<0$, in order to be in agreement with the current
observational data\cite{CaDaSt}.
In order to have a universe which is closed, but still has a
nonrelativistic-matter density whose value corresponds to that of
a flat universe, we should impose the following relation:
\begin{equation}
\displaystyle \rho_{Q}\,=\,\frac{3}{8\pi\,G\,a^{2}}\,.
\label{roq}
\end{equation}
This kind of situation has been recently considered in
Ref.\cite{KaTo}, where a matter density with $\Omega_0<1$ in a
closed universe was described.
Under condition (\ref{roq}), Einstein's equations become analogous to those
of a flat universe, in which the matter density $\rho_M$
corresponding to dust is equal to $\rho_M^0\,[a_0/a(t)]^3$,
and the scale factor $a(t)$ is given by $a_0\,(t/t_0)^{2/3}$.
Using the expressions for $\rho_Q$ and $p_Q$ defined above, we obtain
\begin{equation}
\displaystyle Q\,(t)=\,Q_0\,\left (\frac{t}{t_0} \right )^{\frac{1}{3}}
\label{qt}
\end{equation}
with $Q_0$ defined by
$\displaystyle Q_0=3\,\sqrt{3\,(1+w)/8\,\pi\,G}\,
(t_0/a_0)$. The quantities denoted by the subscript 0
correspond to quantities of the current epoch.
From solution (\ref{qt}), together with the definitions of $\rho_Q$ and
$p_Q$ we obtain an expression for the scalar potential $V(Q)$ given by
\begin{equation}
\displaystyle V(Q)\,=V_0\,\left (\frac{Q_0}{Q} \right )^4\,,
\label{vq}
\end{equation}
where $V_0$ is the present value of the scalar quintessence potential
given by $\displaystyle V_0=3\,(1-w)/16\,\pi\,G\,a_0^2$.
When both solutions (\ref{qt}) and (\ref{vq}) are introduced into the field
equation (\ref{E2}), the $w$ parameter must necessarily be equal
to $-\frac{1}{3}$, as is expected from the approach followed in
Ref.\cite{KaTo}.
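As a consistency check, the solutions (\ref{qt}) and (\ref{vq}) can be substituted back into the field equation (\ref{E2}) numerically. The sketch below (units with $G=a_0=t_0=1$ are an illustrative assumption) shows that the residual of the field equation vanishes only at $w=-1/3$:

```python
import math

def kg_residual(w, t=2.0, G=1.0, a0=1.0, t0=1.0):
    """Residual of Qdd + 3 H Qd + dV/dQ for the closed-looks-flat ansatz:
    a(t) = a0 (t/t0)^(2/3),  Q(t) = Q0 (t/t0)^(1/3),
    Q0 = 3 sqrt(3(1+w)/(8 pi G)) t0/a0,
    V(Q) = V0 (Q0/Q)^4,  V0 = 3(1-w)/(16 pi G a0^2)."""
    Q0 = 3.0 * math.sqrt(3.0 * (1.0 + w) / (8.0 * math.pi * G)) * t0 / a0
    V0 = 3.0 * (1.0 - w) / (16.0 * math.pi * G * a0 ** 2)
    H = 2.0 / (3.0 * t)                       # Hubble rate for a ~ t^(2/3)
    Q = Q0 * (t / t0) ** (1.0 / 3.0)
    Qd = Q0 / (3.0 * t0) * (t / t0) ** (-2.0 / 3.0)
    Qdd = -2.0 * Q0 / (9.0 * t0 ** 2) * (t / t0) ** (-5.0 / 3.0)
    dVdQ = -4.0 * V0 * Q0 ** 4 / Q ** 5       # derivative of V ~ Q^(-4)
    return Qdd + 3.0 * H * Qd + dVdQ

# the residual vanishes (to rounding) only for w = -1/3
for w in (-0.9, -0.5, -1.0 / 3.0, -0.1):
    print(w, kg_residual(w))
```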
To see that a closed model at low redshift is indistinguishable from a flat
one, we could consider the angular size or the number-redshift relation as a
function of the redshift $z$, as was done in Ref. \cite{KaTo}. Here, we shall
restrict ourselves to consider the angular size only. The results
will be compared with the corresponding analogous results obtained
in BD theory.
The angular-diameter distance $d_A$ between a source at a redshift $z_2$
and $z_1< z_2$, is defined by
\begin{equation}
\displaystyle
d_A(z_1,z_2)\,=\,\frac{a_0\,sin\left [\bigtriangleup \chi(z_1,z_2) \right]}
{1+z_2},
\end{equation}
where $\bigtriangleup \chi(z_1,z_2)$ is the polar-coordinate distance
between a source at $z_1$ and another at $z_2$ along the same line of sight
(in a flat background), and is given by
\begin{equation}
\displaystyle
\bigtriangleup \chi(z_1,z_2)\,=\,\frac{2}{a_0\,H_0}\,
\left [ \frac{1}{\sqrt{(1+z_1)}}-\frac{1}{\sqrt{(1+z_2)}} \right].
\end{equation}
Here, $H_0$ corresponds to the present value of the Hubble constant,
defined by $\displaystyle H_0\,=\,\sqrt{8\,\pi\,G\,\rho_M^0/3}$. The corresponding
angular size of an object of proper length $l$ at a redshift $z$ results
in $\Theta\,\simeq\,l/d_A(0,z)$, which becomes (in units of $l\,H_0$)
\begin{equation}
\displaystyle
\Theta\,=\,\frac{1}{a_0\,H_0}\,\displaystyle \frac{1+z}
{sin\left \{\frac{2}{a_0\,H_0}\left [ 1-\frac{1}{\sqrt{1+z}}
\right ] \right \}}.
\end{equation}
For a small redshift (or equivalently, for a small time interval) the
angular size is given by
\begin{equation}
\displaystyle
\Theta\,\simeq \,\frac{1}{z}\,+\,\frac{7}{4}\,+\,
\left [\frac{1}{6\,(a_0\,H_0)^2}+\frac{33}{48} \right ] z + O (z^2).
\label{exp1}
\end{equation}
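This small-$z$ behaviour can be verified numerically. The sketch below fixes $a_0H_0$ by the last-scattering normalization $\bigtriangleup\chi(z_{LS})=\pi$, $z_{LS}\simeq 1100$, adopted later in the text, and compares the exact $\Theta$ with the leading terms $1/z + 7/4$:

```python
import math

def a0H0(z_ls=1100.0):
    # Fix a0 H0 by the polar-coordinate distance to last scattering:
    # (2 / a0 H0) [1 - 1/sqrt(1+z_ls)] = pi
    return (2.0 / math.pi) * (1.0 - 1.0 / math.sqrt(1.0 + z_ls))

def theta(z, u=None):
    """Angular size (in units of l H0) of the closed-but-flat-looking model."""
    if u is None:
        u = a0H0()
    arg = (2.0 / u) * (1.0 - 1.0 / math.sqrt(1.0 + z))
    return (1.0 / u) * (1.0 + z) / math.sin(arg)

# leading small-z behaviour: Theta ~ 1/z + 7/4 + O(z)
for z in (0.001, 0.01, 0.1):
    print(z, theta(z), 1.0 / z + 1.75)
```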
Since for $\Omega_0 >1$ it is found that
\begin{equation}
\displaystyle
\Theta\,=\,\sqrt{\Omega\,-\,1}\,\frac{1+z}
{sin \left \{ 2\,\sqrt{\frac{\Omega-1}{\Omega_0-1}}
\left [ tan^{-1} \left ( \sqrt{\frac{\Omega_0\,z\,+\,1}{\Omega_0\,-\,1}}
\right ) \,-\,
tan^{-1}\left (\sqrt{\frac{1}{\Omega_0\,-\,1}} \right ) \right ] \right \} },
\end{equation}
where $\Omega$ represents the sum of the matter and
quintessence contributions
to the total matter density, we obtain that, at low redshift,
$\displaystyle \Theta(\Omega_0>1)\,\sim\,1/z$, which coincides with the first
term of the expansion (\ref{exp1}). Therefore, it is expected that the models
with $\Omega_0=1$ and $\Omega_0>1$ become indistinguishable at a
low enough redshift. In Fig. 1 we have plotted $\Theta$ as a function of
the redshift $z$ in the range $0.01 \leq z \leq 10$, for
$\Omega_0=1$ and $\Omega_0=3/2$. We have determined the
value of $a_0\,H_0$ by fixing the polar-coordinate distance at
the last-scattering surface, given by $\bigtriangleup \chi(z_{LS})\,=\,\pi$,
with $z_{LS}\,\simeq\,1100$, as was done in Ref. \cite{KaTo}.
\section{\bf {BD Theory}}
In this section we discuss the quintessence matter model in a theory where
the "gravitational constant" is considered to be a
time-dependent quantity. The effective action associated with the generalized
BD theory \cite{NoWa} is given by
\begin{equation}
\displaystyle S\,=\,\int{d^{4}x\,\sqrt{-g}\,\left [\,\Phi\,R\,
-\,\frac{\omega_0}{\Phi}\,(\partial_{\mu}\Phi)^2\,-\,V(\Phi)\,
+\,\frac{1}{2}\,(\partial_{\mu}Q)^2\,-\,V(Q)\,+\,L_{M} \right ] }.
\label{s2}
\end{equation}
where $\Phi$ is the BD scalar field, related to the effective
value of the Planck mass squared, $\omega_0$ is the BD parameter, and $V(\Phi)$ is
a scalar potential associated with the BD field. As in the Einstein case, the
matter Lagrangian $L_M$ is considered to be dominated by dust,
with the equation of state $p_M=0$. We also keep the
quintessence component, described by the scalar field $Q$.
When the closed FRW metric is introduced into the action (\ref{s2}),
together with the assumption that the different scalar fields
are time-dependent quantities only, the following set of field
equations is obtained:
$$
\displaystyle H^{2}\,+H\,\left(\frac{\dot{\Phi}}{\Phi}\right)\,
=\,\frac{\omega_0}{6}\left(\frac{\dot{\Phi}}
{\Phi}\right)^{2}+\,\frac{8\pi}{3\Phi}
\left(\rho_M\,+\rho_Q\right)-\frac{1}{a^{2}}+
\frac{V(\Phi)}{6\Phi}\,,
$$
\begin{equation}
\displaystyle \ddot{\Phi} +3\,H\,\dot{\Phi}\,
+\,\frac{\Phi^{3}}{2\omega_0+3}\,\,\frac{d}{d\Phi}\left(
\frac{V(\Phi)}{\Phi^{2}}\right)\,=\,\frac{8\pi}{2\omega_0+3}\,\,
\left [\rho_M+(1-3w)\rho_Q\right]\,,
\end{equation}
and
$$
\displaystyle \ddot{Q}\, +\,3\,H\,\dot{Q}\,=-
\,\frac{\partial{V(Q)}}{\partial{Q}}.
\label{qddt2}
$$
As before, we have taken the $Q$-component with equation of state
$p_Q=w\rho_Q$, where $w$ will be determined later on.
In order that the model mimics a flat universe,
we impose the following conditions:
\begin{equation}
\displaystyle \frac{8\pi}{3\Phi}\,\rho_Q\,=\frac{1}{a^{2}}\,
-\,\frac{1}{6\Phi}\,V(\Phi)\,,
\label{cond3}
\end{equation}
and
\begin{equation}
\displaystyle
\rho_Q\,=\,\frac{\Phi^{3}}
{8\,\pi\,(1-3\,w)}\,\frac{d}{d\Phi}\left(\frac{V(\Phi)}{\Phi^{2}}\right)\,.
\label{cond4}
\end{equation}
Under these restrictions, the BD field equations
become equivalent to that of a flat universe, in which
we assume a matter content dominated by dust.
It is known that the solutions for the scale
factor $a(t)$ and the BD field $\Phi(t)$ are given by
$a(t)=a_0\,(t/t_0)^{2(1+\omega_0)/(4+3\omega_0)}$ and
$\Phi(t)=\Phi_0\,(t/t_0)^{2/(4+3\omega_0)}$, respectively.
These solutions, together with the constraint equations (\ref{cond3}) and
(\ref{cond4}), yield the following expression for the quintessence matter
field,
\begin{equation}
\displaystyle
Q\,(t)=\,Q_0\,\left(\frac{t}{t_0}\right)^{(3+\omega_0)/(4+3\omega_0)}
\label{qt2}
\end{equation}
where now $Q_0$ is defined by
$\displaystyle Q_0=\sqrt{\frac{3\,(1+w)(4+3\omega_0)^2\,(3+2\omega_0)}{(3+\omega_0)
(9w+2\omega_0)}\frac{\Phi_0}{8\pi}}\,(\frac{t_0}{a_0})$, and
where, as before, the quantities with the subscript 0 represent
present-day values. Notice that this result reduces
to the Einstein solution, Eq. (\ref{qt}), for
$\omega_0 \longrightarrow \infty$, together with the identification of
the gravitational constant, $\Phi_0=1/G$.
Equation (\ref{qt2}), together with equations (\ref{cond3})
and (\ref{cond4}), yields the potential associated with the BD field
\begin{equation}
V(\Phi)\,=\,\left \{ \begin{array}{ll}
\displaystyle V(\Phi_0)\,\left (\frac{\Phi}{\Phi_0} \right )^{9w} & \mbox{if
$1+2\omega_0+9w=0$}, \\
\displaystyle V(\Phi_0)\,\left (\frac{\Phi_0}{\Phi} \right )^{1+2\omega_0} & \mbox{if
$1+2\omega_0+9w \neq 0$},
\end{array}
\right.
\label{Po2}
\end{equation}
where $V(\Phi_0)$ is given by
\begin{equation}
\displaystyle V(\Phi_0)\,=\,\left \{ \begin{array}{ll}
\displaystyle 3(1-3w)\,\left (\frac{\Phi_0}{a_0^2} \right ) & \mbox{if
$1+2\omega_0+9w=0$}, \\
\displaystyle -3\left (\frac{1-3w}{2\omega_0+9w} \right )\,
\left (\frac{\Phi_0}{a_0^2} \right ) & \mbox{if
$1+2\omega_0+9w \neq 0$}.
\end{array}
\right.
\end{equation}
We shall consider the second case only, i.e., $1+2\omega_0+9w \neq 0$.
The first case gives $\displaystyle w=-\frac{1}{9}(1+2\omega_0)$, and since
$\omega_0>500$, in agreement with solar-system gravity experiments,
one obtains $w \ll -1$, which is inappropriate for describing
the present astronomical observational data. Notice that the second case
gives a lower bound for the parameter $w$, given by
$w>-\frac{1}{9}(1+2\omega_0)$. However, the experiments motivate
us to consider only the range $-1<w<0$.
From Eq. (\ref{Po2}), together with $\displaystyle V(Q) =
\frac{1}{2}\,[(1-w)/(1+w)] \dot{Q}^2$ we obtain
\begin{equation}
\displaystyle
V(Q)=V(Q_0)\,(\frac{Q_0}{Q})^{2(1+2\omega_0)/(3+\omega_0)},
\label{Pot3}
\end{equation}
where $V(Q_0)$ is defined by $\displaystyle V(Q_0)=\frac{3(1-w)(3+2\omega_0)}
{9w+2\omega_0}\,\frac{\Phi_0}{16\pi}\frac{1}{a_0^2}$.
When these solutions are plugged into the $Q$-field equation
of motion, we find that the parameter $w$ must be given by
$\displaystyle w=-\frac{1}{3}\,\left (\frac{2+\omega_0}{1+\omega_0}\right )$
for this equation to be valid. Note also that
$\displaystyle w \longrightarrow -\frac{1}{3}$ in the Einstein limit,
$\omega_0 \longrightarrow \infty$.
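These Einstein limits are straightforward to check numerically. The sketch below evaluates $w(\omega_0)$ and the exponent $2(1+2\omega_0)/(3+\omega_0)$ appearing in Eq. (\ref{Pot3}), verifying that they approach $-1/3$ and $4$, respectively, as $\omega_0 \rightarrow \infty$:

```python
def w_bd(omega0):
    # effective quintessence equation-of-state parameter in BD theory
    return -(2.0 + omega0) / (3.0 * (1.0 + omega0))

def vq_exponent(omega0):
    # exponent alpha in V(Q) ~ Q^(-alpha)
    return 2.0 * (1.0 + 2.0 * omega0) / (3.0 + omega0)

for omega0 in (10.0, 500.0, 1e6):
    print(omega0, w_bd(omega0), vq_exponent(omega0))
```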
The corresponding angular size (in units of $l H_0$)
for this kind of theory is found to be
\begin{equation}
\displaystyle
\Theta\,=\,\frac{1}{a_0\,H_0}\,\frac{1+z}
{sin\left \{\frac{2}{a_0\,H_0}\,\alpha(\omega_0)\,
\left [ 1-(1+z)^{-\beta(\omega_0)/2}
\right ] \right \}},
\end{equation}
where, $\displaystyle \alpha(\omega_0)\,=\,\frac{
\sqrt{\omega_0^2+\frac{17}{6}\omega_0+2}}{\omega_0+2}$,
$\displaystyle \beta(\omega_0)\,=\,\frac{\omega_0+2}{\omega_0+1}$ and
$\displaystyle H_0\,=\,\sqrt{\frac{8\,\pi\,\rho_M^0}{3\,\Phi_0}}$. In Fig. 2
we have plotted $\Theta$ as a function of $z$ in the Einstein
theory and Brans-Dicke theory with $\omega_0 = 500$.
Note that at $z \sim 10$ or greater, they start to become different.
Since $\omega_0 \gg 1$, taking only the first-order term
in $1/\omega_0$ we obtain
\begin{equation}
\displaystyle
\Theta^{BD}\,=\,\Theta^{E}\,+\,
\frac{1+z}{(a_0\,H_0)^2}\,\frac{cos[\frac{2\,Z(z)}{a_0\,H_0}]}
{sin^2 [ \frac{2\, Z(z)}{a_0\,H_0}]}
\left \{ 2 [Z(z)+1]\,ln(Z(z)+1)-\frac{7}{6} Z(z)
\right \} \,\frac{1}{\omega_0}\,+\,O\left(\frac{1}{\omega_0^{2}}\right),
\end{equation}
where $Z(z)=\sqrt{1+z}-1$, and $\Theta^{BD}$ and
$\Theta^{E}$ represent the angular size for Brans-Dicke
and Einstein theories, respectively. At $z=z_{LS}\simeq 1100$
the difference $\displaystyle \Delta\Theta \equiv
\Theta^{BD}-\Theta^{E} $, becomes
$\Delta\Theta \simeq 147\left (\frac{1}{\omega_0} \right )$,
which for $\omega_0 \sim 500$
becomes $\Delta\Theta\sim 0.3$. This difference for $z=1$, with
the same value of $\omega_0$, becomes $\Delta\Theta \sim 0.05$.
Thus, we observe that this difference increases as $z$
increases, i.e., as time becomes more remote the difference
between the angular sizes in the two theories becomes stronger.
Figure 3 shows how this difference becomes more important at redshifts close
to last-scattering values. This difference is clearly hard to
detect experimentally. However, it may be an indication that
the two theories become very different at $z_{LS}$, and it is
probably worth searching for another observable that might
distinguish between these two possibilities. Perhaps the
spectrum due to the different matter components
may be the answer. They certainly have an effect on the cosmic
background radiation, which in principle could be observable via
temperature fluctuations.
\section{\bf {Conclusions }}
Assuming an effective equation of state for the $Q$ field given by
$p = w \rho$ with negative $w$, we have computed the form of the potential
$V(Q)$ for the $Q$-field in the model where a closed universe
looks similar to a flat one at low redshifts. We have found it to vary as
$V(Q) \sim Q^{-\alpha}$, where the parameter $\alpha$ becomes a function
of the BD parameter $\omega_0$. This parameter has the correct
Einstein limit, since for $\omega_0 \longrightarrow \infty$ this
parameter becomes equal to 4. We have also determined the angular size
(in units of $lH_0$) as a function of the redshift, in both theories.
Our conclusion is that the angular size at high redshift (close
to last-scattering values) could
distinguish between the Einstein and Brans-Dicke theories.
\section{\bf Acknowledgements}
SdC was supported by Comision Nacional de Ciencias y Tecnologia
through FONDECYT Grant N$^0$. 1971157
and UCV-DGIP Grant N$^0$ 123.744/98. NC
was supported by USACH-DICYT under Grant N$^0$ 0497-31CM.
\section{Introduction.}
In the past few years there has been substantial progress in
understanding the origin of angular momentum transport in astrophysical
accretion disks (see the reviews by Papaloizou \& Lin 1995 and Balbus
\& Hawley 1998). In particular, the nature of transport by
magnetohydrodynamic (MHD) turbulence has been greatly clarified.
Magnetized disks are linearly unstable to the weak field
magnetorotational instability (Balbus \& Hawley 1991). However, in
regions surrounding the solar nebula and in protostellar disks more
generally, temperatures and densities suggest very small ionization
fractions, leading to magnetic decoupling. The presence of turbulence
in such a disk is problematic. For this reason, hydrodynamical studies
of disk turbulence remain of great interest.
Before the importance of magnetic fields in disk dynamics was clear,
disk turbulence was viewed as a hydrodynamical problem. Adverse
entropy gradients (vertical stratification) and adverse angular
momentum gradients (radial stratification) each brought with them a
legacy of local instability, turbulence and enhanced transport.
Moreover, simple shear layers break down into turbulence via nonlinear
processes, even in the absence of rotation, and Couette flow experiments
show nonlinear breakdown of some Rayleigh-stable velocity profiles
(Coles 1965; Drazin \& Reid 1981). Even if details were a bit vague,
enhanced turbulent transport via {\it some\/} hydrodynamic process seemed
more than plausible.
Convective models of disk transport are now two decades old (Cameron
1978, Lin \& Papaloizou 1980). But convective turbulence does not, by
itself, guarantee outward angular momentum transport (Prinn 1990);
indeed, recent investigations suggest the opposite. Ryu \& Goodman
(1992) analyzed the linear stages of convective instability, and
pointed out that it produces inward, not outward transport. Kley,
Papaloizou \& Lin (1993) found inward transport in an axisymmetric disk
convection simulation. Stone \& Balbus (1996) conducted a full
three-dimensional (3D) numerical simulation of the compressible Euler
equations in a local patch of Keplerian disk, found small inward
transport, and gave arguments as to why this might be
expected. Despite the presence of vigorous convection in these
simulations, what little net transport was present was directed
radially inwards. The time-averaged amplitude of the stress was very
small, some three to four orders of magnitude below typical values
produced by MHD turbulence in comparable simulations. Similar results
were found by Cabot (1996) in a 3D local Navier-Stokes calculation,
when the assumed viscosity was sufficiently small.
Shear instabilities have the virtue that any resulting transport will
certainly be directed outwards, since the outwardly decreasing angular
velocity gradient would be the source of the turbulence. An even older
idea that convection (Crawford \& Kraft 1956), high Reynolds number
shear turbulence as a source of angular momentum transports predates
modern accretion disk theory, and is explicitly invoked in Shakura \&
Sunyaev (1973). Unfortunately, its validity has never been
demonstrated. Unlike convective instability, which occurs when a
well-understood linear stability criterion is violated, differentially
rotating flows are linearly stable by the Rayleigh criterion. The
oft-made conjecture is that, despite this, Keplerian disks are
nonlinearly unstable, as evinced by some Rayleigh-stable (but
decidedly non-Keplerian) Couette flows.
In principle, the nonlinear stability question of hydrodynamical
Keplerian disks could be settled by direct numerical simulation. But
nonlinear shear instability is a 3D problem, and the critical Reynolds
number for the onset of turbulence was thought to be too high to be
attainable with a 3D numerical code. This, however, is not so (Balbus,
Hawley, \& Stone 1996; hereafter BHS). Working with the inviscid Euler
equations, BHS evolved numerical models at a variety of resolutions,
and for a range of angular momentum distributions. A Rayleigh-unstable
model produced rapid growth of the perturbation energy, as expected.
Simple Cartesian shear flow also produced unstable growth, due to a
nonlinear instability. A constant angular momentum distribution also
proved to be nonlinearly unstable: this profile is marginally stable
to linear perturbations, and BHS used simple symmetry arguments
to show that in its stability properties the system is formally
equivalent to (unstable) Cartesian shear flow. Thus, 3D simulations
{\it can\/} reproduce the onset of nonlinear shearing instabilities where
they are known to be present.
BHS found that simple shear and constant angular momentum flows were
the {\it only\/} (unmagnetized) Rayleigh-stable systems to exhibit any
dynamical instability. Keplerian disks simulations, in particular,
were completely stable. BHS argued that the crucial difference between
Keplerian flow and simple shear is the presence of Coriolis forces.
The epicyclic oscillations produced by those forces are strongly
stabilizing for both linear {\it and\/} nonlinear disturbances.
Epicyclic oscillations are not present in shear layers or in constant
angular momentum rotation profiles, which were found to be the only
nonlinear unstable flows. If the velocity profile of a disk has even
the most gentle rise in specific angular momentum with radius, its
behavior is qualitatively different from the constant angular momentum
(or simple shear) case.
At a minimum, the findings of BHS do not augur well for the existence
of hydrodynamic turbulent transport in differentially rotating disks.
The unfavorable results of the disk convection simulations, combined
with the finding that high Reynolds number shear instabilities are
easily simulated (when present), but disappear the moment rotational
effects are smoothly introduced, suggest that only MHD turbulence
offers a viable basis for Shakura-Sunyaev (1973) $\alpha$-disk models.
If hydrodynamic turbulence is present, it must be driven by some source
other than differential rotation, and generally will not transport
angular momentum (e.g., Balbus \& Hawley 1998).
In this paper we return to the local hydrodynamic simulations and
consider the hydrodynamic stability problem from several new
perspectives. We extend the body of simulations beyond what was done
in BHS with higher resolution, and with algorithms which differ in
their diffusive properties. In \S2 we briefly review the moment
equations developed by BHS; these form the basis for interpreting the
results of local numerical simulations. In \S3 we review numerical
procedures used for the local simulations. In \S4 we investigate a
number of issues: Is there any significant effect due to numerical
resolution on BHS's conclusions regarding the stability of Keplerian
flows? BHS speculated that the boundary between nonlinear stability
and instability (e.g., near constant angular momentum distributions)
should not be sharp, and we confirm this expectation. Nonlinearly
unstable, but Rayleigh-stable, laboratory Couette flows are precisely
analogous to flows which find themselves at this boundary. We next
consider the decay of the applied turbulence in the Keplerian system,
at a number of resolutions, and with two distinct numerical schemes.
Finally we compare the Reynolds and Maxwell stresses in a series of MHD
simulations, which span a full range of background angular momentum
distributions. In \S5 we present our conclusions.
\section{Hydrodynamic Fluctuations}
We begin with a brief review of basic disk equations and the formalism
of BHS on the nature of hydrodynamic turbulence in disk flows.
Nonadvective transport in a hydrodynamical accretion disk is determined by
the Reynolds stress tensor,
\begin{equation}\label{one}
T_{R\phi}\equiv \langle \rho u_R u_\phi\rangle
\end{equation}
where $\rho$ is the mass density, and ${\bf u}$ is the noncircular
component of the velocity ${\bf v}$, i.e., ${\bf v} = R\Omega
\bb{\hat\phi} + {\bf u}$. The average in
equation (\ref{one}) is spatial: we assume that a volume can be found
over which the small scale variations average out, leaving $T_{R\phi}$
a smoothly varying quantity. The phenomenological $\alpha$ prescription
of Shakura \& Sunyaev (1973) is $T_{R\phi}=\alpha P$, where $P$ is the
dynamical pressure (possibly including radiation).
The stress $T_{R\phi}$ has several roles. First, and most familiar,
it is the agent of angular momentum and energy transport.
We are particularly interested in the radial dependence of
$T_{R\phi}$. Consider the average radial angular momentum flux,
\begin{equation}\label{momflux}
\langle R \rho v_\phi u_R\rangle \equiv R^2\Omega \langle \rho
u_R\rangle
+ R T_{R\phi},
\end{equation}
and the radial energy flux
\begin{equation}\label {enflux}
\langle \left( {1\over2} v_\phi^2 + \Phi_c \right) \rho u_R \rangle =
-{1\over 2} R^2\Omega^2 \langle\rho u_R \rangle + R \Omega T_{R\phi},
\end{equation}
where $\Phi_c$ is the central (external) gravitational potential. Note
that in both equations (\ref{momflux}) and (\ref{enflux}) the first
terms represent advected flux; all nonadvective flux is in the
$T_{R\phi}$ contribution of the second terms. Outward transport
corresponds to $T_{R\phi} > 0$. The nonadvective contributions differ
from one another only by a factor of $\Omega$ in the energy flux. Each
of the above (net) fluxes is a simple linear combination of $\langle
u_R\rangle$ and $T_{R\phi}$ only. The fact that no other flow
quantities appear is crucial to the formulation of classical
steady-state $\alpha$
disk theories, for it allows for a well-defined luminosity--accretion
rate relationship.
The turbulent stress must do more than regulate angular momentum
transport, however. It is also the conduit by which free
energy is tapped to maintain the fluctuations, which produce
$T_{R\phi}$ in the first place. This crucially important role is
not a part of standard $\alpha$ disk theory. It is a consequence of
{\it fluctuation\/} dynamics, not mean flow dynamics. This may be
seen by inspecting the diagonal moments of the radial and azimuthal
$u$ equations of motion (BHS):
\begin{equation}\label{balbusr}
{\partial\ \over\partial t} \left\langle {\rho u_R^2\over2}\right\rangle
+ {\nabla}{\cdot}\left\langle {1\over2}\rho u_R^2 {\bf u}\right\rangle =
2\Omega T_{R\phi}
-\left\langle {u_R}{\partial P\over\partial R} \right\rangle -{\rm losses}
\end{equation}
and
\begin{equation}\label{balbusaz}
{\partial\ \over\partial t} \left\langle {\rho u_\phi^2\over2}\right\rangle
+ {\nabla}{\cdot}\left\langle {1\over2}\rho u_\phi^2 {\bf u}\right\rangle =
-{\kappa^2\over2\Omega}T_{R\phi}
- \left\langle {u_\phi\over R}{\partial P\over\partial\phi} \right\rangle
-{\rm losses}
\end{equation}
where ``losses'' refer to viscous losses. In disks the stress tensor
couples both to the Coriolis force and to the background shear, and the
former is bigger than the latter.
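To make the competition explicit, write $\Omega \propto R^{-q}$, so that
$d\Omega/d\ln R = -q\Omega$ and $\kappa^2 = 2(2-q)\Omega^2$. The
coefficient of $T_{R\phi}$ in equation (\ref{balbusaz}) then separates
into a Coriolis couple and a shear couple,
\begin{equation}
-{\kappa^2\over2\Omega} = -\left( 2\Omega + {d\Omega\over d\ln R}\right)
= -2\Omega + q\Omega = -(2-q)\,\Omega,
\end{equation}
a net sink for all $q<2$, since the Coriolis couple $2\Omega$ exceeds
the shear couple $q\Omega$.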
Contrast this with simple shear flows. Here,
only the shear couple is present; the stabilizing Coriolis force is
absent. Reverting to Cartesian coordinates with background flow
velocity $V(x) {\bf {\hat e}_y }$, the dynamical $u$-moment equations
for shear flow are
\begin{equation}\label{X}
{\partial\ \over\partial t} \left\langle {\rho u_X^2\over2}\right\rangle
+ {\nabla}{\cdot}\left\langle {1\over2}\rho u_X^2 {\bf u}\right\rangle =
- \left\langle {u_X}{\partial P\over\partial x} \right\rangle - {\rm losses}
\end{equation}
and
\begin{equation}\label{Y}
{\partial\ \over\partial t} \left\langle {\rho u_Y^2\over2}\right\rangle
+ {\nabla}{\cdot}\left\langle {1\over2}\rho u_Y^2 {\bf u}\right\rangle =
-{dV\over dx} T_{XY} - \left\langle {u_Y}{\partial P\over\partial y}
\right\rangle
-{\rm losses}
\end{equation}
where $T_{XY}$ is the obvious analogue to $T_{R\phi}$.
In both disk and shear flow, the shear is the source of free energy
which maintains the kinetic energy of the fluctuations. But the
dynamical content of (\ref{X}) and (\ref{Y}), as compared
with (\ref{balbusr}) and (\ref{balbusaz}) is evidently very different.
The disk is faced with grave difficulties in keeping up both outward
transport ($T_{R\phi} > 0)$ and significant levels of $\langle
u^2\rangle$. Whereas $2\Omega T_{R\phi}$ is a source term for $\langle
\rho u_R^2/2\rangle$ if $T_{R\phi} >0$, the $-\kappa^2/2\Omega$ term in
equation (\ref{balbusaz}) is a sink for $\langle \rho
u_\phi^2/2\rangle$. The presence of both a source and a sink coupled
to $T_{R\phi}$ means that the $u_R$ and $u_\phi$ fluctuations cannot
grow simultaneously: one would grow only at the expense of the other,
and the implicit correlation embodied in $T_{R\phi}$ could not be
self-consistently maintained. One could appeal to the pressure term
in equation (\ref{balbusaz}) for help, and one needs to do so in
{\it any\/} hydrodynamical disk flow where there is outward
transport. This leads not to turbulence, whose physical origin is
vorticity entrainment in large scale shear (Tennekes \& Lumley 1972),
but to transport by spiral waves.
In shear flow there is no $T_{XY}$
sink in the corresponding equation (\ref{Y}), and hence no barrier to
maintaining both transport and fluctuation amplitude. The nonlinear
instability (at sufficiently high Reynolds numbers) and resulting
turbulence of simple shear
flow is a matter of common experience. The behavior of disks could
not be more different.
\section{Numerical Procedure}
Numerical simulations demonstrate the behaviors of disk and shear flows
unambiguously. It is sufficient to work in the local shearing-periodic
box system (Hawley, Gammie \& Balbus 1995). The background angular
velocity of the disk is taken to be a power law: $\Omega \propto
R^{-q}$. We construct a set of local coordinates corotating with the
fluid at a fiducial radius $R_\circ$. Equations are expanded to first
order about $R_{\circ}$, using locally Cartesian coordinates ${\bf x} =
(x,y,z) = (R-R_{\circ}, R_{\circ}(\phi-\Omega t), z)$. ($\Omega$ is
evaluated at $R=R_\circ$ in the expression for $y$.) Although the
local geometry is Cartesian, Coriolis and tidal forces ensure the local
dynamics is not.
The resulting hydrodynamic equations are
\begin{equation}\label{continuity}
{\partial\rho\over{\partial t}} + \nabla \cdot (\rho {\bf v}) = 0,
\end{equation}
\begin{equation}\label{euler}
{\partial {\bf v}\over{\partial t}} + {\bf v}\cdot \nabla {\bf v}
= - {1\over\rho}\nabla P
- 2 {\bf\Omega} \times {\bf v}
+ 2 q \Omega^2 x {\hat{\bf x}},
\end{equation}
\begin{equation}\label{energy}
{\partial \rho \epsilon\over{\partial t}} + \nabla\cdot(\rho\epsilon {\bf v})
+ P \nabla \cdot {\bf v} = 0,
\end{equation}
where the pressure $P$ is given by the ideal gas equation of state
\begin{equation}\label{eos}
P = \rho \epsilon(\gamma - 1),
\end{equation}
and the remainder of the terms have their usual meaning. For
simplicity, the vertical component of gravity is not included. The
shearing box is defined to be a cube with length $L\ (=1)$ on a side, and
the initial equilibrium solution is $\rho= 1$, $P = L^2\Omega^2$, and
${\bf v} = -q \Omega x \hat y$.
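One can verify that this state is indeed an equilibrium of equation
(\ref{euler}): with ${\bf v} = -q\Omega x\, \hat y$, ${\bf \Omega} =
\Omega \hat z$, and uniform pressure, the Coriolis and tidal terms
cancel identically,
\begin{equation}
-2{\bf \Omega}\times{\bf v} + 2q\Omega^2 x\, {\hat{\bf x}}
= -2q\Omega^2 x\, {\hat{\bf x}} + 2q\Omega^2 x\, {\hat{\bf x}} = 0 .
\end{equation}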
The boundary conditions in the angular ($y$) and vertical ($z$)
directions are strictly periodic. The radial ($x$) direction, however,
is ``shearing periodic.'' This means that the radial faces are joined
together in such a way that they are periodic at $t = 0$ but
subsequently shear with respect to one another. Thus, when a fluid
element moves off the outer radial boundary, it reappears at the inner
radial boundary at its appropriate sheared position, with its
angular velocity compensated for the uniform mean shear across the
box. See Hawley, Gammie, \& Balbus (1995) for a detailed description
of these boundary conditions.
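As a concrete sketch of this remapping (an illustration of the idea
only, not the Hawley, Gammie \& Balbus 1995 implementation: the array
layout and helper name are ours, and the azimuthal offset is rounded to
the nearest cell rather than interpolated):

```python
import numpy as np

def apply_shearing_periodic_x(field, t, q=1.5, Omega=1.0, Lx=1.0, Ly=1.0,
                              ng=2, is_vy=False):
    """Fill the x ghost zones of field[nx + 2*ng, ny, nz] (sketch).

    Ghost zones are copied from the opposite radial edge, shifted in y
    by the accumulated boundary displacement q*Omega*Lx*t; for the
    azimuthal velocity the mean-shear jump q*Omega*Lx across the box
    is added or subtracted.  A production code interpolates the
    fractional cell offset instead of rounding.
    """
    ny = field.shape[1]
    dy = Ly / ny
    shift = int(round((q * Omega * Lx * t % Ly) / dy))  # y offset in cells
    # inner ghost zones are images of the outer active edge
    field[:ng] = np.roll(field[-2 * ng:-ng], shift, axis=1)
    # outer ghost zones are images of the inner active edge
    field[-ng:] = np.roll(field[ng:2 * ng], -shift, axis=1)
    if is_vy:  # compensate the background shear v_y = -q*Omega*x
        field[:ng] += q * Omega * Lx
        field[-ng:] -= q * Omega * Lx
    return field
```

At $t=0$ the mapping reduces to ordinary periodicity (apart from the
$v_y$ offset), and the two radial faces shear apart thereafter.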

To begin a simulation, a background angular velocity gradient
($q$ value) is chosen, and
initial velocity perturbations are introduced into the flow. Stability
is determined by whether or not these fluctuations grow in amplitude.
The simulations of BHS began with random perturbations in pressure and
velocity applied as white noise down to the grid scale. However, such
initial conditions have the disadvantage of varying with resolution;
the initial perturbation spectrum will never be fully resolved. For
the models computed in this paper, we use a specific initial
perturbation rather than random noise. The initial conditions consist
of well-defined products of sine-wave perturbations of $v_y$ in all three spatial
directions, with wavelengths $L$, $L/2$, $L/3$ and $L/4$. A linear
combination of sine waves is constructed for each direction, e.g.,
if
\begin{equation}\label{sines}
f(x) = [\sin (2\pi x +\phi_1)+\sin(4\pi x+\phi_2)+
\sin(6\pi x+\phi_3)+\sin(8\pi x+\phi_4)],
\end{equation}
where the $\phi$ terms are fixed phase differences, then the
perturbation is applied to $v_y$ as
\begin{equation}\label{perturb}
\delta v_y = A L\Omega f(x) f(y) f(z).
\end{equation}
The amplitude $A$ of the perturbation is set to some fraction of the
shearing velocity $L\Omega$, typically 10\%. This procedure
ensures that the initial conditions will be the same for all
simulations within a comparison group, regardless of grid resolution,
and that they will be adequately resolved, even on the $32^3$ zone
grid.
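A minimal sketch of this initialization in array form (the phase values
$\phi_i$ are not specified in the text, so those below are
placeholders):

```python
import numpy as np

PHASES = [0.0, 1.0, 2.0, 3.0]  # placeholder values for phi_1..phi_4

def f(s, phases=PHASES):
    """Sum of sine waves with wavelengths L, L/2, L/3, L/4 (L = 1)."""
    return sum(np.sin(2.0 * np.pi * (n + 1) * s + p)
               for n, p in enumerate(phases))

def delta_vy(x, y, z, A=0.1, L=1.0, Omega=1.0):
    """Initial azimuthal perturbation A*L*Omega*f(x)f(y)f(z)."""
    return A * L * Omega * f(x) * f(y) * f(z)

# the same perturbation field is obtained at any grid resolution
N = 32
s = (np.arange(N) + 0.5) / N          # cell-centered coordinates
X, Y, Z = np.meshgrid(s, s, s, indexing='ij')
dv = delta_vy(X, Y, Z)
```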
Most of the simulations described in \S 4 were computed with the same
code used in BHS. This is an implementation of the hydrodynamic
portion of the ZEUS algorithm (Stone \& Norman 1992).
To address the possibility of
numerical artifacts affecting our findings, it has proven
useful to compare the results obtained using two very different
numerical algorithms. To this end, we have
adapted the Virginia Hydrodynamics-1 (VH1) Piecewise Parabolic Method
(PPM) code to the three-dimensional shearing box problem. The PPM
algorithm was developed by Colella \& Woodward (1984), and it is a
well-known, widely-used, and well-tested numerical technique for
compressible hydrodynamics. Like ZEUS, PPM employs directional
splitting, but differs fundamentally from ZEUS in its use of a
nonlinear Riemann solver rather than finite differences to obtain
the fluxes in the Euler equations. PPM also uses piecewise
parabolic representations (third order in truncation error) for the
fundamental variables rather than the piecewise linear
functions used in ZEUS (second order). Both schemes employ
monotonicity filters to minimize zone to zone oscillations. VH1 uses a
Lagrangian-remap approach in which each one-dimensional sweep through the
grid is evolved using Lagrangian equations of motion, after which the
results are remapped back onto the original grid using parabolic
interpolations. Further information about the VH1 implementation of
PPM is currently available at http://wonka.physics.ncsu.edu/pub/VH-1,
and at http://www.pbm.com/~lindahl/VH-1.html.
\section{Results}
\subsection{Flows Marginally Stable by the Rayleigh Criterion}
A constant angular momentum distribution ($q=2$) is marginally stable
to linear perturbations by the Rayleigh criterion. BHS showed that
such a flow, which has a vanishing epicyclic frequency, is formally
equivalent in its stability properties to simple Cartesian shear. When
$\kappa=0$ equations (\ref{balbusr}) and (\ref{balbusaz}) have the same
form as (\ref{X}) and (\ref{Y}). This equivalence implies that
constant angular momentum flows should be subject to the same nonlinear
instabilities that disrupt shear flows. The simulations of BHS
demonstrate this unequivocally.

It is possible to explore deeper consequences of the symmetry. Not
only should a $q=2$ flow be formally analogous to a shear layer, a
``$q=2-\epsilon$'' Rayleigh-stable flow should be formally analogous to
a shear layer with a little bit of rotation: $|d\Omega/d\ln R| \gg
2\Omega$. This can be inferred from the $R\leftrightarrow\phi$
symmetry of equations (\ref{balbusr}) and (\ref{balbusaz}). (From the
standpoint of a source of free energy there is no problem; differential
rotation serves this role. This is somewhat unusual, since normally the
source of free energy disappears with the onset of linear stability.)
At large Reynolds numbers, only the ratio of the coefficients of
$T_{R\phi}$ matters, and where stability is concerned, reciprocal flows
(those whose coefficient ratios are reciprocals of one another) should
have the same stability properties. The $q=2-\epsilon$ case is
important, because some Couette flows in which the outer cylinder
dominates the rotation are found to be nonlinearly unstable, with the
onset of instability occurring near the inner cylinder (Coles 1965;
Drazin \& Reid 1981). This breakdown is the basis of ``subcritical''
behavior, which is occasionally cited as evidence for nonlinear
instability in {\it Keplerian\/} disks (e.g., Zahn 1991). From the
symmetry reasons stated above, however, we believe that subcritical
behavior is evidence that disks with $q=2-\epsilon$ are nonlinearly
unstable, not $q=1.5$ disks. This is a very testable hypothesis.

We examine this conjecture by computing models at $64^3$ and $32^3$
resolution, with $1.94\le q\le 2$ in intervals of
0.01, for two different amplitudes of initial perturbations:
$\delta v_y = 0.1 (L \Omega)$ and $\delta v_y = 0.01 (L\Omega)$. The
value of $q$ determines the strength of the linear stabilization, the
initial perturbation amplitude sets the strength of the initial
nonlinear interactions, and the grid resolution influences the amount
of stabilizing numerical damping present [``losses'' in (\ref{balbusr})
and (\ref{balbusaz})]. Together these effects will determine when the
perturbations grow and when they do not.

Figure 1 displays some of the results. Figure 1a shows the perturbed
kinetic energy in units of $L\Omega$ for the $32^3$ resolution, large
amplitude ($\delta v_y = 0.1L\Omega$) perturbation models. The
different $q$ models begin with the same initial perturbations of the
form (\ref{perturb}). The kinetic energy decreases during the first
orbit, and the curves promptly separate according to angular momentum
distribution, with the smallest $q$ model decaying the most rapidly.
Only the flows with $q=2$ and 1.99 show any subsequent growth. The
$q=1.98$, 1.97, 1.96 and 1.95 models die away; the smaller the value of
$q$, the lower the remaining kinetic energy. Figure 1b depicts models
with the same range of $q$ and the same initial perturbations, but
computed with $64^3$ grid zones. Again there is a short initial period
of rapid decline in perturbed kinetic energy, with the curves
separating according to $q$. However, this decrease is smaller than
that seen in the $32^3$ zone simulations. After about one orbit in
time, the kinetic energies grow for all but the $q=1.96$ and 1.95
models. These models vary with time around an average that remains
close to the initial value; only the $q=1.94$ model experiences a clear
decline in perturbed kinetic energy with time.

The sensitivity of the nonlinear instability to initial perturbation
amplitudes is demonstrated with a third group of $64^3$ grid zone
models. These are begun with an initial perturbation amplitude of
only $\delta v_y = 0.01 L\Omega$ (Fig. 1c). The perturbation kinetic
energy increases slightly during the first orbit, and again the curves
separate according to $q$ value. In this case, however, only the
$q=2.0$ model shows growth; all the others die away.
Because the instability is truly nonlinear, the details of how a flow
develops depend upon the amplitude of the initial disturbance, and, to
a far greater degree than for a linear instability, the numerical
resolution. When $\kappa^2 = 2(2-q)\Omega^2 = 0$ the linear forces on the
system sum to zero, and nonlinear dynamics determine the fate of the
flow. The simple shear flow shows that in the absence of a restoring
force the nonlinear dynamics are destabilizing. As $\kappa^2$ slowly
increases from zero, the linear restoring force returns; the separation
of the curves by $q$ value illustrates the stabilizing effect of the
Coriolis force. Whether or not the linear restoring force can ensure
stability in a given system depends on its strength compared to that of
the nonlinear dynamics, which, in turn, depend on the amplitude of the
initial perturbations. The larger the perturbations, the greater the
nonlinear forces.

The subcritical behavior of $2-\epsilon$ flows seems to have its roots
in epicyclic excursions. The mechanism of instability in planar
Couette flow is believed to be vorticity stretching in the shear
(Tennekes \& Lumley 1972). The presence of epicyclic motion in general
is incompatible with this process. Nearby perturbed elements execute
bound (pressure modified) epicyclic orbits around a common angular
momentum center. There is no indefinite stretching of local vortices,
or at least the process is far less efficient. But the aspect ratio of
the epicyclic ellipse becomes extreme as $q\rightarrow2$; in the absence
of pressure, the minor to major (radial to azimuthal) axis ratio for
displacements in the disk plane is $(1-q/2)^{1/2}$. At some point, it
is all but impossible to distinguish the epicycle from the shearing
background, and of course the epicyclic frequency is then tiny compared
with the shearing rate. This latter rate is the time scale for vortex
stretching, and so we expect this mechanism for turbulence to be viable
under these conditions. The fact that the formal linear epicyclic excursion
may be bound is inconsequential. Vortex stretching and the feeding of
turbulence will proceed if there is ample time before the epicycle
closes. For $q=1.95$, the approximate threshold for nonlinear
instability found in the simulations, $\kappa=0.2|d\Omega/d\ln R|$, and
the aspect ratio quoted above is $0.16$, i.e. about 6 to 1 major to
minor axis ratio. These are sensible values in the scenario we have
suggested: if $\kappa$ is within an order of magnitude of the shearing
rate, or the aspect ratio exceeds roughly 0.1 to 0.2, then the flow is
strongly stabilized by Coriolis-induced epicyclic motion. In this
case, vortices are not stretched efficiently by the
background shear: rather than monotonically increasing the distortion,
the epicyclic orbit relaxes the vortex stretching over half of
the period.
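The numbers quoted above follow directly from $\kappa^2 =
2(2-q)\Omega^2$ and the axis ratio $(1-q/2)^{1/2}$; a quick numerical
check (an illustrative helper, not part of the simulation code):

```python
import numpy as np

def epicyclic_ratios(q, Omega=1.0):
    """kappa, kappa/|dOmega/dlnR|, and radial-to-azimuthal epicyclic
    axis ratio for a rotation law Omega ~ R^-q."""
    kappa = np.sqrt(2.0 * (2.0 - q)) * Omega   # kappa^2 = 2(2-q) Omega^2
    shear = q * Omega                          # |dOmega/dlnR| = q Omega
    aspect = np.sqrt(1.0 - q / 2.0)            # minor-to-major axis ratio
    return kappa, kappa / shear, aspect

kappa, ratio, aspect = epicyclic_ratios(1.95)
# ratio and aspect both come out near 0.16, consistent with the rough
# kappa ~ 0.2 |dOmega/dlnR| and the ~6:1 ellipse quoted above
```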
Numerical diffusion error represents a loss term from (\ref{balbusr})
and (\ref{balbusaz}), and this adds to the stabilizing effect by
reducing the amplitude of the perturbations. At a given resolution,
however, numerical effects should be nearly the same from one $q$ value
to the next. Hence a series of models that differ by only $q$ isolates
the physical effects from the numerical. Any differences observed in
these simulations are dynamical, not numerical, in origin.

To conclude, we have observed that the growth or decay of applied
velocity perturbations depends on the resolution and the initial
perturbation amplitude for flows near the critical $\kappa^2 = 0$
limit. This hypersensitivity, however, is shown only over a tiny range
of $q$. Below $q=1.95$ all indications of instability are gone. These
results are precisely what one would expect from the presence of a
nonlinear instability, and they are consistent with the observed
presence of such instabilities in shear dominated, but formally
Rayleigh-stable Couette experiments.
\subsection{The Influence of Resolution and Algorithm}
A concern in any numerical study, particularly one whose goal is to
search for a physical instability, is the effect of finite numerical
resolution. In \S4.1 we demonstrated how various flows
could be stabilized by increasing the epicyclic frequency (through a
decrease in $q$ from the critical value of 2.0). In some of these
cases, a transition from stability to instability occurred when
resolution was increased. Clearly, numerical resolution does play its
anticipated role: numerical diffusion error has a stabilizing effect.
But is numerical diffusion error decisive as $q$ becomes smaller?
BHS argued that the stability of Keplerian flow to finite perturbations
was due to physical, not numerical effects, and gave support to that
position through simulations done at three resolutions, all of which
produced similar results. In this section we describe a series of
simulations that improve upon those previous resolution experiments.

We have evolved a series of Keplerian flows with a range of numerical
resolutions. We begin with $32^3$ grid zones, and then increase the
number of grid zones by a factor of two in each of the three dimensions
in subsequent simulations, up to $256^3$ grid zones. Each of these
Keplerian flows is perturbed with angular velocity fluctuations of the
form (\ref{perturb}), with a maximum initial amplitude of $\delta v_y
= 0.1 L\Omega$.

Figure 2 shows the evolution of the kinetic energy of the angular and
radial velocity perturbations, $(\rho v_x^2)/2$ (Fig. 2a) and $(\rho
\delta v_y^2)/2$ (Fig. 2b). The initial perturbations produce radial
kinetic energies which rapidly (within 0.2 orbits) increase to a maximum value comparable
with the azimuthal component. Beyond this point, the
perturbed kinetic energies drop off with time; the higher the
resolution, the less rapid the decline, although each doubling in
resolution creates a diminishing change. All resolutions show rapid
decrease in $(\rho \delta v_y^2)/2$ from the initial value. One
intriguing difference between the models is that the higher the
resolution, the {\it lower} the value of $(\rho\delta v_y^2)/2$ after
about 2 orbits. Thus, far from promoting greater instability, higher
resolution is {\it reducing} the amplitude of the angular momentum
perturbations.

Why should an increase in resolution lead to more rapid damping? This is
clearly at variance with the expected behavior if numerical diffusion
were the sole sink of perturbed kinetic energy. As we have
emphasized, there is also a significant dynamical sink. Equation
(\ref{balbusaz}) shows that the Reynolds stress is a loss term for
$\langle \delta v_y^2 \rangle$.
All simulations begin with a positive Reynolds stress,
and the higher resolution simulations maintain larger values during the
initial orbit. At each resolution,
the Reynolds stress can be integrated over time
and space to give a measure of its strength:
$\int {\kappa^2 \over 2\Omega}\, {\langle \rho v_x\,v_y\rangle}\, dt$.
These values {\it increase} monotonically with resolution, from 0.0033,
to 0.0059, 0.0078, and finally to 0.0092 for the $256^3$ model. (For
reference, the initial value of $\langle{{1\over 2} \rho v_y^2 }\rangle$
is 0.04.)
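This diagnostic is easy to form from a run's volume-averaged history
data; a sketch, assuming arrays {\tt t} and {\tt stress} hold the
sampled times and $\langle \rho v_x v_y \rangle$ values (names are
ours):

```python
import numpy as np

def integrated_reynolds_sink(t, stress, q=1.5, Omega=1.0):
    """Trapezoidal time integral of (kappa^2 / 2 Omega) <rho vx vy>.

    For Omega ~ R^-q, kappa^2 / (2 Omega) = (2 - q) Omega, which is
    0.5 Omega in the Keplerian case.
    """
    coeff = (2.0 - q) * Omega
    dt = np.diff(t)
    return coeff * 0.5 * np.sum((stress[1:] + stress[:-1]) * dt)
```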

Further evidence for the damping effect of the Reynolds stress can be
seen in the low resolution run. In Figure 3 we plot $(\rho \delta
v_y^2)/2$ (Fig. 3a) and the Reynolds stress (Fig. 3b) as a function of
time for the first orbit in the $32^3$ grid zone model. This low
resolution simulation is of special interest because at orbit 0.25 the
averaged Reynolds stress becomes negative. At the same time, the rate
of decline of $\langle \rho \delta v_y^2 \rangle$ decreases, as one
would anticipate from (\ref{balbusaz}). Hence, although at low
resolution grid scale numerical diffusion is the largest loss term in
the angular velocity fluctuation equation, the sink due to the Reynolds
stress is large enough to observe directly. Improved numerical
resolution increases the dynamical Reynolds sink by a greater factor
than it reduces the numerical diffusion!

We next turn to simulations of Keplerian flows at $32^3$, $64^3$ and
$128^3$ grid zone resolutions using the VH1 PPM code. We ran the same
problem with the same initial perturbations as above. Figure 4 shows
the time-history of the perturbed radial and azimuthal kinetic
energies. This plot should be compared with Figure 2; for reference,
we include the $32^3$ and the $128^3$ ZEUS simulation results as
dashed lines. Figure 5 is the Reynolds stress during the first orbit
for all the resolutions and for both algorithms; the PPM runs are the
bold lines.

The PPM results are completely consistent with the ZEUS simulations.
Most striking is the close similarity between a given PPM evolution and
the ZEUS simulation run at twice its resolution. For example, the
history curve of the Reynolds stress in the PPM $32^3$ run lies almost
on top of the ZEUS $64^3$ curve (Fig. 5) through 0.2 orbits in time.
The Reynolds stresses in the $64^3$ and $128^3$ PPM simulations peak at
the same level as the $256^3$ ZEUS simulation, then decline at slightly
different rates beyond 0.2 orbits. The $128^3$ PPM simulation
apparently has less numerical diffusion than the $256^3$ ZEUS model.
Regardless of the relative merits of the two schemes, the succession of
curves with increasing resolution showing the same outcome, done with
two completely independent algorithms, indicates convergence to a
solution near that of the maximum resolution models. In other words,
Keplerian disks would prove stable even if computed at arbitrarily high
resolution.
\subsection{Nonlinear Decay in the Keplerian System}
In simulations of Keplerian differential rotation, the kinetic energy
of the perturbations declines at a rate which itself decreases with
time. Why should there be any decay at all in a stable inviscid
system? Is this decay entirely a numerical artifact?

These simulations begin with perturbations of the form
(\ref{perturb}). The initial power spectrum for the perturbed kinetic
energy thus contains power in the first four wavenumbers only. Once
the evolution begins, nonlinear interactions cause a cascade of
available perturbed kinetic energy into higher wavenumbers.
Dissipation occurs rapidly at the highest wavenumber. Although this
dissipation is numerical, it mimics the behavior of physical
dissipation at the viscous scale. The rate at which energy cascades to
larger wavenumbers, and hence the rate at which the perturbed kinetic
energy declines, should again be a function of numerical resolution and
perturbation amplitude. In this section we investigate these effects,
explore the reasons for the decay of the turbulence, and examine the
properties of the velocity fluctuations that remain at late time.

A study of the Fourier power spectrum of the perturbed kinetic energy
yields important information. Because the background velocity has
shear, we must transform the data into coordinates in which the
shearing box system is strictly periodic, take the Fourier transform,
and then remap the wavenumbers onto the fixed Eulerian system. This
procedure is described in Hawley et al.\ (1995). Figure 6 shows
one-dimensional power spectra, $| \delta {\bf v}(k)|^2$ in $k_x$,
$k_y$, and $k_z$, for the $64^3$ and $128^3$ grid zone Keplerian PPM
simulations discussed in \S4.2. The spectra are shown for orbits 1, 2
and 3, with the dashed lines corresponding to the $64^3$ run and the
solid lines to the $128^3$ model. The initial perturbation spectrum is
constant across the first four wavenumbers (seen in Figure 6 as a
horizontal line).

Immediately after the evolution begins, energy cascades into higher
wavenumbers. Within one third of an orbit, a relatively smooth power
law distribution has been established. As time goes by the energy at
all wavenumbers steadily declines. The power spectrum across the
smallest wavenumbers remains relatively flat but has dropped steadily
from $t=0$. Beyond $k\sim 10$ the spectra drop off as steep power
laws, with the $k_y$ distribution the steepest of all three
directions. Because of the background shearing velocity, transport in
the $y$ direction produces the largest numerical diffusion. The $k_x$
function has the smallest slope. In this case, the background shear
causes low $k_x$ waves to be wrapped into higher wavenumbers, i.e.,
$k_x(t) = k_x(0) - t\, m\, d\Omega/dR$, where $m$ is an azimuthal
wavenumber. The higher resolution curves in Figure 6 have
larger energies compared to the low resolution curve. Aside from this,
the additional grid zones extend the curves out to higher
wavenumber without significant qualitative difference.
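The linear winding of radial wavenumbers quoted above is simple to
evaluate directly (the helper name is ours; the full
shearing-coordinate transform is described in Hawley et al.\ 1995):

```python
def shear_remapped_kx(kx0, m, t, q=1.5, Omega=1.0, R=1.0):
    """Radial wavenumber of a shearing wave:
    k_x(t) = k_x(0) - t * m * dOmega/dR.

    With dOmega/dR = -q Omega / R < 0, any disturbance with azimuthal
    wavenumber m is wound to ever larger |k_x| as time goes on.
    """
    dOmega_dR = -q * Omega / R
    return kx0 - t * m * dOmega_dR
```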

Next we consider the effect of the initial perturbation amplitude on
the rate of decay of the turbulence and the properties of the system at
late times. Experience suggests that if a system is vulnerable to
nonlinear instabilities, large initial perturbation amplitudes will
promote the onset of turbulence. Indeed, this is what was observed in
the marginally Rayleigh-stable runs described in \S4.1. Here we run a
series of low resolution $32^3$ Keplerian simulations that have initial
velocity perturbations with maximum values equal to $\delta v_y/L\Omega
= 1.0$, 0.1, 0.01, and 0.001. The time evolution of the perturbed
kinetic energies in these four runs is shown in Figure 7. All the runs
show rapid decay; contrary to naive expectations, however, the higher
the initial perturbation amplitude the {\it larger} the initial decay
rate of the perturbed kinetic energy.

Figure 8 illustrates why: it shows the 1D Fourier power spectrum for
the largest and smallest initial perturbation runs after 0.25 orbits.
The importance of nonlinear
interactions is increased by larger initial amplitudes. This is why in
\S4.1 nonlinear effects were able (in some cases) to overcome the
stabilizing influence of the Coriolis force when the initial
perturbation amplitudes were increased. Here, however, the main
nonlinear effect promoted by increased perturbation amplitude is to
create a more rapid cascade of energy to high wavenumbers. In
contrast, the $\delta v_y = 0.001L\Omega$ case is dominated by linear
and numerical effects. Energy has been carried into higher $k_x$
wavenumbers by linear shear, and lost from $k_y$ by numerical diffusion
error. The spectrum in $k_z$ is completely flat through the first four
wavenumbers (those initially powered) before dropping precipitously.
Evidence for a strong nonlinear cascade in the largest initial
perturbation runs is also found in the rapid increase in entropy at the
start of the evolution due to thermalization of the initial kinetic
energy in those runs. By orbit 5, the decay rates have been reduced to
a much lower level comparable to that seen in the small amplitude
perturbation runs. The ratio of the kinetic energy at orbit 5 to the
initial value in each of these runs is 0.00042, 0.0042, 0.014, and
0.058. Eventually the fluctuation energy in all these simulations
levels off at a finite, small value. What remains at these late times
are pressure and epicyclic waves, whose amplitude is determined by the
strength of the initial perturbation. The very slow numerical decay of
these long-wavelength linear waves is due to numerical dissipation.
The Reynolds stress oscillates around zero in all runs, with an
amplitude similar to the late-time kinetic energy.

We have argued that the residual kinetic energy represents nothing more
than linear waves left over from the initial perturbations. Their
presence does not imply that Keplerian disks are somehow still
``slightly unstable''; stability certainly does not require that
velocity perturbations die out. Indeed, a complete decay to zero
amplitude would have been puzzling; the motivation of this section,
after all, was to give an account of why there was {\it any\/} decay.
This understood, even low levels of velocity fluctuations might
be of interest in a disk, if they could be sustained indefinitely. Can
one convincingly rule out the possibility that these lingering
fluctuations are somehow feeding off the differential rotation? An
experiment to test this conjecture is to chart the evolution of
perturbations in a $q=0$, constant $\Omega$ flow. In a uniformly
rotating disk, Coriolis forces are present without any background shear
at all. Such a system is rigorously stable; without background shear
there is no source of energy to feed velocity perturbations. At late
times, the noise in a uniformly rotating disk must reflect residual
energy from the initial conditions, not ongoing excitation. Further,
the absence of shear flow will reduce the effective numerical
viscosity; the perturbations will not be advected by the shear flow,
nor will radial wavenumber modes be sheared out to higher values.

The $q=0$ case has been run at two resolutions, $64^3$ and $32^3$, for
comparison with equivalently resolved Keplerian systems. The initial
perturbations have a maximum value $\delta v_y = 0.1 L\Omega$. The
time histories of the perturbed kinetic energy for both the $q=0$ and
the $q=1.5$ $64^3$ simulations are shown in Figure 9. Both angular
velocity distributions show rapid declines in kinetic energy, although
after 10 orbits the rate of decline is greatly reduced. The $32^3$
resolution simulations look similar, except that they have less energy
at late time. The residual energy is due to stable waves that have not
yet been damped by numerical diffusion. Compared to similarly resolved
simulations with a Keplerian rotation profile ($q=1.5$), the $q=0$ models
level out at {\it higher\/} energies. Without advection through the
grid, there is less numerical diffusion, and higher residual wave
amplitudes are preserved. The case for driven residual weak Keplerian
turbulence becomes untenable, if the ``turbulence'' is stronger in a
rigorously stable uniformly rotating disk!
\subsection{The Contrast with Magnetic Turbulence}
Although Keplerian flows have proven to be stable to the local
development of hydrodynamic turbulence, the inclusion of a magnetic
field changes everything, even if the field is weak (subthermal).
Hydrodynamic stability in a differentially rotating system is assured
so long as the Rayleigh criterion $dL/dR > 0$ is satisfied. Magnetic
differentially rotating systems quite generally require $d\Omega/dR >
0$ for stability (Balbus 1995), a condition not satisfied in accretion
disks. With a magnetic field present the stress tensor acquires a
magnetic component proportional to $B_R B_\phi$,
\begin{equation}\label{magstress}
T_{R\phi}= \langle \rho u_R u_\phi - \rho u_{AR}u_{A\phi}\rangle,
\end{equation}
where
\begin{equation}
{\bf u_A} = { {\bf B}\over \sqrt{4\pi\rho}}.
\end{equation}
Most importantly, the way the stress tensor couples to the
fluctuations changes. With the new expression for $T_{R\phi}$
the mean flow equations (\ref{momflux}) and (\ref{enflux}) are
unchanged, but the fluctuation equations become
\begin{equation} \label{magenr}
{1\over2}{\partial\ \over\partial t}\langle \rho (u_R^2 +u_{A\, R}^2)\rangle
+\nabla {\cdot}\langle\quad \rangle=
2\Omega\langle\rho u_R u_\phi \rangle -
\langle u_R {\partial P_{tot} \over \partial R} \rangle - {\rm losses,}
\end{equation}
\begin{equation} \label{magenaz}
{1\over2}{\partial\ \over\partial t}\langle \rho (u_\phi^2 +u_{A\,
\phi}^2)\rangle
+\nabla{\cdot} \langle\quad \rangle =
- 2\Omega\langle\rho u_R u_\phi \rangle
- T_{R\phi}\,{d\Omega\over d\ln R}
- \langle {u_\phi\over R} {\partial P_{tot} \over \partial
\phi}\rangle -{\rm losses}.
\end{equation}
(Terms proportional to $\nabla{\cdot} {\bf u}$ have been dropped,
the fluxes are not shown explicitly, and
$ P_{tot} = P + {B^2/8\pi}$.)
Now the stress tensor no longer works at cross purposes to itself.
There is still Coriolis stabilization in equation (\ref{magenaz}), but
it is not sufficient to overcome the stress--gradient coupling term.
One consequence of this is the now well-understood linear instability
of weak magnetic fields in disks (Balbus \& Hawley 1991; see reviews by
Papaloizou \& Lin 1995, and Balbus \& Hawley 1998). Another is that
outward transport of angular momentum maintains the turbulence
self-consistently by directly tapping into the free energy of
differential rotation.
The different couplings of the Maxwell (magnetic) and Reynolds stresses
can be demonstrated in simulations. Abramowicz, Brandenburg, \& Lasota
(1996) carried out a series of simulations with different values of
background $q$. They found an increase in angular momentum transport
levels roughly in proportion to the background shear to vorticity
ratio, i.e., $q/(2-q)$. This result is best understood by rewriting
the right hand side of (\ref{magenaz}) to obtain
\begin{equation}\label{qstress}
- {1\over R}{dR^2\Omega\over dR}\langle\rho u_R u_\phi\rangle
+ {d\Omega\over d\ln R} \langle\rho u_{AR} u_{A\phi}\rangle.
\end{equation}
Thus the Reynolds (kinetic) stress couples directly to the vorticity
[$=(2-q)\Omega$], and the Maxwell (magnetic) stress couples to the shear
($q\Omega$). In other words, vorticity limits turbulence whereas shear
promotes it.
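These couplings can be tabulated in a few lines; the sketch below (illustrative only, with $\Omega$ in arbitrary units, not code from the simulations) shows the vorticity and shear coefficients and the ratio $q/(2-q)$ that the transport levels roughly track:

```python
# Coupling coefficients for a rotation law Omega ~ R^{-q}:
# the Reynolds stress couples to the vorticity (2-q)*Omega,
# the Maxwell stress to the shear q*Omega.

def vorticity(q, Omega=1.0):
    """Background vorticity (2 - q) * Omega."""
    return (2.0 - q) * Omega

def shear(q, Omega=1.0):
    """Background shear rate q * Omega = -dOmega/dlnR."""
    return q * Omega

def shear_to_vorticity(q):
    """Ratio q / (2 - q) that the transport levels of
    Abramowicz, Brandenburg, & Lasota (1996) roughly track."""
    return shear(q) / vorticity(q)

for q in (0.5, 1.0, 1.5, 1.9):
    print(q, shear_to_vorticity(q))
# Keplerian q = 1.5 gives 3; the ratio diverges as q -> 2.
```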
Here we expand the study of Abramowicz et al.\ by examining a full range of
$q$ values between 0 and 2 in intervals of 0.1 in a local shearing
box. The simulations are of the same type as some of those presented
in Hawley, Gammie \& Balbus (1996). The initial magnetic field is $B_z
\propto \sin(2\pi x/L_x)$ with a maximum corresponding to $\beta =
P/P_{mag} =400$. The box size is $L_x = L_z = 1$, and $L_y = 2\pi$,
and the grid resolution is $32\times 64\times 32$.
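For orientation, the peak field implied by the quoted $\beta$ follows from $\beta = P/P_{mag}$ with $P_{mag}=B^2/8\pi$; a minimal sketch in code units (the pressure value is an assumed placeholder, not taken from the runs):

```python
import math

def bmax_from_beta(pressure, beta):
    """Peak field B satisfying beta = P / (B^2 / 8 pi)."""
    return math.sqrt(8.0 * math.pi * pressure / beta)

def initial_bz(x, Lx, pressure, beta=400.0):
    """Initial vertical field B_z ~ sin(2 pi x / Lx), normalized so
    that its maximum corresponds to the chosen plasma beta."""
    return bmax_from_beta(pressure, beta) * math.sin(2.0 * math.pi * x / Lx)

B0 = bmax_from_beta(pressure=1.0, beta=400.0)  # code units
print(B0)
```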
Figure 10 shows time-averaged Reynolds and Maxwell stresses as a
function of $q$ for the full range of simulations. The magnetic
instability is present for all $q>0$. Equation (\ref{qstress})
provides no direct limit on the Maxwell stress; it acquires whatever
level the nonlinear saturation of the instability can support.
However, if the turbulence is to be sustained from the differential
rotation, not pressure forces, the Maxwell stress must in general
exceed the Reynolds stress by more than the factor $(2-q)/q$, the ratio
of the vorticity to the shear. In practice the ratio of the Maxwell stress to
Reynolds stress is significantly greater than this, particularly in the
range $0<q<1$. In this regime the vorticity is so strongly stabilizing
that the Reynolds stress is kept to a minimum even when fluid
turbulence is created and maintained by the magnetic instability. When
$q>1$, however, the shear and vorticity become comparable; the Reynolds
and Maxwell stresses both climb with increasing $q$. As $q\rightarrow
2$, the vorticity goes to zero and there are no constraints on the
Reynolds stress from (\ref{qstress}). The total stress increases
dramatically as the flow enters the domain of the nonlinear
hydrodynamical instability. When $q>2$, of course, the flow is
Rayleigh unstable.
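The sustainability criterion stated above can be phrased as a one-line check. The sketch below (with made-up stress values; the inequality, not the numbers, is the point) tests whether the shear source term can outweigh the vorticity sink:

```python
def net_shear_injection(q, reynolds, maxwell, Omega=1.0):
    """Net rate at which differential rotation feeds the azimuthal
    fluctuations: shear term (source, times the Maxwell stress)
    minus vorticity term (sink, times the Reynolds stress)."""
    return q * Omega * maxwell - (2.0 - q) * Omega * reynolds

def sustainable(q, reynolds, maxwell):
    """True if maxwell / reynolds > (2 - q) / q."""
    return net_shear_injection(q, reynolds, maxwell) > 0.0

# Keplerian q = 1.5: (2-q)/q = 1/3, so a modest Maxwell stress suffices.
assert sustainable(1.5, reynolds=1.0, maxwell=0.5)
# At q = 0.5 the same stresses fail, since (2-q)/q = 3.
assert not sustainable(0.5, reynolds=1.0, maxwell=0.5)
```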
\section{Discussion}
In this paper we have carried out a series of numerical simulations to
explore further the local hydrodynamical stability properties of
Keplerian disks, and the role that the Reynolds stress plays in
determining that stability. The key conceptual points are embodied in
the moment equations (\ref{balbusr}) and (\ref{balbusaz}) for
hydrodynamics, and (\ref{magenr}) and (\ref{magenaz}) for
magnetohydrodynamics. The differences in those equations are clearly
manifest in simulations, both hydrodynamic and MHD. The Maxwell stress
couples to the shear, the Reynolds stress to the vorticity. While the
former maintains turbulence, the latter works against it. Thus, while
magnetized disks are unstable, and naturally create and sustain
turbulence, a nonmagnetized Keplerian flow possesses only the Reynolds
stress, and that cannot by itself sustain turbulence. The accumulating
evidence, both numerical and analytic, from this paper and earlier
works (BHS; Stone \& Balbus 1996), points clearly to the conclusion
that Keplerian flows are locally hydrodynamically stable, linearly and
nonlinearly.
It has been traditional to point to the nonlinear instabilities
observed in simple shear flows to support the conjecture that Keplerian
disks behave similarly. Such reasoning, however, neglects the critical
difference between such flows, namely the dynamical stabilization due
to the Coriolis force. Linear stabilization is measured by the
epicyclic frequency, $\kappa^2 = 2(2-q)\Omega^2$. As
$q\rightarrow 2$, $\kappa^2 \rightarrow 0$, and dynamical stabilization
becomes weaker and weaker. At $q=2$ it vanishes completely; the flow
becomes equivalent to a simple Cartesian shear and subject to the
nonlinear instabilities to which simple shear flows are prone. Viewed
in this light, the nonlinear instability of a Cartesian shear flow is
less a generic property than a singular case lying between the linearly
unstable and linearly stable regimes. The nonlinear instability exists
not because nonlinear forces can generally overcome linear restoring
forces, but because those linear forces vanish at the delimiting
boundary between
Rayleigh stability ($q<2$) and instability ($q>2$).
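The epicyclic frequency quoted above is trivial to evaluate; a small sketch (in units of $\Omega$):

```python
import math

def kappa(q, Omega=1.0):
    """Epicyclic frequency, kappa^2 = 2 (2 - q) Omega^2."""
    k2 = 2.0 * (2.0 - q) * Omega ** 2
    if k2 < 0.0:
        raise ValueError("q > 2: Rayleigh unstable, kappa^2 < 0")
    return math.sqrt(k2)

print(kappa(0.0))  # uniform rotation: kappa = 2 Omega
print(kappa(1.5))  # Keplerian: kappa = Omega
print(kappa(2.0))  # marginal case: the linear restoring force vanishes
```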
This is highlighted by our study of the transition between stability
and instability. By varying $q$ to values near to but slightly less
than 2, we can explore the dynamics of systems close to the marginal
stability limit. We find that when stabilization from the Coriolis
term is very weak, both the amplitude of the initial perturbations and
the size of the numerical diffusion error (grid resolution) can
determine whether the velocity perturbations amplify or decay. This is
entirely consistent with the experimental configurations that are
linearly stable but which nevertheless become unstable. Such
nonlinearly unstable systems are precisely those where a large shear
dominates over other factors (e.g., a rapidly rotating outer cylinder
in a Couette experiment). In these fluid experiments the transition to
turbulence depends on the amplitude of the perturbing noise and the
Reynolds number of the flow. When we reduce the strength of the
Coriolis force by varying $q$ just below the marginally stable value
$q=2$, we produce a similar dominance of shear and again find an
instability that depends on the initial perturbation amplitude and the
(numerical) viscosity. We have understood this in terms of epicyclic
orbits, which are highly distorted near $q =2$, almost
indistinguishable from background shear. Once $q$ is sufficiently
below $q=2$, however, Coriolis stabilization is powerful, epicycles are
rounder, and perturbation growth is no longer possible.
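The elongation of epicycles near $q=2$ can be quantified with the standard shearing-sheet epicycle solution, $x = A\cos\kappa t$, $y = -(2\Omega/\kappa)A\sin\kappa t$, whose azimuthal-to-radial axis ratio is $2\Omega/\kappa$ (a sketch based on that textbook solution, assumed here rather than derived above):

```python
import math

def epicycle_axis_ratio(q):
    """Azimuthal-to-radial axis ratio 2 Omega / kappa of a linear
    epicycle, with kappa^2 = 2 (2 - q) Omega^2."""
    kappa_over_omega = math.sqrt(2.0 * (2.0 - q))
    return 2.0 / kappa_over_omega

print(epicycle_axis_ratio(1.5))   # Keplerian: 2, a modestly distorted ellipse
print(epicycle_axis_ratio(1.98))  # ~10: nearly indistinguishable from shear
```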
This conclusion is greatly strengthened by experiments in which the
Keplerian system is evolved with different initial perturbations and
different grid resolutions. First we explored the impact of finite
resolution. Recall that the effect of numerical diffusion error on
flow structure (the turbulent perturbations) will be as an additional
loss term in (\ref{balbusr}) and (\ref{balbusaz}). Even if we were to
assume an ideal scheme with no numerical losses, however, the sink due
to the Coriolis term in (\ref{balbusaz}) would remain unchanged. The
simulations with various $q$ values near but just below 2 provide a
quantitative measure of just how big that term needs to be to stabilize
the flow, and an estimate of the importance of numerical viscosity as a
loss term. Although we find that increasing the effective Reynolds
number (i.e., by increasing the resolution and thus reducing
numerical diffusion) can convert a marginally stable flow into a
marginally unstable one, one should not conclude that further increases
will have a similar effect on strongly stable Keplerian flows. Vortex
stretching can ``sneak'' into a highly elongated epicycle, but it
cannot do so in a rounded, strongly stable Keplerian disturbance.
We have investigated the possibility of diffusive numerical
stabilization with a series of resolution experiments run with two
completely different algorithms. Keplerian simulations were run at 4
resolutions from $32^3$ up to $256^3$ using the ZEUS hydrodynamics
scheme, and 3 resolutions from $32^3$ up to $128^3$ using the PPM
algorithm. The results from all these simulations were very similar.
No hint of instability was seen in any of these simulations, nor was
there any trend observed which could conceivably suggest instability in
an extrapolation to arbitrarily high resolution. Furthermore, not just
decaying trends, but detailed numerical behavior was reproduced in two
distinct codes with very different numerical diffusion properties. The
case that numerical diffusion is dominating and stabilizing these runs
is untenable.
Next, a series of experiments explored a range of initial perturbation
amplitudes. The largest had initial fluctuations that were comparable
to the background rotation velocity $L\Omega$. We found that the {\it
larger} the initial perturbation, the more rapid the decay of the
resulting turbulence. Far from promoting instability, stronger initial
perturbations actually increase the rate of decay of the perturbed
kinetic energy. When finite amplitude perturbations are added to the
Keplerian system they rapidly establish a nonlinear cascade of energy
to higher wavenumbers. This energy is then thermalized (or lost,
depending upon the numerical scheme and the equation of state) at the
high wavenumber end. Linear amplitude perturbations do not decay via
such a cascade, and damp at much lower rates.
Turbulence decays in homogeneous systems lacking an external energy
source. A uniformly rotating disk is one such system. A Keplerian
system is more interesting because decay is observed
despite the presence of free energy in the differential rotation which
could, in principle, sustain the turbulence. This does not happen,
however, because the coupling of the Reynolds stress to the background
vorticity simply makes it impossible to power simultaneously both the
radial and azimuthal velocity fluctuations that make up the Reynolds
stress. Residual levels of fluctuations were even lower in the
Keplerian disk than they were in the uniformly rotating disk.
This behavior stands in contrast to the MHD system. The magnetic
instability produces not just turbulent fluctuations, but the {\em
right kind\/} of fluctuations: positive correlations in $u_R$ and
$u_\phi$, and in $B_R$ and $B_\phi$. It is because the
magnetorotational instability is driven by differential rotation that
the critical $R$--$\phi$ correlations exist. Unless $T_{R\phi}$ were
positive, energy would not flow from the mean flow into the
fluctuations. Hydrodynamical Cartesian shear flows maintain the
correlation physically by ensnaring vortices (a nonlinear process);
magnetic fields do this by acting like springs attached to the fluid
elements (a linear process). Sources of turbulence other than the
differential rotation (or simple shear) do not force a correlation
between $u_R$ and $u_\phi$, and generally do not lead to enhanced
outward transport.
Magnetic fields, then, are uniquely suited to be the agents responsible
for the behavior of $\alpha$ disks. While this conclusion has
important implications for fully ionized disks, its implications for
protostellar disks are yet more profound. If such disks are unable to
sustain adequate levels of magnetic coupling, or unable to sustain such
coupling throughout their radial extent, angular momentum transport may
well be tiny or nonexistent. Angular momentum transport, when it
occurs, will have to be accomplished through global nonaxisymmetric
waves, driven, for example, by self-gravitational instabilities. Even
if triggered by, say, convective instability, turbulence would likely
prove to be transient: it cannot be sustained from the only source of
energy available, namely the differential rotation. More generally,
nonmagnetic disks will not be describable in terms of the usual
$\alpha$ model.
Phenomenologically much less is known of MHD turbulence than of
hydrodynamical turbulence. There is very little laboratory experience to
draw upon, in contrast to the rich literature on hydrodynamical Couette
flow. The observational complexity of many disk systems suggests the
presence of a versatile and eruptive source of turbulence; magnetic
fields seem an obvious candidate for producing such behavior. The
physics behind magnetic reconnection, large scale field topology,
ion-neutral interactions, magnetic Prandtl number, and global dynamos
is likely to prove at least as rich and complex as the behavior of
astrophysical disks.
This work is supported in part by NASA grants NAG5-3058, NAG5-7500, and
NSF grant AST-9423187. Simulations were carried out with support from
NSF Metacenter computing grants at the Pittsburgh Supercomputing Center
and at NPACI in San Diego.
\clearpage
\begin{center}
{\bf References}
\end{center}
\refindent Abramowicz, M., Brandenburg, A., \& Lasota, J.-P. 1996,
MNRAS, 281, L21
\refindent Balbus, S.~A. 1995, ApJ, 453, 380
\refindent Balbus, S.~A., \& Hawley, J. F. 1991, ApJ, 376, 214
\refindent Balbus, S.~A., \& Hawley, J.~F. 1998, Rev Mod Phys, 70, 1
\refindent Balbus, S.A., Hawley, J.F., \& Stone, J.M. 1996, ApJ, 467,
76 (BHS)
\refindent Cabot, W. 1996, ApJ, 465, 874
\refindent Cameron, A.~G.~W. 1978, Moon and Planets, 18, 5
\refindent Colella, P., \& Woodward, P. R. 1984, J. Comput. Phys., 54, 174
\refindent Coles, D. 1965, J. Fluid Mech, 21, 385
\refindent Crawford, J.~A., \& Kraft, R.~P. 1956, ApJ, 123, 44
\refindent Cuzzi, J.~N., Dobrovolskis, A.~R., \& Hogan, R.~C. 1996, in
Chondrules and the Protoplanetary Disk, ed. R. H. Hewins, R. H.
Jones, \& E. R. D. Scott (Cambridge: Cambridge Univ. Press), 35
\refindent Drazin, P. G., \& Reid, W. H. 1981, Hydrodynamical Stability
(Cambridge: Cambridge University Press)
\refindent Hawley, J.~F., Gammie, C.~F., \& Balbus, S.~A. 1995, ApJ, 440,
742
\refindent Hawley, J.~F., Gammie, C.~F., \& Balbus, S.~A. 1996, ApJ, 464,
690
\refindent Kley, W., Papaloizou, J. C. B., \& Lin, D. N. C. 1993, ApJ,
416, 679
\refindent Lin, D.~N.~C., \& Papaloizou, J.~C.~B. 1980, MNRAS, 191, 37
\refindent Papaloizou, J.~C.~B., \& Lin, D.~N.~C. 1995, ARAA, 33, 505
\refindent Prinn, R.~G. 1990, ApJ, 348, 725
\refindent Ryu, D., \& Goodman, J. 1992, ApJ, 388, 438
\refindent Shakura, N.~I., \& Sunyaev, R.~A. 1973, A\&A, 24, 337
\refindent Stone, J.~M., \& Balbus, S.~A. 1996, ApJ, 464, 364
\refindent Stone, J.~M., \& Norman, M.~L. 1992, ApJS, 80, 753
\refindent Tennekes, H., \& Lumley, J. L. 1972, A First Course in
Turbulence (Cambridge: MIT Press)
\refindent Zahn, J-P. 1991, in Structure and Emission Properties of
Accretion Disks, C. Bertout, S. Collin-Souffrin, J-P. Lasota, \& J.
Tran Thanh Van eds (Gif sur Yvette, France: Editions Fronti\`eres)
\newpage
\begin{figure}
\plotone{hbw1.ps}
\caption{Evolution of kinetic energy of velocity perturbations for
background rotation laws $\Omega \propto R^{-q}$ near the marginally
stable constant angular momentum distribution $q=2$. Selected curves
are labeled by their $q$ value. Top: Low resolution $32^3$ grid zone
simulations with initial maximum perturbation $\delta v_y =
0.1L\Omega$. Only the upper two curves ($q=2$ and $q=1.99$) show any
perturbation amplification. Middle: Simulations with $64^3$ grid zone
resolution and initial perturbation amplitude $\delta v_y =
0.1L\Omega$. The 6 curves correspond to $q=2.0$ to $q=1.94$ in
increments of 0.01. The $q=1.95$ curve remains level while the
$q=1.94$ declines with time. Bottom: Simulations with $64^3$ grid
zones and initial perturbation amplitude $\delta v_y = 0.01L\Omega$.
The 5 curves range from $q=2.0$ to $q=1.96$ in increments
of 0.01. Only the $q=2.0$ curve shows growth.
}
\end{figure}
\begin{figure}
\plotone{hbw2.ps}
\caption{
Evolution of $v_x$ (top) and $v_y$ (bottom) fluctuation
kinetic energy for simulations with resolutions of $32^3$, $64^3$,
$128^3$, and $256^3$ grid zones. Curves are labeled by resolution.
}
\end{figure}
\begin{figure}
\plotone{hbw3.ps}
\caption{
Time evolution of perturbed angular velocity kinetic energy
and volume-averaged Reynolds stress for the $32^3$ simulation. The
abrupt change of slope in $\rho\delta v_y^2/2$ (dashed line added for
reference) that occurs at $t=0.24$ (indicated by vertical line)
corresponds to the point in time when the Reynolds stress becomes
negative. A negative Reynolds stress provides a source for angular
velocity fluctuation energy; a positive Reynolds stress is a sink.
}
\end{figure}
\begin{figure}
\plotone{hbw4.ps}
\caption{
Evolution of $v_x$ (top) and $v_y$ (bottom) fluctuation
kinetic energy for 3 simulations using the PPM algorithm with $32^3$,
$64^3$, and $128^3$ grid zones (bold curves). The $32^3$ and $128^3$
grid zone simulations from Figure 2 (dashed curves) are included for
reference.
}
\end{figure}
\begin{figure}
\plotone{hbw5.ps}
\caption{
Time evolution of the Reynolds stress over the first orbit
in the Keplerian simulations for a range of resolutions and for both
the ZEUS and PPM (bold lines) numerical algorithms. The peak in the
stress is labeled by the corresponding resolution and algorithm.
}
\end{figure}
\begin{figure}
\plotone{hbw6.ps}
\caption{
One dimensional power spectrum $|\delta v(k)|^2$
for the $128^3$ (solid line) and $64^3$ (dashed line) PPM Keplerian
simulation at 1, 2 and 3 orbits. The horizontal line extending out to
$k/2\pi = 4$ is the power spectrum of the initial perturbation.
}
\end{figure}
\begin{figure}
\plotone{hbw7.ps}
\caption{
Time evolution of the perturbation kinetic energy in four
$32^3$ grid zone simulations of Keplerian shearing systems. The curves
are labeled by the maximum amplitude of the initial perturbations.
Larger initial perturbations show a larger rate of decay of the
perturbed kinetic energy.
}
\end{figure}
\begin{figure}
\plotone{hbw8.ps}
\caption{
One dimensional power spectrum $|\delta v(k)|^2$ at 0.25
orbits for a $32^3$ large amplitude perturbation simulation (solid line)
and a $32^3$ small amplitude perturbation simulation (dashed line).
The curves are labeled by their initial maximum perturbation amplitude.
}
\end{figure}
\begin{figure}\plotone{hbw9.ps}
\caption{
Time evolution of the perturbed kinetic energy in a
constant $\Omega$ simulation, labeled $q=0$, and a
Keplerian simulation, labeled $q=1.5$.
}
\end{figure}
\begin{figure}
\plotone{hbw10.ps}
\caption{
Reynolds stress (stars) and Maxwell stress (diamonds) for a
series of MHD shearing box simulations with different background
angular velocity distributions $q$. Stress values are time-averaged
over the entire simulation. Error bars correspond to one standard
deviation in the stress values.
}
\end{figure}
\end{document}
\subsection{The model}
\label{sb:model}
We treat the problem of a one-dimensional electron coupled to acoustic phonons
while tunneling through a rectangular barrier of length $L$. The potentials
restricting the electron's motion do not affect the phonons, which move freely
through the bulk and are therefore treated as three dimensional. Since we are
interested in the effect of the zero-point fluctuations of the phonon field on
the electron tunneling probability, we consider the system at zero
temperature. In practice this means that the temperature is sufficiently low;
this condition, as well as the finite-temperature case, is discussed in the
Appendix. Coupling the electron to the phonon field turns this low-dimensional
quantum-mechanical problem into a field-theoretical one. A convenient way to
approach such a problem is via the formalism of path integrals \cite{Hibbs65}.
The first stage in the calculation of the transition probability for the
tunneling process is the calculation of the retarded Green function. We
express the energy dependent retarded Green function as the Fourier transform
of the time dependent propagator,
\begin{eqnarray}
\label{eq:prop}
& & K (x_{b}, u_{{\bf q} f} ,E \mid x_{a}, u_{{\bf q} i}) =
\int_{0}^{\infty} K \left ( x_{b}, u_{{\bf q} f},T \mid x_{a},
u_{{\bf q} i} \right )
\exp{ \left ( \frac{\imath E \cdot T}{\hbar}\right ) } dT \ ,
\end{eqnarray}
that can be written as the combined path integral of the electron and phonons:
\begin{eqnarray}
\label{eq:propagator}
K \left( x_{b}, u_{{\bf q} f},T \mid x_{a}, u_{{\bf q} i} \right) & = &
\int {\cal D}x \int {\cal D}u_{\bf q} \
\exp \left\{ \frac{\imath}{\hbar}
\left[
{\cal S}_{el}(x) + {\cal S}_{ph}(u_{\bf q}) + {\cal S}_{int}(u_{\bf q},x)
\right]
\right\} ,
\end{eqnarray}
where $x$ is the electron coordinate, $u_{\bf q} $ are the Fourier
coefficients of $u(x,t)$, which is the $x$ component of the displacement
vector and $x_{a}, u_{{\bf q} i}$ ; $x_{b}, u_{{\bf q} f}$ are the initial and
final coordinates of the electron and phonons respectively. The limits over
the energy Fourier transform result from the fact we are calculating the
retarded Green function. $ {\cal S}_{el} $ is the action of the electron in
the absence of phonons, given by:
\begin{equation}
{\cal S}_{el} = \int_{0}^{T} dt \left ( \frac{m}{2} {\dot x}^2 -
V \right),
\label{eq:Selectron}
\end{equation}
where, $m$, is the effective electron mass, and $V$ is the constant potential
height of the rectangular barrier. ${\cal S}_{ph}$ is the action of the
uncoupled phonon field:
\begin{equation}
{\cal S}_{ph} = \int_{0}^{T} dt \sum_{q} \left (
\frac{\rho}{2} {\mid \dot u_{\bf q}\mid}^{2} - \frac{\rho}{2} \cdot
{{\omega}_{q}}^2 {\mid u_{\bf q}\mid}^{2} \right) ,
\label{eq:Sphonon}
\end{equation}
where $ \rho $ is the crystal density and $ u_{\bf -q} = {u_{\bf q}}^*$. Since
we are treating acoustic phonons, $ {\omega}_{q} = q \cdot s $, $s$ being the
sound velocity. ${\cal S}_{int} $ is the action associated with the
interaction between the electron and the phonon environment:
\begin{equation}
{\cal S}_{int} = - \int_{0}^{T} dt V_{int}\ .
\label{eq:Sinteraction}
\end{equation}
We will treat two electron-phonon coupling mechanisms, the piezoelectric and
the deformation potential, and consider crystals of cubic symmetry. We regard
this symmetry as the most important one because most III-V compounds belong to
this class\cite{re:effective mass}. For the piezoelectric coupling,
\begin{equation}
V_{int}^{piezo} =
\frac{1}{\sqrt{{V}_{vol}}}
\sum_{\bf{q}\nu} \Xi_{\bf{q}\nu} u_{\bf{q}\nu} M_{{\bf q}_{\perp}} \exp (iq_{x} x) ,
\label{eq:mod.piezo}
\end{equation}
where
\begin{equation}
\Xi_{\bf{q}\nu} = {4\pi e\over\epsilon q^{2}} \ \beta_{ijk}q_{i}q_{j}e_{k}^{\nu} \ ,
\label{eq:mod.7}
\end{equation}
where $\beta_{ijk}$ is the piezoelectric modulus, $e$ is the electron charge,
$\epsilon$ is the dielectric constant, $e_{k}^{\nu}$ is the polarization
vector of the $\nu$-th phonon branch, and ${V}_{vol}$ is the normalization
volume. $M_{{\bf q}_{\perp}}$ is the matrix element of the phonon exponent,
$\exp(i{\bf q}_{\perp}{\bf r}_{\perp})$, between the wave functions describing
the electron quantization in the cross section of the wire, ${\bf q}_{\perp}$
and ${\bf r}_{\perp}$ are the phonon wave vector and electron radius vector in
the cross section plane. In cubic crystals the only nonzero components of the
piezoelectric modulus are $\beta_{xyz}=\beta_{yxz}$ and all their
permutations\cite{landau_ecm}. They are all equal, and we designate them by
$\beta$.
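In a cubic crystal the contraction $\beta_{ijk}q_{i}q_{j}e_{k}^{\nu}$ therefore collapses to $2\beta(q_xq_ye_z + q_yq_ze_x + q_zq_xe_y)$; a small sketch with unit constants (illustrative only) makes the resulting anisotropy explicit:

```python
import math

def piezo_contraction(q, e, beta=1.0):
    """beta_ijk q_i q_j e_k for cubic symmetry, where only
    beta_xyz and its permutations (all equal to beta) survive:
    2 * beta * (qx qy ez + qy qz ex + qz qx ey)."""
    qx, qy, qz = q
    ex, ey, ez = e
    return 2.0 * beta * (qx * qy * ez + qy * qz * ex + qz * qx * ey)

# A longitudinal phonon along a cube axis does not couple at all:
print(piezo_contraction((1.0, 0.0, 0.0), (1.0, 0.0, 0.0)))  # 0.0
# A longitudinal phonon along [111] couples maximally:
n = 1.0 / math.sqrt(3.0)
print(piezo_contraction((n, n, n), (n, n, n)))
```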
The deformation potential couples electrons only with the longitudinal phonon
mode and
\begin{equation}
V_{int}^{def} = \frac{1}{\sqrt{{V}_{vol}}} \sum_{q} i\Lambda \vert q \vert
u_{{\bf q}l} M_{{\bf q}_{\perp}}\exp (iq_{x} x ) \ .
\label{eq:mod.def}
\end{equation}
Here $\Lambda$ is the deformation potential constant.
We neglect the screening of $V_{int}$. Underneath the potential barrier there
are no electrons and the screening by remote electrons is small.
\subsection{Transition amplitude and transition probability}
\label{sb:trp}
We can now define the transition amplitude through the use of the retarded
Green function (\ref{eq:propagator}). $K (x_{b}, u_{{\bf q} f} ,E \mid x_{a},
u_{{\bf q} i})$ is the amplitude to go from initial phonon coordinate $u_{{\bf
q} i}$ to final coordinate $u_{{\bf q} f}$ and from electron coordinate
$x_{a}$ to $x_{b}$, for a given energy $E$ of the joint system. (In all
intermediate calculations we suppress the phonon branch index to
simplify the notation.) The transition amplitude for the joint system to go
from a phonon state designated by $\psi_{0} \left ( u_{{\bf q} i}
\right ) $ to a state $\psi_{n}
\left ( u_{{\bf q} f} \right ) $ is expressed by $K (x_{b}, u_{{\bf q} f} ,E
\mid x_{a}, u_{{\bf q} i})$ in the following manner :
\begin{equation}
\label{eq:ampli}
A_{n b,0 a} = \int du_{{\bf q} i} du_{{\bf q} f}
\psi_{n}^* \left ( u_{{\bf q} f} \right )
K (x_{b}, u_{{\bf q} f} ,E \mid x_{a}, u_{{\bf q} i})
\psi_{0} \left ( u_{{\bf q} i} \right ) \ .
\end{equation}
We treat the problem at zero temperature, so the phonons are assumed
to be initially in the ground state $\psi_{0} \left ( u_{{\bf q} i} \right )$.
The transition probability is the absolute value squared of the
transition matrix element. Since we are not interested in the final phonon
configuration, final phonon states are summed over. From the completeness
relation we get the following delta function:
$ \delta\left ( u_{{\bf q} f} - {\tilde u}_{{\bf q} f} \right )$.
We end up with the following expression for the electron transmission
probability:
\begin{eqnarray}
\label{eq:probab}
P_{ b,a} & = & \sum_{n} {\mid A_{n b,0 a} \mid}^2 = \nonumber \\
& = & \int du_{{\bf q} i} \ d{\tilde u}_{{\bf q} i} \ du_{{\bf q} f} \
\psi_{0}^* \left ( {\tilde u}_{{\bf q} i} \right )
K (x_{b}, u_{{\bf q} f} ,E \mid x_{a}, u_{{\bf q} i})
{\tilde K}^* (x_{b}, u_{{\bf q} f} ,E
\mid x_{a}, {\tilde u}_{{\bf q} i})
\psi_{0} \left ( u_{{\bf q} i} \right ) \ .
\end{eqnarray}
This equation defines the transmission coefficient.
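The sum over final phonon states is just the completeness relation; in a finite-dimensional toy model it reduces to the matrix identity $\sum_n |\langle\psi_n|K|\psi_0\rangle|^2 = \langle\psi_0|K^\dagger K|\psi_0\rangle$. A sketch of this point (a toy kernel, unrelated to the actual path integral):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 6
# Toy (non-unitary) kernel standing in for K(x_b, u_f, E | x_a, u_i):
K = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
psi0 = np.zeros(N)
psi0[0] = 1.0                       # "ground state"

# Any complete orthonormal set of final states works; build one via QR.
basis, _ = np.linalg.qr(rng.standard_normal((N, N)))

summed = sum(abs(basis[:, n].conj() @ K @ psi0) ** 2 for n in range(N))
direct = (psi0.conj() @ K.conj().T @ K @ psi0).real
assert np.isclose(summed, direct)   # completeness: the final states drop out
```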
\section{Main approximations}
\label{sec:ma}
The usual approach to the calculation of the functional integral in
Eq.(\ref{eq:propagator}), pioneered by Caldeira and
Leggett\cite{Caldeira 81}, is to integrate out the phonon degrees of
freedom. This can be done exactly because the action is quadratic in $u_{\bf
q}$. For a realistic electron-phonon coupling, however, this results in a
complicated effective potential for the electron, since the coupling mechanism
is non-linear in the electron coordinate; a simpler approach is therefore
needed.
We use the following approach. We consider the situation
(which is typical experimentally) when (i) the barrier is so high that the
tunneling can be treated semi-classically and (ii) the interaction energy
between the electron and the phonons is small compared to the height of the
barrier. The first point allows us to use the semi-classical approximation to
integrate over all electron paths. The main contribution to this
integral comes from a single optimal trajectory that satisfies the equation
\begin{equation}
m\ddot{x} = - {\partial V_{int}\over\partial x} \ ,
\label{eq:ma.1}
\end{equation}
(for a rectangular barrier $\partial V/\partial x=0$). This equation has to be
solved for given $u(x,t)$. The second point allows us to consider $V_{int}$ in
Eq.(\ref{eq:ma.1}) as a small perturbation and thus it can be neglected. As
usual in tunneling problems,\cite{McLaughlin72,Schulman81} we change the time
$t$ to $-it$ (and $T$ to $iT$) so that the first integral of
Eq.(\ref{eq:ma.1}) has the form
\begin{equation}
{m\dot{x}^{2}\over2} = V - E^{\prime} \ .
\label{eq:ma.2}
\end{equation}
Here $E^{\prime}$ is the integration constant that can be considered as the
electron energy. The value of $E^{\prime}$ is determined from the boundary
conditions $x(0)=x_{a}$ and $x(T)=x_{b}$. For a rectangular barrier when
$x_{a}$ and $x_{b}$ are fixed $E^{\prime}$ is a function of $T$ only. Without
phonon emission $E^{\prime}$ eventually appears equal to the energy of the
incident electron $E$ that is also the total energy. If during tunneling
phonons are emitted then the electron energy $E^{\prime}$ appears smaller than
the total energy.
Thus, an important parameter in the tunneling problem is the electron
velocity $v=\sqrt{2(V-E^{\prime})/m}$. If $V-E^{\prime}$ is around 10 meV or
larger, then in GaAs, where $m=0.067m_{0}$ ($m_{0}$ is the free electron
mass), $v$ is of order $2\times10^{7}$ cm/s or larger. This velocity exceeds
the sound velocity $s$ by about two orders of magnitude. That means that
frequencies of phonons with the wave length around the length of the barrier
$L$ are much smaller than the inverse time necessary for an electron to
traverse the barrier. Actually the wave length of a typical phonon interacting
with the tunneling electron is limited not by the length of the barrier but by
the width of the constriction $a$, which is smaller than $L$. However,
practically the ratio $a/L$ for a constriction where tunneling is still
measurable is not very small and the assumption that the typical phonon
frequencies are much smaller than the inverse traverse time is justified. This
assumption means that during the traverse time ($L/v$) the phonon potential
nearly does not change. In the extreme case we can consider it constant. We
call this case the static approximation. A similar approximation was used by
Flynn and Stoneham \cite {Flynn 70} and later by Kagan and Klinger \cite
{Kagan 76} treating the problem of quantum diffusion of atomic particles.
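The velocity estimate above is easy to check numerically (SI constants rounded; a back-of-the-envelope sketch, with an assumed value for the GaAs sound speed):

```python
import math

M0 = 9.109e-31   # free electron mass, kg
EV = 1.602e-19   # joules per eV

def tunnel_velocity(v_minus_e_mev, m_eff=0.067):
    """Under-barrier velocity v = sqrt(2 (V - E') / m), m = m_eff * m0."""
    return math.sqrt(2.0 * v_minus_e_mev * 1e-3 * EV / (m_eff * M0))

v = tunnel_velocity(10.0)            # V - E' = 10 meV in GaAs
print(v * 100.0, "cm/s")             # ~2.3e7 cm/s
s = 5e5                              # LA sound speed in GaAs, cm/s (assumed)
print(v * 100.0 / s)                 # ratio v/s
```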
It should be mentioned that the physical significance of the parameter $\omega
t_{0}$, $\omega$ being the phonon frequency, $t_{0}$ the tunneling time as
defined by B\"{u}ttiker and Landauer \cite{Landauer82}, $t_{0}= \frac{L}{v}$
(in our case of a rectangular barrier) was also noted in the two papers of
Ueda and Ando \cite{Ueda 94}, but in their problem the typical frequency
$\omega$ was a tunable parameter defined by the properties of the electric
circuit.
In the static approximation the problem is dramatically simplified because we
need to consider electron motion in a static potential and the stationary
Schr\"odinger equation is enough for this. The integration with respect to
phonon coordinates is reduced to the integration only with respect to $u_{{\bf
q}i}$. The derivation of the corresponding expression for the transmission
coefficient in the static approximation from Eq.(\ref{eq:probab}) with the
help of the expansion in $s/v$ is given in the Appendix.
In the static approximation we can also add $V_{int}^{stat}=V_{int}(u_{{\bf
q}i})$ to the right-hand side of Eq.(\ref{eq:ma.2}). That immediately shows that the small
parameter characterizing the interaction with phonons is $V_{int}/(V-E)$. In
the calculation of the transmission coefficient we take into account only
terms of the first order in this parameter. So we ignore corrections to the
trajectory $x=vt$ coming from the interaction with phonons. It is obvious in
the calculation of $V_{int}$ and this is also true for ${\cal S}_{el}$.
Indeed, the trajectory $x=vt$ is found from the minimization of ${\cal
S}_{el}$ and any correction to this functional contains a correction to the
trajectory squared. Such an approximation means, in particular, that we
neglect all polaron effects. The physical meaning of the phonon effect in this
approximation is that different phonon configurations change the barrier and
the main contribution to the transmission comes from the configurations
corresponding to the barrier being a bit lower in average, so that the
tunneling probability is higher. It is worthwhile to note that the average
(over configurations) height of the barrier does not change because $\langle
V_{int}\rangle=0$ but nevertheless the average of the tunneling exponent is
modified due to $V_{int}$ (similar to, e.g., $\langle\exp(V_{int})\rangle>1$).
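The statement $\langle\exp(V_{int})\rangle>1$ for a zero-mean fluctuation is just Jensen's inequality, and can be checked with a short numerical sketch; the Gaussian width used below is an arbitrary illustrative value, not a parameter of the model:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.5  # rms fluctuation of the tunneling exponent (illustrative value)
delta = rng.normal(0.0, sigma, 1_000_000)  # zero-mean barrier fluctuations

# Although <delta> = 0, the averaged exponential exceeds 1:
# <exp(-delta)> -> exp(sigma**2 / 2) for a Gaussian distribution.
mean_gain = np.exp(-delta).mean()
print(mean_gain, np.exp(sigma**2 / 2))
```

Configurations that lower the barrier are weighted exponentially more strongly than those that raise it, which is why the average transmission grows even though the average barrier height is unchanged.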
The tunneling across a static barrier is an elastic process and the energy of
the incident electron and the tunneled one is the same. Phonon emission and
the corresponding change of the electron energy come about only in the first
approximation in $s/v$ when the phonon dynamics is taken into account. The
calculation of the dynamic correction to the transmission coefficient that
requires a more complete treatment of the modes presented in the previous
section is given in Sec.\ref{sec:physical results}.
\section {Static approximation}
\label{sec:statapp}
In the static approximation the problem of tunneling can be formulated in a
very simple way, without making use of Eq.(\ref{eq:propagator}). First we can
find the transmission probability for a given static phonon field. The regular
WKB approximation gives for it
\begin{equation}
D(E;u_{{\bf q}i}) = \sqrt{\frac{m}{4
\left (
V + V_{int}^{stat} - E
\right )}}
\exp
\left\{
- {2\over\hbar} \int_{0}^{L} \sqrt{2m(V + V_{int}^{stat} - E)} \ dx
\right\} .
\label{eq:sa.1}
\end{equation}
The calculation of the electron transmission coefficient, in the case when
the phonon field is initially in the ground state, now reduces to the
integration of $D(E;u_{{\bf q}i})$, multiplied by the squared ground-state
phonon wave function, with respect to $u_{{\bf q}i}$,
\begin{equation}
P(E) = C^{2} \int D(E;u_{{\bf q}i})
\exp \left( - \frac{1}{\hbar}
\sum_{q} \rho q s {\mid u_{{\bf q}i} \mid}^2
\right) du_{{\bf q}i}
\label{eq:sa.2}
\end{equation}
($C$ is a normalizing constant). The expansion in $V_{int}$ gives
\begin{equation}
P(E) = P_{0}(E) C^{2} \int
\exp \left\{
- {\sqrt{2m} \over \hbar\sqrt{V - E}}
\int_{0}^{L} V_{int}^{stat} dx
-\frac{ \rho s}{\hbar}
\sum_{q} q \mid u_{{\bf q} i} \mid^2
\right\} du_{{\bf q}i} \ ,
\label{eq:sa.3}
\end{equation}
where
\begin{equation}
P_{0}(E) = \frac{\sqrt{m}}{2\sqrt{V - E}} \
\exp \left\{
-\frac{2L}{\hbar} \sqrt{2m(V - E)}
\right\}
\label{eq:sa.4}
\end{equation}
is the transmission coefficient without interaction with phonons.
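The magnitude of the bare tunneling exponent in Eq.(\ref{eq:sa.4}) is easy to evaluate numerically. The sketch below uses illustrative GaAs-like parameters (effective mass $0.067\,m_{e}$, $V-E=0.1$ eV, $L=10$ nm); these numbers are assumptions for the example, not values from the text:

```python
import numpy as np

hbar = 1.0546e-34            # J*s
m = 0.067 * 9.109e-31        # GaAs-like effective mass, kg (illustrative)
V_minus_E = 0.1 * 1.602e-19  # barrier height above the electron energy, J
L = 10e-9                    # barrier length, m (illustrative)

# Tunneling exponent 2*L*sqrt(2*m*(V-E))/hbar from the WKB expression
exponent = 2 * L * np.sqrt(2 * m * V_minus_E) / hbar
P0 = np.exp(-exponent)       # exponential factor of the transmission coefficient
print(exponent, P0)          # ~8.4, ~2e-4
```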
Eq.(\ref{eq:sa.3}) can be obtained directly from Eq.(\ref{eq:probab}) (see
Appendix). The result can be written in the form,
\begin{equation}
P(E) = P_{0}(E) \exp \left\{{2\over\hbar} \ \Phi_{stat}(E)\right\} \ .
\label{eq:sa.6}
\end{equation}
For the deformation potential
\begin{equation}
\Phi_{stat}(E) = {\Lambda^{2}L\over8\pi^{2}\rho s_{l}v^{2}}
\int \vert M_{{\bf q}_{\perp}}\vert^{2} q_{\perp} \
d^{2}{\bf q}_{\perp} \ ,
\label{eq:sa.9}
\end{equation}
where $s_{l}$ is the longitudinal phonon velocity.
For the piezoelectric coupling
\begin{equation}
\Phi_{stat}(E) =
{1\over2\pi^{3}\rho v^{2}}
\int_{0}^{\infty} dq_{x} {\sin^{2}(q_{x}L/2)\over q_{x}^2}
\sum_{\nu} {1\over s_{\nu}}
\int |\Xi_{{\bf q}\nu}|^2\vert M_{{\bf q}_{\perp}}\vert^{2} \
{d^{2}{\bf q}_{\perp} \over q} \ .
\label{eq:sa.7}
\end{equation}
Because of the anisotropy the result depends on the tunneling direction with
respect to the crystallographic axes. For tunneling in the [100] direction the
contribution of the longitudinal phonons to $\Phi_{stat}$ is smaller than that
of the transverse ones by a factor of order $a/L$, and
\begin{equation}
\Phi_{stat}(E) = {8\beta^{2}e^{2}L \over \rho v^{2}\epsilon^{2}s_{t}}
\int \vert M_{{\bf q}_{\perp}}\vert^{2}
{q_{y}^{2}q_{z}^{2} \over q_{\perp}^{5}} \ d^{2}{\bf q}_{\perp} \ ,
\label{eq:sa.8}
\end{equation}
where $s_{t}$ is the velocity of the transverse phonons. For the tunneling in
[110] direction the contributions of both longitudinal and transverse phonons
are of the same order,
\begin{equation}
\Phi_{stat}(E) = {2\beta^{2}e^{2}L \over \rho v^{2}\epsilon^{2}s_{t}}
\int \vert M_{{\bf q}_{\perp}}\vert^{2}
\left[
{q_{y}^{4} + 4q_{y}^{2}q_{z}^{2} \over s_{t}q_{\perp}^{4}} -
\left({1 \over s_{t}} - {1 \over s_{l}}\right)
{9q_{y}^{4}q_{z}^{2} \over q_{\perp}^{6}}
\right]
{d^{2}{\bf q}_{\perp} \over q_{\perp}} \ .
\label{eq:sa.10}
\end{equation}
As can be seen, the typical phonon wave numbers with which the electron
interacts are fixed by the length scales of the problem, the barrier
length $L$ and the width of the constriction $a$. It is important to note
that the static correction in the exponent is positive; therefore static
phonons reduce the effective potential barrier height, enhancing the electron
tunneling probability. The electron, due to its coupling to the zero-point
fluctuations of the phonon field, tunnels through an effectively lower
potential barrier.
\section{Dynamic corrections and energy loss}
\label{sec:physical results}
To get the electron's energy loss due to phonon emission during the tunneling
process, one needs to go beyond the static approximation. As the first step we
make use of the WKB approximation to simplify the expression for the
propagator $K( x_{b},u_{{\bf q} f},T\mid x_{a},u_{{\bf q} i})$
(\ref{eq:propagator}). In this approximation the main contribution to the
integral with respect to $x(t)$ comes from the saddle point trajectory that in
the first approximation in $V_{int}/(V-E)$ is determined by
Eq.(\ref{eq:ma.2}). As a result the propagator is factorized.
In Eq.(\ref{eq:ma.2}) we passed to imaginary time, so the trajectory is
$x_{0}(t)=vt$ where $v=\sqrt{2(V-E^{\prime})/m}$; thus the propagator can be
expressed as
\begin{eqnarray}
K (x_{b}, u_{{\bf q} f} ,-iT \mid x_{a}, u_{{\bf q} i}) =
K_{0}(x_{b},T \mid x_{a})
K_{ph}(u_{{\bf q} f},T \mid u_{{\bf q}i}) \ .
\label{eq:dc.1}
\end{eqnarray}
Here
\begin{equation}
K_{0}(x_{b},T \mid x_{a}) =
\left[
{m \over 4(V + V_{int}^{stat} - E)}
\right]^{1/4}
\exp \left ( \frac{- E'T}{\hbar} \right )
\exp{ \left (- \frac{1}{\hbar} \int_{0}^{L} dx
\sqrt{2 m (V - E')} \right )}
\label{eq:dc.2}
\end{equation}
is the electron propagator without interaction with phonons and
\begin{equation}
K_{ph}(u_{{\bf q}f},T \mid u_{{\bf q}i}) =
\int {\cal D} u_{\bf q}
\exp \left\{
- {1\over\hbar}
\left[
{\cal S}_{ph}(u_{\bf q}) + {\cal S}_{int}(u_{\bf q}, x_{0})
\right]
\right\} ,
\label{eq:dc.3}
\end{equation}
is the phonon part of the propagator.
Once the electron trajectory is determined, the phonon part of the propagator
is that of an ensemble of forced harmonic oscillators.\cite{Hibbs65}
The integration with respect to $u_{\bf q}$ leads to
\begin{equation}
K_{ph}(u_{{\bf q}f}, T \mid u_{{\bf q}i}) =
g(T) \exp \left[
- {1\over\hbar} \ {\cal S}_{cl}(u_{{\bf q}f}, T \mid u_{{\bf q}i})
\right] ,
\label{eq:dc.4}
\end{equation}
where
\begin{equation}
g(T) = \prod_{\bf q} \sqrt{\rho\omega_{q} \over 2\pi\hbar\sinh\omega_{q}T} \ ,
\label{eq:dc.5}
\end{equation}
and
\begin{eqnarray}
{\cal S}_{cl}(u_{{\bf q}f}, T \mid u_{{\bf q}i}) & = &
\sum_{q} \frac{\rho \omega_{q}}{2 \sinh(\omega_{q} T)}
\Big\{
\cosh (\omega_{q} T) \left ( {\mid u_{{\bf q} i} \mid}^2 +
{\mid u_{{\bf q} f} \mid}^2 \right ) - \left (
u_{{\bf q} i}{u_{{\bf q} f}^*} + {u_{{\bf q} i}^*}u_{{\bf q} f} \right )
\nonumber \\ && \hspace{-2cm} -
u_{{\bf q} f}
\left(
\frac{1}{\rho\omega_{q} v}
\int_{0}^{L} dx f_{\bf q}(x) \sinh(\frac{\omega_{q} x}{v})
\right ) -
u_{{\bf q} i}
\left(
\frac{1}{\rho\omega_{q} v}
\int_{0}^{L} dx f_{\bf q}(x) \sinh(\frac{\omega_{q} (L-x)}{v})
\right)
\nonumber\\ && \hspace{-2cm} -
{u_{{\bf q} f}^*}
\left(
\frac{1}{\rho\omega_{q} v}
\int_{0}^{L} dx f_{\bf q}^{*}(x) \sinh(\frac{\omega_{q} x}{v})
\right)
- {u_{{\bf q} i}^*}
\left(
\frac{1}{\rho\omega_{q} v}
\int_{0}^{L} dx f_{\bf q}^*(x) \sinh (\frac{\omega_{q} (L-x)}{v})
\right)
\nonumber\\ && \hspace{-2cm} -
\frac{2}{{\rho}^2{\omega_{q}}^2 {v}^2}
\int_{0}^{L} dx \int_{0}^{x} dy
\left[ f_{\bf q}(x)f_{\bf q}^*(y) + f_{\bf q}^*(x)f_{\bf q}(y) \right]
\sinh (\frac{\omega_{q} y}{v}) \sinh(\frac{\omega_{q} (L-x)}{v})
\Big\} \ .
\label{eq:dc.6}
\end{eqnarray}
Here according to Eqs.(\ref{eq:mod.def}) and (\ref{eq:mod.piezo})
\begin{equation}
f_{\bf q}(x) = {\Xi M_{{\bf q}_{\perp}} \over \sqrt{V_{vol}}} \ e^{iq_{x}x}
\label{eq:dc.7}
\end{equation}
for the piezoelectric interaction and
\begin{equation}
f_{\bf q}(x) =i \vert q \vert {\Lambda M_{{\bf q}_{\perp}}\over \sqrt{V_{vol}}}
\ e^{iq_{x}x}
\label{eq:dc.8}
\end{equation}
for the deformation potential interaction. One should note that in
Eq.(\ref{eq:dc.6}) we made a transformation of variables from $t$ in the
action, to $x=vt$.
Making use of the factorization (\ref{eq:dc.1}) the expression for tunneling
probability (\ref{eq:probab}) can be written as
\begin{equation}
\label{eq:tranprob}
P(E) = \int dT \int d\tilde{T}
K_{0}(x_{b},T \mid x_{a})K_{0}(x_{b},\tilde{T}
\mid x_{a}) \xi_{ph} \left( T ; \tilde{T} \right) ,
\end{equation}
where
\begin{eqnarray}
\label{eq:dc.10}
& & \xi_{ph}\left( T ; \tilde{T}\right) = g(T)g(\tilde{T})
\nonumber \\
&\times&\int du_{{\bf q}i} \int du_{{\bf q}f} \int {d\tilde u}_{{\bf q}i}
\exp \left[
- {1\over\hbar} \
{\cal S}_{cl}\left( u_{{\bf q}f},T \mid u_{{\bf q} i} \right)
- {1\over\hbar} \
{\cal S}_{cl}\left(u_{{\bf q}f},\tilde{T} \mid {\tilde u}_{{\bf q}i}\right)
\right]
{\psi_{0} \left ( u_{{\bf q} i} \right )}
{\psi_{0}^* \left ( {\tilde u}_{{\bf q} i}\right )} \ ,
\end{eqnarray}
and $\psi_{0}\left(u_{{\bf q} i}\right)$ is the phonon ground state wave
function. The integrals with respect to $T$ and $\tilde{T}$ in
Eq.(\ref{eq:tranprob}) are calculated by the saddle point
method\cite{McLaughlin72}. Due to the symmetry of
$\xi_{ph}\left(T;\tilde{T}\right)$ with respect to the transposition of $T$
and $\tilde{T}$, the saddle point values of these variables are identical. We
are interested only in the exponential part of the transition probability, and
for this only $\xi_{ph}\left(T;T\right)$ is necessary. The result can be
written in the form
written in the form
\begin{equation}
\xi_{ph}\left(T;T\right) = A \exp\left[ {2\over\hbar} \ \Phi_{ph}(T)\right] \ ,
\label{eq:dc.11}
\end{equation}
where $\Phi_{ph}(T)$ is calculated in the Appendix; the pre-exponential
factor $A$ will not be calculated.
We now break $\Phi_{ph}(T)$ into a static part, $\Phi_{stat}$, and a dynamical
correction, so that $\Phi_{ph}=\Phi_{stat}+\Phi_{dyn}$. As one can expect, the
static part is identical to $\Phi_{stat}(E^{\prime})$ obtained in a simpler
way in Sec.\ref{sec:statapp}, Eq.(\ref{eq:sa.8}). The dynamical
correction, $\Phi_{dyn}(T,E^{\prime})$, is obtained in the Appendix in the
leading order in $s/v\ll1$. With these notations the saddle point equation
for the time integration reads
\begin{equation}
\label{eq:tisp}
\left(
{\partial\Phi_{el}\over\partial E'} -
{\partial\Phi_{stat}\over\partial E'} - T
\right)
\frac{dE'}{dT} - \frac{d \Phi_{dyn}}{dT}
+ E - E' = 0 \ ,
\end{equation}
where $\Phi_{el}(E^{\prime})=L\sqrt{2m(V-E^{\prime})}$.
According to Sec.\ref{sec:statapp},
$\Phi_{el}(E^{\prime})-\Phi_{stat}(E^{\prime})$ is the electron action in the
phonon static field at the trajectory with the energy $E^{\prime}$. The
derivative of the action with respect to the energy is the traveling time
along the trajectory. The first term, $\partial\Phi_{el}/\partial E'$, has
been defined by B\"uttiker and Landauer\cite{Landauer82} as a semiclassical
traverse time. The second term, $\partial\Phi_{stat}/\partial E'$, gives a
phonon correction to the traverse time. The sum of the two terms equals $T$
and the expression in the parentheses is identically zero. Then
Eq.(\ref{eq:tisp}) can be written in the form
\begin{equation}
\label{eq:sadlepoint}
E - E' = - \frac{\partial \Phi_{dyn}}{\partial v} \frac{v^2}{L} .
\end{equation}
Within the accepted approximation, the difference between $E$ and
$E^{\prime}$ has to be neglected in the rhs of this equation.
According to our definition the energy $E$ appearing at the Fourier transform
of the transmission amplitude is the total energy of the system while
$E^{\prime}$ is the energy characterizing the electron trajectory. The
difference between them is the energy transferred to the phonon system, i.e.,
the average energy loss of the tunneling electron.
Substituting $T=L/v$ and calculating the integral with respect to
$q_{x}$ in Eq.(\ref{eq:ap.7}), we obtain for the deformation potential
\begin{equation}
\Phi_{dyn}(E) =
{{\Lambda}^2 L^{2}\over 8\pi^{2}\rho v^{3}}
\int \vert M_{{\bf q}_{\perp}}\vert^{2} \ {q_{\perp}}^2
{d^{2}{\bf q}_{\perp}} \ .
\label{eq:dc.12}
\end{equation}
For the piezoelectric potential the terms in $|\Xi_{{\bf q}\nu}|^2$ containing
$q_{x}$ are small in $a/L$ and the main contribution is
\begin{equation}
\Phi_{dyn}(E) =
{L^{2}\over 8\pi^{2}\rho v^{3}} \sum_{\nu}
\int |\Xi_{{\bf q}\nu}|^2 \vert M_{{\bf q}_{\perp}}\vert^{2} \
{d^{2}{\bf q}_{\perp}} \ .
\label{eq:dc.13}
\end{equation}
For [100] tunneling direction Eq.(\ref{eq:dc.13}) becomes
\begin{equation}
\Phi_{dyn}(E) =
{8\beta^{2}e^{2}L^{2}\over \rho v^{3}\epsilon^{2}}
\int \vert M_{{\bf q}_{\perp}}\vert^{2} \
{q_{y}^{2}q_{z}^{2} \over q_{\perp}^{4}} \
{d^{2}{\bf q}_{\perp}} \ ,
\label{eq:dc.14}
\end{equation}
and for [110] tunneling direction
\begin{equation}
\Phi_{dyn}(E) =
{2\beta^{2}e^{2}L^{2}\over \rho v^{3}\epsilon^{2}}
\int \vert M_{{\bf q}_{\perp}}\vert^{2} \
(q_{y}^{4} + 4q_{y}^{2}q_{z}^{2}) \
{d^{2}{\bf q}_{\perp} \over q_{\perp}^{4}} \ .
\label{eq:dc.15}
\end{equation}
It is worthwhile to note that $\Phi_{dyn}$ is positive, i.e., like
$\Phi_{stat}$ it increases the transmission coefficient. One could expect that
phonon emission makes the barrier effectively higher for the electron, which
might lead to a reduction of the transmission probability. However,
$\Phi_{dyn}$ is calculated for $E=E^{\prime}$, and this effect can appear only
in the next approximation.
The average energy loss resulting from the deformation potential coupling is,
\begin{equation}
E - E^{\prime} =
{3{\Lambda}^2 L\over 8\pi^{2}\rho v^{2}}
\int \vert M_{{\bf q}_{\perp}}\vert^{2}{q_{\perp}}^2 \
{d^{2}{\bf q}_{\perp}} \ .
\label{eq:dc.16}
\end{equation}
The average energy loss due to the piezoelectric coupling is given by
\begin{equation}
E - E^{\prime} =
{24\beta^{2}e^{2} L\over \rho v^{2}\epsilon^{2}}
\int \vert M_{{\bf q}_{\perp}}\vert^{2} \
{q_{y}^{2}q_{z}^{2} \over q_{\perp}^{4}} \
{d^{2}{\bf q}_{\perp}} \ ,
\label{eq:dc.17}
\end{equation}
for [100] tunneling direction and
\begin{equation}
E - E^{\prime} =
{6\beta^{2}e^{2} L\over \rho v^{2}\epsilon^{2}}
\int \vert M_{{\bf q}_{\perp}}\vert^{2} \
(q_{y}^{4} + 4q_{y}^{2}q_{z}^{2}) \
{d^{2}{\bf q}_{\perp} \over q_{\perp}^{4}} \ ,
\label{eq:dc.18}
\end{equation}
for [110] tunneling direction.
The energy loss is proportional to the length of the barrier, which means
that it is accumulated along it.
Comparison of the static and dynamical phonon corrections to the tunneling
probability gives
$\Phi_{dyn}/\Phi_{stat}\sim(s/v)(L/a)$, where $a$ is the width of the
constriction. Typically the ratio $L/a$ is not very large, so the expansion
that we used is justified.
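The estimate $\Phi_{dyn}/\Phi_{stat}\sim(s/v)(L/a)$ can be checked with typical numbers; all parameter values below (GaAs-like effective mass, sound velocity, geometry) are illustrative assumptions, not values taken from the text:

```python
import numpy as np

m = 0.067 * 9.109e-31        # effective mass, kg (illustrative)
V_minus_E = 0.1 * 1.602e-19  # barrier height above the electron energy, J
v = np.sqrt(2 * V_minus_E / m)  # under-barrier velocity, ~7e5 m/s
s = 3.0e3                    # acoustic phonon velocity, m/s (illustrative)
L, a = 50e-9, 10e-9          # barrier length and constriction width

ratio = (s / v) * (L / a)    # ~0.02: the expansion in s/v stays justified
print(v, ratio)
```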
\section{Discussion and summary}
\label{sec:discussion}
In this paper we have presented a detailed study of the effect of coupling to
acoustic phonons on electron tunneling across a rectangular barrier in a
realistic situation. We studied piezoelectric and deformation potential
coupling at low temperature, which means that $\lambda_{T}/a\gg1$, where
$\lambda_{T}$ is the thermal phonon wavelength (see Appendix). We considered
only a one-dimensional problem, which can be realized in a quantum wire or a
narrow constriction.
In our calculation we assumed that the barrier is high enough to describe
tunneling in the semiclassical approximation. Our detailed calculations reveal
that the typical phonon interacting with the tunneling electron is selected
by the length of the barrier $L$ and the width of the constriction or
quantum wire $a$. It is thus the geometry of the potential barrier that
defines the phonon frequency $\omega$. Under such a condition the electron
motion under the barrier is so fast that the phonons cannot follow it and can
be considered nearly static during the time the electron needs to traverse
the barrier. The main effect of phonons in this case is a modulation of the
barrier, so that its height has to be considered as a quantum mechanical
variable whose probability distribution is controlled by the phonon wave
function. In this case, roughly speaking, the electron tunnels preferentially
in those phonon configurations in which the barrier is lower than its average
value, which results in an increase of the transmission coefficient compared
to that at zero electron-phonon coupling. Thus the interaction of an electron
with the zero-point phonon fluctuations increases the tunneling probability.
The correction to the tunneling exponent in the static approximation is
proportional to the length of the barrier. That is, the coupling affects the
dependence of the transmission coefficient on the height of the barrier only.
The dynamical correction leads to two effects. First, it describes the
reduction of the electron energy due to phonon emission. Second, and probably
more interesting, it makes the dependence of the transmission coefficient on
the length of the barrier differ from the regular one, in which the logarithm
of the transmission coefficient is linear in the length. We calculated
only the first-order correction to the exponential dependence. The effect can
be stronger, and is probably measurable, for tunneling near the top of the
barrier. One should note that even though the dynamical correction to the
transition probability is smaller than the static correction, it is still an
exponential correction and can easily be made larger than unity, e.g., by
changing the length of the barrier. In this case it can significantly affect
the tunneling probability.
The dependence of transmission coefficient on the height and the length of a
barrier can be measured in devices where the barrier is introduced with the
help of a gate. In such a device both the height and the length of the barrier
are controlled by the gate voltage. Our results point to a deviation of
these dependences from the standard ones obtained from the stationary
Schr\"odinger equation.
The application of our results to a long quantum wire can encounter a
difficulty because we did not take into account electron-electron interaction.
The effect of this interaction can be reduced for a short wire or a narrow
constriction.
We expect non-trivial results through further use of the piezoelectric and
deformation potential coupling mechanisms in two or three dimensional systems.
We believe that the physical considerations presented here which
greatly simplified the calculations are convenient for extending the
calculations to two and three dimensional physical situations.
\newpage
\section{Introduction}
The study of quantum computation has attracted considerable attention since
Shor discovered a quantum mechanical algorithm for factorization in
polynomial instead of exponential time \cite{Pwshor} in 1994. In 1996,
Grover \cite{grover} also showed that quantum mechanics can speed up a range
of search applications over an unsorted list of $N$ elements. Hence a
quantum computer can obtain the result with high probability in $O(%
\sqrt{N})$ instead of $O(N)$ attempts.
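As a minimal illustration of this quadratic speedup, the following sketch simulates Grover's iteration for a small unsorted list (the list size and the marked index are arbitrary choices); after about $(\pi/4)\sqrt{N}$ iterations the marked element dominates the amplitude distribution:

```python
import numpy as np

N, marked = 8, 5                     # list size and marked index (arbitrary)
psi = np.full(N, 1 / np.sqrt(N))     # uniform superposition over all indices

oracle = np.eye(N)
oracle[marked, marked] = -1          # phase-flip the marked element
diffusion = 2 * np.full((N, N), 1 / N) - np.eye(N)  # inversion about the mean

k = int(round(np.pi / 4 * np.sqrt(N)))  # ~O(sqrt(N)) iterations
for _ in range(k):
    psi = diffusion @ (oracle @ psi)

print(k, np.abs(psi[marked])**2)     # 2 iterations, success probability ~0.95
```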
In 1982, Benioff \cite{benioff} showed that a computer could in principle
work in a purely quantum-mechanical fashion. In 1982 and 1986, Feynman
showed that a quantum computer might simulate quantum systems \cite{feynman}
\cite{feynman1}. In 1985 and 1989, Deutsch \cite{deutsch} \cite{deutsch1}
first explicitly studied the question that quantum-mechanical processes
allow new types of information processing. Quantum computers have two
advantages. One is the quantum states can represent a $1$ or a $0,$ or a
superposition of a $1$ and a $0,$which leads to quantum parallelism
computations. The other is quantum computers perform deterministic unitary
transformations on the quantum states.
It is difficult to build up quantum computers for two principal reasons. One
is decoherence of the quantum states. The other is that a quantum computer
might be prone to errors, which are troublesome to correct. Current
developments, however, have shown that the two obstacles might be
surmounted.
discovered that the use of quantum error-correcting codes enables quantum
computer to operate in spite of some degree of decoherence and errors, which
may make quantum computers experimentally realizable. Knill et al. \cite
{knill} also showed that arbitrary accurate quantum computation is possible
provided that the error per operation is below a threshold value. In
addition, it is possible to decrease the influence of decoherences by using
mixed-state ensembles rather than isolated systems in a pure state \cite
{cory} \cite{chuang} \cite{chuang1} .
Among many candidate physical systems envisioned to perform quantum
computations ( such as quantum dots \cite{dot} \cite{dot1} , isolated
nuclear spin \cite{divincenzo}, trapped ions \cite{cz} , optical photons
\cite{chy}, cavity quantum-electrodynamics \cite{drbh} \cite{tshw} and
nuclear magnetic resonance (NMR) of molecules in a room temperature solution
\cite{cory} \cite{chuang} etc. ), NMR quantum computers \cite{cory} \cite
{chuang} are particularly attractive because nuclear spins are extremely
well isolated from their environment and readily manipulated with modern NMR
methods. Recently, Chuang and Jones \textit{et al.} \cite{chuang2}
\cite{jones} have experimentally realized for the first time a significant
quantum computing algorithm, using NMR techniques to perform Grover's quantum
search algorithm
\cite{grover}. In addition, NMR quantum computers with two qubits or three
qubits have been used to implement Deutsch's quantum algorithm \cite{chuang3}
\cite{jones1} \cite{lbr}. Cory et al. have experimentally realized for the
first time quantum error correction for phase errors on an NMR quantum
computer \cite{cory98} . However, there are two primary challenges in using
nuclear spins in quantum computers. One is low sensitivity of NMR signals.
The other is that scaling-up to much larger systems with this approach may
be difficult \cite{warren}. Recently, Kane \cite{kane} has presented a
scheme for implementing a quantum computer, using semiconductor physics to
manipulate nuclear spins. This may be an answer to scaling up to produce
useful quantum computers \cite{divincenzo1}. However, it is difficult to
implement Kane's scheme with current technology \cite{kane} \cite{divincenzo1}.
Chuang \textit{et al.} first experimentally demonstrated the complete model
of a simple quantum computer by NMR spectroscopy on the small organic
molecule chloroform \cite{chuang2}. Their experimental results showed that
the execution of certain tasks on a quantum computer indeed requires fewer
steps than on a classical computer. However, building a practical quantum
computer by the use of NMR techniques poses a formidable challenge. Steps of
circumventing these problems based on the bulk spin resonance approach to
build quantum computers can include increasing the sample size, using
coherence transfer to and from electrons, and optical pumping to cool the
spin system \cite{chuang4}.
DiVincenzo \textit{et al.} have suggested that a ''solid state'' approach to
quantum computation with a $10^{6}$ qubit might be possible \cite{dot1}.
Solid-state devices open up the possibility of actual quantum computers
having realistic applications \cite{divincenzo1}. In this paper, we shall
discuss the implementation of a controlled NOT gate between two spins with a
spin-hyperpolarized bulk. It should be noted that an important progress in
optical pumping in solid state nuclear magnetic resonance (OPNMR) has been
made \cite{tr}. Recently, interest has been growing in exploitation of
optical pumping of nuclear spin polarizations as a means of enhancing and
localizing NMR signals in solid state nuclear magnetic resonance \cite{tr}.
The principal work has been concentrated in the following two areas. The
polarization of $^{129}Xe$ can be enormously enhanced through spin-exchange
with optically pumped alkali-metal vapor \cite{happer}. The NMR signals from
$^{129}Xe$ nuclei have enhanced to $\thicksim 10^{5}$ times the thermal
equilibrium value \cite{happer}. The spin-lattice relaxation of the $%
^{129}Xe $ nuclei in the polarized solid $^{129}Xe$ is exceptionally slow
(such as relaxation times longer than 500h \cite{happer1}). In addition, the
experimental results \cite{happer1} \cite{pines} \cite{happer2} \cite{pines1}
\cite{pines2} have shown that laser-polarized $^{129}Xe$ nuclei can be used
to polarize other nuclei that are present in the lattice or on a surface
through cross relaxation or cross polarization. Optical pumping NMR (OPNMR)
techniques applied to inorganic semiconductors (such as GaAs and InP)
\cite{tr} at low temperatures can enhance NMR signals in solids, which can
circumvent the two problems in solid state NMR, that is, its relative low
sensitivity and its lack of spatial selectivity. In OPNMR, spatial
localization of the NMR signals can be achieved through spatial localization
of the optical absorption and through cross polarization or relaxation
mechanisms \cite{tr} \cite{gbskp} \cite{ty}. In this paper, we shall propose
two schemes for implementing a controlled NOT (CNOT) gate in quantum
computers based on NMR spectroscopy and magnetic resonance imaging from
hyperpolarized solid $^{129}Xe$ and HCl mixtures and OPNMR in the solid
state nuclear magnetic resonance of inorganic semiconductors ( quantum well
and quantum dot ).
The paper is organized as follows. In Sec.II, we introduce the model for the
controlled NOT gates in terms of hyperpolarized solid $^{129}Xe$ and HCl
mixtures. In Sec.III we present a scheme for implementing a CNOT gate based
on OPNMR in inorganic semiconductors. In Sec.IV, we present implications for
experiments on our schemes.
\section{Quantum computation with hyperpolarized $^{129}Xe$ and HCl solid
mixtures}
To realize quantum computation, it is necessary to have nonlinear
interactions in a system. These nonlinear interactions can simultaneously be
influenced externally in order to control states of the system. Meanwhile,
it is required that the system can be extremely well isolated from its
environment so that the quantum coherence in computing is not rapidly lost.
In a solid, there are dipolar couplings between two spin systems, which
result in broader NMR lines and cross relaxation among spin systems. The
interactions in solids can be controlled with complex radio-frequency (rf)
pulse sequences (such as decoupling pulse sequences). In general, the
interactions in solids are so strong that the eigenstates are not the simple
spin product states and logical manipulation are more complex \cite{warren}.
However, for special solids (such as $^{129}Xe$ and $^{1}HCl$, and
$^{129}Xe$ and $^{13}CO_{2}$ mixtures), since the homonuclear spin system is
diluted by the other spin system, the dipolar interactions between two
homonuclei are weak and the solid mixtures may be homogeneous,
so that the solids could have a resolved dipolar structure. A full quantum
mechanical treatment of the spin system is in order \cite{ernst}.
Spin-exchange optical pumping can produce hyperpolarized $^{129}Xe$ (with an
enhanced factor of about $10^{5}$ )\cite{happer}. The hyperpolarized $%
^{129}Xe$ gas can be frozen into a hyperpolarized $^{129}Xe$ solid with
little or no loss of $^{129}Xe$ nuclear spin polarization $\cite{happer1}$.
It should be noted that nuclear spin polarization of $^{129}Xe$ , which is
produced with spin-exchange optical pumping, does not depend on the strength
of magnetic fields. Therefore, the NMR experiments can be performed in low
fields produced by the general electromagnets or the magneto irons. NMR
signals with sufficient signal-to-noise ratio from a hyperpolarized $%
^{129}Xe $ solid are available on a single acquisition. At low temperatures,
the spin-lattice relaxation of the $^{129}Xe$ nuclei in the hyperpolarized
solids is extremely slow. For example, the $^{129}Xe$ spin polarization
lifetime $T_{1}$ is hundreds of hours at 1 kG below 20 K \cite{happer1}.
Linewidth of NMR signals from a hyperpolarized $^{129}Xe$ solid is tens of
Hz \cite{zeng}. This indicates that the $^{129}Xe$ nuclei in a
hyperpolarized solid can be relatively well isolated from their environment.
Through dipole-dipole interactions, enhanced nuclear spin polarization of $%
^{129}Xe$ can be transferred to other nuclei ($^{1}H$ and $^{13}C$ $etc.$)
on a surface \cite{pines1} \cite{happer2}, in a solid lattice \cite{pines}
\cite{happer1} and a solution \cite{pines2}. The experimental results have
indirectly shown that the sign of the $^{129}Xe$ polarization can be
controlled by the helicity of the pumping laser or the orientation of the
magnetic field in the optical pumping stage \cite{pines} \cite{pines2}. It
is interesting to note that the signs of polarization of other nuclei ($%
^{1}H $ and $^{13}C$) depend on those of polarization of $^{129}Xe$ nuclei
in the cross polarization experiments \cite{pines} \cite{pines2}. The signs
of polarization of other nuclei ($^{1}H$ and $^{13}C$) are the same as $%
^{129}Xe $ nuclei \cite{pines} \cite{pines2}.
In quantum computation, Barenco \textit{et al.} \cite{ba} have shown that
single-spin rotations and the two-qubit ''controlled''-NOT gates can be
built up into quantum logic gates having any logical functions. In the
following, we shall show how to build up a ''controlled''NOT gate based on
NMR signals from a hyperpolarized $^{129}Xe$ solid.
Before our discussions, we first show how to prepare a hyperpolarized $%
^{129}Xe$ and HCl solid mixture. First, spin-exchange optical pumping
produces hyperpolarized $^{129}Xe$ gas in a 25 G magnetic field. Second,
the polarized xenon is mixed with 760 Torr of HCl at room temperature and
the mixture is rapidly frozen into the sample tube in liquid $N_{2}$.
In solids, the dominant mechanism of spin-spin relaxation is the
dipole-dipole interaction. The experimental results \cite{pines} \cite
{pines2} \cite{happer2} have shown that the transfer of a large nuclear
polarization from $^{129}Xe$ to $^{1}H$ (or $^{13}C$) can be controlled by
cross polarization techniques in the solids. Hyperpolarized $^{129}Xe$ ice
can be used to polarize $^{131}Xe$ \cite{happer1} and $^{13}C$ ($CO_{2})$
\cite{pines} trapped in the xenon lattice through thermal mixture in low
fields. Pines \textit{et al.} \cite{pines1} have shown that high-field
cross polarization methods can make magnetization transfer between two
heteronuclear spin systems selective and sensitive.
In a quantum computer, logic functions are essentially classical; only the
quantum bits are of a quantum character (quantum superpositions) \cite
{grover}. In the $^{129}Xe$ and HCl solid mixture, one can use complex pulse
sequences and dipole-dipole interactions between $^{129}Xe$ and $^{1}H$ to
manipulate and control two qubits ( $^{129}Xe$ and $^{1}H$ ). In the
following, we shall discuss the scheme for implementing a controlled NOT
(CNOT) gate in quantum computers based on NMR spectroscopy and magnetic
resonance imaging from the hyperpolarized solid $^{129}Xe$ and HCl mixtures.
Since $^{129}Xe$ and $^{1}H$ in the solid form a weakly coupled two-spin IS
system with a resolved structure, in the doubly rotating frame, rotating at
the frequencies of the two applied r.f. fields, the Hamiltonian of this
system can be written as \cite{ernst}
$\mathbf{H=}\Omega _{I}I_{Z}+\Omega _{S}S_{z}+2\pi J_{IS}I_{z}S_{z}+\omega
_{1I}I_{x}+\omega _{1S}S_{x}$
where $\Omega _{I}$ and $\Omega _{S}$ are the resonance offsets, and $\omega
_{1I}$ and $\omega _{1S}$ are the two r.f. field strengths, respectively. $%
\omega _{1I}$ and $\omega _{1S}$ can be used to perform arbitrary
single-spin rotations to each of the two spins (I and S) with selective
pulses \cite{ernst}.
In the above Hamiltonian, the coupling term $\mathbf{H}_{IS}=2\pi
J_{IS}I_{z}S_{z}$ leads to the following evolution operator
$\widehat{R}_{zIS}(J_{IS}\tau \pi )=e^{i2\pi J_{IS}\tau I_{z}S_{z}}$
$\qquad =\cos (J_{IS}\tau \pi /2)\,\mathbf{1}+i\sin (J_{IS}\tau \pi /2)\left(
\begin{array}{cccc}
1 & 0 & 0 & 0 \\
0 & -1 & 0 & 0 \\
0 & 0 & -1 & 0 \\
0 & 0 & 0 & 1
\end{array}
\right) $.
When $\tau =\frac{1}{2J_{IS}}$, $\widehat{R}_{zIS}(J_{IS}\tau \pi )=\frac{\sqrt{2}}{2}\left(
\begin{array}{cccc}
1+i & 0 & 0 & 0 \\
0 & 1-i & 0 & 0 \\
0 & 0 & 1-i & 0 \\
0 & 0 & 0 & 1+i
\end{array}
\right) $.
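This evolution operator is diagonal, so it can be checked numerically by exponentiating the diagonal of $2\pi J_{IS}\tau I_{z}S_{z}$ entrywise. The sketch below does this for an arbitrary illustrative coupling $J$ (the numerical value is not taken from the text):

```python
import numpy as np

# Spin-1/2 z-operators on a two-spin (I, S) system.
iz = np.kron(np.diag([0.5, -0.5]), np.eye(2))   # I_z acting on spin I
sz = np.kron(np.eye(2), np.diag([0.5, -0.5]))   # S_z acting on spin S

def r_zis(j, tau):
    """exp(i * 2*pi*J*tau * I_z S_z); diagonal, so exponentiate entrywise."""
    phases = 2 * np.pi * j * tau * np.diag(iz @ sz)
    return np.diag(np.exp(1j * phases))

J = 100.0                      # coupling in Hz (illustrative value)
U = r_zis(J, 1.0 / (2 * J))    # evolve for tau = 1/(2J)

expected = (np.sqrt(2) / 2) * np.diag([1 + 1j, 1 - 1j, 1 - 1j, 1 + 1j])
print(np.allclose(U, expected))   # True
```

The eigenvalues of $2\pi J_{IS}\tau I_{z}S_{z}$ at $\tau = 1/(2J_{IS})$ are $\pm \pi/4$, which reproduces the $\frac{\sqrt{2}}{2}(1 \pm i)$ entries above.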
It is easy to perform single-spin rotations with arbitrary phase $\phi $
using modern pulsed NMR techniques \cite{ernst}. For example, one can
perform single-spin rotations via composite z-pulses \cite{freeman} and free
precession \cite{jones1}.
In the ideal case, the operator that performs the CNOT gate can be written as
$\widehat{C}_{CNOT}=\left(
\begin{array}{cccc}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 \\
0 & 0 & 1 & 0
\end{array}
\right) $.
A ``controlled'' NOT gate can be realized by the following pulse sequences
\cite{chuang,chuang4}
$\widehat{C}_{1AB}=\widehat{R}_{yA}(-\pi /2)\widehat{R}_{zB}(-\pi /2)%
\widehat{R}_{zA}(-\pi /2)\widehat{R}_{zAB}(\pi /2)\widehat{R}_{yA}(\pi /2)$
$\;\;\;\;\;\;=\sqrt{-i}\left(
\begin{array}{llll}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 \\
0 & 0 & 1 & 0
\end{array}
\right) \;\;\;\;\;\;(1),$
or
$\widehat{C}_{2AB}=\widehat{R}_{yB}(\pi /2)\widehat{R}_{zAB}(\pi /2)\widehat{%
R}_{xB}(\pi /2)$
$\;\;\;\;\;=\left(
\begin{array}{llll}
(-1)^{1/4} & \;0 & \;0 & \;0 \\
\;0 & -(-1)^{3/4} & \;0 & \;0 \\
\;0 & \;0 & \;0 & (-1)^{1/4} \\
\;0 & \;0 & (-1)^{3/4} & \;0
\end{array}
\right) \;\;\;(2),$
where A and B represent the ``target'' qubit ($^{1}H$ of HCl) and the
``control'' qubit ($^{129}Xe$), respectively. In Eq. (1), the rotations $\widehat{R}%
_{zA}(-\pi /2)$ and $\widehat{R}_{zB}(-\pi /2)$ can be performed either
actively, with composite z-pulses \cite{freeman}, or passively, by letting the
two-spin AB system evolve freely under its Zeeman Hamiltonians.
The pulse sequences in Eqs. (1) and (2) have a similar effect for quantum
computation \cite{chuang4}. This raises an important practical issue, namely
the optimization of r.f. pulse sequences: it is important to eliminate
unnecessary pulses.
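Sequence (1) can be checked numerically. The sketch below assumes conventions that are not fixed by the text: rotations $\widehat{R}_{\phi}(\theta)=e^{i\theta I_{\phi}}$, the leftmost operator in the product acting last, basis ordering $|B\,A\rangle$ (control first), and $\widehat{R}_{zAB}(\theta)=e^{i2\theta I_{zA}S_{zB}}$; under these assumptions the product reproduces $\sqrt{-i}\,\widehat{C}_{CNOT}$ with $\sqrt{-i}=e^{-i\pi/4}$ (one branch of the square root).

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0]).astype(complex)
one = np.eye(2, dtype=complex)

def rot(theta, sigma):
    """Single-spin rotation exp(i*theta*sigma/2) (assumed sign convention)."""
    return np.cos(theta / 2) * one + 1j * np.sin(theta / 2) * sigma

def on_A(u):   # target qubit A = second tensor factor
    return np.kron(one, u)

def on_B(u):   # control qubit B = first tensor factor
    return np.kron(u, one)

def r_zab(theta):
    """Coupling evolution exp(i*2*theta*I_zA*S_zB); diagonal in this basis."""
    zz = np.kron(sz, sz) / 4
    return np.diag(np.exp(1j * 2 * theta * np.diag(zz)))

# Eq. (1), with the leftmost factor applied last.
C1 = (on_A(rot(-np.pi/2, sy)) @ on_B(rot(-np.pi/2, sz)) @
      on_A(rot(-np.pi/2, sz)) @ r_zab(np.pi/2) @ on_A(rot(np.pi/2, sy)))

cnot = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]],
                dtype=complex)
sqrt_minus_i = np.exp(-1j * np.pi / 4)
print(np.allclose(C1, sqrt_minus_i * cnot))  # True
```

The three diagonal factors collapse to $e^{-i\pi/4}$ times a controlled-Z, which the surrounding $y$-rotations on A convert into a controlled-NOT.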
The CNOT operations can also be performed on the basis of
cross-polarization experiments in solid-state NMR. Under suitable experimental
conditions, one uses a pumping laser with helicity $\sigma ^{-}$ or $\sigma
^{+}$ to produce positive or negative polarization of $^{129}Xe$ through
spin-exchange optical pumping before performing the CNOT
operations. If the polarization of the $^{129}Xe$ nuclei is negative, no operation
is needed. If the polarizations of the $^{129}Xe$ and $^{1}H$ nuclei are both
positive, one can use the cross-polarization pulse sequences shown in Fig.~1a
to perform the controlled rotation operations. If the polarizations of the $^{129}Xe$
and $^{1}H$ nuclei are positive and negative, respectively, one can
use the pulse sequences in Fig.~1b to perform the CNOT gate operations.
In order to obtain a large sensitivity enhancement for $^{1}H$, before
performing CNOT gates we perform cross-polarization experiments on the $%
^{1}H$ and $^{129}Xe$ system. In addition, in order to enhance the sensitivity
of the NMR signals, one can choose the hyperpolarized $^{129}Xe$ as
the ``target'' qubit and $^{1}H$ as the ``control'' qubit. If
we use three gases ($^{129}Xe$, $^{1}HCl$ and $^{13}CO_{2}$) to
prepare the NMR sample, we can perform quantum logic gates with three qubits (%
$^{129}Xe$, $^{1}H$ and $^{13}C$). By varying the relative
pressures of $^{129}Xe$, $^{1}HCl$ and $^{13}CO_{2}$, we can increase or
decrease the homonuclear dipole-dipole interaction: when the
relative pressure of the other gases is increased or decreased, the distance
$r$ between two homonuclei is increased or decreased.
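The sensitivity of this dilution trick rests on the $1/r^{3}$ dependence of the dipolar coupling. The sketch below illustrates the scaling with the textbook magnitude $(\mu_0/4\pi)\gamma^2\hbar/r^3$ for two protons; the distances and the proton values are illustrative, not taken from the text:

```python
import numpy as np

mu0_over_4pi = 1.0e-7          # T*m/A
hbar = 1.0546e-34              # J*s
gamma_h = 2.675e8              # 1H gyromagnetic ratio, rad/(s*T)

def dipolar_coupling_hz(r):
    """Magnitude (mu0/4pi)*gamma^2*hbar/r^3 of the homonuclear coupling, in Hz."""
    return mu0_over_4pi * gamma_h**2 * hbar / r**3 / (2 * np.pi)

# Doubling the internuclear distance weakens the coupling by a factor of 8.
d_2A = dipolar_coupling_hz(2.0e-10)   # roughly 15 kHz at r = 2 angstroms
d_4A = dipolar_coupling_hz(4.0e-10)
print(round(d_2A / d_4A))  # 8
```

So even a modest increase in the spacing of the homonuclei, obtained by diluting one gas with the others, suppresses the homonuclear coupling sharply.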
As we know, the principal limitation of solid state NMR is its relatively
low sensitivity. In addition, another limitation of solid state NMR is its
lack of spatial selectivity and extremely broad NMR lines. This is because
the NMR signals from an inhomogeneous sample are bulk-averaged \cite{tr} and
the dipole-dipole interactions between two spins are strong. Therefore, it
is difficult to assign particular logic gates to regions within the bulk
sample. Constructing an actual quantum circuit based on NMR is an
important issue. At present, quantum circuits based on liquid-state NMR
are constructed only \textit{in time}, by using different pulse sequences. Can
we build up actual quantum circuits both \textit{in time }and \textit{in
space}? It is interesting to note that NMR signals from $^{129}Xe$ ice have
relatively high sensitivity and that the NMR lines are relatively narrow \cite
{zeng}. It is possible to construct an actual quantum circuit based on
hyperpolarized $^{129}Xe$ ice \textit{in space }using magnetic resonance
imaging (MRI) and NMR spectroscopy techniques. If one uses gradient fields
\cite{cory}, one can localize the quantum logic gates in space within a
bulk sample. Therefore logic gates with arbitrary logical functions can be
performed not only \textit{in space} but also \textit{in time}.
Entanglement between spins of two different cells may be realized by
homonuclear spin diffusion.
\section{Quantum computation based on optically pumped NMR of semiconductors}
Recently, great progress has been made in optically pumped NMR (OPNMR) of
semiconductors, single quantum dots, and quantum wells \cite{tr,gbskp,ty}.
Optical pumping can be used to enhance and localize nuclear magnetic
resonance signals, greatly improving the spatial resolution and sensitivity
of NMR.
Spatial localization of the NMR signals from solids can be achieved through
spatial localization of the optical absorption and through subsequent
manipulations and transfers of the optically pumped nuclear spin
polarization \cite{tr}.
Enhanced NMR signals can be detected either indirectly by optical techniques
\cite{tr} or directly by conventional r.f. pick-up coils \cite{tr}. Optical
detection can measure NMR signals with extreme sensitivity, even from a single
quantum dot ($\leq 10^{4}$ nuclei) \cite{gbskp}. Here we propose a scheme
for the implementation of quantum logic gates using OPNMR in solids,
where the electron spin $\overrightarrow{S}$ and the nuclear spin $%
\overrightarrow{I}$ are considered as the ``control'' qubit and
the ``target'' qubit, respectively.
For semiconductors with zinc blende structures (such as Si and GaAs),
spin polarization of the conduction electrons
can be produced by near-infrared, circularly polarized laser light
tuned to the band gap \cite{tr}. Since electron and nuclear spins are
coupled by the hyperfine interaction, polarization is transferred between
electrons and nuclei by the spin flip-flop transitions. Through many optical
pumping cycles, large polarization of nuclear spins can be achieved at low
temperature. When the large nuclear spin polarizations are produced, one can
directly detect NMR signals from hyperpolarized nuclei in the semiconductors
with conventional NMR techniques or optical techniques.
In addition, since optical detection of NMR is extremely sensitive and
can detect NMR signals from fewer than $10^{4}$ nuclei \cite{gbskp}, the
optical methods are naturally preferable to the conventional NMR methods.
Optical detection is mainly based on the following two mechanisms. In
direct-gap semiconductors, a conduction electron can emit a circularly
polarized photon through recombining with a hole in the hole band. The
degree of circular polarization of the photoluminescence depends on the
polarization of the conduction electrons. The hyperfine interaction leads to
spin polarization transfer between electrons and nuclei. Therefore it is
possible to detect NMR in direct-gap semiconductors indirectly by optical
detection methods \cite{tr}. In addition, under NMR conditions,
hyperpolarized nuclei in semiconductors can act back on electron spins by
the hyperfine interaction so that they shift electron Zeeman levels (
Overhauser shift ) \cite{gbskp} and change polarization of electron spins.
This is because hyperpolarized nuclei exert a magnetic field ( called the
nuclear hyperfine field ) on the electron spins \cite{gbskp}. The nuclear
hyperfine field is directly proportional to the nuclear spin polarization.
Radio-frequency pulse near an NMR transition can change the strength and
direction of this field. The net magnetic field felt by the electron spins
depends on the combined action of this field and the externally applied
magnetic field. The shift of the conduction electron Zeeman levels results
from the change of the net magnetic field. It is possible to measure this
Overhauser shift \cite{gbskp} and the polarization of the photoluminescence
through sensitive optical spectroscopies with tunable lasers and highly
sensitive detectors under NMR conditions \cite{gbskp}. Therefore one
indirectly measures NMR in the direct-gap semiconductors by optical
detection methods.
In the following, we shall discuss how to prepare and measure the states of
the nuclear spins in the direct-gap semiconductors and how to perform the
controlled rotation operation on the nuclear spins in our scheme.
For simplicity, we shall take the direct-gap semiconductor InP as our example,
which has electronic levels similar to those of GaAs (see Fig.~2a) \cite{tr}. Near
the $\Gamma $ point (electronic Bloch wavevector $k$ equal to zero), an
excess of conduction electrons with $m_{1/2}=+1/2$ can be produced by using
circularly polarized light with helicity $\sigma ^{-}$ tuned to the band gap
($E_{g}\approx 1.42\,eV$ in InP near 0~K) \cite{tr}. The direction of the
conduction electron spin polarization can be controlled by circularly
polarized light with different helicities ( such as $\sigma ^{-}$ or $\sigma
^{+}$ ). Electron spin polarization can be transferred to the $^{31}P$
nucleus by the hyperfine interaction. In addition, the $^{31}P$ nuclear
resonance frequency change is directly proportional to the electron spin
polarization $\langle S_{z}\rangle $ \cite{ty}, i.e. $\Delta f=A\rho
(z^{\prime })\langle S_{z}\rangle $, where $A$ is a coupling
constant, $\rho (z^{\prime })$ is the conduction-electron density envelope
function, $z^{\prime }$ is the displacement of the nucleus from a conduction
electron, and $\langle S_{z}\rangle $ is the electron spin polarization. The
nuclear resonance frequency of $^{31}P$ can be written as $\nu =\gamma
B_{0}/2\pi +\Delta f$, where $\gamma $ is the gyromagnetic ratio and $B_{0}$
is the externally applied magnetic field. Therefore one can use the observed
NMR frequency shift based on positive or negative electron spin polarization
$\langle S_{z}\rangle $ to control $^{31}P$ nuclear rotations by using
selective r.f. pulses. For example, only when the electron spin polarization
is positive is a resonant radio-frequency pulse applied to selectively flip
the nuclear spin; when the electron spin polarization is negative, nothing
need be done. Therefore, one can perform the controlled rotation operations
on the $^{31}P$ nuclear spin on the basis of the direction of the electron
spin polarization.
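The conditional addressing can be sketched numerically. The numbers below are illustrative assumptions, not values from the text: a nominal $^{31}P$ gyromagnetic ratio, a nominal field $B_0$, and a hypothetical hyperfine shift $\Delta f$:

```python
import numpy as np

gamma_over_2pi = 17.235e6      # nominal 31P gyromagnetic ratio / 2pi, Hz/T
B0 = 7.0                       # externally applied field in T (illustrative)
delta_f = 30e3                 # hyperfine shift A*rho*<S_z> in Hz (hypothetical)

def p31_frequency(sz_sign):
    """nu = gamma*B0/2pi + Delta_f, with Delta_f tracking the sign of <S_z>."""
    return gamma_over_2pi * B0 + sz_sign * delta_f

nu_up = p31_frequency(+1)      # electron polarization positive
nu_down = p31_frequency(-1)    # electron polarization negative

# A pulse on resonance at nu_up is 2*Delta_f off-resonance when <S_z> is
# negative, so a frequency-selective pulse flips 31P only when the
# "control" electron polarization is positive.
print((nu_up - nu_down) / 1e3)  # 60.0
```

As long as $2\Delta f$ exceeds the $^{31}P$ linewidth, the selective pulse implements the controlled rotation.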
The purpose of optical pumping in the computer is to control the hyperfine
interaction between electrons and nuclei, indirectly to mediate nuclear spin
interactions, to produce electron and nuclear spin polarization and
indirectly to measure nuclear spin polarization. For example, when the
valence-band electrons are excited to the conduction band near the $\Gamma $
point, a large hyperfine interaction energy results. This is because
near the $\Gamma $ point the conduction band is primarily composed of
wavefunctions of $s$ character, so that the electron wavefunctions are
concentrated at the nuclei. When one uses a laser to pump a cell between
two cells in the semiconductor (see Fig.~2b), it can enhance the nuclear
dipole-dipole interactions and mediate an indirect nuclear spin coupling.
This is because when the electrons are excited to the conduction band by a
laser, the conduction-electron wavefunction extends over large distances
through the crystal lattice and large electron spin polarizations are
produced. The electron spin polarization can be transferred to the nuclear
spins by the hyperfine interaction. As soon as large nuclear spin polarizations
are produced, the nuclear spins act back on the electrons and shift the electron
Zeeman energy (Overhauser shift) \cite{gbskp}. By measuring the magnitude
of the Overhauser shift under NMR conditions, it is possible to measure the
states of the nuclei with Raman spectroscopy \cite{gbskp}.
On the basis of the above discussions, in the following, we shall discuss
how to prepare the cell magnetization in an initial state, how to perform
the controlled rotation operations of $^{31}P$ nuclear spin, how to couple
the effective pure states of the two adjacent cells and how to measure the
effective states of logic gates at different cells.
Optical pumping in the computer can be used to prepare the electron spin
states and the nuclear spin states, i.e. the control qubits and target qubits.
As we have seen, one can pump the valence-band electrons into positive or
negative conduction-electron polarizations by using circularly polarized light
with different helicities ($\sigma ^{-}$ or $\sigma ^{+}$). Therefore an
initial state in a cell can be loaded with circularly polarized light of the
appropriate helicity. For example, in
order to prepare electron and nuclear spins up (in a magnetic field), spin-up
conduction electrons can be produced with circularly polarized
light of helicity $\sigma ^{-}$ tuned to the band gap, and spin-up $%
^{31}P$ nuclei can then be prepared via the hyperfine interaction. Similarly,
one can prepare electron and nuclear spins down with circularly polarized
light of helicity $\sigma ^{+}$. In Fig.~2b, one uses many laser beams
with different helicities ($\sigma ^{+}$ or $\sigma ^{-}$) to initialize the
logic gates at different cells in different states.
The controlled rotation operations of $^{31}P$ nuclear spins can be
performed by r.f pulses with different frequencies on the basis of the
states ( spins up or down ) of the conduction electrons at different cells,
as has been discussed above.
In Fig.~2b, entanglement of effective pure states between cell 1 and cell 3
might be produced by the following means. Between two logic gates (such as
1 and 3 in Fig.~2b), if one uses a higher-power laser to pump cell 2, one
can produce many more conduction electrons near cell 2 and obtain a higher
polarization of the conduction electrons there; laser pumping at cell 2
makes the electron wavefunction at that cell extend over large
distances through the crystal lattice. The electron spin dipole-dipole
interactions between these two gates (1 and 3) are increased because the
conduction electrons of cell 2 mediate electron spin dipole-dipole
interactions between cell 2 and cell 1, and between cell 2 and cell 3. Therefore one
can use pumping laser light to mediate an indirect coupling
between the $^{31}P$ qubits of two cells by means of the $^{31}P$
electron-nuclear hyperfine interaction. If one pumps cell 2 with low
or no laser power, one can decrease the indirect interactions
between the $^{31}P$ qubits of the two cells so that the two gates 1 and 3 work
independently. As the distance between two cells is increased, the
interactions between distant logic spins will no longer be effective.
Fortunately, universal quantum computation is still possible with just local
interactions \cite{chuang4,lloyd}. This is because one can use a
cellular-automata architecture to perform arbitrary computations, with a
computational time that increases linearly with system size due to message
passing \cite{lloyd}.
The $^{31}P$ nuclear spin states at a cell can be measured indirectly with
Raman spectroscopy \cite{gbskp}, because the magnitude of the Overhauser
shift under NMR conditions differs according to whether the $^{31}P$ nuclear
spin is up or down \cite{gbskp}.
Can we construct different logic gates in space in a semiconductor? That is,
can one make the logic gates addressable? In Fig.~2b, similarly to the scheme of
the trapped-ion quantum computer \cite{cz}, we use N different laser beams
with different helicities to pump and detect different cells in space in the
InP semiconductor, together with externally applied gradient magnetic
fields and r.f. gradient pulses, so that we can build up any logic gates in
space.
\section{Conclusion}
We have described two schemes for implementing controlled NOT gates
in NMR quantum computers. It is possible to realize these two proposals with
current techniques. Optical pumping in solid-state NMR can circumvent the
two problems of relatively low sensitivity and lack of spatial selectivity
in solid-state NMR. Therefore it is possible to construct quantum logic
gates in space. The schemes could be useful for implementing actual quantum
computers in terms of a cellular-automata architecture. It should be noted
that the nuclear spin polarization in optically pumped solid-state NMR does not
depend on the strength of the magnetic field, so the experiments can be
performed in low fields produced by ordinary electromagnets or permanent
magnets. The experimental demonstration of our proposal with optical pumping
in solid-state NMR and modern NMR techniques is possible.
\medskip
{\Large Acknowledgments}
This work has been supported by the National Natural Science Foundation of
China.
\section{Introduction}
\label{Intro}
In this paper, we consider a {\it toroidal
compactification} of a {\it mixed Shimura variety}
\[
j: M \hookrightarrow M({\mathfrak{S}}) \: .
\]
According to \cite{P1}, the {\it boundary} $M({\mathfrak{S}}) - M$ has a natural
{\it stratification} into locally closed subsets,
each of which is itself (a quotient by
the action of a finite group of) a Shimura variety. Let
\[
i: M' \hookrightarrow M({\mathfrak{S}})
\]
be the inclusion of an individual such stratum. Both in the Hodge and the
$\ell$-adic context, there is a theory of {\it mixed sheaves}, and in particular,
a functor
\[
i^*j_*
\]
from the bounded derived category of mixed sheaves on $M$ to that of mixed sheaves on $M'$.
The objective of the present article is a formula for the effect of $i^*j_*$
on those complexes of
mixed sheaves coming about via the {\it canonical construction},
denoted $\mu$:
The Shimura variety $M$ is associated to a linear algebraic group $P$ over ${\mathbb{Q}}$, and any complex of algebraic representations ${\mathbb{V}}^\bullet$ of $P$ gives rise
to a complex of
mixed sheaves $\mu({\mathbb{V}}^\bullet)$ on $M$. Let $P'$ be the group belonging to $M'$;
it is the quotient by a normal unipotent subgroup $U'$ of a
subgroup $P_1$ of $P$:
\[
\begin{array}{ccccc}
U' & \trianglelefteq & P_1 & \le & P \\
& & \downarrow & & \\
& & P' & &
\end{array}
\]
Our main result (\ref{2H} in the Hodge setting; \ref{3J} in the $\ell$-adic setting) expresses the composition $i^*j_* \circ \mu$
in terms of the canonical construction $\mu'$ on $M'$, and Hochschild
cohomology of $U'$. It may be seen as complementing results
of Harris and Zucker (\cite{HZ}), and of Pink (\cite{P2}).
In the $\ell$-adic setting, \cite{P2} treats the analogous question for the
natural stratification of the {\it Baily--Borel compactification} $M^*$
of a {\it pure} Shimura variety $M$.
The resulting formula (\cite{P2}~(5.3.1)) has a more
complicated structure than ours: Besides Hochschild cohomology of a unipotent
group, it also involves cohomology of a certain arithmetic group.
Although we are interested in a different geometric situation, much of the
abstract material developed in the first two sections of \cite{P2} will enter
our proof. We should mention that the proof of Pink's result actually involves
a toroidal compactification. The stratification used is the
one induced by the stratification of $M^*$, and is therefore coarser than the
one considered in the present work.
In \cite{HZ}, Harris and Zucker study
the {\it Hodge structure} on the boundary cohomology of
the {\it Borel--Serre compactification} of a Shimura variety. As
in \cite{P2}, toroidal compactifications enter the proof of the
main result (\cite{HZ}~(5.5.2)). It turns out to be necessary to control
the structure of $i^*j_* \circ \mu({\mathbb{V}}^\bullet)$ in the case
when the stratum $M'$ is minimal.
There, the authors
arrive at a
description which is equivalent to ours (\cite{HZ}~(4.4.18)).
Although they only treat the case of a pure Shimura variety, and
do not relate their result directly to representations of the group $P'$,
it is fair to say that an important part of
the main Hodge theoretic information
entering our proof
(see (b) below) is
already contained in \cite{HZ}~(4.4). Still, our global
strategy of proof of the main comparison result \ref{2H} is different:
We employ Saito's \emph{specialization functor}, and
a homological yoga to reduce to two seemingly weaker
comparison statements: (a) comparison for the full functor $i^*j_* \circ \mu$, but only
on the level of local systems; (b) comparison on the level of variations of
Hodge structure, but only for ${\cal H}^0i^*j_* \circ \mu$.
It is a pleasure to thank A.~Huber, A.~Werner, D.~Blasius, C.~Deninger,
G.~Kings,
C.~Serp\'e, J.~Steenbrink, M.~Strauch
and T.~Wenger for useful remarks, and
G.~Weckermann for \TeX ing my manuscript.
I am particularly grateful to R.~Pink for pointing out an error
in an earlier version of the proof of \ref{2H}.
Finally, I am indebted to the referee for
her or his helpful comments.
\myheading{Notations and Conventions:}
Throughout the whole article, we make consistent use of the language and
the main results of \cite{P1}.
Algebraic representations of an algebraic group are finite dimensional
by definition. If a group $G$ acts on $X$, then we write $\mathop{\rm Cent}\nolimits_G X$ for
the kernel of the action. If $Y$ is a subobject of $X$, then $\mathop{\rm Stab}\nolimits_G Y$
denotes the subgroup of $G$ stabilizing $Y$.
If $X$ is a variety over ${\mathbb{C}}\,$,
then $D^b_{\rm c} (X({\mathbb{C}}))$ denotes the full triangulated subcategory
of complexes of sheaves of abelian groups on $X({\mathbb{C}})$ with
constructible cohomology. The subcategory of complexes whose cohomology
is \emph{algebraically} constructible is denoted by $D^b_{\rm c} (X)$.
If $F$ is a coefficient field,
then we define triangulated categories of complexes of sheaves of
$F$-vector spaces
\[
D^b_{\rm c} (X , F) \subset D^b_{\rm c} (X({\mathbb{C}}) , F)
\]
in a similar fashion. The category
$\mathop{\bf Perv}\nolimits_F X$ is defined as the heart of the perverse
$t$-structure on $D^b_{\rm c} (X , F)$.
Finally, the ring of finite ad\`eles over ${\mathbb{Q}}$ is denoted by ${\mathbb{A}}_f$.
\section{Strata in toroidal compactifications}
\label{1}
This section provides complements to certain aspects of Pink's treatment (\cite{P1}). The first concerns the shape of the canonical stratification of a toroidal compactification of a Shimura variety. According to
\cite{P1}~12.4~(c), these strata are quotients by finite group actions of ``smaller'' Shimura varieties. We shall show (\ref{1F}) that under mild restrictions (neatness of the compact group, and condition $(+)$ below),
the finite groups occurring are in fact trivial.
The second result concerns the formal completion of a stratum.
Under the above restrictions, we show (\ref{1M}) that the completion
in the toroidal compactification is canonically isomorphic to the
completion in a suitable torus embedding.
Under special assumptions on the cone decomposition giving rise to the compactification, this result is an immediate consequence of \cite{P1}~12.4~(c), which concerns the {\it closure} of the stratum in question.
Finally (\ref{1R}), we identify the normal cone of a
stratum in a toroidal compactification.\\
Let $(P, {\mathfrak{X}})$ be {\it mixed Shimura data} (\cite{P1}~Def.~2.1).
So in particular, $P$ is a connected algebraic linear group over ${\mathbb{Q}}$, and
$P({\mathbb{R}})$ acts on the complex manifold ${\mathfrak{X}}$ by analytic automorphisms.
Any {\it admissible parabolic subgroup} (\cite{P1}~Def.~4.5) $Q$ of $P$ has a canonical normal subgroup $P_1$ (\cite{P1}~4.7). There is a finite collection of {\it rational boundary components} $(P_1 , {\mathfrak{X}}_1)$, indexed by the $P_1 ({\mathbb{R}})$-orbits in $\pi_0 ({\mathfrak{X}})$ (\cite{P1}~4.11). The $(P_1 , {\mathfrak{X}}_1)$ are themselves mixed Shimura data.
Denote by $W$ the unipotent radical of $P$. If $P$ is reductive, i.e., if $W=0$, then $(P, {\mathfrak{X}})$ is called {\it pure}.
Consider the following condition on $(P, {\mathfrak{X}})$:
\begin{enumerate}
\item [$(+)$] If $G$ denotes the maximal reductive quotient of $P$, then the neutral connected component $Z (G)^0$ of the center $Z (G)$ of $G$ is, up to isogeny, a direct product of a ${\mathbb{Q}}$-split torus with a torus $T$ of compact type (i.e., $T({\mathbb{R}})$ is compact) defined over ${\mathbb{Q}}$.
\end{enumerate}
From the proof of \cite{P1}~Cor.~4.10, one concludes:
\begin{Prop}\label{1A}
If $(P, {\mathfrak{X}})$ satisfies $(+)$, then so does any rational boundary component $(P_1 , {\mathfrak{X}}_1)$.
\end{Prop}
Denote by $U_1 \trianglelefteq P_1$ the ``weight $-2$'' part of $P_1$. It is abelian, normal in $Q$, and central in the unipotent radical $W_1$ of $P_1$.
Fix a connected component ${\mathfrak{X}}^0$ of ${\mathfrak{X}}$, and denote by
$(P_1 , {\mathfrak{X}}_1)$ the associated rational boundary component. There is a natural open embedding
\[
\iota: {\mathfrak{X}}^0 \longrightarrow {\mathfrak{X}}_1
\]
(\cite{P1}~4.11, Prop.~4.15~(a)). If ${\mathfrak{X}}^0_1$ denotes the connected component of ${\mathfrak{X}}_1$ containing ${\mathfrak{X}}^0$, then the image of the embedding can be described by means of the map {\it imaginary part}
\[
\mathop{{\rm im}}\nolimits : {\mathfrak{X}}_1 \longrightarrow U_1 ({\mathbb{R}}) (-1) := \frac{1}{2 \pi i} \cdot U_1 ({\mathbb{R}}) \subset U_1 ({\mathbb{C}})
\]
of \cite{P1}~4.14: ${\mathfrak{X}}^0$ is the preimage of an open convex cone
\[
C ({\mathfrak{X}}^0 , P_1) \subset U_1 ({\mathbb{R}}) (-1)
\]
under $\mathop{{\rm im}}\nolimits |_{{\mathfrak{X}}^0_1}$ (\cite{P1}~Prop.~4.15~(b)).\\
Let us indicate the definition of the map $\mathop{{\rm im}}\nolimits$: given $x_1 \in {\mathfrak{X}}^0_1$, there is exactly one element $u_1 \in U_1 ({\mathbb{R}}) (-1)$ such that $u^{-1}_1 (x_1) \in {\mathfrak{X}}^0_1$ is real, i.e., the associated morphism of the Deligne torus
\[
\mathop{{\rm int}}\nolimits (u^{-1}_1) \circ h_{x_1} : {\mathbb{S}}_{{\mathbb{C}}} \longrightarrow P_{1,{\mathbb{C}}}
\]
(\cite{P1}~2.1) descends to ${\mathbb{R}}$. Define $\mathop{{\rm im}}\nolimits (x_1) := u_1$.\\
We now describe the composition
\[
\mathop{{\rm im}}\nolimits \circ \iota : {\mathfrak{X}}^0 \longrightarrow U_1 ({\mathbb{R}}) (-1)
\]
in terms of the group
\[
H_0 := \{ (z,\alpha) \in {\mathbb{S}} \times {\rm GL}_{2,{\mathbb{R}}} \, | \, N(z) = \det (\alpha) \}
\]
of \cite{P1}~4.3. Let $U_0$ denote the copy of ${\mathbb{G}}_{a,{\mathbb{R}}}$ in $H_0$ consisting of elements
\[
\left( 1 , \left(
\begin{array}{cc}
1 & \ast \\ 0 & 1
\end{array} \right) \right) \; .
\]
According to \cite{P1}~Prop.~4.6, any $x \in {\mathfrak{X}}$ defines a morphism
\[
\omega_x : H_{0,{\mathbb{C}}} \longrightarrow P_{{\mathbb{C}}} \; .
\]
\begin{Lem}\label{1B}
Let $x \in {\mathfrak{X}}^0$. Then
\[
\mathop{{\rm im}}\nolimits (\iota x) \in U_1 ({\mathbb{R}}) (-1)
\]
lies in $\omega_x (U_0 ({\mathbb{R}}) (-1) - \{0\})$.
\end{Lem}
\begin{Proof}
Since the associations
\[
x \longmapsto \omega_x
\]
and
\[
x \longmapsto \mathop{{\rm im}}\nolimits (\iota x)
\]
are $(U ({\mathbb{R}})(-1))$-equivariant, we may assume that $\mathop{{\rm im}}\nolimits (x) = 0$, i.e., that
\[
h_x : {\mathbb{S}}_{{\mathbb{C}}} \longrightarrow P_{{\mathbb{C}}}
\]
descends to ${\mathbb{R}}$. According to the proof of \cite{P1}~Prop.~4.6,
\[
\omega_x : H_{0,{\mathbb{C}}} \longrightarrow P_{{\mathbb{C}}}
\]
then descends to ${\mathbb{R}}$. Now
\[
h_{\iota x} : {\mathbb{S}}_{{\mathbb{C}}} \longrightarrow P_{1,{\mathbb{C}}} \hookrightarrow P_{{\mathbb{C}}}
\]
is given by $\omega_x \circ h_{\infty}$, for a certain embedding
\[
h_{\infty} : {\mathbb{S}}_{{\mathbb{C}}} \longrightarrow H_{0,{\mathbb{C}}}
\]
(\cite{P1}~4.3).
More concretely, as can be seen from \cite{P1}~4.2--4.3,
there is a $\tau \in {\mathbb{C}} - {\mathbb{R}}$ such that on ${\mathbb{C}}$-valued points,
we have
\[
h_\infty: (z_1,z_2) \longmapsto
\left( (z_1,z_2) , \left(
\begin{array}{cc}
z_1z_2 & \tau (1-z_1z_2) \\
0 & 1
\end{array} \right) \right) \; .
\]
Hence there is an element
\[
u_0 \in U_0 ({\mathbb{R}}) (-1) - \{0\}
\]
such that $\mathop{{\rm int}}\nolimits (u^{-1}_0) \circ h_\infty$ descends to ${\mathbb{R}}$.
But then $\omega_x (u_0)$ has the defining property of $\mathop{{\rm im}}\nolimits (\iota x)$.
\end{Proof}
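For instance, with $h_\infty$ as above, the element $u_0$ can be exhibited
explicitly: for
$u_0 = \left( 1 , \left(
\begin{array}{cc}
1 & b \\ 0 & 1
\end{array} \right) \right)$,
conjugation gives
\[
\mathop{{\rm int}}\nolimits (u^{-1}_0) \circ h_\infty : (z_1,z_2) \longmapsto
\left( (z_1,z_2) , \left(
\begin{array}{cc}
z_1z_2 & (\tau - b) (1-z_1z_2) \\
0 & 1
\end{array} \right) \right) \; ,
\]
which descends to ${\mathbb{R}}$ precisely when $\tau - b$ is real, i.e., for
$b = (\tau - \overline{\tau})/2$. Since $\tau \in {\mathbb{C}} - {\mathbb{R}}$,
this $b$ is purely imaginary and nonzero, so $u_0$ indeed lies in
$U_0 ({\mathbb{R}}) (-1) - \{0\}$.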
Let $F$ be a field of characteristic $0$. By definition of Shimura data, any
algebraic representation
\[
{\mathbb{V}} \in \mathop{\bf Rep}\nolimits_F P
\]
comes equipped with a natural weight filtration $W_{\bullet}$
(see \cite{P1}~Prop.~1.4). Lemma \ref{1B}
enables us to relate it
to the weight filtration $M_{\bullet}$ of
\[
\mathop{\rm Res}\nolimits^P_{P_1} ({\mathbb{V}}) \in \mathop{\bf Rep}\nolimits_F P_1 \; :
\]
\begin{Prop}\label{1C}
Let ${\mathbb{V}} \in \mathop{\bf Rep}\nolimits_F P$, and $T \in U_1 ({\mathbb{Q}})$ such that
\[
\pm \frac{1}{2 \pi i} T \in C ({\mathfrak{X}}^0 , P_1) \; .
\]
Then the weight filtration of $\log T$ relative to $W_{\bullet}$ (\cite{D}~(1.6.13)) exists, and
is identical to $M_{\bullet}$.
\end{Prop}
\begin{Proof}
Set $N:= \log T$. Since $\mathop{\rm Lie}\nolimits (U_1)$ is of weight $-2$, we clearly have
\[
NM_i \subset M_{i-2} \; .
\]
It remains to prove that
\[
N^k : {\rm Gr}^M_{m+k} {\rm Gr}^W_m {\mathbb{V}} \longrightarrow {\rm Gr}^M_{m-k} {\rm Gr}^W_m {\mathbb{V}}
\]
is an isomorphism. According to \ref{1B}, there are $x \in {\mathfrak{X}}^0$ and $u_0 \in U_0 ({\mathbb{R}}) (-1) - \{0\}$ such that
\[
\omega_x : H_{0,{\mathbb{C}}} \longrightarrow P_{{\mathbb{C}}}
\]
maps $u_0$ to $\pm \frac{1}{2\pi i} T$. By definition,
$M_{\bullet}$ is the weight filtration associated to the morphism
\[
\omega_x \circ h_{\infty} : {\mathbb{S}}_{{\mathbb{C}}} \longrightarrow P_{1,{\mathbb{C}}} \; .
\]
Our assertion has become one about representations of $H_{0,{\mathbb{C}}}$. But
$\mathop{\bf Rep}\nolimits_{\mathbb{C}} H_{0,{\mathbb{C}}}$ is semisimple, the irreducible objects being given by
\[
\mathop{\rm Sym}\nolimits^n V \otimes \chi \; ,
\]
$V$ the standard representation of ${\rm GL}_{2,{\mathbb{C}}} \; , \; \chi$ a character of $H_{0,{\mathbb{C}}}$ and $n \ge 1$. It is straightforward to show that for any such representation, the weight filtration defined by $h_{\infty}$ equals the monodromy weight filtration for $\log u_0$.
\end{Proof}
\begin{Cor}\label{1D}
Let $T \in U_1 ({\mathbb{Q}})$ such that $\pm \frac{1}{2 \pi i} T \in C ({\mathfrak{X}}^0 , P_1)$.
Then
\[
\mathop{\rm Cent}\nolimits_W(T) = \mathop{\rm Cent}\nolimits_W(U_1) = W \cap P_1 \; .
\]
\end{Cor}
\begin{Proof}
The inclusions ``$\supset$'' hold since the right hand side is contained in $W_1$, and $U_1$ is central in $W_1$. For the reverse inclusions, let us show that
\[
\mathop{\rm Lie}\nolimits \left( \mathop{\rm Cent}\nolimits_W (T) \right) \subset \mathop{\rm Lie}\nolimits W
\]
is contained in the (weight $\le -1$)-part of the
restriction of the adjoint representation
\[
\mathop{\rm Lie}\nolimits W \in \mathop{\bf Rep}\nolimits_{\mathbb{Q}} P
\]
to $P_1$. Observe that with respect to this representation, we have
\[
\ker \left( \log T \right) = \mathop{\rm Lie}\nolimits \left( \mathop{\rm Cent}\nolimits_W (T) \right) \; .
\]
First, recall (\cite{P1}~2.1) that ${\rm Gr}_m^{W_\bullet} (\mathop{\rm Lie}\nolimits W) = 0$
for $m \ge 0$.
From the defining property of the weight filtration $M_{\bullet}$ of $\log T$
relative to $W_{\bullet}$, it follows that
\[
\ker \left( \log T \right) \subset M_{-1} \left( \mathop{\rm Lie}\nolimits W \right) \; .
\]
Proposition~\ref{1C} guarantees that the right hand side equals the
(weight $\le -1$)-part under the action of $P_1$.
Our claim therefore follows from the equality
\[
M_{-1} \left( \mathop{\rm Lie}\nolimits W \right) = \mathop{\rm Lie}\nolimits \left( W \cap P_1 \right)
\]
(\cite{P1}~proof of Lemma~4.8).
\end{Proof}
\begin{Lem}\label{1E}
Let $P_1 \trianglelefteq Q \le P$ as before, let
$\Gamma \le Q ({\mathbb{Q}})$ be contained in a compact subgroup of $Q ({\mathbb{A}}_f)$, and assume that $\Gamma$ centralizes $U_1$. Then a subgroup of finite index in $\Gamma$ is contained in
\[
(Z (P) \cdot P_1) ({\mathbb{Q}}) \; .
\]
If $(+)$ holds for $(P , {\mathfrak{X}})$, then a subgroup of finite index in $\Gamma$ is contained in $P_1 ({\mathbb{Q}})$.
\end{Lem}
\begin{Proof}
The two statements are equivalent: if one passes from $(P , {\mathfrak{X}})$ to the quotient data $(P , {\mathfrak{X}}) / Z (P)$ (\cite{P1}~Prop.~2.9), then $(+)$ holds. So assume that $(+)$ is satisfied. Fix a point $x \in {\mathfrak{X}}$, and consider the associated homomorphism
\[
\omega_x : H_{0,{\mathbb{C}}} \longrightarrow P_{{\mathbb{C}}} \; .
\]
Since $\omega_x$ maps the subgroup $U_0$ of $H_0$ to $U_1$, the elements in the centralizer of $U_1$ also commute with $\omega_x (U_0)$.
First assume that $P = G = G^{{\rm ad}}$. By looking at the decomposition of $\mathop{\rm Lie}\nolimits G_{{\mathbb{R}}}$ under the action of $H_0$ (\cite{P1}~Lemma~4.4~(c)), one sees that
the Lie algebra of the centralizer in $Q_{{\mathbb{R}}}$ of $\omega_x(U_0)$,
\[
\mathop{\rm Lie}\nolimits \left( \mathop{\rm Cent}\nolimits_{Q_{{\mathbb{R}}}} \omega_x (U_0) \right) \subset \mathop{\rm Lie}\nolimits Q_{{\mathbb{R}}}
\]
is contained in $\mathop{\rm Lie}\nolimits P_{1,{\mathbb{R}}} + \mathop{\rm Lie}\nolimits (\mathop{\rm Cent}\nolimits_{G_{{\mathbb{R}}}} \mathop{{\rm im}}\nolimits (\omega_x))$.
But $\mathop{\rm Cent}\nolimits_{G_{{\mathbb{R}}}} \mathop{{\rm im}}\nolimits (\omega_x)$ is a compact group, hence the image of $\Gamma$ in $(Q / P_1) ({\mathbb{Q}})$ is finite.
Next, if $P = G$, then by the above,
\[
\Gamma \cap (Z (G) \cdot P_1) ({\mathbb{Q}})
\]
is of finite index in $\Gamma$. Because of $(+)$, the image of $\Gamma$ in $(Q / P_1) ({\mathbb{Q}})$ is again finite.
In the general case,
\[
\Gamma \cap (W \cdot P_1) ({\mathbb{Q}})
\]
is of finite index in $\Gamma$. Analysing the decomposition of $\mathop{\rm Lie}\nolimits W_{{\mathbb{R}}}$ under the action of $H_0$ (\cite{P1}~Lemma~4.4~(a) and (b)),
or using Corollary~\ref{1D}, one realizes that
\[
\mathop{\rm Lie}\nolimits (\mathop{\rm Cent}\nolimits_{Q} U_1) \cap \mathop{\rm Lie}\nolimits W \subset \mathop{\rm Lie}\nolimits P_{1} \; .
\]
\end{Proof}
The {\it Shimura varieties} associated to mixed Shimura data $(P,{\mathfrak{X}})$ are indexed by the open compact subgroups of $P ({\mathbb{A}}_f)$. If $K$ is one such, then the analytic space of ${\mathbb{C}}$-valued points of the corresponding variety $M^K := M^K (P,{\mathfrak{X}})$ is given as
\[
M^K ({\mathbb{C}}) := P ({\mathbb{Q}}) \backslash ( {\mathfrak{X}} \times P ({\mathbb{A}}_f) / K ) \; .
\]
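For orientation, the most classical pure example (up to normalization of conventions): for $(P , {\mathfrak{X}}) = ({\rm GL}_{2 , {\mathbb{Q}}} , {\mathbb{H}}^{\pm})$, where ${\mathbb{H}}^{\pm}$ denotes the union of the upper and lower half planes, and the principal congruence subgroup
\[
K = K (N) := \ker \left( {\rm GL}_2 (\hat{{\mathbb{Z}}}) \longrightarrow {\rm GL}_2 ({\mathbb{Z}} / N {\mathbb{Z}}) \right) \; , \quad N \ge 3 \; ,
\]
strong approximation for ${\rm SL}_2$ identifies $M^K ({\mathbb{C}})$ with a disjoint union of $\varphi (N)$ copies of the modular curve $\Gamma (N) \backslash {\mathbb{H}}$, the copies being indexed by $\hat{{\mathbb{Z}}}^{\times} / \det (K) \cong ({\mathbb{Z}} / N {\mathbb{Z}})^{\times}$.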
In order to discuss compactifications, we need to introduce
the {\it conical complex} associated to $(P,{\mathfrak{X}})$:
set-theoretically, it is defined as
\[
{\cal C} (P , {\mathfrak{X}}) := \coprod_{({\mathfrak{X}}^0 , P_1)} C ({\mathfrak{X}}^0 , P_1) \; .
\]
By \cite{P1}~4.24, the conical complex is naturally equipped with a topology (which is usually different from the coproduct topology). The closure $C^{\ast} ({\mathfrak{X}}^0 , P_1)$ of $C ({\mathfrak{X}}^0 , P_1)$
inside ${\cal C} (P , {\mathfrak{X}})$
can still be considered as a convex cone in $U_1 ({\mathbb{R}}) (-1)$, with the induced topology. \\
For fixed $K$, the (partial) {\it toroidal compactifications} of $M^K$ are para\-me\-terized by {\it $K$-admissible partial cone decompositions}, which are collections of subsets of
\[
{\cal C} (P,{\mathfrak{X}}) \times P ({\mathbb{A}}_f)
\]
satisfying the axioms of \cite{P1}~6.4. If ${\mathfrak{S}}$ is one such, then in particular any member of ${\mathfrak{S}}$ is of the shape
\[
\sigma \times \{ p \} \; ,
\]
$p \in P ({\mathbb{A}}_f)$, $\sigma \subset C^{\ast} ({\mathfrak{X}}^0 , P_1)$ a {\it convex rational polyhedral cone} in
the vector space $U_1 ({\mathbb{R}}) (-1)$ (\cite{P1}~5.1) not containing any non-trivial linear subspace.\\
Let $M^K ({\mathfrak{S}}) := M^K (P , {\mathfrak{X}} , {\mathfrak{S}})$ be the associated compactification. It comes equipped with a natural stratification into locally closed strata, each of which looks as follows: Fix a pair $({\mathfrak{X}}^0 , P_1)$ as above, $p \in P ({\mathbb{A}}_f)$ and
\[
\sigma \times \{ p \} \in {\mathfrak{S}}
\]
such that $\sigma \subset C^{\ast} ({\mathfrak{X}}^0 , P_1)$. Assume that
\[
\sigma \cap C ({\mathfrak{X}}^0 , P_1) \neq \emptyset \; .
\]
To $\sigma$, one associates Shimura data
\[
\left( \mathop{P_{1,[\sigma]}}\nolimits , \mathop{\FX_{1,[\sigma]}}\nolimits \right)
\]
(\cite{P1}~7.1), whose underlying group $\mathop{P_{1,[\sigma]}}\nolimits$ is the quotient of $P_1$ by the algebraic subgroup
\[
\langle \sigma \rangle \subset U_1
\]
satisfying ${\mathbb{R}} \cdot \sigma = \frac{1}{2 \pi i} \cdot \langle \sigma \rangle ({\mathbb{R}})$. Set
\[
K_1 := P_1 ({\mathbb{A}}_f) \cap p \cdot K \cdot p^{-1} \; , \quad
\pi_{[\sigma]} : P_1 \ontoover{\ } \mathop{P_{1,[\sigma]}}\nolimits \; .
\]
According to \cite{P1}~7.3, there is a canonical map
\[
i ({\mathbb{C}}) : \mathop{M^{\pi_{[\sigma]} (K_1)}}\nolimits (\mathop{P_{1,[\sigma]}}\nolimits , \mathop{\FX_{1,[\sigma]}}\nolimits) ({\mathbb{C}}) \longrightarrow
M^K ({\mathfrak{S}}) ({\mathbb{C}})
:= M^K (P , {\mathfrak{X}} , {\mathfrak{S}}) ({\mathbb{C}})
\]
whose image is locally closed. In fact, $i ({\mathbb{C}})$ is a quotient map onto its image.
\begin{Prop}\label{1F}
Assume that $(P, {\mathfrak{X}})$ satisfies $(+)$, and that $K$ is \emph{neat} (see e.g.\ \cite{P1}~0.6). Then $i ({\mathbb{C}})$ is injective, i.e., it identifies $\mathop{M^{\pi_{[\sigma]} (K_1)}}\nolimits ({\mathbb{C}})$ with a locally closed subspace of $M^K ({\mathfrak{S}}) ({\mathbb{C}})$.
\end{Prop}
\begin{Proof}
Consider the group $\Delta_1$ of \cite{P1}~6.18:
\begin{eqnarray*}
H_Q & := & \mathop{\rm Stab}\nolimits_{Q ({\mathbb{Q}})} ({\mathfrak{X}}_1) \cap P_1 ({\mathbb{A}}_f) \cdot p \cdot K \cdot p^{-1} \; , \\
\Delta_1 & := & H_Q / P_1 ({\mathbb{Q}}) \; .
\end{eqnarray*}
The subgroup $\Delta_1 \le (Q / P_1) ({\mathbb{Q}})$ is arithmetic. According to \cite{P1}~7.3, the image of $i ({\mathbb{C}})$ is given by the quotient of $\mathop{M^{\pi_{[\sigma]} (K_1)}}\nolimits ({\mathbb{C}})$ by a certain subgroup
\[
\mathop{\rm Stab}\nolimits_{\Delta_1} ([\sigma]) = \mathop{\rm Stab}\nolimits_{H_Q} ([\sigma]) / P_1 ({\mathbb{Q}}) \le \Delta_1 \; .
\]
This stabilizer refers to the action of $H_Q$ on the double quotient
\[
P_1 ({\mathbb{Q}}) \backslash {\mathfrak{S}}_1 / P_1 ({\mathbb{A}}_f)
\]
of \cite{P1}~7.3. Denote the projection $Q \to Q / P_1$ by $\mathop{\rm pr}\nolimits$, so $\Delta_1 = \mathop{\rm pr}\nolimits (H_Q)$, and
\[
\mathop{\rm Stab}\nolimits_{\Delta_1} ([\sigma]) = \mathop{\rm pr}\nolimits \left( \mathop{\rm Stab}\nolimits_{H_Q} ([\sigma]) \right) \; .
\]
By Lemma~\ref{1G}, this group is trivial under the hypotheses of
the proposition.
\end{Proof}
\begin{Lem}\label{1G}
If $(P , {\mathfrak{X}})$ satisfies $(+)$ then $\mathop{\rm Stab}\nolimits_{\Delta_1} ([\sigma])$ is finite. If, in addition, $K$ is neat then $\mathop{\rm Stab}\nolimits_{\Delta_1} ([\sigma]) = 1$.
\end{Lem}
\begin{Proof}
The second claim follows from the first since $\mathop{\rm Stab}\nolimits_{\Delta_1} ([\sigma])$ is contained in
\[
(Q / P_1) ({\mathbb{Q}}) \cap \mathop{\rm pr}\nolimits (p \cdot K \cdot p^{-1}) \; ,
\]
which is neat if $K$ is.
Consider the arithmetic subgroup of $Q ({\mathbb{Q}})$
\[
\Gamma_Q := H_Q \cap p \cdot K \cdot p^{-1} \; .
\]
The group $\mathop{\rm pr}\nolimits (\Gamma_Q)$ is arithmetic, hence of finite index in $\Delta_1$. It follows that
\[
\mathop{\rm Stab}\nolimits_{\mathop{\rm pr}\nolimits(\Gamma_Q)}([\sigma]) =
\mathop{\rm pr}\nolimits \left( \mathop{\rm Stab}\nolimits_{\Gamma_Q} ([\sigma])\right) \le \mathop{\rm Stab}\nolimits_{\Delta_1} ([\sigma])
\]
is of finite index. Now
\[
\mathop{\rm Stab}\nolimits_{\Gamma_Q} (\sigma) \le \mathop{\rm Stab}\nolimits_{\Gamma_Q} ([\sigma])
\]
is of finite index. By \cite{P1}~Thm.~6.19, a subgroup of finite index of $\mathop{\rm Stab}\nolimits_{\Gamma_Q} (\sigma)$ centralizes $U_1$. Our claim thus follows from
Lemma~\ref{1E}.
\end{Proof}
\begin{Rem}\label{1H}
The lemma shows that the groups ``\, $\mathop{\rm Stab}\nolimits_{\Delta_1} ([\sigma])$'' occurring in 7.11, 7.15, 7.17, 9.36, 9.37, and 12.4 of \cite{P1} are all trivial provided that $(P,{\mathfrak{X}})$ satisfies $(+)$ and $K$ is neat.
\end{Rem}
We continue the study of the map
\[
i ({\mathbb{C}}) : \mathop{M^{\pi_{[\sigma]} (K_1)}}\nolimits ({\mathbb{C}}) \longrightarrow M^K ({\mathfrak{S}}) ({\mathbb{C}}) \; .
\]
Let $\mathop{\FS_{1,[\sigma]}}\nolimits$ be the minimal $K_1$-admissible cone decomposition of
\[
{\cal C} (P_1 , {\mathfrak{X}}_1) \times P_1 ({\mathbb{A}}_f)
\]
containing $\sigma \times \{ 1 \}$; $\mathop{\FS_{1,[\sigma]}}\nolimits$ can be realized inside the decomposition ${\mathfrak{S}}^0_1$ of \cite{P1}~6.13; by definition, it is {\it concentrated in the unipotent fibre} (\cite{P1}~6.5~(d)).
View $\mathop{M^{\pi_{[\sigma]} (K_1)}}\nolimits ({\mathbb{C}})$ as sitting inside $M^{K_1} (\mathop{\FS_{1,[\sigma]}}\nolimits) ({\mathbb{C}})$:
\[
i_1 ({\mathbb{C}}) : \mathop{M^{\pi_{[\sigma]} (K_1)}}\nolimits ({\mathbb{C}}) \hookrightarrow M^{K_1} (\mathop{\FS_{1,[\sigma]}}\nolimits) ({\mathbb{C}}) \; .
\]
Consider the diagram
\[
\vcenter{\xymatrix@R-10pt{
\mathop{M^{\pi_{[\sigma]} (K_1)}}\nolimits ({\mathbb{C}}) \ar@{^{ (}->}[r]^{i_1 ({\mathbb{C}})} \ar@{_{ (}->}[dr]_{i({\mathbb{C}})} &
M^{K_1} (\mathop{\FS_{1,[\sigma]}}\nolimits) ({\mathbb{C}}) \\
& M^K ({\mathfrak{S}}) ({\mathbb{C}}) \\}}
\]
\cite{P1}~6.13 contains the definition of an open neighbourhood
\[
{\mathfrak{U}} := \overline{\FU} (P_1 , {\mathfrak{X}}_1 , p)
\]
of $\mathop{M^{\pi_{[\sigma]} (K_1)}}\nolimits ({\mathbb{C}})$ in $M^{K_1} (\mathop{\FS_{1,[\sigma]}}\nolimits) ({\mathbb{C}})$, and a natural extension $f$ of the map $i ({\mathbb{C}})$ to ${\mathfrak{U}}\,$:
\[
\vcenter{\xymatrix@R-10pt{
\mathop{M^{\pi_{[\sigma]} (K_1)}}\nolimits ({\mathbb{C}}) \ar@{^{ (}->}[r] \ar@{_{ (}->}[dr]_{i ({\mathbb{C}})} & {\mathfrak{U}} \ar[d]^f \\
& M^K ({\mathfrak{S}}) ({\mathbb{C}}) \\}}
\]
\begin{Prop}\label{1I}
(a) $f$ is open.\\
(b) We have the equality
\[
f^{-1} (M^K({\mathbb{C}})) = {\mathfrak{U}} \cap M^{K_1} ({\mathbb{C}}) \; .
\]
\end{Prop}
\begin{Proof} Let us recall the definition of $\overline{\FU} (P_1 , {\mathfrak{X}}_1,p)$, and part of the construction of
$M^K ({\mathfrak{S}}) ({\mathbb{C}})$:
Let ${\mathfrak{X}}^+ \subset {\mathfrak{X}}$ be the preimage under
\[
{\mathfrak{X}} \longrightarrow \pi_0 ({\mathfrak{X}})
\]
of the $P_1 ({\mathbb{R}})$-orbit in $\pi_0 ({\mathfrak{X}})$ associated to ${\mathfrak{X}}_1$, and
\[
{\mathfrak{X}}^+ \longrightarrow {\mathfrak{X}}_1
\]
the map discussed after Proposition~\ref{1A}; according to \cite{P1}~Prop.~4.15~(a), it is still an open embedding (i.e., injective).
As in \cite{P1}~6.10, set
\[
{\mathfrak{U}} (P_1, {\mathfrak{X}}_1 , p) := P_1 ({\mathbb{Q}}) \backslash
( {\mathfrak{X}}^+ \times P_1 ({\mathbb{A}}_f) / K_1) \; .
\]
It obviously admits an open embedding into $M^{K_1} ({\mathbb{C}})$ as well as an open morphism to $M^K ({\mathbb{C}})$. One defines (\cite{P1}~6.13)
\[
\overline{\FU} (P_1 , {\mathfrak{X}}_1 , p) \subset M^{K_1} (\mathop{\FS_{1,[\sigma]}}\nolimits) ({\mathbb{C}})
\]
as the interior of the closure of ${\mathfrak{U}} (P_1 , {\mathfrak{X}}_1 , p)$.
Then $M^K ({\mathfrak{S}}) ({\mathbb{C}})$ is defined as the quotient with respect to some equivalence relation $\sim$ on the disjoint sum of all $\overline{\FU} (P_1 , {\mathfrak{X}}_1 , p)$ (\cite{P1}~6.24). In particular, for our {\it fixed} choice of $(P_1 , {\mathfrak{X}}_1)$ and $p$, there is a continuous map
\[
f : \overline{\FU} (P_1 , {\mathfrak{X}}_1 , p) \longrightarrow M^K ({\mathfrak{S}}) ({\mathbb{C}}) \; .
\]
From the description of $\sim$ (\cite{P1}~6.15--6.16), one sees that $f$ is open; the central point is that the maps
\[
\overline{\beta} := \overline{\beta} (P_1 , {\mathfrak{X}}_1 , P'_1 , {\mathfrak{X}}'_1 , p):
\overline{\FU} (P_1, {\mathfrak{X}}_1 , p) \cap M^{K_1} (P_1 , {\mathfrak{X}}_1, {\mathfrak{S}}''^0) ({\mathbb{C}}) \longrightarrow
\overline{\FU} (P'_1, {\mathfrak{X}}'_1 , p)
\]
of \cite{P1}~page~152 are open. This shows (a). As for (b), one observes that
\[
\overline{\beta}^{-1} \left( {\mathfrak{U}} (P'_1, {\mathfrak{X}}'_1 , p) \right) =
{\mathfrak{U}} (P_1, {\mathfrak{X}}_1 , p) \; .
\]
\end{Proof}
\begin{Rem}\label{1J}
\cite{P1}~Cor.~7.17 gives a much stronger statement than Proposition~\ref{1I}~(a), assuming that ${\mathfrak{S}}$ is \emph{complete} (\cite{P1}~6.4) and satisfies condition ($\ast$) of \cite{P1}~7.12. In this case, one can identify a suitable open neighbourhood of the \emph{closure} of
\[
\mathop{\rm Stab}\nolimits_{\Delta_1} ([\sigma]) \backslash \mathop{M^{\pi_{[\sigma]} (K_1)}}\nolimits ({\mathbb{C}}) = \mathop{{\rm Im}}\nolimits( i({\mathbb{C}}) ) \subset M^K ({\mathfrak{S}}) ({\mathbb{C}})
\]
with an open neighbourhood of the closure of
\[
\mathop{\rm Stab}\nolimits_{\Delta_1} ([\sigma]) \backslash \mathop{M^{\pi_{[\sigma]} (K_1)}}\nolimits ({\mathbb{C}}) \subset \mathop{\rm Stab}\nolimits_{\Delta_1} ([\sigma]) \backslash M^{K_1} ({\mathfrak{S}}_1) ({\mathbb{C}}) \; ,
\]
where
\[
\mathop{\FS_{1,[\sigma]}}\nolimits \subset {\mathfrak{S}}_1 := ([\cdot p]^{\ast} {\mathfrak{S}}) \, |_{(P_1 , {\mathfrak{X}}_1)}
\]
(\cite{P1}~6.5~(a) and (c)).
Consequently, one can identify the formal completions (in the sense of analytic spaces) of $M^K ({\mathfrak{S}}) ({\mathbb{C}})$ and of
\[
\mathop{\rm Stab}\nolimits_{\Delta_1} ([\sigma]) \backslash M^{K_1} ({\mathfrak{S}}_1) ({\mathbb{C}})
\]
along the closure of the stratum
\[
\mathop{\rm Stab}\nolimits_{\Delta_1} ([\sigma]) \backslash \mathop{M^{\pi_{[\sigma]} (K_1)}}\nolimits ({\mathbb{C}}) \; .
\]
\end{Rem}
It will be important to know that without the hypotheses of
\cite{P1}~Cor.~7.17, the completions along the stratum itself still agree. For simplicity, we assume that the hypotheses of Proposition~\ref{1F} are met, and hence that $\mathop{\rm Stab}\nolimits_{\Delta_1} ([\sigma]) = 1$.
\begin{Thm} \label{1K}
Assume that $(P, {\mathfrak{X}})$ satisfies $(+)$, and that $K$ is neat.
\begin{itemize}
\item[(i)] The map $f$ of \ref{1I} is locally biholomorphic near $\mathop{M^{\pi_{[\sigma]} (K_1)}}\nolimits ({\mathbb{C}})$.
\item[(ii)]
$f$ induces an isomorphism between the formal analytic completions of
$M^K ({\mathfrak{S}}) ({\mathbb{C}})$ and of $M^{K_1} (\mathop{\FS_{1,[\sigma]}}\nolimits) ({\mathbb{C}})$ along $\mathop{M^{\pi_{[\sigma]} (K_1)}}\nolimits ({\mathbb{C}})$.
\end{itemize}
\end{Thm}
\begin{Proof}
$f$ is open and identifies the analytic subsets
\[
\mathop{M^{\pi_{[\sigma]} (K_1)}}\nolimits ({\mathbb{C}}) \subset M^{K_1} (\mathop{\FS_{1,[\sigma]}}\nolimits) ({\mathbb{C}})
\]
and
\[
\mathop{M^{\pi_{[\sigma]} (K_1)}}\nolimits ({\mathbb{C}}) \subset M^K ({\mathfrak{S}}) ({\mathbb{C}}) \; .
\]
For (ii), we have to compare certain sheaves of functions; the claim therefore follows from (i).
It remains to prove (i). According to \cite{P1}~6.18, the image of $f$ equals the quotient of ${\mathfrak{U}}$ by the action of a group $\Delta_1$ of analytic automorphisms, which according to \cite{P1}~Prop.~6.20 is properly discontinuous.
\end{Proof}
So far, we have worked in the category of analytic spaces. According to
Pink's generalization to mixed Shimura varieties of the Algebraization
Theorem of Baily and Borel
(\cite{P1}~Prop.~9.24), there exist canonical structures of normal algebraic varieties on the $M^K (P , {\mathfrak{X}}) ({\mathbb{C}})$, which we denote as
\[
M^K_{{\mathbb{C}}} := M^K (P, {\mathfrak{X}})_{{\mathbb{C}}} \; .
\]
If there exists a structure of normal algebraic variety on $M^K (P , {\mathfrak{X}} , {\mathfrak{S}}) ({\mathbb{C}})$ extending $M^K_{{\mathbb{C}}}$, then it is unique (\cite{P1}~9.25); denote it as
\[
M^K ({\mathfrak{S}})_{{\mathbb{C}}} := M^K (P , {\mathfrak{X}} , {\mathfrak{S}})_{{\mathbb{C}}} \; .
\]
Pink gives criteria on the existence of $M^K ({\mathfrak{S}})_{{\mathbb{C}}}$ (\cite{P1}~9.27, 9.28). If any cone of a cone decomposition ${\mathfrak{S}}'$ for $(P , {\mathfrak{X}})$ is contained in a cone of ${\mathfrak{S}}$, and both $M^K ({\mathfrak{S}}')_{{\mathbb{C}}}$ and $M^K ({\mathfrak{S}})_{{\mathbb{C}}}$ exist, then the morphism
\[
M^K ({\mathfrak{S}}') ({\mathbb{C}}) \longrightarrow M^K ({\mathfrak{S}}) ({\mathbb{C}})
\]
is algebraic (\cite{P1}~9.25). From now on we implicitly assume the existence whenever we talk about $M^K ({\mathfrak{S}})_{{\mathbb{C}}}$.
According to \cite{P1}~Prop.~9.36, the stratification of $M^K ({\mathfrak{S}})_{{\mathbb{C}}}$ is algebraic.
\begin{Thm}\label{1L}
Assume that $(P, {\mathfrak{X}})$ satisfies $(+)$, and that $K$ is neat. The isomorphism of Theorem~\ref{1K} induces a canonical isomorphism between the
formal completions of $M^K ({\mathfrak{S}})_{{\mathbb{C}}}$ and of $M^{K_1} (\mathop{\FS_{1,[\sigma]}}\nolimits)_{{\mathbb{C}}}$ along $\mathop{M_{\BC}^{\pi_{[\sigma]} (K_1)}}\nolimits$.
\end{Thm}
\begin{Proof}
If ${\mathfrak{S}}$ is complete and satisfies ($\ast$) of \cite{P1}~7.12, then this is an immediate consequence of \cite{P1}~Prop.~9.37, which concerns the formal completions along the closure of $\mathop{M_{\BC}^{\pi_{[\sigma]} (K_1)}}\nolimits$.
We may replace $K$ by a normal subgroup $K'$ of finite index: the objects on
the level of $K$ arise as quotients of those on the level of $K'$
under the finite group $K / K'$. Therefore,
we may assume, thanks to \cite{P1}~Prop.~9.29 and Prop.~7.13, that there is a complete cone decomposition ${\mathfrak{S}}'$ containing $\sigma \times \{ p \}$ and satisfying ($\ast$) of \cite{P1}~7.12.
Let ${\mathfrak{S}}''$ be the coarsest refinement of both ${\mathfrak{S}}$ and ${\mathfrak{S}}'$; it still contains $\sigma \times \{ p \}$, and $M^K ({\mathfrak{S}}'')_{{\mathbb{C}}}$ exists because of \cite{P1}~Prop.~9.28. We have
\[
\mathop{\FS_{1,[\sigma]}}\nolimits = {\mathfrak{S}}''_{1,[\sigma]} = {\mathfrak{S}}'_{1,[\sigma]} \; ,
\]
hence the formal completions all agree analytically. But on the level of ${\mathfrak{S}}'_{1,[\sigma]}$, the isomorphism is algebraic.
\end{Proof}
According to \cite{P1}~Thm.~11.18, there exists a {\it canonical model} of the variety $M^K (P , {\mathfrak{X}})_{{\mathbb{C}}}$, which we denote as
\[
M^K := M^K (P, {\mathfrak{X}}) \; .
\]
It is defined over the {\it reflex field} $E (P , {\mathfrak{X}})$ of $(P , {\mathfrak{X}})$ (\cite{P1}~11.1). The reflex field does not change when passing from $(P , {\mathfrak{X}})$ to a rational boundary component (\cite{P1}~Prop.~12.1).
If $M^K ({\mathfrak{S}})_{{\mathbb{C}}}$ exists, then it has a canonical model $M^K ({\mathfrak{S}})$ over $E (P , {\mathfrak{X}})$ extending $M^K$, and the stratification descends to $E (P, {\mathfrak{X}})$. In fact, \cite{P1}~Thm.~12.4 contains these statements under special hypotheses on ${\mathfrak{S}}$. However, one passes from ${\mathfrak{S}}$ to a covering by finite cone decompositions (corresponding to an open covering of $M^K ({\mathfrak{S}})_{{\mathbb{C}}}$), and then (\cite{P1}~Cor.~9.33) to a subgroup of $K$ of finite index to see that the above claims hold as soon as $M^K ({\mathfrak{S}})_{{\mathbb{C}}}$ exists.
\begin{Thm}\label{1M}
Assume that $(P , {\mathfrak{X}})$ satisfies $(+)$, and that $K$ is neat. The isomorphism of Theorem~\ref{1L} descends to a canonical isomorphism between the formal completions of $M^K ({\mathfrak{S}})$ and of $M^{K_1} (\mathop{\FS_{1,[\sigma]}}\nolimits)$ along $\mathop{M^{\pi_{[\sigma]} (K_1)}}\nolimits$.
\end{Thm}
\begin{Proof}
If ${\mathfrak{S}}$ is complete and satisfies ($\ast$) of \cite{P1}~7.12, then this statement is contained in \cite{P1}~Thm.~12.4~(c).
In fact, the proof of \cite{P1}~Thm.~12.4~(c) does not directly use the special hypotheses on ${\mathfrak{S}}$: the strategy is really to prove \ref{1M} and then deduce the stronger conclusion of \cite{P1}~12.4~(c) from the fact that it holds over ${\mathbb{C}}$\,; the point there is (\cite{P1}~12.6) that since the schemes are normal, morphisms descend if they descend on some open dense subscheme.
Thus the proof of our claim is contained in \cite{P1}~12.7--12.17.
\end{Proof}
\begin{Rem}\label{1O} (a) Without any hypotheses on $(P , {\mathfrak{X}})$ and $K$, there are obvious variants of Theorems~\ref{1K}, \ref{1L}, and \ref{1M}. In particular, there is a canonical isomorphism between the formal completions of $M^K ({\mathfrak{S}})$ and of
\[
\mathop{\rm Stab}\nolimits_{\Delta_1} ([\sigma]) \backslash M^{K_1} (\mathop{\FS_{1,[\sigma]}}\nolimits)
\]
along
\[
\mathop{\rm Stab}\nolimits_{\Delta_1} ([\sigma]) \backslash \mathop{M^{\pi_{[\sigma]} (K_1)}}\nolimits \; .
\]
(b) By choosing simultaneous refinements, one sees that the isomorphisms of \ref{1K}~(ii), \ref{1L}, and \ref{1M} do not depend on the cone decomposition ${\mathfrak{S}}$ ``surrounding'' our fixed cone $\sigma \times \{ p \}$.
\end{Rem}
In the situation
we have been considering, the cone $\sigma$ is called {\it smooth}
with respect to the lattice
\[
\Gamma^p_U (-1) := \frac{1}{2 \pi i} \cdot \left( U_1 ({\mathbb{Q}}) \cap K_1 \right) \subset \frac{1}{2 \pi i} \cdot U_1 ({\mathbb{R}})
\]
if the semi-group
\[
\Lambda_{\sigma} := \sigma \cap \Gamma^p_U (-1)
\]
can be generated (as semi-group)
by a subset of a ${\mathbb{Z}}$-basis of $\Gamma^p_U (-1)$. The corresponding statement is then necessarily true for any face of $\sigma$. Hence the $K_1$-admissible partial cone decomposition $\mathop{\FS_{1,[\sigma]}}\nolimits$ is smooth in the sense of \cite{P1}~6.4.\\
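A standard example from toric geometry, with $\Gamma^p_U (-1)$ replaced by ${\mathbb{Z}}^2 \subset {\mathbb{R}}^2$ with standard basis $(e_1 , e_2)$: the cone ${\mathbb{R}}_{\ge 0} \, e_1 + {\mathbb{R}}_{\ge 0} \, e_2$ is smooth, whereas
\[
\sigma' := {\mathbb{R}}_{\ge 0} \, e_1 + {\mathbb{R}}_{\ge 0} \, (e_1 + 2 e_2)
\]
is not: the semi-group $\sigma' \cap {\mathbb{Z}}^2$ requires the additional generator $e_1 + e_2$, so no generating set can be extracted from a ${\mathbb{Z}}$-basis of ${\mathbb{Z}}^2$. \\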
Let us introduce the following condition on $(P_1,{\mathfrak{X}}_1)$:
\begin{enumerate}
\item [$(\cong)$] The canonical morphism
$(\mathop{P_{1,[\sigma]}}\nolimits,\mathop{\FX_{1,[\sigma]}}\nolimits) \longrightarrow (P_1,{\mathfrak{X}}_1) / \langle \sigma \rangle$
(\cite{P1}~7.1) is an isomorphism.
\end{enumerate}
In particular, there is an epimorphism of Shimura data from $(P_1,{\mathfrak{X}}_1)$ to $(\mathop{P_{1,[\sigma]}}\nolimits,\mathop{\FX_{1,[\sigma]}}\nolimits)$. According to \cite{P1}~7.1, we have:
\begin{Prop} \label{1P}
Condition $(\cong)$ is satisfied whenever $(P_1,{\mathfrak{X}}_1)$ is a \emph{proper}
boundary component of some other mixed Shimura data, e.g., if the
parabolic subgroup $Q \le P$ is proper.
\end{Prop}
Under the hypothesis $(\cong)$, we can establish more structural
properties of our varieties:
\begin{Lem} \label{1Q}
Assume that $(\cong)$ is satisfied.
\begin{itemize}
\item [(i)] The Shimura variety $M^{K_1}$ is a torus torsor over
$\mathop{M^{\pi_{[\sigma]} (K_1)}}\nolimits$:
\[
\pi_{[\sigma]} : M^{K_1} \longrightarrow \mathop{M^{\pi_{[\sigma]} (K_1)}}\nolimits \; .
\]
The compactification $M^{K_1} (\mathop{\FS_{1,[\sigma]}}\nolimits)$ is a \emph{torus embedding} along
the fibres of $\pi_{[\sigma]}$:
\[
\overline{\pi_{[\sigma]}} : M^{K_1} (\mathop{\FS_{1,[\sigma]}}\nolimits) \longrightarrow \mathop{M^{\pi_{[\sigma]} (K_1)}}\nolimits
\]
admitting only one closed stratum. The section
\[
i_1 : \mathop{M^{\pi_{[\sigma]} (K_1)}}\nolimits \hookrightarrow M^{K_1} (\mathop{\FS_{1,[\sigma]}}\nolimits)
\]
of $\overline{\pi_{[\sigma]}}$ identifies the base with this closed stratum.
\item [(ii)] Assume that $\sigma$ is smooth. Then
\[
\overline{\pi_{[\sigma]}} : M^{K_1} (\mathop{\FS_{1,[\sigma]}}\nolimits) \longrightarrow \mathop{M^{\pi_{[\sigma]} (K_1)}}\nolimits
\]
carries a canonical structure of vector bundle, with zero section $i_1$.
The rank of this vector bundle is equal to the dimension of $\sigma$.
\end{itemize}
\end{Lem}
\begin{Proof}
(i) This is the remark at the bottom of page~165 of \cite{P1}, taking into account
that $\mathop{\FS_{1,[\sigma]}}\nolimits$ is minimal with respect to the property of
containing $\sigma$.\\
(ii) If $\sigma$ is smooth of dimension $c$, then by definition, the semi-group
$\Lambda_{\sigma}$ can be generated by $c$ elements of a ${\mathbb{Z}}$-basis of $\Gamma^p_U (-1)$;
these form a basis of the ambient
real vector space ${\mathbb{R}} \cdot \sigma$. One shows that each choice of such a basis gives rise to
the same $\mathop{M^{\pi_{[\sigma]} (K_1)}}\nolimits$-linear structure on $M^{K_1} (\mathop{\FS_{1,[\sigma]}}\nolimits)$.
\end{Proof}
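In coordinates, the local picture underlying Lemma~\ref{1Q} is the basic smooth torus embedding: for a smooth cone of dimension $c$, a choice of semi-group generators of $\Lambda_{\sigma}$ identifies the situation fibrewise with
\[
{\mathbb{G}}_{m , {\mathbb{C}}}^c = \mathop{\rm Spec}\nolimits {\mathbb{C}} [t_1^{\pm 1} , \ldots , t_c^{\pm 1}]
\; \hookrightarrow \;
{\mathbb{A}}^c_{{\mathbb{C}}} = \mathop{\rm Spec}\nolimits {\mathbb{C}} [t_1 , \ldots , t_c] \; ,
\]
with unique closed stratum $\{ 0 \}$, the image of the zero section.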
We conclude the section by putting together all the results obtained so far:
\begin{Thm} \label{1R}
Assume that $(P,{\mathfrak{X}})$ satisfies $(+)$, that $(P_1,{\mathfrak{X}}_1)$ satisfies $(\cong)$,
that $K$ is neat, and that $\sigma$ is smooth. Then there is a canonical isomorphism of vector bundles over $\mathop{M^{\pi_{[\sigma]} (K_1)}}\nolimits$
\[
\iota_\sigma: M^{K_1}(\mathop{\FS_{1,[\sigma]}}\nolimits) \arrover{\sim} N_{\mathop{M^{\pi_{[\sigma]} (K_1)}}\nolimits / M^K ({\mathfrak{S}})}
\]
identifying $M^{K_1}(\mathop{\FS_{1,[\sigma]}}\nolimits)$ and the normal bundle of $\mathop{M^{\pi_{[\sigma]} (K_1)}}\nolimits$ in $M^K ({\mathfrak{S}})$.
\end{Thm}
\begin{Proof}
The isomorphism of Theorem~\ref{1M} induces an isomorphism
\[
N_{\mathop{M^{\pi_{[\sigma]} (K_1)}}\nolimits / M^{K_1}(\mathop{\FS_{1,[\sigma]}}\nolimits)} \arrover{\sim} N_{\mathop{M^{\pi_{[\sigma]} (K_1)}}\nolimits / M^K ({\mathfrak{S}})} \; .
\]
But the normal bundle of the zero section in a vector bundle is canonically
isomorphic to the vector bundle itself.
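Indeed, for any vector bundle $p : E \to X$ with zero section $i$, there are canonical isomorphisms
\[
N_{X / E} = i^{\ast} T_{E / X} \arrover{\sim} i^{\ast} p^{\ast} E = E \; ,
\]
the middle one coming from the canonical trivialization $T_{E / X} \cong p^{\ast} E$ of the relative tangent bundle of a vector bundle.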
\end{Proof}
\section{Higher direct images for Hodge modules}
\label{2}
Let $M^K({\mathfrak{S}}) = M^K(P,{\mathfrak{X}}, {\mathfrak{S}})$ be a toroidal compactification
of a Shimura variety $M^K = M^K(P,{\mathfrak{X}})$, and $\mathop{M^{\pi_{[\sigma]} (K_1)}}\nolimits = \mathop{M^{\pi_{[\sigma]} (K_1)}}\nolimits(\mathop{P_{1,[\sigma]}}\nolimits,\mathop{\FX_{1,[\sigma]}}\nolimits)$
a boundary stratum. Consider the situation
\[
M^K \stackrel{j}{\hookrightarrow} M^K ({\mathfrak{S}})
\stackrel{i}{\hookleftarrow} \mathop{M^{\pi_{[\sigma]} (K_1)}}\nolimits \; .
\]
Saito's formalism (\cite{Sa}) gives a functor $i^{\ast} j_{\ast}$ between the bounded derived categories of {\it algebraic mixed Hodge modules} on $M^K_{{\mathbb{C}}}$ and on $\mathop{M_{\BC}^{\pi_{[\sigma]} (K_1)}}\nolimits$ respectively. The main result of this section
(Theorem~\ref{2H}) gives a formula for the restriction of $i^{\ast} j_{\ast}$ onto the image of the natural functor associating to an algebraic representation of $P$ a variation of Hodge structure on $M^K_{{\mathbb{C}}}$. The proof has two steps: first, one employs the \emph{specialization functor} \`a la Verdier--Saito,
and Theorem~\ref{1R}, to reduce from the toroidal to a toric situation
(\ref{2K}). The second step consists in proving the compatibility statement on the level of ${\cal H}^0$ and then appealing to homological algebra, which implies
compatibility on the level of functors between derived categories. \\
Throughout the whole section, we fix a set of data satisfying
the hypotheses of Theorem~\ref{1R}. We thus have
Shimura data $(P,{\mathfrak{X}})$ satisfying condition $(+)$, a rational boundary component $(P_1 , {\mathfrak{X}}_1)$ satisfying condition $(\cong)$,
an open, compact and neat subgroup $K \le P ({\mathbb{A}}_f)$, an element
$p \in P ({\mathbb{A}}_f)$ and a smooth
cone $\sigma \times \{ p \} \subset C^{\ast} ({\mathfrak{X}}^0 , P_1) \times \{ p \}$ belonging to some $K$-admissible partial cone decomposition ${\mathfrak{S}}$ such that $M^K ({\mathfrak{S}})$ exists. We assume that
\[
\sigma \cap C ({\mathfrak{X}}^0 , P_1) \neq \emptyset \; ,
\]
and write $K_1 := P_1 ({\mathbb{A}}_f) \cap p \cdot K \cdot p^{-1}$,
\[
j : M^K \hookrightarrow M^K ({\mathfrak{S}}) \; ,
\]
and
\[
i : \mathop{M^{\pi_{[\sigma]} (K_1)}}\nolimits \hookrightarrow M^K ({\mathfrak{S}}) \; .
\]
Similarly, write
\[
j_1 : M^{K_1} \hookrightarrow M^{K_1} (\mathop{\FS_{1,[\sigma]}}\nolimits) \; ,
\]
and
\[
i_1 : \mathop{M^{\pi_{[\sigma]} (K_1)}}\nolimits \hookrightarrow M^{K_1} (\mathop{\FS_{1,[\sigma]}}\nolimits)
\]
for the immersions into the torus embedding $M^{K_1} (\mathop{\FS_{1,[\sigma]}}\nolimits)$, which according to
Theorem~\ref{1R} we identify with the normal bundle of $\mathop{M^{\pi_{[\sigma]} (K_1)}}\nolimits$ in $M^K ({\mathfrak{S}})$.
If we denote by $c$ the dimension of $\sigma$, then both $i$ and $i_1$ are of
pure codimension $c$.\\
The immersions $i ({\mathbb{C}})$ and $i_1 ({\mathbb{C}})$ factor as
\[
\vcenter{\xymatrix@R-10pt{
\mathop{M^{\pi_{[\sigma]} (K_1)}}\nolimits ({\mathbb{C}}) \ar@{^{ (}->}[r] \ar@{=}[d] &
{\mathfrak{U}} \ar@{^{ (}->}[r] \ar[d]^f &
M^{K_1} (\mathop{\FS_{1,[\sigma]}}\nolimits) ({\mathbb{C}}) \ar@{<-^{ )}}[r] &
M^{K_1} ({\mathbb{C}}) \\
\mathop{M^{\pi_{[\sigma]} (K_1)}}\nolimits ({\mathbb{C}}) \ar@{^{ (}->}[r] &
{\mathfrak{V}} \ar@{^{ (}->}[r] &
M^K ({\mathfrak{S}}) ({\mathbb{C}}) \ar@{<-^{ )}}[r] &
M^K({\mathbb{C}}) \\}}
\]
where ${\mathfrak{U}}$ and ${\mathfrak{V}}$ are open subsets of the respective compactifications,
and $f$ is the map of \ref{1I}. For a sheaf ${\cal F}$ on $M^K ({\mathbb{C}})$, we can consider the restriction $f^{-1} {\cal F}$ on
$f^{-1} (M^K ({\mathbb{C}})) = {\mathfrak{U}} \cap M^{K_1}({\mathbb{C}})$.
Let $F$ be a coefficient field
of characteristic $0$.
Denote by
\[
\mu_{K,\rm{top}} : \mathop{\bf Rep}\nolimits_F P \longrightarrow \mathop{\bf Loc}\nolimits_F M^K ({\mathbb{C}})
\]
the exact tensor functor associating to an algebraic representation ${\mathbb{V}}$
the sheaf of
sections of
\[
P ({\mathbb{Q}}) \backslash \left( {\mathfrak{X}} \times {\mathbb{V}} \times P ({\mathbb{A}}_f) / K \right)
\]
on
\[
M^K ({\mathbb{C}}) = P({\mathbb{Q}}) \backslash \left( {\mathfrak{X}} \times P ({\mathbb{A}}_f) / K \right)\; .
\]
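For instance (standard, with normalizations depending on conventions): for $(P , {\mathfrak{X}}) = ({\rm GL}_{2 , {\mathbb{Q}}} , {\mathbb{H}}^{\pm})$ and neat $K$, the functor $\mu_{K,\rm{top}}$ sends the standard representation $V$ of ${\rm GL}_2$ to a local system of rank $2$ on the corresponding union of modular curves, which can be identified, up to duals and Tate twists, with the relative first cohomology $R^1 \pi_{\ast} F$ of the universal elliptic curve $\pi$ over $M^K ({\mathbb{C}})$.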
\begin{Prop}\label{2C}
Let ${\mathbb{V}} \in \mathop{\bf Rep}\nolimits_F P$. Then $f^{-1} (\mu_{K,\rm{top}} {\mathbb{V}})$
is the restriction to $f^{-1} (M^K ({\mathbb{C}}))$ of the local system
$\mu_{K_1,\rm{top}} \mathop{\rm Res}\nolimits^P_{P_1} {\mathbb{V}}$ on
\[
M^{K_1} ({\mathbb{C}}) = P_1 ({\mathbb{Q}}) \backslash ({\mathfrak{X}}_1 \times P_1 ({\mathbb{A}}_f) / K_1) \; .
\]
\end{Prop}
\begin{Proof}
$f^{-1} (M^K ({\mathbb{C}}))$ equals the set
\[
{\mathfrak{U}} (P_1 , {\mathfrak{X}}_1 , p) := P_1 ({\mathbb{Q}}) \backslash \left( {\mathfrak{X}}^+ \times P_1 ({\mathbb{A}}_f) / K_1\right) \subset M^{K_1} ({\mathbb{C}})\; ,
\]
and $f \, |_{{\mathfrak{U}} (P_1 , {\mathfrak{X}}_1 , p)}$ is given by
\[
[(x , p_1)] \longmapsto [(x , p_1 \cdot p)]
\]
(\cite{P1}~6.10).
\end{Proof}
Using Theorem~\ref{1K}~(i), it is easy to construct a canonical isomorphism
between the functors
\[
i^{\ast} j_{\ast} \quad , \quad i^{\ast}_1 (j_1)_{\ast} \circ f^{-1}:
D^b_{\rm c} ( M^K ({\mathbb{C}}) ) \longrightarrow D^b_{\rm c} ( \mathop{M^{\pi_{[\sigma]} (K_1)}}\nolimits ({\mathbb{C}}) ) \; .
\]
For us, it will be necessary to establish a connection between
$j_{\ast}$ and $(j_1)_{\ast} \circ f^{-1}$. This
relation will be given by Verdier's specialization functor
(\cite{V2}~9)
\[
\mathop{\SP_\sigma}\nolimits := \mathop{Sp}\nolimits_{\mathop{M^{\pi_{[\sigma]} (K_1)}}\nolimits} : D^b_{\rm c} ( M^K ({\mathfrak{S}}) ({\mathbb{C}}) ,F ) \longrightarrow
D^b_{\rm c} ( M^{K_1}(\mathop{\FS_{1,[\sigma]}}\nolimits) ({\mathbb{C}}) , F ) \; .
\]
According to \cite{V2}~p.~358, the functor $\mathop{\SP_\sigma}\nolimits$ has the properties
(SP0)--(SP6) of \cite{V2}~8, suitably transposed.
In particular:
\begin{itemize}
\item[(SP0)] It can be computed locally.
\item[(SP1)] The complexes in the image of $\mathop{\SP_\sigma}\nolimits$ are \emph{monodromic}, i.e.,
their cohomology objects are locally constant on each ${\mathbb{C}}^*$-orbit in
$M^{K_1}(\mathop{\FS_{1,[\sigma]}}\nolimits)({\mathbb{C}})$.
\item[(SP5)] We have the equality $i^* = i_1^* \circ \mathop{\SP_\sigma}\nolimits$.
\end{itemize}
From Theorem~\ref{1K}~(i) and from (SP0), we conclude
that in order to compute the effect of $\mathop{\SP_\sigma}\nolimits$ on a complex
of sheaves ${\cal F}^\bullet$, we may pass to the complex $f^{-1} {\cal F}^\bullet$.
On the other hand, when $P=P_1$, one is dealing with the specialization
functor along the zero section of a vector bundle. Using the definition of
$\mathop{\SP_\sigma}\nolimits$, and hence of the nearby cycle
functor $\psi_\pi$ in the analytic context
(\cite{Comp}~1.2), one sees that in this case, the functor $\mathop{\SP_\sigma}\nolimits$
induces the identity on the
category of monodromic complexes.\\
By extension by zero, let us view objects of $\mathop{\bf Loc}\nolimits_F M^K ({\mathbb{C}})$ as
sheaves on $M^K ({\mathfrak{S}}) ({\mathbb{C}})$. From the above, one concludes that the functor
$\mathop{\SP_\sigma}\nolimits$ induces a functor
\[
\mathop{\bf Loc}\nolimits_F M^K ({\mathbb{C}}) \longrightarrow \mathop{\bf Loc}\nolimits_F M^{K_1} ({\mathbb{C}}) \; ,
\]
equally denoted by $\mathop{\SP_\sigma}\nolimits$. For local systems in the image of
$\mu_{K,\rm{top}}$, we have:
\begin{Prop} \label{2Ca}
There is a commutative diagram
\[
\vcenter{\xymatrix@R-10pt{
\mathop{\bf Rep}\nolimits_F P \ar[r]^{\mathop{\rm Res}\nolimits^P_{P_1}} \ar[d]_{\mu_{K,\rm{top}}} &
\mathop{\bf Rep}\nolimits_F P_1 \ar[d]^{\mu_{K_1,\rm{top}}} \\
\mathop{\bf Loc}\nolimits_F M^K ({\mathbb{C}}) \ar[r]^{\mathop{\SP_\sigma}\nolimits} &
\mathop{\bf Loc}\nolimits_F M^{K_1} ({\mathbb{C}}) \\}}
\]
\end{Prop}
\begin{Prop} \label{2Cb}
There is a commutative diagram
\[
\vcenter{\xymatrix@R-10pt{
\mathop{\bf Rep}\nolimits_F P \ar[r]^{\mathop{\rm Res}\nolimits^P_{P_1}} \ar[d]_{\mu_{K,\rm{top}}} &
\mathop{\bf Rep}\nolimits_F P_1 \ar[d]^{\mu_{K_1,\rm{top}}} \\
\mathop{\bf Loc}\nolimits_F M^K ({\mathbb{C}}) \ar[d]_{j_*} &
\mathop{\bf Loc}\nolimits_F M^{K_1} ({\mathbb{C}}) \ar[d]^{(j_1)_*} \\
D^b_{\rm c} \left( M^K ({\mathfrak{S}}) ({\mathbb{C}}) , F \right) \ar[r]^{\mathop{\SP_\sigma}\nolimits} &
D^b_{\rm c} \left( M^{K_1}(\mathop{\FS_{1,[\sigma]}}\nolimits) ({\mathbb{C}}) , F \right) \\}}
\]
\end{Prop}
Consequently:
\begin{Thm}\label{2D}
There is a commutative diagram
\[
\vcenter{\xymatrix@R-10pt{
D^b \left( \mathop{\bf Rep}\nolimits_F P \right) \ar[r]^{\mathop{\rm Res}\nolimits^P_{P_1}} \ar[d]_{\mu_{K,\rm{top}}} &
D^b \left( \mathop{\bf Rep}\nolimits_F P_1 \right) \ar[r]^{R (\;)^{\langle \sigma \rangle}} &
D^b \left( \mathop{\bf Rep}\nolimits_F P_{1,[\sigma]} \right)
\ar[d]^{\mu_{\pi_{[\sigma]} (K_1),\rm{top}}} \\
D^b_{\rm c} \left( M^K ({\mathbb{C}}) ,F \right) \ar[rr]^{i^{\ast} j_{\ast}} &&
D^b_{\rm c} \left( \mathop{M^{\pi_{[\sigma]} (K_1)}}\nolimits ({\mathbb{C}}) , F \right) \\}}
\]
Here, $R (\;)^{\langle \sigma \rangle}$ refers to Hochschild cohomology of the unipotent group $\langle \sigma \rangle \le P_1$.
\end{Thm}
\begin{Proof}
By (SP5) and Proposition~\ref{2Cb}, we may assume $P=P_1$.
Denote by $L_{\sigma}$ the monodromy group of $\mathop{M^{\pi_{[\sigma]} (K_1)}}\nolimits ({\mathbb{C}})$ inside $M^{K_1} (\mathop{\FS_{1,[\sigma]}}\nolimits) ({\mathbb{C}})$.
It is generated by the semi-group
\[
\Lambda_{\sigma} (1) := 2 \pi i \cdot \Lambda_{\sigma} \subset U_1 ({\mathbb{Q}})
\]
(see the definition before \ref{1P}),
and forms a lattice inside $\langle \sigma \rangle$.
It is well known that on the image of $\mu_{K,\rm{top}}$, the functor
$(i_1)^* (j_1)_*$
can be computed via group cohomology of the abstract group $L_{\sigma}$.
Since $\langle \sigma \rangle$ is unipotent, its Hochschild cohomology
coincides with cohomology of $L_\sigma$ on algebraic representations.
\end{Proof}
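To illustrate the comparison at the end of the proof in the simplest situation, assume that $L_{\sigma} \cong {\mathbb{Z}}$ is generated by a single unipotent element $T$ acting on ${\mathbb{V}}$. Group cohomology of ${\mathbb{Z}}$ is then concentrated in degrees $0$ and $1$:
\[
H^0 (L_{\sigma} , {\mathbb{V}}) = \ker (T - {\rm id}) \quad , \quad
H^1 (L_{\sigma} , {\mathbb{V}}) = \mathop{\rm coker}\nolimits (T - {\rm id}) \; .
\]
Writing $T = \exp (N)$ with $N = \log T$ nilpotent, we have $T - {\rm id} = N \cdot u$ for an automorphism $u$ commuting with $N$; hence the kernel and cokernel above coincide with those of $N$, i.e., with $H^0$ and $H^1$ of the one-dimensional Lie algebra $\mathop{\rm Lie}\nolimits \langle \sigma \rangle$ acting through $N$.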
Let us reformulate Theorem~\ref{2D} in the language of perverse sheaves
(\cite{BBD}). Since local systems on the space of ${\mathbb{C}}$-valued points
of a smooth complex variety can be viewed
as perverse sheaves (up to a shift), we may consider $\mathop{\mu_{K,\topp}}\nolimits$ as an exact functor
\[
\mathop{\bf Rep}\nolimits_F P \longrightarrow \mathop{\mathop{\bf Perv}\nolimits_F M^K _{\BC}}\nolimits \; .
\]
By \cite{B}~Main Theorem~1.3, the bounded derived category
\[
D^b \left( \mathop{\mathop{\bf Perv}\nolimits_F M^K _{\BC}}\nolimits \right)
\]
is canonically isomorphic to $D^b_{\rm c} (M^K_{\mathbb{C}} , F)$. Theorem \ref{2D} acquires
the following form:
\begin{Var}\label{2F}
There is a commutative diagram
\[
\vcenter{\xymatrix@R-10pt{
D^b \left( \mathop{\bf Rep}\nolimits_F P \right) \ar[r]^{\mathop{\rm Res}\nolimits^P_{P_1}} \ar[d]_{\mu_{K,\rm{top}}} &
D^b \left( \mathop{\bf Rep}\nolimits_F P_1 \right) \ar[r]^{R (\;)^{\langle \sigma \rangle}} &
D^b \left( \mathop{\bf Rep}\nolimits_F P_{1,[\sigma]} \right)
\ar[d]^{\mu_{\pi_{[\sigma]} (K_1),\rm{top}}} \\
D^b \left( \mathop{\mathop{\bf Perv}\nolimits_F M^K _{\BC}}\nolimits \right) \ar[rr]^{i^{\ast} j_{\ast}[-c]} &&
D^b \left( \mathop{\mathop{\bf Perv}\nolimits_F \MpC}\nolimits \right) \\}}
\]
\end{Var}
By definition of Shimura data, for $F \subseteq {\mathbb{R}}$ there is a tensor functor associating to an algebraic $F$-representation ${\mathbb{V}}$ of $P$ a variation of Hodge structure $\mu ({\mathbb{V}})$ on ${\mathfrak{X}}$ (\cite{P1}~1.18).
It descends to a variation $\mu_K ({\mathbb{V}})$ on $M^K ({\mathbb{C}})$ with underlying local system $\mu_{K,\rm{top}} ({\mathbb{V}})$.
We refer to the functor $\mu_K$ as the {\it canonical construction} of variations of Hodge structure from representations of $P$.
By \cite{W}~Thm.~2.2, the image of $\mu_K$ is contained in the category
$\mathop{\bf Var}\nolimits_F M^K_{{\mathbb{C}}}$ of {\it admissible} variations, and hence
(\cite{Sa}~Thm.~3.27), in the category $\mathop{\bf MHM}\nolimits_F M^K_{{\mathbb{C}}}$ of algebraic mixed Hodge modules.\\
According to \cite{Sa}~2.30, there
is a Hodge theoretic variant of the specialization functor:
\[
\mathop{\SP_\sigma}\nolimits := \mathop{Sp}\nolimits_{\mathop{M^{\pi_{[\sigma]} (K_1)}}\nolimits} : \mathop{\bf MHM}\nolimits_F M^K ({\mathfrak{S}})_{\mathbb{C}} \longrightarrow \mathop{\bf MHM}\nolimits_F M^{K_1} (\mathop{\FS_{1,[\sigma]}}\nolimits)_{\mathbb{C}} \; ,
\]
which is compatible with Verdier's functor discussed earlier. Since
the latter maps local
systems on $M^K ({\mathbb{C}})$ to local systems on $M^{K_1} ({\mathbb{C}})$ (viewed as
sheaves on the respective compactifications by extension by zero), we see that
$\mathop{\SP_\sigma}\nolimits$ induces a functor
\[
\mathop{\bf Var}\nolimits_F M^K_{{\mathbb{C}}} \longrightarrow \mathop{\bf Var}\nolimits_F M^{K_1}_{\mathbb{C}} \; ,
\]
equally denoted by $\mathop{\SP_\sigma}\nolimits$.
\begin{Thm} \label{2K}
There is a commutative diagram
\[
\vcenter{\xymatrix@R-10pt{
\mathop{\bf Rep}\nolimits_F P \ar[r]^{\mathop{\rm Res}\nolimits^P_{P_1}} \ar[d]_{\mu_K} &
\mathop{\bf Rep}\nolimits_F P_1 \ar[d]^{\mu_{K_1}} \\
\mathop{\bf Var}\nolimits_F M^K_{\mathbb{C}} \ar[r]^{\mathop{\SP_\sigma}\nolimits} &
\mathop{\bf Var}\nolimits_F M^{K_1}_{\mathbb{C}} \\}}
\]
which is compatible with that of \ref{2Ca}.
\end{Thm}
\begin{Proof}
For ${\mathbb{V}} \in \mathop{\bf Rep}\nolimits_F P$, denote by ${\mathbb{V}}_P$ and ${\mathbb{V}}_{P_1}$ the two variations on the open subset $f^{-1} (M^K ({\mathbb{C}}))$ of $M^{K_1} ({\mathbb{C}})$ obtained by restricting $\mu_K ({\mathbb{V}})$ and $\mu_{K_1} \left( \mathop{\rm Res}\nolimits^P_{P_1} ({\mathbb{V}}) \right)$ respectively.
By Proposition~\ref{2C}, the underlying local systems are identical.
By \cite{P1}~Prop.~4.12,
the Hodge filtrations of ${\mathbb{V}}_P$ and ${\mathbb{V}}_{P_1}$ coincide.
Denote the weight filtration on the variation
${\mathbb{V}}_P$ by $W_{\bullet}$, and that on ${\mathbb{V}}_{P_1}$ by $M_{\bullet}$. Denote by $L_{\sigma} \subset U_1 ({\mathbb{Q}})$ the monodromy group of $\mathop{M^{\pi_{[\sigma]} (K_1)}}\nolimits ({\mathbb{C}})$ inside $M^{K_1} (\mathop{\FS_{1,[\sigma]}}\nolimits) ({\mathbb{C}})$.
Let $T \in L_{\sigma}$ such that $\frac{1}{2 \pi i} T$ or $- \frac{1}{2 \pi i} T$ lies in $C ({\mathfrak{X}}^0 , P_1)$. According to Proposition~\ref{1C},
the weight filtration of $\log T$ relative to $W_{\bullet}$
is identical to $M_{\bullet}$.
Choosing $T$ as the product of the generators of the semi-group
\[
\Lambda_{\sigma}(1) \subset L_{\sigma} \; ,
\]
one concludes that ${\mathbb{V}}_{P_1}$ carries
the limit Hodge structure of ${\mathbb{V}}_P$ near $\mathop{M^{\pi_{[\sigma]} (K_1)}}\nolimits$.
Using the definition of $\mathop{\SP_\sigma}\nolimits$, and hence, of the nearby cycle functor
in the Hodge theoretic context (\cite{Sa}~2.3), one sees that the two
variations $\mu_{K_1} \circ \mathop{\rm Res}\nolimits^P_{P_1} {\mathbb{V}}$ and $\mathop{\SP_\sigma}\nolimits \circ \mu_K {\mathbb{V}}$
coincide.
\end{Proof}
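For the reader's convenience, we recall the characterization of the weight filtration $M_{\bullet}$ of $N := \log T$ relative to $W_{\bullet}$ used in the proof: $M_{\bullet}$ is the unique filtration (if it exists) satisfying
\[
N M_k \subset M_{k-2} \quad \mbox{for all} \; k \; ,
\]
and inducing, for all $n$ and all $k \ge 0$, isomorphisms
\[
N^k : \mathop{\rm Gr}\nolimits^M_{n+k} \mathop{\rm Gr}\nolimits^W_n \stackrel{\sim}{\longrightarrow} \mathop{\rm Gr}\nolimits^M_{n-k} \mathop{\rm Gr}\nolimits^W_n \; .
\]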
\begin{Cor} \label{2Ka}
There is a commutative diagram
\[
\vcenter{\xymatrix@R-10pt{
\mathop{\bf Rep}\nolimits_F P \ar[r]^{\mathop{\rm Res}\nolimits^P_{P_1}} \ar[d]_{\mu_K} &
\mathop{\bf Rep}\nolimits_F P_1 \ar[d]^{\mu_{K_1}} \\
\mathop{\bf Var}\nolimits_F M^K_{\mathbb{C}} \ar[d]_{j_*} &
\mathop{\bf Var}\nolimits_F M^{K_1}_{\mathbb{C}} \ar[d]^{(j_1)_*} \\
\mathop{\bf MHM}\nolimits_F M^K ({\mathfrak{S}})_{\mathbb{C}} \ar[r]^{\mathop{\SP_\sigma}\nolimits} &
\mathop{\bf MHM}\nolimits_F M^{K_1} (\mathop{\FS_{1,[\sigma]}}\nolimits)_{\mathbb{C}} \\}}
\]
which is compatible with that of \ref{2Cb}.
\end{Cor}
\begin{Proof}
By Theorem~\ref{2K}, we have
\[
(j_1)^* \mathop{\SP_\sigma}\nolimits j_* \circ \mu_K = \mu_{K_1} \circ \mathop{\rm Res}\nolimits^P_{P_1} \; .
\]
In order to see that the adjoint morphism
\[
\mathop{\SP_\sigma}\nolimits j_* \circ \mu_K \longrightarrow (j_1)_* \circ \mu_{K_1} \circ \mathop{\rm Res}\nolimits^P_{P_1}
\]
is an isomorphism, one may apply the (faithful)
forgetful functor to perverse sheaves on $M^{K_1} (\mathop{\FS_{1,[\sigma]}}\nolimits)_{\mathbb{C}}$. There, the
claim follows from Proposition~\ref{2Cb}.
\end{Proof}
We are ready to prove our main result:
\begin{Thm}\label{2H}
There is a commutative diagram
\[
\vcenter{\xymatrix@R-10pt{
D^b \left( \mathop{\bf Rep}\nolimits_F P \right) \ar[r]^{\mathop{\rm Res}\nolimits^P_{P_1}} \ar[d]_{\mu_K} &
D^b \left( \mathop{\bf Rep}\nolimits_F P_1 \right) \ar[r]^{R (\;)^{\langle \sigma \rangle}} &
D^b \left( \mathop{\bf Rep}\nolimits_F P_{1,[\sigma]} \right)
\ar[d]^{\mu_{\pi_{[\sigma]} (K_1)}} \\
D^b \left( \mathop{\bf MHM}\nolimits_F M^K_{{\mathbb{C}}} \right) \ar[rr]^{i^{\ast} j_{\ast}[-c]} &&
D^b \left( \mathop{\bf MHM}\nolimits_F \mathop{M_{\BC}^{\pi_{[\sigma]} (K_1)}}\nolimits \right) \\}}
\]
which is compatible with that of \ref{2F}.
\end{Thm}
\begin{Proof}
According to \cite{Sa}~2.30, we have the equality
\[
i^* = i_1^* \circ \mathop{\SP_\sigma}\nolimits \; .
\]
Together with Corollary~\ref{2Ka}, this reduces us to the case $P=P_1$.
Now observe that
$(i_1)_*$ and $(j_1)_*$ are exact functors on the level of abelian categories
$\mathop{\bf MHM}\nolimits_F$
(\cite{BBD}~Cor.~4.1.3). $(i_1)^*$ is the left adjoint of $(i_1)_*$
on the level of
$D^b ( \mathop{\bf MHM}\nolimits_F )$. It follows formally that the zeroth cohomology functor
\[
{\cal H}^0 (i_1)^*: \mathop{\bf MHM}\nolimits_F M^{K_1} (\mathop{\FS_{1,[\sigma]}}\nolimits)_{\mathbb{C}} \longrightarrow \mathop{\bf MHM}\nolimits_F \mathop{M_{\BC}^{\pi_{[\sigma]} (K_1)}}\nolimits
\]
is right exact, and that $({\cal H}^0 (i_1)^*, (i_1)_*)$ constitutes an adjoint pair of
functors on the level of $\mathop{\bf MHM}\nolimits_F$. In particular, there is an adjunction
morphism
\[
{\rm id} \longrightarrow (i_1)_* {\cal H}^0 (i_1)^*
\]
of functors on $\mathop{\bf MHM}\nolimits_F M^{K_1} (\mathop{\FS_{1,[\sigma]}}\nolimits)_{\mathbb{C}}$,
which induces a morphism of functors on
\[
K^b \left( \mathop{\bf MHM}\nolimits_F M^{K_1} (\mathop{\FS_{1,[\sigma]}}\nolimits)_{\mathbb{C}} \right) \; ,
\]
the homotopy category of complexes in $\mathop{\bf MHM}\nolimits_F M^{K_1} (\mathop{\FS_{1,[\sigma]}}\nolimits)_{\mathbb{C}}$.
Denote by $q$ the localization
functor from the homotopy to the derived category. We get a morphism in
\[
\begin{array}{cccc}
\mathop{\rm Hom}\nolimits \left( q, q \circ (i_1)_* {\cal H}^0 (i_1)^* \right) & =
& \mathop{\rm Hom}\nolimits \left( q, (i_1)_* \circ q \circ {\cal H}^0 (i_1)^* \right) & \\
& = & \mathop{\rm Hom}\nolimits \left( (i_1)^* \circ q, q \circ {\cal H}^0 (i_1)^* \right) & ,
\end{array}
\]
where $\mathop{\rm Hom}\nolimits$ refers to morphisms of exact functors. Composition with the
exact functor $(j_1)_* \circ \mu_{K_1}$ gives a morphism
\[
\eta' \in \mathop{\rm Hom}\nolimits \left( (i_1)^* (j_1)_* \circ \mu_{K_1} \circ q,
q \circ {\cal H}^0 (i_1)^* (j_1)_* \circ \mu_{K_1} \right) \; .
\]
Assuming the existence of the {\it total left derived functor}
\[
L \left( {\cal H}^0 (i_1)^* (j_1)_* \circ \mu_{K_1} \right):
D^b \left( \mathop{\bf Rep}\nolimits_F P_1 \right) \longrightarrow
D^b \left( \mathop{\bf MHM}\nolimits_F \mathop{M_{\BC}^{\pi_{[\sigma]} (K_1)}}\nolimits \right)
\]
for a moment (see (a) below), its universal property (\cite{V}~II.2.1.2) says
that the above $\mathop{\rm Hom}\nolimits$ equals
\[
\mathop{\rm Hom}\nolimits \left( (i_1)^* (j_1)_* \circ \mu_{K_1},
L \left( {\cal H}^0 (i_1)^* (j_1)_* \circ \mu_{K_1} \right) \right) \; .
\]
Denote by
\[
\eta: (i_1)^* (j_1)_* \circ \mu_{K_1} \longrightarrow
L \left( {\cal H}^0 (i_1)^* (j_1)_* \circ \mu_{K_1} \right)
\]
the morphism corresponding to $\eta'$.
It remains to establish the following claims:
\begin{itemize}
\item[(a)] The functor
\[
{\cal H}^0 (i_1)^* (j_1)_* \circ \mu_{K_1}: \mathop{\bf Rep}\nolimits_F P_1 \longrightarrow \mathop{\bf MHM}\nolimits_F \mathop{M_{\BC}^{\pi_{[\sigma]} (K_1)}}\nolimits
\]
is left derivable.\\
\item[(b)] There is a canonical isomorphism between the total
left derived functor
\[
L \left( {\cal H}^0 (i_1)^* (j_1)_* \circ \mu_{K_1} \right)
\]
and
\[
\mu_{\pi_{[\sigma]} (K_1)} \circ R (\;)^{\langle \sigma \rangle} [c] \; .
\]
\item[(c)] $\eta$ is an isomorphism.
\end{itemize}
For (a) and (b), observe that up to a twist by $c$,
the variation
\[
{\cal H}^0 (i_1)^* (j_1)_* \circ \mu_K ({\mathbb{V}})
\]
on $\mathop{M_{\BC}^{\pi_{[\sigma]} (K_1)}}\nolimits$ is given by the co-invariants of ${\mathbb{V}}$ under the local monodromy.
This is a general fact about the degeneration of variations along a divisor
with normal crossings; see e.g.\ the discussion preceding \cite{HZ}~(4.4.8).
By \cite{K}~Thm.~6.10, up to a twist
by $c$ (corresponding to the highest exterior power of $\mathop{\rm Lie}\nolimits \langle \sigma \rangle$), the co-invariants are identical to $H^c (\langle \sigma \rangle, \;\;)$.
We are thus reduced to showing that
the functor $H^c (\langle \sigma \rangle, \;\;)$ is left derivable, with
total left derived functor
$R (\;)^{\langle \sigma \rangle} [c]$. But this follows from standard facts
about Lie algebra homology and cohomology (see e.g.\ \cite{K}~Thm.~6.10 and
its proof).
(c) can be shown after applying the
forgetful functor to perverse sheaves. There, the
claim follows from \ref{2F}.
\end{Proof}
\begin{Rem}\label{2I}
If $(P, {\mathfrak{X}})$ is pure, and $c = \dim \langle \sigma \rangle$ is maximal,
i.e., equal to $\dim U_1$, then Theorem~\ref{2H} is
equivalent to \cite{HZ}~Thm.~(4.4.18). In fact,
by \ref{2H}, the recipe to compute $H^q i^{\ast} j_{\ast}
\circ \mu_K ({\mathbb{V}})$ given on pp.~286/287 of \cite{HZ} generalizes as follows: The complex
\[
C^{\bullet} = \Lambda^{\bullet} (\mathop{\rm Lie}\nolimits \langle \sigma \rangle )^{\ast}
\otimes_F {\mathbb{V}}
\]
carries the diagonal action of $P_1$
(where the action on $\mathop{\rm Lie}\nolimits \langle \sigma \rangle$ is via conjugation).
The induced action on the cohomology objects $H^q C^{\bullet}$ factors through $P_{1,[\sigma]}$ and gives the right Hodge structures via $\mu_{\pi_{[\sigma]} (K_1)}$. In \cite{HZ}, the Hodge and weight filtrations on $C^{\bullet}$ corresponding to the action of $P_1$ are made explicit.
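For completeness: the differential of the complex $C^{\bullet}$ is the usual Lie algebra (Chevalley--Eilenberg) differential; for $\omega \in \Lambda^q (\mathop{\rm Lie}\nolimits \langle \sigma \rangle)^{\ast} \otimes_F {\mathbb{V}}$ and $X_0 , \ldots , X_q \in \mathop{\rm Lie}\nolimits \langle \sigma \rangle$, it is given by
\[
(d \omega) (X_0 , \ldots , X_q) =
\sum_{i=0}^q (-1)^i X_i \cdot \omega (X_0 , \ldots , \widehat{X_i} , \ldots , X_q)
+ \sum_{i<j} (-1)^{i+j} \omega ([X_i , X_j] , X_0 , \ldots , \widehat{X_i} , \ldots , \widehat{X_j} , \ldots , X_q) \; ,
\]
where $X_i$ acts on ${\mathbb{V}}$ through the derived action of $P_1$.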
\end{Rem}
\begin{Rem}\label{2L}
Because of \ref{1O}~(b), the isomorphism of Theorem \ref{2H} does not depend on the cone decomposition ${\mathfrak{S}}$ containing $\sigma \times \{ p \}$. We leave it to the reader to formulate and prove
results like \cite{P2}~(4.8.5) on the behaviour of the isomorphism of
\ref{2H} under change of the group $K$, and of the element $p$.
\end{Rem}
Let us conclude the section with a statement on transitivity of degeneration.
In addition to the data used so far, fix a face $\tau$ of $\sigma$. Write
\[
i_{\tau}: \mathop{M^{\pi_{[\tau]} (K_1)}}\nolimits \hookrightarrow M^K({\mathfrak{S}}) \; .
\]
$\mathop{M^{\pi_{[\sigma]} (K_1)}}\nolimits$ lies in the closure of $\mathop{M^{\pi_{[\tau]} (K_1)}}\nolimits$ inside $M^K({\mathfrak{S}})$. Adjunction gives
a morphism
\[
i^* j_* \circ \mu_K \longrightarrow i^* (i_{\tau})_* (i_{\tau})^* j_* \circ \mu_K
\]
of exact functors from $D^b (\mathop{\bf Rep}\nolimits_F P )$ to
$D^b \left( \mathop{\bf MHM}\nolimits_F \mathop{M_{\BC}^{\pi_{[\sigma]} (K_1)}}\nolimits \right)$.
\begin{Prop}\label{2M}
This morphism is an isomorphism.
\end{Prop}
\begin{Proof}
This can be checked on the level of local systems. There, it follows from
Theorem~\ref{1K}~(i), and standard facts about degenerations
along strata in torus embeddings.
\end{Proof}
\section{Higher direct images for $\ell$-adic sheaves}
\label{3}
The main result of this section (Theorem~\ref{3J}) provides an $\ell$-adic analogue of the formula of \ref{2H}. The main ingredients of the proof are the machinery developed in \cite{P2}, and our knowledge of the local situation (\ref{1M}). \ref{3E}--\ref{3Fa} are concerned with the problem of extending certain infinite families of \'etale sheaves to ``good'' models of a Shimura variety. We conclude by discussing mixedness of the $\ell$-adic sheaves obtained via the canonical construction.\\
With the exception of condition $(\cong)$, which will not be needed, we fix the same set of geometric data as in the beginning of Section \ref{2}.
In particular, the cone $\sigma$ is assumed smooth, the group $K$ is neat, and $(P, {\mathfrak{X}})$ satisfies condition $(+)$.\\
Define $\tilde{M} ({\mathfrak{S}})$ as the inverse limit of all
\[
M^{K'} ({\mathfrak{S}}) = M^{K'} (P , {\mathfrak{X}} , {\mathfrak{S}})
\]
for open compact $K' \le K$. The group $K$ acts on $\tilde{M} ({\mathfrak{S}})$, and
\[
M^K ({\mathfrak{S}}) = \tilde{M} ({\mathfrak{S}}) / K \; .
\]
Inside $\tilde{M} ({\mathfrak{S}})$ we have the inverse limit $\tilde{M}$ of
\[
M^{K'} = M^{K'} (P , {\mathfrak{X}}) \; , \quad K' \le K \; ,
\]
and the inverse limit $\tilde{M}_{[\sigma]}$ of all
\[
M^{K'_{1,[\sigma]}} = M^{K'_{1,[\sigma]}} (\mathop{P_{1,[\sigma]}}\nolimits , \mathop{\FX_{1,[\sigma]}}\nolimits)
\]
for open compact $K'_{1,[\sigma]} \le K_{1,[\sigma]} := \pi_{[\sigma]} (K_1)$.
We get a commutative diagram
\begin{myequation}\label{3eq}
\vcenter{\xymatrix@R-10pt{
\tilde{M} \ar@{^{ (}->}[r]^{\tilde{j}} \ar[d] &
\tilde{M} ({\mathfrak{S}}) \ar@{<-^{ )}}[r]^{\tilde{i}} \ar[d] &
\tilde{M}_{[\sigma]} \ar[d] \\
\tilde{M} / K_1 \ar@{^{ (}->}[r]^{j'} \ar[d]_{\varphi} &
\tilde{M} ({\mathfrak{S}}) / K_1 \ar@{<-^{ )}}[r]^{i'} \ar[d]_{\tilde{\varphi}} &
\tilde{M}_{[\sigma]} / K_{1 , [\sigma]} \ar@{=}[d] \\
M^K = \tilde{M} / K \ar@{^{ (}->}[r]^{\!\!\!\!\!\!\!\! j} &
M^K ({\mathfrak{S}}) = \tilde{M} ({\mathfrak{S}}) / K \ar@{<-^{ )}}[r]^{\quad\quad i} &
\mathop{M^{\pi_{[\sigma]} (K_1)}}\nolimits \\}}
\end{myequation}
\begin{Prop}\label{3B}
The morphism
\[
\tilde{\varphi} : \tilde{M} ({\mathfrak{S}}) / K_1 \longrightarrow M^K ({\mathfrak{S}}) = \tilde{M} ({\mathfrak{S}}) / K
\]
is \'etale near the stratum
\[
\mathop{M^{\pi_{[\sigma]} (K_1)}}\nolimits = \tilde{M}_{[\sigma]} / K_{1,[\sigma]} \; .
\]
\end{Prop}
\begin{Proof}
By Theorem~\ref{1M}, the map $\tilde{\varphi}$ induces an isomorphism of the respective formal completions along our stratum. The claim thus follows from \cite{EGAIV4}~Prop.~(17.6.3).
\end{Proof}
Let $\mathop{\rm Tor}\nolimits \mathop{\rm Mod}\nolimits_K$ be the category of all continuous discrete torsion $K$-modules. The left vertical arrow of (\ref{3eq})
gives an evident functor
\[
\mu_K : \mathop{\rm Tor}\nolimits \mathop{\rm Mod}\nolimits_K \longrightarrow \mathop{\bf Et}\nolimits M^K
\]
into the category of \'etale sheaves on $M^K$; since $K$ is neat, this functor is actually an exact tensor functor with values in the category of lisse sheaves. Similar remarks apply to $K_1$ or $\pi_{[\sigma]} (K_1)$ in place of $K$. We are interested in the behaviour of the functor
\[
i^{\ast} j_{\ast} : D^+ (\mathop{\bf Et}\nolimits M^K ) \longrightarrow D^+ ( \mathop{\bf Et}\nolimits \mathop{M^{\pi_{[\sigma]} (K_1)}}\nolimits )
\]
on the image of $\mu_K$.
From \ref{3B}, we conclude:
\begin{Prop}\label{3C}
(i) The two functors
\[
i^{\ast} j_{\ast} \quad , \quad (i')^{\ast} j'_{\ast} \circ \varphi^{\ast}:
D^+ (\mathop{\bf Et}\nolimits M^K) \longrightarrow D^+ ( \mathop{\bf Et}\nolimits \mathop{M^{\pi_{[\sigma]} (K_1)}}\nolimits )
\]
are canonically isomorphic.\\
(ii) The two functors
\[
i^{\ast} j_{\ast} \circ \mu_K \; , \; (i')^{\ast} j'_{\ast} \circ \mu_{K_1} \circ \mathop{\rm Res}\nolimits^K_{K_1}:
D^+ (\mathop{\rm Tor}\nolimits \mathop{\rm Mod}\nolimits_K) \longrightarrow D^+ ( \mathop{\bf Et}\nolimits \mathop{M^{\pi_{[\sigma]} (K_1)}}\nolimits )
\]
are canonically isomorphic. Here, $\mathop{\rm Res}\nolimits^K_{K_1}$ denotes the pullback via the monomorphism
\[
K_1 \longrightarrow K \; , \; k_1 \longmapsto p^{-1} \cdot k_1 \cdot p \; .
\]
\end{Prop}
\begin{Proof}
(i) is smooth base change, and (ii) follows from (i).
\end{Proof}
Write $K_\sigma$ for $\ker (\pi_{[\sigma]} \, |_{K_1}) = K_1 \cap \langle \sigma \rangle ({\mathbb{A}}_f)$.
\begin{Thm}\label{3A}
There is a commutative diagram
\[
\vcenter{\xymatrix@R-10pt{
D^+ \left( \mathop{\rm Tor}\nolimits \mathop{\rm Mod}\nolimits_K \right) \ar[r]^{\mathop{\rm Res}\nolimits^K_{K_1}} \ar[d]_{\mu_K} &
D^+ \left( \mathop{\rm Tor}\nolimits \mathop{\rm Mod}\nolimits_{K_1} \right)
\ar[r]^{R (\;)^{\! K_\sigma}} &
D^+ ( \mathop{\rm Tor}\nolimits \mathop{\rm Mod}\nolimits_{\pi_{[\sigma]} (K_1)} )
\ar[d]^{\mu_{\pi_{[\sigma]} (K_1)}} \\
D^+ \left( \mathop{\bf Et}\nolimits M^K \right) \ar[rr]^{i^{\ast} j_{\ast}} &&
D^+ \left( \mathop{\bf Et}\nolimits \mathop{M^{\pi_{[\sigma]} (K_1)}}\nolimits \right) \\}}
\]
Here, $R (\;)^{K_\sigma}$ refers to continuous group cohomology of $K_\sigma$.
\end{Thm}
\begin{Proof}
We need to show that the diagram
\[
\vcenter{\xymatrix@R-10pt{
D^+ \left( \mathop{\rm Tor}\nolimits \mathop{\rm Mod}\nolimits_{K_1} \right)
\ar[r]^{R (\;)^{K_\sigma}}
\ar[d]_{\mu_{K_1}} &
D^+ ( \mathop{\rm Tor}\nolimits \mathop{\rm Mod}\nolimits_{\pi_{[\sigma]}(K_1)} )
\ar[d]^{\mu_{\pi_{[\sigma]} (K_1)}} \\
D^+ ( \mathop{\bf Et}\nolimits \tilde{M} / K_1 )
\ar[r]^{(i')^{\ast} j'_{\ast}} &
D^+ \left( \mathop{\bf Et}\nolimits \mathop{M^{\pi_{[\sigma]} (K_1)}}\nolimits \right) \\}}
\]
commutes. The proof of this statement makes use of the full machinery developed in the first two sections of \cite{P2}.
In fact, \cite{P2}~Prop.~(4.4.3) contains the analogous statement for the (coarser) stratification of $M^K ({\mathfrak{S}})$ induced from the canonical stratification of the {\it Baily--Borel compactification} of $M^K$. One faithfully imitates the proof, observing that \cite{P2}~(1.9.1) can be applied because the upper half of (\ref{3eq})
is cartesian up to nilpotent elements. The statement on ramification along a stratum in \cite{P2}~(3.11) holds for arbitrary, not just pure Shimura data.
\end{Proof}
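To make the right hand side explicit in a hypothetical rank-one situation: if $K_\sigma$ is topologically generated by a single element $\gamma$, then for any discrete torsion $K_\sigma$-module $M$, continuous group cohomology is concentrated in degrees $0$ and $1$, and
\[
H^0 (K_\sigma , M) = \ker (\gamma - {\rm id}) \quad , \quad
H^1 (K_\sigma , M) = \mathop{\rm coker}\nolimits (\gamma - {\rm id}) \; ,
\]
in analogy with the monodromy invariants and co-invariants appearing in the Hodge theoretic setting.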
\begin{Rem}\label{3D}
Because of Remark~\ref{1O}~(b), the isomorphism of \ref{3A} does not depend on the cone decomposition ${\mathfrak{S}}$ containing $\sigma \times \{ p \}$.
\end{Rem}
Fix a set ${\cal T} \subset \mathop{\rm Tor}\nolimits \mathop{\rm Mod}\nolimits_K$, let $E = E (P, {\mathfrak{X}})$ be the field of definition of our varieties, and write $O_E$ for its ring of integers. Consider a {\it model}
\[
{\cal M}^K \stackrel{j}{\hookrightarrow} {\cal M}^K ({\mathfrak{S}}) \stackrel{i}{\hookleftarrow} {\cal M}^{\pi_{[\sigma]} (K_1)}
\]
of
\[
M^K \stackrel{j}{\hookrightarrow} M^K ({\mathfrak{S}}) \stackrel{i}{\hookleftarrow} \mathop{M^{\pi_{[\sigma]} (K_1)}}\nolimits
\]
over $O_E$, i.e., normal schemes of finite type over $O_E$, an open immersion $j$ and an immersion $i$ whose generic fibres give the old situation over $E$; we also require the generic fibres to be dense in their models.
(Finitely many special fibres of our models might be empty.)
Assume
\begin{enumerate}
\item [(1)] All sheaves in $\mu_K ({\cal T})$ extend to lisse sheaves on ${\cal M}^K$.
\item [(2)] For any $S \in \mu_K ({\cal T})$ and any $q \ge 0$, the extended sheaf ${\cal S}$ on ${\cal M}^K$ satisfies the following:
\[
i^{\ast} R^q j_{\ast} {\cal S} \in \mathop{\bf Et}\nolimits {\cal M}^{\pi_{[\sigma]} (K_1)} \; \mbox{is lisse.}
\]
\end{enumerate}
Then the generic fibre of $i^{\ast} R^q j_{\ast} {\cal S}$ is necessarily equal to $i^{\ast} R^q j_{\ast} S$, i.e., it is given by the formula of \ref{3A}. So $i^{\ast} R^q j_{\ast} {\cal S}$ is the unique lisse extension of $i^{\ast} R^q j_{\ast} S$ to ${\cal M}^{\pi_{[\sigma]} (K_1)}$. Observe that if ${\cal T}$ is finite, then conditions (1) and (2) hold after passing to an open sub-model of any given model.\\
If ${\cal T}$ is an abelian subcategory of $\mathop{\rm Tor}\nolimits \mathop{\rm Mod}\nolimits_K$ and (1) holds, then (2) needs to be checked only for the simple noetherian objects in ${\cal T}$.\\
Let us show how to obtain a model as above for a \emph{particular} choice of ${\cal T}$:\\
Fix a prime $\ell$, write
\[
\mathop{\rm pr}\nolimits_\ell : P ({\mathbb{A}}_f) \longrightarrow P ({\mathbb{Q}}_\ell)
\]
for the projection, and set $K_\ell := \mathop{\rm pr}\nolimits_\ell (K)$. Denote by ${\cal T}_\ell \subset \mathop{\rm Tor}\nolimits \mathop{\rm Mod}\nolimits_{K_\ell} \subset \mathop{\rm Tor}\nolimits \mathop{\rm Mod}\nolimits_K$ the abelian subcategory of ${\mathbb{Z}}_\ell$-torsion $K_\ell$-modules. The quotient $K_\ell$ of $K$ corresponds to a certain part of the ``Shimura tower''
\[
( M^{K'} )_{K'} \; ,
\]
namely the one indexed by the open compact $K' \le K$ containing the kernel of $\mathop{\rm pr}\nolimits_\ell \, |_K$. According to \cite{P2}~(4.9.1), the following is known:
\begin{Prop}\label{3E}
There exists a model ${\cal M}^K$ such that all the sheaves in
\[
\mu_K ({\cal T}_\ell)
\]
extend to lisse sheaves on ${\cal M}^K$. Equivalently, the whole \'etale $K_\ell$-covering of $M^K$ considered above extends to an \'etale $K_\ell$-covering of ${\cal M}^K$.
\end{Prop}
\begin{Proof}
Write $L$ for the product of $\ell$ and the primes dividing the order of the
torsion elements in $K_\ell$; thus $K_\ell$ is a pro-$L$-group.
Let $S$ be a finite set in $\mathop{{\bf Spec}}\nolimits O_E$ containing the prime factors of $L$, and ${\cal M}^K$ a model of $M^K$ over $O_S$ which is the complement of an $NC$-divisor relative to $O_S$ in a smooth, proper $O_S$-scheme.
We give a construction of a suitable enlargement $S'$ of $S$ such that the claim holds for the restriction of ${\cal M}^K$ to $O_{S'}$.\\
First, assume that $P$ is a torus. Recall (\cite{P1}~2.6) that the Shimura varieties associated to tori are
finite over their reflex field.
Since Shimura varieties are normal, each $M^K$ is the spectrum of a
finite product $E_K$ of number fields. But then the $K_\ell$-covering corresponds to an {\it abelian} $K_\ell$-extension
\[
\tilde{E} / E_K \; .
\]
By looking at the kernel of the reduction map to ${\rm GL}_N ({\mathbb{Z}} / \ell^f {\mathbb{Z}})$, $\ell^f \ge 3$, one sees that there is an intermediate extension
\[
\tilde{E} / F / E_K
\]
finite over $E_K$, such that $\tilde{E} / F$ is a ${\mathbb{Z}}^r_\ell$-extension. Hence the only primes that ramify in $\tilde{E} / F$ are those over $\ell$, and one adds to $S$ the finitely many primes which ramify in $F / E_K$.\\
In the general case, choose an embedding
\[
e : (T , {\cal Y}) \longrightarrow (P , {\mathfrak{X}})
\]
of Shimura data, with a torus $T$
such that $E = E(P,{\mathfrak{X}})$ is contained in $E(T, {\cal Y})$
(\cite{P1}~Lemma~11.6), and finitely many $K^T_m \le T ({\mathbb{A}}_f)$ and $p_m \in P ({\mathbb{A}}_f)$ such that the maps
\[
[\cdot p_m] \circ [e] : M^{K^T_m} (T , {\cal Y}) \longrightarrow M^K (P , {\mathfrak{X}})
\]
are defined and meet all components of $M^K$ (\cite{P1}~Lemma~11.7).
Each $M^{K^T_m}$ equals the spectrum of a product $F_m$ of number fields.
Define $x_m \in M^K (F_m)$ as the image of $[\cdot p_m] \circ [e]$.
Let $S_m \subset \mathop{{\bf Spec}}\nolimits O_{F_m}$ denote the set of bad primes for $M^{K^T_m}$ and $(K^T_m)_\ell$, plus a suitable finite set such that $x_m$ extends to a section of ${\cal M}^K$ over $O_{S_m}$.
Enlarge $S = S((T, {\cal Y}),e,p_m)$ so as to contain all primes which ramify in some $F_m$, and those below a prime in some $S_m$. We continue to write $S$ for the
enlargement, and ${\cal M}^K$ and $x_m$ for the objects obtained via restriction to $O_S$.\\
We claim that with these choices, the whole \'etale $K_\ell$-covering of
$M^K$ extends to an \'etale $K_\ell$-covering of ${\cal M}^K$.
Let $M^0$ and ${\cal M}^0$ be connected components of $M^K$ and ${\cal M}^K$. We have to show that the map
\[
s : \pi_1 (M^0) \longrightarrow K_\ell
\]
given by the $K_\ell$-covering factors through the epimorphism
\[
\beta : \pi_1 (M^0) \ontoover{\ } \pi_1 ({\cal M}^0) \; .
\]
There is an $m$ and intermediate field extensions
\[
F_m / F' / F / E
\]
such that $M^0$ is a scheme over $F$ with geometrically connected fibres,
and such that $x_m$ induces an $F'$-valued point of $M^0$.
Since ${\cal M}^0$ is normal, ${\cal M}^0$ is a scheme over the integral closure $O_{S_F}$ of $O_S$. By \cite{SGA1XIII}~4.2--4.4, there is a commutative diagram of exact sequences
\[
\vcenter{\xymatrix@R-10pt{
1 \ar[r] &
\pi_1 (\overline{M^0}) \ar[r] \ar[d]_{\alpha} &
\pi_1 (M^0) \ar[r] \ar[d]_{\beta'} &
\mathop{\rm Gal}\nolimits (\overline{F} / F) \ar[r] \ar[d]^{\gamma} &
1 \\
1 \ar[r] &
\pi^L_1 (\overline{M^0}) \ar[r] &
\pi'_1 ({\cal M}^0) \ar[r] &
\pi_1 (\mathop{{\bf Spec}}\nolimits O_{S_F}) \ar[r] &
1 \\}}
\]
Here, $\pi^L_1 (\overline{M^0})$ is the largest pro-$L$-quotient of $\pi_1 (\overline{M^0})$, the fundamental group of $\overline{M^0} := M^0 \otimes_F \overline{F}$, and $\pi'_1 ({\cal M}^0)$ is a suitable quotient of $\pi_1 ({\cal M}^0)$.
Hence all vertical arrows are surjections.
Clearly $\ker \alpha$ is contained in $\ker s$; we thus get a map
\[
s' : \pi_1 (M^0) / \ker \alpha \longrightarrow K_\ell \; .
\]
We have to check that
\[
\ker \gamma = \ker \beta' / \ker \alpha \subset \pi_1 (M^0) / \ker \alpha
\]
is contained in $\ker s'$. But $\ker \gamma$ remains unchanged under passing to the extension $F' / F$, which is unramified outside $S_F$. Over $F'$, the corresponding exact sequence splits thanks to the existence of $x_m$.\\
The map
\[
\ker \gamma \longrightarrow \pi_1 (M^0)
\]
is induced by pullback via $[\cdot p_m] \circ [e]$, and by construction its image is contained in $\ker s$.
\end{Proof}
This takes care of condition (1).
\begin{Lem}\label{3F}
Up to isomorphism, there are only finitely many simple objects in ${\cal T}_\ell$.
\end{Lem}
\begin{Proof}
There is a normal subgroup $K'_\ell \le K_\ell$ of finite index which is a projective limit of $\ell$-groups. Write
${\cal T}'_\ell$ for the subcategory of $\mathop{\rm Tor}\nolimits \mathop{\rm Mod}\nolimits_{K'_\ell}$ of ${\mathbb{Z}}_\ell$-torsion modules. Any element $g$ of order $\ell^n$ in ${\rm GL}_r ({\mathbb{F}}_\ell)$ is unipotent, since $(g - 1)^{\ell^n} = g^{\ell^n} - 1 = 0$ in characteristic $\ell$; hence
any simple object in ${\cal T}'_\ell$ is isomorphic to the trivial representation ${\mathbb{Z}} / \ell {\mathbb{Z}}$ of $K'_\ell$.
Therefore, the simple objects in ${\cal T}_\ell$ all occur in the Jordan--H\"older decomposition of
\[
\mathop{\rm Ind}\nolimits^{K_\ell}_{K'_\ell} \mathop{\rm Res}\nolimits^{K_\ell}_{K'_\ell} ({\mathbb{Z}} / \ell{\mathbb{Z}}) \; .
\]
\end{Proof}
\begin{Prop}\label{3Fa}
Conditions (1) and (2) hold for
a suitable open sub-model of
any model as in \ref{3E}.
\end{Prop}
\begin{Proof}
By generic base change (\cite{SGAhalbTh}~Thm.~1.9), condition (2) can be
achieved for any single constructible sheaf ${\cal S}$ on ${\cal M}^K$,
which is lisse on the generic fibre.
The claim
follows from \ref{3F} by applying the long exact sequences associated
to $i^\ast R j_\ast$.
\end{Proof}
\forget{
Consequently, conditions (1) and (2) hold for some open sub-model of any ${\cal M}^K$ as in the proposition. One may be interested in having a geometric criterion which guarantees that (1) and (2) actually hold for a {\it given} ${\cal M}^K$.\\
Choose a normal subgroup $K'_\ell \trianglelefteq K_\ell$ as above, and write
\[
K' := \mathop{\rm pr}\nolimits^{-1}_\ell (K'_\ell) \trianglelefteq K \; .
\]
Assume in addition that the cone $\sigma \times \{ p \}$ is smooth with respect to
\[
\frac{1}{2 \pi i} \cdot \big( U_1 ({\mathbb{Q}}) \cap p \cdot K' \cdot p^{-1} \big) \; ,
\]
i.e., that $M^{K'} ({\mathfrak{S}})$
is smooth near $M^{\pi_{[\sigma]} (K'_1)}$.
Let $S$ be a finite set in $\mathop{{\bf Spec}}\nolimits O_E$ containing the primes dividing $\ell$.
Write $O_S$ for the ring of $S$-integers.
Consider {\it any} diagram of models over $\mathop{{\bf Spec}}\nolimits O_S$
\[
\vcenter{\xymatrix@R-10pt{
{\cal M}^{K'} \ar@{^{ (}->}[r]^j \ar[d]_{\varphi} &
{\cal M}^{K'} ({\mathfrak{S}}) \ar@{<-^{ )}}[r]^i \ar[d]_{\tilde{\varphi}} &
{\cal M}^{\pi_{[\sigma]} (K'_1)} \ar[d]^{\varphi_1} \\
{\cal M}^{K} \ar@{^{ (}->}[r]^j &
{\cal M}^{K} ({\mathfrak{S}}) \ar@{<-^{ )}}[r]^i &
{\cal M}^{\pi_{[\sigma]} (K_1)} \\}}
\]
satisfying the following:
\begin{itemize}
\item[(i)] ${\cal M}^K$ is as in Proposition~\ref{3E}, i.e., it satisfies (1) for the category ${\cal T}_\ell$.
\item[(ii)] The actions of $K / K'$ resp. $\pi_{[\sigma]} (K_1) / \pi_{[\sigma]} (K'_1)$ extend to the schemes ${\cal M}^{K'}$, ${\cal M}^{K'} ({\mathfrak{S}})$ resp. ${\cal M}^{\pi_{[\sigma]} (K'_1)}$. The reduced scheme underlying the
preimage under $\tilde{\varphi}$ of ${\cal M}^{\pi_{[\sigma]} (K_1)}$ is a coproduct of translates of ${\cal M}^{\pi_{[\sigma]} (K'_1)}$. $\varphi$ and $\varphi_1$ are finite and \'etale, and $\tilde{\varphi}$ is finite.
\item[(iii)] The complement of ${\cal M}^{K'}$ in ${\cal M}^{K'} ({\mathfrak{S}})$ is an $NC$-divisor relative to $O_S$ near ${\cal M}^{\pi_{[\sigma]} (K'_1)}$, and ${\cal M}^{\pi_{[\sigma]} (K'_1)}$ is a union of strata in the stratification induced by the divisor.
\end{itemize}
\begin{Prop}\label{3G}
${\cal M}^K \stackrel{j}{\hookrightarrow} {\cal M}^K ({\mathfrak{S}}) \stackrel{i}{\hookleftarrow} {\cal M}^{\pi_{[\sigma]} (K_1)}$ as above satisfies (1) and (2) for the category ${\cal T}_\ell$.
\end{Prop}
\begin{Proof}
The claim holds for $K'$ instead of $K$: by Lemma~\ref{3F}, we only need to show that the
\[
i^{\ast} R^q j_{\ast} ({\mathbb{Z}} / \ell {\mathbb{Z}}) \in \mathop{\bf Et}\nolimits {\cal M}^{\pi_{[\sigma]} (K'_1)}
\]
are lisse.
Given (iii), this is a standard computation; see e.g.\ \cite{SGAhalbTh}~page~A4.
For $A \in {\cal T}_\ell$, consider the exact sequence
\[
0 \longrightarrow A \longrightarrow \mathop{\rm Ind}\nolimits^K_{K'} \mathop{\rm Res}\nolimits^K_{K'} A \longrightarrow B \longrightarrow 0
\]
of objects in ${\cal T}_\ell$, and the associated sequence
\[
0 \longrightarrow {\cal S} \longrightarrow \varphi_{\ast} \varphi^{\ast} {\cal S} \longrightarrow {\cal R} \longrightarrow 0
\]
of sheaves. By induction on $q$, one sees that it suffices to show the claim for modules of the shape
\[
A = \mathop{\rm Ind}\nolimits^K_{K'} A' \; , \; A' \in {\cal T}'_\ell \; ,
\]
and again by Lemma \ref{3F}, we may assume that
\[
A' = {\mathbb{Z}} / \ell {\mathbb{Z}}
\]
with the trivial representation. But by proper base change,
\[
i^{\ast} Rj_{\ast} \varphi_{\ast} ({\mathbb{Z}} / \ell {\mathbb{Z}}) = i^{\ast} \tilde{\varphi}_{\ast} Rj_{\ast} ({\mathbb{Z}} / \ell {\mathbb{Z}}) = \bigoplus_k (\varphi_1) \, _{\ast} \left( i^{\ast} Rj_{\ast} ({\mathbb{Z}} / \ell {\mathbb{Z}}) \right) \; ,
\]
where $k$ runs over the copies of ${\cal M}^{\pi_{[\sigma]} (K'_1)}$ comprising the preimage $\tilde{\varphi}^{-1} \left( {\cal M}^{\pi_{[\sigma]} (K_1)} \right)_{{\rm red}}$. Since $\varphi_1$ is finite and \'etale, the cohomology objects of the right hand side are lisse.
\end{Proof}
\begin{Rem}\label{3I}
The conclusion of \ref{3H} continues to hold if one replaces
$S = S((T, {\cal Y}),e,p_j)$ by the set
\[
\bigcap S((T, {\cal Y}),e,p_j) \; ,
\]
where the intersection runs over {\it all} embeddings of torus Shimura data
\[
e: (T,{\cal Y}) \longrightarrow (P, {\mathfrak{X}})
\]
and $p_j$ as above.
\end{Rem}
}
Fix a finite extension $F = F_{\lambda}$ of ${\mathbb{Q}}_\ell$. By passing to projective limits, we get an exact tensor functor
\[
\mu_K : \mathop{\bf Rep}\nolimits_F P \longrightarrow \mathop{\bf Et}\nolimits^l_F M^K
\]
into the category of lisse $\lambda$-adic sheaves on $M^K$ (\cite{P2}~(5.1)).
We refer to $\mu_K$ as the {\it canonical construction} of $\lambda$-adic
sheaves from representations of $P$.
Denote by $D^b_{\rm c}(?,F)$ Ekedahl's bounded ``derived'' category of constructible
$F$-sheaves (\cite{E}~Thm.~6.3). Consider the functor
\[
i_{\ast} j^{\ast} : D^b_{\rm c} (M^K,F) \longrightarrow D^b_{\rm c} ( \mathop{M^{\pi_{[\sigma]} (K_1)}}\nolimits ,F) \; .
\]
From Theorem~\ref{3A}, we obtain the main result of this section:
\begin{Thm}\label{3J}
There is a commutative diagram
\[
\vcenter{\xymatrix@R-10pt{
D^b \left( \mathop{\bf Rep}\nolimits_F P \right) \ar[r]^{\mathop{\rm Res}\nolimits^P_{P_1}} \ar[d]_{\mu_K} &
D^b \left( \mathop{\bf Rep}\nolimits_F P_1 \right) \ar[r]^{R (\;)^{\langle \sigma \rangle}} &
D^b \left( \mathop{\bf Rep}\nolimits_F P_{1,[\sigma]} \right)
\ar[d]^{\mu_{\pi_{[\sigma]} (K_1)}} \\
D^b_{\rm c} \left( M^K, F \right) \ar[rr]^{i^{\ast} j_{\ast}} &&
D^b_{\rm c} \left( \mathop{M^{\pi_{[\sigma]} (K_1)}}\nolimits, F \right) \\}}
\]
Here, $\mathop{\rm Res}\nolimits^P_{P_1}$ denotes the pullback via the monomorphism
\[
P_{1,F} \longrightarrow P_F \; , \; p_1 \longmapsto \pi_\ell (p)^{-1} \cdot p_1 \cdot \pi_\ell (p) \; ,
\]
and $R (\;)^{\langle \sigma \rangle}$ is Hochschild cohomology of the unipotent group $\langle \sigma \rangle$.
\end{Thm}
\begin{Proof}
Since $\langle \sigma \rangle$ is unipotent, $R (\;)^{K_\sigma}$ and $R (\;)^{\langle \sigma \rangle}$ agree.
\end{Proof}
Let us note a refinement of the above. Consider smooth models
\[
{\cal M}^K \stackrel{j}{\hookrightarrow} {\cal M}^K ({\mathfrak{S}}) \stackrel{i}{\hookleftarrow} {\cal M}^{\pi_{[\sigma]} (K_1)}
\]
satisfying conditions (1), (2) for ${\cal T}_\ell$. Thus
all the sheaves in the image of $\mu_K$ extend to ${\cal M}^K$; in particular they can be considered as (locally constant)
{\it perverse $F$-sheaves} in the sense of \cite{Hu2}:
\[
\mu_K : \mathop{\bf Rep}\nolimits_F P \longrightarrow \mathop{\bf Perv}\nolimits_F {\cal M}^K \subset D^b_{\rm c} ({\mathfrak{U}} {\cal M}^K , F)
\]
(notation as in \cite{Hu2}).
Consider the functor
\[
i_{\ast} j^{\ast} : D^b_{\rm c} ({\mathfrak{U}} {\cal M}^K , F) \longrightarrow
D^b_{\rm c} ( {\mathfrak{U}} {\cal M}^{\pi_{[\sigma]} (K_1)} , F ) \; .
\]
\begin{Var}\label{3K}
There is a commutative diagram
\[
\vcenter{\xymatrix@R-10pt{
D^b \left( \mathop{\bf Rep}\nolimits_F P \right) \ar[r]^{\mathop{\rm Res}\nolimits^P_{P_1}} \ar[d]_{\mu_K} &
D^b \left( \mathop{\bf Rep}\nolimits_F P_1 \right) \ar[r]^{R (\;)^{\langle \sigma \rangle}} &
D^b \left( \mathop{\bf Rep}\nolimits_F P_{1,[\sigma]} \right)
\ar[d]^{\mu_{\pi_{[\sigma]} (K_1)}} \\
D^b_{\rm c} \left({\mathfrak{U}} {\cal M}^K , F \right) \ar[rr]^{i^{\ast} j_{\ast}[-c]} &&
D^b_{\rm c} \left({\mathfrak{U}} {\cal M}^{\pi_{[\sigma]} (K_1)} , F \right) \\}}
\]
\end{Var}
\begin{Rem}\label{3L}
As in \ref{3A}, the isomorphism
\[
\mu_{\pi_{[\sigma]} (K_1)} \circ
R (\;)^{\langle \sigma \rangle} \circ \mathop{\rm Res}\nolimits^P_{P_1} \arrover{\sim}
i^{\ast} j_{\ast} \circ \mu_K [-c]
\]
does not depend on the cone decomposition ${\mathfrak{S}}$ containing $\sigma \times \{ p \}$. It is possible, as in \cite{P2}~(4.8.5), to identify the effect on
the isomorphism of change of the group $K$ and of the element $p$. Similarly,
one has an $\ell$-adic analogue of Proposition~\ref{2M}.
\end{Rem}
In the above situation, consider the {\it horizontal stratifications} (\cite{Hu2}~page~110) ${\bf S} = \{ {\cal M}^K \}$ of ${\cal M}^K$ and ${\bf T} = \{ {\cal M}^{\pi_{[\sigma]} (K_1)} \}$ of ${\cal M}^{\pi_{[\sigma]} (K_1)}$. Write $L_{\bf S}$ and $L_{\bf T}$ for the sets of extensions to the models of irreducible objects of $\mu_K (\mathop{\bf Rep}\nolimits_F P)$ and $\mu_{\pi_{[\sigma]} (K_1)} (\mathop{\bf Rep}\nolimits_F \mathop{P_{1,[\sigma]}}\nolimits)$ respectively. In the terminology of \cite{Hu2}~Def.~2.8, we have the following:
\begin{Prop}\label{3M}
$i_{\ast} j^{\ast}$ is $({\bf S} , L_{\bf S})$-to-$({\bf T} , L_{\bf T})$-admissible.
\end{Prop}
\begin{Proof}
This is \cite{Hu2}~Lemma~2.9, together with Theorem \ref{3J}.
\end{Proof}
It is conjectured (\cite{LR}~\S\,6; \cite{P2}~(5.4.1); \cite{W}~4.2) that the image of $\mu_K$ consists of {\it mixed sheaves with a weight filtration}; furthermore, the filtration should be the one induced from the weight filtration of representations of $P$. Let us refer to this as the {\it mixedness conjecture} for $(P,{\mathfrak{X}})$; cmp.\ \cite{P2}~(5.5)--(5.6) and \cite{W}~pp~112--116
for a discussion. The conjecture is known if every ${\mathbb{Q}}$-simple factor of $G^{{\rm ad}}$ is {\it of abelian type} (\cite{P2}~Prop.~(5.6.2), \cite{W}~Thm.~4.6~(a)).
\begin{Prop}\label{3N}
If the mixedness conjecture holds for $(P,{\mathfrak{X}})$,
then it holds for any rational boundary component $(P_1, {\mathfrak{X}}_1)$.
\end{Prop}
\begin{Proof}
By \cite{W}~Thm.~4.6, it suffices to check that $\mu_{\pi_{[\sigma]} (K_1)} ({\mathbb{W}})$ is mixed for some faithful representation ${\mathbb{W}}$ of $\mathop{P_{1,[\sigma]}}\nolimits$. By \cite{Hm}~Thm.~11.2, there is a representation ${\mathbb{V}}$ of $P$ and a one-dimensional subspace ${\mathbb{V}}' \subset {\mathbb{V}}$ such that
\[
\langle \sigma \rangle = \mathop{\rm Stab}\nolimits_P ({\mathbb{V}}') \; .
\]
Since $\langle \sigma \rangle$ is unipotent, we have
\[
{\mathbb{V}}' \subset {\mathbb{W}} := H^0 (\langle \sigma \rangle , {\mathbb{V}}) \; .
\]
${\mathbb{W}}$ is a faithful representation of $\mathop{P_{1,[\sigma]}}\nolimits$, and by Theorem~\ref{3J}, $\mu_{\pi_{[\sigma]} (K_1)} ({\mathbb{W}})$ is a cohomology object of the complex
\[
i^{\ast} j_{\ast} \circ \mu_K ({\mathbb{V}}) \; .
\]
It is therefore mixed (\cite{D}~Cor.~6.1.11).
\end{Proof}
\section*{I. INTRODUCTION}
From 1927 to 1935, shortly after the construction of
the basic formalism of quantum mechanics, there appeared several ingenious
gedanken experiments, culminating in Einstein, Podolsky and
Rosen (EPR) experiment \cite{epr,wheeler}, which has since
been a focus of attention in research on the foundations of quantum
mechanics, with entanglement \cite{shrodinger} appreciated
as the key notion.
As we shall show here, entanglement also plays a crucial role in the
other famous gedanken experiments, such as Heisenberg's gamma ray
microscope \cite{heisenberg,bohr1}, Einstein's
single-particle diffraction and
recoiling double-slit \cite{bohr2,jammer}, and Feynman's electron-light
scattering scheme for double-slit \cite{feynman}.
To present, for the first time, a systematic analysis of these
famous gedanken experiments from a fully quantum mechanical point of view is
one of the purposes of this paper.
Seventy years' practice tells us to distinguish
quantum mechanics itself, as a self-consistent mathematical
structure, from any ontological interpretation. Here ``interpretation''
does not include the bare probability interpretation, which
is directly connected to observation: for instance, the probability
distribution is given by the frequencies
of different results in an ensemble of identically prepared systems.
Whether there should be more to say about
an underlying ontology, or what it is, is another issue.
Furthermore, to recognize any ontological
element that was mixed with quantum mechanics itself for historical
reasons is the first step in looking for the right one. Unfortunately,
the Copenhagen school's ontological version of the uncertainty relation (UR)
is still adopted in many contemporary books and by many contemporary
physicists,
and Bohr's analyses of the famous gedanken experiments, which are based on his
interpretation of the UR, are still widely
accepted. We shall comment on these analyses one by one. It turns out that
both his interpretation of the uncertainty relation and his analyses of
gedanken
experiments based on the former are not consistent with what quantum
mechanics says. It should be noted that the Copenhagen interpretation (CI)
of quantum mechanics originated precisely in those analyses of gedanken
experiments.
Thus the second purpose of this paper is to point out that CI originated
in a misconception of physics.
The third purpose of this paper lies at the current research frontiers.
Recently there have been studies on so-called which-way
experiments \cite{scully,durr},
which resemble the recoiling double-slit gedanken experiment in many
aspects. In these experiments, the interference loss is clearly due to
entanglement, rather than to the uncertainty relation, because no momentum
exchange is involved. On the other hand, these results are regarded
as another way of enforcing the so-called ``complementarity principle'',
coexisting with Bohr's uncertainty relation arguments for
the original gedanken experiments \cite{scully,durr}.
Here we expose the entanglement built up through momentum exchange and
show that in such a case, which does not seem to have been investigated
in laboratories, the interference loss is likewise due only to
entanglement, while the uncertainty relation arguments are wrong.
Interestingly, it will be seen that in the recoiling double-slit experiment,
the interference is not entirely lost in Einstein's proposal of
measuring which slit the particle passes through.
\section*{II. Uncertainty relation and photon box experiment}
We have now learned that the momentum-position UR,
\begin{equation}
\Delta x \Delta p \geq \hbar/2, \label{ur1}
\end{equation}
is an example of the general relation \cite{robertson}
\begin{equation}
\Delta A \Delta B \geq |\langle [A,B] \rangle|/2, \label{ur}
\end{equation}
where
$A$ and $B$ are two operators, $[A,B]=AB-BA$,
$\Delta A=(\langle A^2\rangle-\langle A \rangle ^2)^{1/2}$,
$\langle \dots \rangle$ denotes the quantum mechanical average,
taken over the
same quantum state for all terms in the inequality.
This is only a relation between standard deviations of the
respective measurements following the same preparation procedure \cite{peres}:
if the same
preparation procedure is repeated many times,
followed in some runs by measurements of $A$, in others by
measurements of $B$, and in others by measurements of
$[A,B]$ (if it is not a c-number),
and the results for $A$ have standard deviation
$\Delta A$ while the results for $B$ have standard deviation
$\Delta B$, and the results for $[A,B]$ give an average
$\langle [A,B] \rangle$, then $\Delta A$, $\Delta B$ and
$\langle [A,B] \rangle$ satisfy relation (\ref{ur}).
The UR is only a consequence of the formalism of
quantum mechanics, rather than a basic postulate or principle.
It has nothing to do with the accuracy
of the measuring instrument, nor with the disturbance between incompatible
measurements, which are performed on different, though identically prepared,
systems.
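As an illustration of this statistical reading (our own sketch, not part of the original analyses), relation (\ref{ur}) can be checked numerically for spin-$1/2$ operators $A=S_x$, $B=S_y$ with $\hbar=1$; the standard deviations refer to ensembles of identically prepared states, and no measuring instrument appears anywhere:

```python
import numpy as np

# Numerical check of the Robertson relation (2):  dA * dB >= |<[A,B]>|/2,
# with A = S_x, B = S_y for spin 1/2 (hbar = 1), on random pure states.
sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]], dtype=complex)
comm = sx @ sy - sy @ sx              # equals i * S_z (anti-Hermitian)

def std(op, psi):                     # standard deviation in state psi
    m = (psi.conj() @ op @ psi).real
    m2 = (psi.conj() @ (op @ op) @ psi).real
    return np.sqrt(max(m2 - m * m, 0.0))

rng = np.random.default_rng(0)
worst = 0.0                           # most negative slack found
for _ in range(2000):
    psi = rng.normal(size=2) + 1j * rng.normal(size=2)
    psi /= np.linalg.norm(psi)
    slack = std(sx, psi) * std(sy, psi) - 0.5 * abs(psi.conj() @ comm @ psi)
    worst = min(worst, slack)
print("most negative slack:", worst)  # never significantly below zero
```

The slack is nonnegative (up to rounding) for every state, as the operator inequality guarantees.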
The mathematical
derivation in Heisenberg's original paper, though
carried out only for a special wavefunction, is consistent
with this correct meaning. But
an ontological interpretation was given, and indeed emphasized more
\cite{heisenberg,bohr1}.
In the Copenhagen school's version \cite{heisenberg,bohr1,bohr2},
largely justified through the analyses of
the famous gedanken experiments,
it is left vague
whether a UR is an equality or
an inequality,
and an uncertainty is interpreted
as the objective absence, or the absence of exact
knowledge, of the value of the physical quantity,
{\em caused by measurement or ``uncontrollable'' interaction with another
object,
at a given moment}. For instance, in a diffraction, the momentum uncertainty
of the particle is thought to be ``inseparately connected with the
possibility of an exchange of momentum between the particle and the diaphragm''
\cite{bohr3}. The
UR was understood to mean that
{\em determination} of the precise
value of
$x$ {\em causes} a disturbance which
destroys the precise value of $p$, and vice versa, {\em in a single
experiment}.
The notion of uncertainty at a given moment or in a single run of the
experiment is beyond the formalism of quantum mechanics.
Experimentally,
an uncertainty or standard deviation can only be attached
to an ensemble of experiments. Furthermore, {\em the uncertainty is determined
by the quantum state, and may remain unchanged after the interaction with
another object. In fact, in an ideal
measurement, the buildup of entanglement
does not change the uncertainty. Hence the uncertainty is not
caused by an interaction with a measuring agency.}
Now people understand that there does not exist an ``energy-time
UR'' in the sense of (\ref{ur}) \cite{peres}.
We make some new comments in the following.
In Bohr's derivation \cite{bohr1,bohr2},
one obtains an approximate equality through the
relation between the frequency width and the time duration of a
classical wave pulse,
or transforms
the momentum-position UR to an ``energy-time UR''
by equating an energy uncertainty with the product of
the momentum uncertainty and velocity, and incorrectly
identifying a time interval, or ``uncertainty'',
with the ratio of the
position uncertainty to the velocity.
Sometimes it was a time interval, while sometimes it was a ``time
uncertainty'', that appeared in the ``energy-time UR'' \cite{bohr2}.
Therefore, they actually provide only a dimensional relation.
A later justification due to Landau and Peierls
was from the transition probability
induced by a perturbation \cite{landau1,landau2}.
For a system consisting of two parts, between which
the interaction is treated as a time-independent perturbation,
the probability of a transition of the system after time $t$,
from a state in which the energies of the two parts
are $E$ and $\epsilon$
to one
with energies $E'$ and $\epsilon'$, is proportional to
$\sin^{2} [(E'+\epsilon'-E-\epsilon)t/2\hbar]/(E'+\epsilon'-E-\epsilon)^2$.
Thus the most probable value of $E'+\epsilon'$
satisfies
$|E+\epsilon-E'-\epsilon'|t\sim \hbar$.
This interpretation has
changed the meaning of ``energy-time UR'', and has
nothing to do with the general relation Eq. (\ref{ur}). Furthermore, this
relation only applies to the specific situation it concerns, and
it is inappropriate to regard it
as a general relation for measurement, with
the two parts interpreted as the
measured system and the measuring instrument.
A counter-example was given by Aharonov and Bohm \cite{ab}.
In fact, the relation obtained in this way must be an equality
with $\hbar$ and cannot be
an inequality,
since the transition probability oscillates rapidly
with
$|E+\epsilon-E'-\epsilon'|t$;
if the latter equals $2\pi\hbar$,
the transition probability is zero.
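As a quick numerical illustration (our own sketch; $\hbar$ and $t$ set to $1$), the profile $\sin^2[(E'+\epsilon'-E-\epsilon)t/2\hbar]/(E'+\epsilon'-E-\epsilon)^2$ peaks at zero energy mismatch and has its first exact zero where the sine vanishes:

```python
import numpy as np

# Transition probability from the perturbative formula in the text:
#   P(dE) proportional to sin^2(dE * t / (2*hbar)) / dE^2,
# with dE = E' + eps' - E - eps.  Units: hbar = t = 1.
hbar = 1.0
t = 1.0

def P(dE):
    return np.sin(dE * t / (2.0 * hbar))**2 / dE**2

peak = P(1e-8)                         # -> (t / (2*hbar))**2 = 0.25 near dE = 0
first_zero = P(2.0 * np.pi * hbar / t) # sine argument equals pi: exact zero
print(peak, first_zero)
```

The most probable transitions thus cluster in the central peak of width of order $\hbar/t$, while the probability vanishes identically at $|E+\epsilon-E'-\epsilon'|\,t = 2\pi\hbar$.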
In most of the gedanken experiments discussed by Bohr, the
``energy-time UR'' was touched on
only as a
vague
re-expression of the momentum-position UR.
It was directly dealt with only in
the photon box experiment, proposed by Einstein
at the sixth Solvay Conference, held in 1930 \cite{bohr2}.
Consider a box filled with radiation and containing a clock
which controls the opening of a shutter
for a time $T$ (Fig. 1). Einstein suggested that the energy escaped
from the hole can be measured to an arbitrary precision by weighing
the box before and after the opening of the shutter.
In Bohr's analysis,
the box is weighed by a spring balance. He argued that for a determination of
the displacement, there is an accuracy $\Delta q$ connected to a
momentum uncertainty $\Delta p \sim h/\Delta q$. It was thought that it should
be smaller than the impulse given by gravitation to a mass uncertainty
$\Delta m$ during the interval $T$, i.e. $h/\Delta q < Tg\Delta m$.
On the other hand, $\Delta q$ is thought to be related to a $\Delta T$
through gravitational redshift,
$\Delta T/T=g\Delta q/c^2$, where $g$ is the gravitational acceleration,
$c$ is the light velocity.
Consequently, $\Delta T\Delta E >h$, where $E=mc^2$.
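Spelt out, Bohr's two estimates combine as
\[
\Delta T \, \Delta E = \frac{Tg\Delta q}{c^2} \cdot \Delta m \, c^2
= Tg \, \Delta q \, \Delta m > \Delta q \cdot \frac{h}{\Delta q} = h \; ,
\]
a purely dimensional bookkeeping, in line with the criticism below.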
One problem in this derivation is that in the UR the momentum uncertainty
is an intrinsic property of the box; there is no reason why it should be
smaller than $Tg\Delta m$. Another problem is that the gravitational redshift
causes a change of time; hence in the above derivation,
the $\Delta T$ corresponding to an
uncertainty $\Delta q$ has to be ``an uncertainty of
the change of $T$'' if $g$ is taken as a constant. In
contemporary established physics, neither $T$ nor $g$ can have a
well-defined uncertainty.
To cut through the confusion,
we simply regard Bohr's analysis as
no more than giving dimensional relations.
Although we do not need to go further, it may be noted that the state
of the box plus the inside photons is entangled with
the state of the photons which have leaked out.
\section*{III. Gamma ray microscope experiment.}
Introduced during the discovery of
UR in 1927, the gamma ray microscope experiment
greatly influenced the formation of CI.
The approach of the Copenhagen school
is as follows \cite{bohr1,jammer}.
Consider an electron near the focus $P$ under the
lens (Fig. 2).
It is observed through the scattering of a
photon of wavelength $\lambda$; thus the
resolving power of the microscope, obtained from Abbe's formula
in classical optics {\em for a classical object},
gives an uncertainty of any
position measurement,
$\Delta x \sim \lambda/(2\sin \epsilon$), where $x$ is parallel to the lens,
$2\epsilon$ is
the angle subtended by the
aperture of the lens at P.
For the electron
to be observed, the photon must be scattered into the angle $2\epsilon$;
correspondingly there is a latitude for the electron's momentum
after scattering, $\Delta p_{x}\sim 2h\sin\epsilon/\lambda$.
Therefore $\Delta x \Delta p_{x}\sim h$.
Heisenberg's initial analysis, which had been acknowledged to be wrong,
attributed the momentum uncertainty to the discontinuous momentum
change due to scattering \cite{heisenberg}.
A quantum mechanical approach can be made for a general
situation. The account is totally different from the views of Bohr
and Heisenberg.
Suppose that the state of the electron before scattering
is
\begin{equation}
|\Phi\rangle_e = \int\psi (\bbox{r})|\bbox{r}\rangle_e d\bbox{r} =
\int \phi(\bbox{p})|\bbox{p}\rangle_e d\bbox{p},
\end{equation}
and that the state of the photon is a plane wave with a given
momentum $\bbox{k}$,
\begin{equation}
|\Phi\rangle_{ph} = \frac{1}{\sqrt{2\pi}}\int e^{i\bbox{k}\cdot\bbox{r}}
|\bbox{r}\rangle_{ph} d\bbox{r}=|\bbox{k}\rangle_{ph},
\end{equation}
where $\hbar$ is set to be $1$.
Before interaction, the state of the system is simply
$|\Phi\rangle_e |\Phi\rangle_{ph}$. After interaction,
the electron and the photon
become entangled, until decoherence. If decoherence does not happen till
the detection of the photon, the situation is similar to
EPR experiment. If decoherence
occurs before the photon is detected, as in a more realistic situation,
the observation is made on a
mixed state.
The entangled state after scattering is
\begin{eqnarray}
|\Psi\rangle &=& \int \int \phi(\bbox{p})C(\bbox{\delta p})
|\bbox{p}+\bbox{\delta p}\rangle_e|\bbox{k}-\bbox{\delta p}\rangle_{ph}
d\bbox{p}d(\bbox{\delta p})\\
&=& \int\int C(\bbox{\delta p})
\psi(\bbox{r}) e^{i\bbox{\delta p}\cdot\bbox{r}}
|\bbox{r}\rangle_e |\bbox{k}-\bbox{\delta p}\rangle_{ph} d\bbox{r}
d(\bbox{\delta p}), \label{ss}
\end{eqnarray}
where $\bbox{\delta p}$ is the momentum exchange between the electron and the
photon, subject to the constraint of energy conservation,
$C(\bbox{\delta p})$ represents the probability amplitude
for each possible value of $\bbox{\delta p}$ and is determined by the
interaction.
Note that the states before and after scattering are connected by
an $S$ matrix, which depends only on the interaction Hamiltonian.
This is because
the interaction happens in a very localized region
and over a very short time interval; i.e.,
on the time and length scales of interest,
we may neglect the duration of the interaction.
$|\Psi\rangle$ may be simply re-written as
\begin{equation}
|\Psi\rangle = \int \psi(\bbox{r}) |\bbox{r}\rangle_e
|\bbox{r}\rangle_s
d\bbox{r}, \label{s}
\end{equation}
where $|\bbox{r}\rangle_s=\int C(\bbox{\delta p})
e^{i\bbox{\delta p}\cdot\bbox{r}}|\bbox{k}-\bbox{\delta p}\rangle_{ph}
d(\bbox{\delta p})$ represents that a scattering takes place at $\bbox{r}$.
$_{s}\langle \bbox{r'}|\bbox{r}\rangle_s$$=$$\int |C(\bbox{\delta p})|^2
e^{i\bbox{\delta p}(\bbox{r}-\bbox{r'})}d(\bbox{\delta p})$, hence
$_{s}\langle \bbox{r}|\bbox{r}\rangle_s=1$, but $|\bbox{r}\rangle_s$
with different $\bbox{r}$ are
not orthogonal to each other.
One may find that {\em the position uncertainty remains unchanged while
the momentum uncertainty becomes larger}.
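This can be made concrete with a small numerical sketch (Gaussian toy amplitudes of our own choosing, not a model of a specific interaction): since $_{s}\langle \bbox{r}|\bbox{r}\rangle_s=1$, the position distribution $|\psi(\bbox{r})|^2$ is untouched, while the reduced momentum distribution of the electron becomes the convolution of $|\phi(\bbox{p})|^2$ with $|C(\bbox{\delta p})|^2$, whose variance is strictly larger:

```python
import numpy as np

# After entangling with a plane-wave photon (Eq. (5)), the electron's
# reduced momentum distribution is |phi|^2 convolved with |C|^2.
# 1D Gaussian toy amplitudes on a symmetric grid.
p = np.linspace(-30.0, 30.0, 6001)
dp = p[1] - p[0]

phi2 = np.exp(-p**2)                  # |phi(p)|^2, electron before scattering
phi2 /= phi2.sum() * dp
C2 = np.exp(-((p - 3.0)**2) / 4.0)    # |C(delta p)|^2, mean momentum kick = 3
C2 /= C2.sum() * dp

after = np.convolve(phi2, C2, mode="same") * dp   # reduced distribution

def variance(f):
    m = (p * f).sum() * dp
    return (((p - m)**2) * f).sum() * dp

v_before, v_kick, v_after = variance(phi2), variance(C2), variance(after)
print(v_before, v_after)              # momentum spread strictly increases
```

The variances add under convolution, so the momentum uncertainty grows by exactly the spread of the momentum exchange, while the diagonal of the position density matrix, $|\psi(\bbox{r})|^2$, is unchanged.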
More generally, if the incipient photon is also a wave-packet, i.e.,
$|\Phi\rangle_{ph} =\int \phi_{ph}(\bbox{k})|\bbox{k}\rangle_{ph}
d\bbox{k}$$=\int \psi_{ph}(\bbox{r})|\bbox{r}\rangle_{ph}
d\bbox{r}$,
then
$|\Psi\rangle=\int\int\int \phi(\bbox{p})\phi_{ph}(\bbox{k})C(\bbox{\delta p})
|\bbox{p}+\bbox{\delta p}\rangle_e|\bbox{k}-\bbox{\delta p}\rangle_{ph}
d\bbox{p}d\bbox{k}d(\bbox{\delta p})$$=$$\int\int\int C(\bbox{\delta p})
e^{i\bbox{\delta p}(\bbox{r_1}-\bbox{r_2})}\psi(\bbox{r_1})
\psi_{ph}(\bbox{r_2})|\bbox{r_1}\rangle|\bbox{r_2}\rangle_{ph}
d\bbox{r_1}d\bbox{r_2}d(\bbox{\delta p})$.
It is clear that an uncertainty is an intrinsic
property determined by the quantum state; its meaning is totally
different from
that considered
by Bohr from the perspective of the optical properties of the microscope, such
as the resolving power given by classical optics.
How an uncertainty changes depends on the states before and
after the interaction, and it
may remain unchanged.
In an ideal measurement as discussed by von Neumann
\cite{von,jammer},
if the
system's state is $\sum_k \psi_k\sigma_k$, then after interaction with the
apparatus, the system-plus-apparatus has the entangled state
$\sum_k \psi_k\sigma_k\alpha_k$. $\sigma_k$ and $\alpha_k$ are orthonormal sets
of the system and apparatus, respectively. In such a case, the expectation
value of the measured observable of the system, and thus its uncertainty,
is not changed by the interaction with the apparatus (expectation values of
observables that do not commute with it are, in general, affected).
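A two-level toy version of this (our own illustration; the pointer states play the role of the $\alpha_k$) shows that the buildup of entanglement leaves the mean and uncertainty of the measured observable unchanged, even though coherences, e.g.\ $\langle X\rangle$, are generally not preserved:

```python
import numpy as np

# von Neumann ideal measurement of Z on a qubit:
#   sum_k psi_k |sigma_k>  ->  sum_k psi_k |sigma_k>|alpha_k>.
# Check: the statistics of the measured observable Z are unchanged by
# the buildup of entanglement; a non-commuting observable X is not.
Z = np.diag([1.0, -1.0]).astype(complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)

psi = np.array([0.6, 0.8], dtype=complex)        # system state
rho_before = np.outer(psi, psi.conj())

Psi = np.zeros(4, dtype=complex)                 # system (x) pointer
Psi[0] = psi[0]                                  # |0>_s |0>_a
Psi[3] = psi[1]                                  # |1>_s |1>_a
rho_full = np.outer(Psi, Psi.conj()).reshape(2, 2, 2, 2)
rho_after = np.einsum("ikjk->ij", rho_full)      # trace out the pointer

def stats(rho, A):                               # mean and uncertainty
    m = np.trace(rho @ A).real
    m2 = np.trace(rho @ A @ A).real
    return m, np.sqrt(max(m2 - m * m, 0.0))

print(stats(rho_before, Z), stats(rho_after, Z))        # identical
print(stats(rho_before, X)[0], stats(rho_after, X)[0])  # 0.96 vs 0.0
```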
\section*{IV. Detection of a diffracted particle}
At the Fifth Solvay Conference held in 1927,
Einstein considered the diffraction of
a particle from a slit to a hemispheric screen \cite{jammer} (Fig. 3).
He declared that if the wavefunction represents an ensemble of
particles distributed in space rather than one particle, then
$|\psi(\bbox{r})|^2$ expresses the percentage of particles presenting
at $\bbox{r}$. But if quantum mechanics describes individual processes,
$|\psi(\bbox{r})|^2$ represents the probability that at a given moment a
same particle shows its presence at $\bbox{r}$. Then as long as no
localization is effected, the particle has the possibility over the whole
area of the screen; but as soon as it is localized, a
peculiar instantaneous action at a
distance must be assumed to take place which prevents
the continuously distributed wave from producing an effect at two places
on the screen.
By definition,
the concept of probability implies the concept of ensemble,
which means the repetition of identically prepared processes.
Therefore as far
as $\psi(\bbox{r})$ represents a probability wave rather than a physical
wave in the real space, it makes no difference whether
it is understood
in terms of an ensemble or in terms of a single particle.
What Einstein referred to
here as an ensemble is effectively a classical ensemble,
i.e. only the probability plays a role while $\psi(\bbox{r})$
does not matter directly.
The essential
problem of Einstein is
how a classical event originates from the
quantum state, which was unsolved in the early years and
remains, of course,
an important and active subject.
In the fully quantum mechanical view,
the diffraction is not essential
for Einstein's problem.
Here comes the entanglement between the detector and the particle.
The state of the combined system evolves from the product of those of
the particle and the screen into an entangled state
\begin{equation}
|\Psi\rangle = \int \psi(\bbox{r}) |\bbox{r}\rangle_p|\bbox{r}\rangle_d
d\bbox{r},
\end{equation}
where $|\bbox{r}\rangle_p$ is the position eigenstate of the particle,
$|\bbox{r}\rangle_d$ represents that a particle is detected at
$\bbox{r}$.
The measurement result of the particle position
is described by a classical ensemble, with the
diagonal density matrix
\begin{equation}
\rho_p = \int |\psi(\bbox{r})|^2|\bbox{r}\rangle_{pp}\langle\bbox{r}|d\bbox{r}.
\label{pd}
\end{equation}
von Neumann formulated this as a postulate
that in addition to the unitary evolution,
there is a
nonunitary, discontinuous
``process of the first kind'' \cite{von},
which cancels the off-diagonal terms in the pure-state density matrix
$|\Psi\rangle\langle \Psi|$,
leaving
a diagonal reduced density matrix
$\rho_r$$=$$\int |\psi(\bbox{r})|^2 |\bbox{r}\rangle_{pp}\langle \bbox{r}|
|\bbox{r}\rangle_{dd}\langle \bbox{r}|d\bbox{r}$,
which implies $\rho_p$. Equivalently, the projection
postulate may also be applied directly
to the particle state
to obtain (\ref{pd}).
There are various alternative
approaches to this problem.
Nevertheless, the projection postulate should be effectively
valid in most situations.
What did Bohr say?
Instead of addressing the event on the {\em screen},
Bohr discussed the correlation between the
particle and the {\em diaphragm} using his version of
the UR \cite{bohr2}.
Einstein's problem was conceived as to what extent a control of the
momentum and energy transfer can be used for a specification of the state
of the particle after passing through the hole.
This is a misunderstanding:
even if its position on
the diaphragm is specified, the particle is still not in a momentum
eigenstate; moreover, the particle
still has a nonzero position wavefunction at
every point on the screen on arrival;
what is lost is only the interference
effect.
\section*{V. Recoiling double-slit and Feynman's electron-light scattering}
After Bohr's argument of the single-slit diffraction, Einstein proposed the
recoiling double-slit arrangement \cite{bohr2,jammer}.
Consider identically prepared particles
which, one by one, are
incident on
a diaphragm with two slits, and then
arrive at a screen. Einstein argued against the UR as follows.
As shown in Fig. 4, midway between a stationary diaphragm $D_1$ with a
single slit as the particle source
and a screen $P$, a movable diaphragm $D_2$ is suspended
by a weak spring $Sp$.
The two slits $S'_2$ and $S''_2$ are separated by a distance $a$,
much smaller than $d$, the distance between $D_1$ and $D_2$. Since
the momentum imparted to $D_2$ depends on
whether the particle passes
through
$S'_2$ or $S''_2$, the slit through which it passes
can be determined.
On the other hand, the momentum of the particle
can be measured from the interference
pattern.
Bohr
pointed out
that
the difference in momentum transfer between the two paths
is $\Delta p=p\omega$ (this is an
error: in this setup it should be $2p\omega$), where
$\omega$ is the angle subtended by the two slits at the single slit in
$D_1$. He argued that any measurement of momentum with an accuracy sufficient
to resolve $\Delta p$ must involve a position uncertainty
of at least $\Delta x=h/\Delta p=
\lambda/\omega$, which equals
the width
of the interference fringe. Therefore determining the momentum of $D_2$
precisely enough to decide
which path the particle
takes involves a position uncertainty which
destroys the interference.
This was regarded as a typical ``complementary phenomenon''
\cite{bohr2}.
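Bohr's arithmetic here can be checked with illustrative numbers (the momentum and slit angle below are assumed values, not taken from the text): with de Broglie wavelength $\lambda=h/p$ and $\Delta p=p\omega$, the position uncertainty $\Delta x=h/\Delta p$ is exactly the fringe spacing $\lambda/\omega$.

```python
# Hypothetical numbers for an electron double-slit setup (assumptions).
h = 6.62607015e-34        # Planck constant, J s
p = 1.0e-24               # particle momentum, kg m/s (assumed)
omega = 1.0e-5            # angle subtended by the slits, rad (assumed)

lam = h / p               # de Broglie wavelength
dp = p * omega            # Bohr's momentum-transfer difference between paths
dx = h / dp               # position uncertainty from the uncertainty relation

# dx reproduces the fringe spacing lambda/omega
assert abs(dx - lam / omega) < 1e-12 * dx
```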
In Feynman's light-electron scattering scheme, which slit the electron passes
through is observed via scattering with a photon. One usually
adapts Bohr's analysis above by replacing the momentum
of the diaphragm with that of the photon.
{\em Bohr's argument means that
in determining which slit the particle passes through, its momentum uncertainty
becomes small enough}. Clearly this need not happen. For instance,
as we have seen in
our analysis of the gamma ray microscope,
scattering with a plane wave increases, rather than decreases, the momentum
uncertainty.
This is an indication that Bohr's analysis is not correct.
Bohr's reasoning
is avoided in a proposal using two laser cavities as the
which-slit tag for an atomic beam \cite{scully}, and in a recent
experiment where the internal electronic
state of the atom acts as the which-slit tag \cite{durr,knight}.
However, as a showcase of the contemporary influence of
CI, there was no doubt about
Bohr's argument
in the original gedanken experiment, and the current experimental results
were framed in terms of Copenhagen ideology; even the debate
on whether
UR plays a role in such which-way experiments was titled
``Is complementarity more fundamental than the uncertainty principle?''
\cite{durr}.
We shall show that the destruction of interference
by entanglement with another degree of freedom is a universal mechanism. Here
the momentum exchange is just the basis for the entanglement.
Bohr's analysis is not consistent with the fully
quantum mechanical account.
We give a
general account applicable
to both single-slit diffraction and many-slit interference.
Let us assume that just before diffraction by the slit(s),
the state of the particle is
\begin{equation}
|\Phi(t_0-0)\rangle_p
= \int\psi(\bbox{r_0},t_0)|\bbox{r_0}\rangle_p d\bbox{r_0}
= \int\phi(\bbox{p},t_0)|\bbox{p}\rangle_p d\bbox{p}
\end{equation}
where $\bbox{r}_0$ belongs to the slit(s).
After diffraction, the state is
\begin{equation}
|\Phi(t>t_0)\rangle_p =\int \psi(\bbox{r},t) |\bbox{r}\rangle_p
d\bbox{r},\label{ad}
\end{equation}
with
\begin{equation}
\psi(\bbox{r},t)=\int
\psi(\bbox{r_0},t_0)G(\bbox{r},t;\bbox{r_0},t_0)d\bbox{r_0},
\end{equation}
where $G(\bbox{r},t;\bbox{r_0},t_0)$ is a propagator.
The interference appears since
the probability that the diffracted
particle is at $\bbox{r}$ is
$|\int\psi(\bbox{r}_0,t_0)G(\bbox{r},t;\bbox{r}_0,t_0)
d\bbox{r}_0|^2$, instead of
$\int|\psi(\bbox{r}_0,t_0)G(\bbox{r},t;\bbox{r}_0,t_0)|^{2}
d\bbox{r}_0$.
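The difference between the two expressions can be made concrete in a toy two-slit model; the spherical-wave form of the propagator and all numerical values below are illustrative assumptions, not taken from the text.

```python
import numpy as np

# Each slit r0 contributes an amplitude G ~ exp(i k r)/r at screen point x.
k = 50.0                      # wavenumber (assumed units)
slits = np.array([-0.5, 0.5]) # slit positions (assumed)
L = 10.0                      # slit-to-screen distance (assumed)
x = np.linspace(-5, 5, 1001)  # screen coordinate

r = np.sqrt(L**2 + (x[:, None] - slits[None, :])**2)
amp = np.exp(1j * k * r) / r

coherent   = np.abs(amp.sum(axis=1))**2   # |sum over r0|^2 : shows fringes
incoherent = (np.abs(amp)**2).sum(axis=1) # sum of ||^2     : no fringes

# Coherent pattern has deep destructive minima; incoherent one does not.
assert coherent.min() < 0.05 * coherent.mean()
assert incoherent.min() > 0.5 * incoherent.mean()
```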
The diffraction does not change the uncertainties of
position and momentum, as seen from
$|\Phi(t\rightarrow t_0)\rangle\rightarrow |\Phi(t_0-0)\rangle$.
Before interacting with the particle ($t< t_0$), the
diaphragm has a definite position $\bbox{r_i}$. This is a gedanken
experiment, in which the relevant degrees of freedom are well isolated.
Hence the diaphragm is described by the quantum state
\begin{equation}
|\Psi(t< t_0)\rangle_d=|\bbox{r_i}\rangle_d=\int \delta(\bbox{r}-\bbox{r_i})
|\bbox{r}\rangle_d d\bbox{r}=
\frac{1}{\sqrt{2\pi}}
\int e^{-i\bbox{k}\cdot\bbox{r_i}}|\bbox{k}\rangle_d d\bbox{k}, \label{del}
\end{equation}
and the state of the combined
system of particle-plus-diaphragm is
$|\Phi(t< t_0)\rangle_p|\bbox{r_i}\rangle_d$.
If the diaphragm is fixed, the state of whole system after diffraction
is the product of $|\bbox{r_i}\rangle_d$
and the state of the particle
as given by Eq. (\ref{ad}).
If the diaphragm is moveable,
after the interaction, the state of the combined
system is an entangled one. Right after the scattering,
\begin{eqnarray}
|\Psi( t_0+0)\rangle &=& \frac{1}{\sqrt{2\pi}}\int\int\int C(\bbox{\delta p})
\phi(\bbox{p},t_0)e^{-i\bbox{k}\cdot\bbox{r_i}}
|\bbox{p}+\bbox{\delta p}\rangle_p|\bbox{k}-\bbox{\delta p}\rangle_d
d(\bbox{\delta p})d\bbox{p}d\bbox{k} \\
&=& \int\int C(\bbox{\delta p})\psi(\bbox{r_0},t_0)
e^{i\bbox{\delta p}\cdot\bbox{r_0}}|\bbox{r}_0\rangle_p
e^{-i\bbox{\delta p}\cdot\bbox{r_i}}|\bbox{r_i}\rangle_d
d(\bbox{\delta p})d\bbox{r_0}.
\end{eqnarray}
Then,
\begin{eqnarray}
|\Psi(t > t_0)\rangle &=& \int\int\int C(\bbox{\delta p})
\psi(\bbox{r_0},t_0)G(\bbox{r},t;\bbox{r_0},t_0)
e^{i\bbox{\delta p}\cdot\bbox{r_0}}|\bbox{r}\rangle_p
U(t,t_0)e^{-i\bbox{\delta p}\cdot\bbox{r_i}}|\bbox{r_i}\rangle_d
d(\bbox{\delta p})d\bbox{r_0}d\bbox{r},\\
&=& \int \int \psi(\bbox{r_0},t_0)G(\bbox{r},t;\bbox{r_0},t_0)|\bbox{r}\rangle_p
|\bbox{r_0}\rangle_s(t)d\bbox{r_0}d\bbox{r}, \label{fey}
\end{eqnarray}
where $U(t,t_0)$ represents the evolution of the diaphragm state, and
\begin{equation}
|\bbox{r_0}\rangle_s(t)= \int C(\bbox{\delta p})
e^{i\bbox{\delta p}\cdot\bbox{r_0}}
U(t,t_0)e^{-i\bbox{\delta p}\cdot\bbox{r_i}}
|\bbox{r_i}\rangle_d d(\bbox{\delta p}).
\end{equation}
Generally speaking,
this entanglement is not maximal, hence the interference is
not completely destroyed.
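This can be sketched quantitatively: if the two paths leave the other degree of freedom in pure states $|d_1\rangle$, $|d_2\rangle$, the fringe visibility is scaled by the overlap $|\langle d_1|d_2\rangle|$. The toy path amplitudes below are assumptions for illustration only.

```python
import numpy as np

# Screen intensity with which-path tags |d1>, |d2>:
#   I ~ |psi1|^2 + |psi2|^2 + 2 Re[psi1* psi2 <d2|d1>],
# so the fringe visibility equals |<d1|d2>| for equal-modulus paths.
x = np.linspace(-5, 5, 2001)
psi1 = np.exp(1j * 3.0 * x)   # path amplitudes (toy model)
psi2 = np.exp(-1j * 3.0 * x)

def visibility(overlap):
    I = (np.abs(psi1)**2 + np.abs(psi2)**2
         + 2 * np.real(np.conj(psi1) * psi2 * overlap))
    return (I.max() - I.min()) / (I.max() + I.min())

assert abs(visibility(1.0) - 1.0) < 1e-3  # no entanglement: full fringes
assert abs(visibility(0.5) - 0.5) < 1e-3  # partial entanglement: reduced
assert visibility(0.0) < 1e-3             # orthogonal tags: fringes gone
```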
In Feynman's electron-light scattering scheme, we may suppose that
the scattering
takes place at the slit(s).
In general, the photon is a wave packet
$|\Phi\rangle_{ph}=\int \phi_{ph}(\bbox{k})|\bbox{k}\rangle_{ph} d\bbox{k}$
$=$$\int \psi_{ph}(\bbox{r})|\bbox{r}\rangle_{ph} d\bbox{r}$,
then
$|\Psi(t>t_0)\rangle$$=$$\int\int\int\int C(\bbox{\delta p})
\psi(\bbox{r_0},t_0)G(\bbox{r},t;\bbox{r_0},t_0)
e^{i\bbox{\delta p}\cdot\bbox{r_0}}|\bbox{r}\rangle_p
U(t,t_0)e^{-i\bbox{\delta p}\cdot\bbox{r_1}}
\psi_{ph}(\bbox{r_1})|\bbox{r_1}\rangle_{ph}
d(\bbox{\delta p})d\bbox{r_0}d\bbox{r_1}d\bbox{r}$.
Again, the interference is not completely destroyed.
If the photon is a plane wave, as
we have seen in the above discussion of
the gamma ray microscope,
the position uncertainty remains unchanged while the
momentum uncertainty increases, contrary to Bohr's claim.
In general, in both the moveable diaphragm and
the photon scattering schemes, the change of the uncertainty depends
on the states before and after the interaction. It is not right to simply
say that the momentum uncertainty becomes smaller, as Bohr thought.
One should also note that there are various possible
momentum exchanges $\bbox{\delta p}$, subject to energy conservation, and this
is {\em independent} of the position
on the diaphragm, or of which slit the particle
passes through in the double-slit experiment. {\em In general,
both before and after the interaction with the diaphragm, the particle is in a
superposition of different momentum eigenstates}. Even after the detection of
the particle on the screen, the states of the diaphragm
and particle
still do not reduce to those
with a definite momentum exchange. This was not appreciated by
either Bohr or Einstein, and
is another point inconsistent with
Bohr's analysis, which is based on a classical picture supplemented by
an uncertainty relation.
\section*{VI. Concluding remarks}
Regarding the Copenhagen school's view of uncertainty, one cannot directly
prove or disprove the notion of uncertainty at a given moment in a single
run of an experiment; this question lies beyond the
standard formalism of quantum mechanics.
However, it is clearly wrong to attribute the uncertainty to the
interaction with a ``measuring agency''. It is also wrong to regard the
uncertainty as a bound for the accuracy of the measuring instrument, given
by classical physics, as done in Bohr's analyses of the gamma ray microscope
and the recoiling double-slit.
On the other hand, it is inappropriate to regard
the consequence of the interaction simply as
causing the uncertainty, while neglecting
the buildup of the entanglement, which may not change the uncertainty,
or may change it but not in the way Bohr thought.
We have seen that Bohr's analyses of the gedanken experiments are not
consistent with the quantum mechanical accounts.
This indicates that the essence of quantum mechanics
cannot be simply reduced to a classical picture supplemented by
an uncertainty relation.
More of the weirdness of quantum phenomena comes from superposition,
especially entanglement.
The crucial importance of entanglement in quantum mechanics was not
well appreciated in the early gedanken experiments, with the attention
focused on the uncertainty relation in the Copenhagen school's version.
However, it was finally exposed
in the EPR experiment, and had already been noted in an earlier paper
\cite{etp}.
Had Bohr been aware of this, he might have said
that the trade-off between entanglement and interference implements the
``complementarity principle'', and might have been happy that more
commonality exists between
the early gedanken experiments and the EPR experiment than discussed
in his reply to the EPR paper \cite{bohr3}, where
most of the discussion deals with the ``mechanical disturbance''
in the diffraction experiment by using the uncertainty relation argument;
regarding EPR, it is only said that ``even
at this stage there is essentially the
question of {\it an influence on the very conditions which define the possible
types of predictions regarding the future behavior of the system}''.
Note that
if the ``very condition'' refers to the quantum state, it
is just the basis of the discussions of EPR, and that
Einstein did consider it
possible to relinquish the assertion that
``the states of spatially
separated objects are independent on each other'', but with the wavefunction
regarded as a description of an ensemble \cite{ein2}.
The wording of the Copenhagen interpretation is so vague and flexible
that one can easily embed new meanings in it, as people have been doing over
the years. However, no matter how its meaning is refined, the
``complementarity principle'' does not provide any
better understanding than that provided by quantum mechanics itself.
After all,
the language of physics is mathematics rather than philosophy.
In fact, decoherence based on entanglement
could have been studied in the early
days, had there not been the advent of the Copenhagen
interpretation \cite{kiefer},
which originated in
the misconception about the early gedanken experiments.
\bigskip
I thank Claus Kiefer for useful discussions.
\newpage
{\bf Figures}
Figure 1. Photon box. Copied from Ref. [6].
Figure 2. Gamma ray microscope. Copied from Ref. [7].
Figure 3. Detection of diffracted particle. Copied from
Ref. [7].
Figure 4. Recoiling double-slit. Copied from Ref. [7].
\psfig{figure=fig.ps}
\section{Introduction}
After M87 and Centaurus A, NGC 3379 (M105) is one of the best-studied
elliptical galaxies in the sky. Virtually a walking advertisement for
the $r^{1/4}$ law (de Vaucouleurs \& Capaccioli \markcite{dVC79}1979),
this system is regularly used as a control object or calibrator for a
variety of photometric and spectroscopic studies. NGC 3379 has all the
hallmarks of a ``classic'' early-type galaxy: almost perfectly elliptical
isophotes and colors characteristic of an old stellar population (Peletier
et al.\ \markcite{Pel90}1990, Goudfrooij et al.\ \markcite{Gou94}1994);
slow rotation about the apparent minor axis (Davies \& Illingworth
\markcite{DaI83}1983, Davies \& Birkinshaw \markcite{DaB88}1988);
no shells, tails, or other signs of interactions (Schweizer \& Seitzer
\markcite{ScS92}1992); no detection in either \ion{H}{1} (Bregman, Hogg,
\& Roberts \markcite{BHR92}1992) or CO (Sofue \& Wakamatsu
\markcite{SoW93}1993); very modest H$\alpha$+[\ion{N}{2}] emission
(Macchetto et al.\ \markcite{Mac96}1996); and only minimal absorption by
dust in the inner $4\arcsec$ (van Dokkum \& Franx \markcite{vDF95}1995,
Michard \markcite{Mic98}1998).
Yet, for all its familiarity, there are serious questions as to the
true nature of our ``standard elliptical.'' For one, there is a nagging
concern that it might not be an elliptical at all. Capaccioli and
collaborators (Capaccioli \markcite{Cap87}1987; Nieto, Capaccioli, \& Held
\markcite{NCH88}1988; Capaccioli, Held, Lorenz, \& Vietri
\markcite{Cap90}1990; Capaccioli, Vietri, Held, \& Lorenz
\markcite{Cap91}1991, hereafter CVHL) have argued, mainly on photometric
grounds, that NGC 3379 could be a misclassified S0 seen close to face-on.
CVHL demonstrate that a deprojected spheroid+disk model for the edge-on
S0 NGC 3115, seen face-on, would show deviations from the best-fit
$r^{1/4}$ law very similar to the $\sim 0.1$ magnitude ripple-like
residuals seen in NGC 3379. They propose that NGC 3379
could be a triaxial S0, since a triaxiality gradient could explain
the observed $5\arcdeg$ isophotal twist.
Statler \markcite{Sta94}(1994) has also examined the shape of NGC 3379,
using dynamical models to fit surface photometry
(Peletier et al.\ \markcite{Pel90}1990) and multi-position-angle velocity
data (Davies \& Birkinshaw \markcite{DaB88}1988, Franx,
Illingworth, \& Heckman \markcite{FIH89}1989). The
data are found to rule out very flattened, highly triaxial shapes
such as that suggested by CVHL, while still being consistent with either
flattened axisymmetric or rounder triaxial figures. The results are limited,
however, by the accuracy of the kinematic data, which are unable to
constrain the rotation on the minor axis beyond $R=15\arcsec$ to any better
than $30\%$ of the peak major-axis velocity. This large an
uncertainty implies a $\sim 30\arcdeg$ ambiguity in the position of the
apparent rotation axis. Moreover, there are hints of steeply increasing
minor-axis rotation beyond $30\arcsec$. It is far from clear from the
current data that the common perception of NGC 3379 as a
``classic major-axis rotator'' is an accurate description of the galaxy
beyond---or even at---one effective radius.
Deeper, higher accuracy spectroscopic data are needed, both to define more
precisely the kinematic structure of the galaxy at moderate radii, and also to
establish the connection with the large-$R$ kinematics as determined from
planetary nebulae. Ciardullo et al.\ \markcite{CJD93}(1993) find that
the velocity dispersion in the PN population declines steadily with radius,
reaching $\sim 70\mbox{${\rm\,km\,s}^{-1}$}$ at $170\arcsec$ (roughly 3 effective radii). This
decline is consistent with a Keplerian falloff outside $1 r_e$, apparently
making NGC 3379 one of the strongest cases for an elliptical galaxy
with a constant mass-to-light ratio and no significant contribution
from dark matter inside $9\mbox{$\rm\,kpc$}$. On the other hand, if the PNe were in a
nearly face-on disk, the line-of-sight dispersion may not reflect
the true dynamical support of the system. To correctly interpret the PN
data, therefore, one needs to know how the stellar data join onto
the PN dispersion profile, as well as have a good model for the shape and
orientation of the galaxy.
At small $R$, {\em HST\/} imaging shows NGC 3379 to be a ``core galaxy'';
i.e., its surface brightness profile turns over near $1\arcsec$ --
$2\arcsec$ to an inner logarithmic slope of about $-0.18$ (Byun et al.\
\markcite{Byu96}1996). A non-parametric deprojection assuming spherical
symmetry (Gebhardt et al.\ \markcite{Geb96}1996) gives a logarithmic
slope in the {\em volume\/} luminosity density of $-1.07 \pm 0.06$
at $r=0\farcs 1$ ($5\mbox{$\rm\,pc$}$). This is rather a shallow slope for galaxies
of this luminosity ($M_V=-20.55$), and is actually more characteristic of
galaxies some 4 times as luminous (Gebhardt et al.\ \markcite{Geb96}1996).
At the same time, NGC 3379 is
a likely candidate for harboring a central dark mass of several hundred
million $\mbox{$M_\odot$}$ (Magorrian et al.\ \markcite{Mag98}1998). Since both
density cusps and central point masses have been implicated as potential
saboteurs of triaxiality through orbital chaos (Merritt \& Fridman
\markcite{MeF96}1996, Merritt \& Valluri \markcite{MeV96}1996, Merritt
\markcite{Mer97}1997), a measurement of triaxiality from the stellar
kinematics would be valuable in gauging the importance of this mechanism
in real systems.
Here we present new spectroscopic observations of NGC 3379, as part
of our program to obtain multi-position-angle kinematic data at high
accuracy and good spatial resolution for a sample of photometrically
well-studied ellipticals. We obtain a far more detailed rendition of the
kinematic fields through the main body of the galaxy
than has been available from previous data. We find that
these fields suggest a two-component structure for the galaxy, and closely
resemble those of the S0 NGC 3115. We reserve firm
conclusions on the shape and Hubble type of NGC 3379 for a later paper
devoted to dynamical modeling; here we present the data. Section
\ref{s.observations} of this paper describes the observational
procedure. Data reduction techniques are detailed in Sec.\ \ref{s.reduction},
and the results are presented in Sec.\ \ref{s.results}. We compare our
data with previous work and discuss some of the implications for the structure
of the galaxy in Sec.\ \ref{s.discussion}, and Sec.\ \ref{s.conclusions}
concludes.
\section{Observations\label{s.observations}}
NGC 3379 was observed with the Multiple Mirror Telescope and the Red Channel
Spectrograph (Schmidt et al.\ \markcite{Sch89}1989) on 3--4 February 1995
UT. The $1\farcs0 \times 180\arcsec$ slit was used with the 1200 grooves/mm
grating to give a resolution of approximately $2.2\mbox{$\rm\,\AA$}$ and a spectral
coverage from $\lambda\lambda$ $4480$ -- $5480\mbox{$\rm\,\AA$}$. The spectra were imaged
on the $1200\times 800$ Loral CCD ($15 \mbox{$\rm\,\mu m$}$ pixels, 1 pix = $0\farcs 3$,
read noise = 7 $e^-$), resulting in a nominal dispersion of $0.72 \mbox{$\rm\,\AA$}$/pix.
The CCD was read-binned $1 \times 4$ pixels in the dispersion $\times$
spatial directions to reduce read noise, so that the final spatial scale
was $1\farcs2$ per binned pixel.
Except for a brief period of fluctuating seeing on the first night, all
data were taken in photometric conditions. NGC 3379 was observed at four
slit position angles: PA = $70\arcdeg$ (major axis), $340\arcdeg$ (minor
axis), $25\arcdeg$, and $115\arcdeg$. PA 340 was observed entirely on
night 1, PAs 70 and 115 on night 2, and PA 25 over both nights. Four
exposures of 1800 s each were obtained at each PA, except for the last
exposure at PA 70 which was shortened to 900 s due to impending twilight.
Because the galaxy filled the slit, separate 600 s blank sky exposures
were obtained at 30 -- 90 minute intervals depending on the elevation of
the galaxy. Comparison arc spectra were taken before and/or after each
galaxy and sky exposure.
In addition to the standard calibration frames, spectra of radial velocity
standard, flux standard, and Lick/IDS library stars were taken during
twilight. The Lick stars were chosen to have a range of spectral types
and metallicities in order to create composite spectral templates
and to calibrate measurements of line strength indices (to be
presented in a future paper). Stars were trailed across the slit to
illuminate it uniformly. This was an essential step in producing accurate
kinematic profiles because our slit width was wider than the seeing
disk; fits to the spatial profiles of all stellar spectra
give a mean Gaussian width of the point spread function of $0\farcs83$,
with a standard deviation of $0\farcs09$.
\section{Data Reduction\label{s.reduction}}
\subsection{Initial Procedures\label{s.initial}}
Basic reductions were performed as described by Statler, Smecker-Hane, \&
Cecil \markcite{SSC96}(1996, hereafter SSC), using standard
procedures in IRAF. The initial
processing consisted of overscan and bias corrections, flat fielding,
and removal of cosmic rays. This was followed by wavelength calibration
from the comparison arcs, and straightening of all spectra using stellar
traces at different positions along the slit. We used ``unweighted
extraction'' to derive one-dimensional stellar spectra, and rebinned all
spectra onto a logarithmic wavelength scale with pixel width $\Delta x
\equiv \Delta \ln \lambda = 1.626 \times 10^{-4}$ ($\Delta v = 48.756 \mbox{${\rm\,km\,s}^{-1}$}$).
In the same transformation, the galaxy frames for each PA were registered
spatially.
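The quoted pixel scale can be checked directly from $\Delta v = c\,\Delta\ln\lambda$; the small residual reflects rounding of $\Delta x$.

```python
# Consistency check of the log-wavelength pixel scale quoted above.
c = 299792.458      # speed of light, km/s
dx = 1.626e-4       # Delta ln(lambda) per pixel
dv = c * dx         # velocity width per pixel, km/s

# Agrees with the quoted 48.756 km/s to within rounding of dx.
assert abs(dv - 48.756) < 0.03
```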
Time-weighted average sky spectra were created for each galaxy frame by
combining the two sky frames, $Y(t_1)$ and $Y(t_2)$, taken before and
after the galaxy frame $G(t)$. (Times refer to the middle of
the exposures.) The combined sky image was
$Y = K[aY(t_1) + (1-a) Y(t_2)]$, where $a=(t_2-t)/(t_2-t_1)$ and the
constant $K$ ($=3$ for all frames but one) scaled the image to the
exposure time of $G$. Because conditions were photometric, there was
no need to fine-tune $a$ and $K$ by hand, as was done by SSC to improve
the removal of the bright sky emission lines. To avoid degrading the
signal-to-noise ratio in the regions where accurate sky subtraction was
most crucial, the sky spectra were averaged in the spatial direction
by smoothing with a variable-width boxcar window. The width of the window
increased from 1 pixel at the center of the galaxy to 15 pixels at the slit
ends. Finally, after subtracting the smoothed sky images, the 4 galaxy
frames at each PA were coadded.
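The time-weighted sky combination above can be sketched as follows; the function name and sample values are illustrative.

```python
# Y = K [ a Y(t1) + (1 - a) Y(t2) ],  a = (t2 - t) / (t2 - t1),
# with K scaling the 600 s sky exposures to the 1800 s galaxy exposure.
def combine_sky(Y1, Y2, t1, t2, t, K=3.0):
    a = (t2 - t) / (t2 - t1)
    return [K * (a * y1 + (1.0 - a) * y2) for y1, y2 in zip(Y1, Y2)]

# Galaxy frame taken midway between the two skies: equal weights.
sky = combine_sky([1.0, 2.0], [3.0, 4.0], t1=0.0, t2=60.0, t=30.0)
assert sky == [6.0, 9.0]
```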
In parallel with the above procedure, we performed an alternative sky
subtraction in order to estimate the systematic error associated with
this part of the reduction. In the alternative method we simply
subtracted the sky exposure closest in time to each galaxy frame, scaled
up to the appropriate exposure time and boxcar-smoothed. These
``naive sky'' results will be discussed in Sec.\ \ref{s.systematic} below.
SSC worried extensively about the effect of scattered light on the
derived kinematics at large radii. Using their 2-D stellar spectra, they
constructed an approximate smoothed model of the scattered light
contribution and subtracted it from their coadded spectra of NGC 1700.
We have not attempted to do this here, for three reasons. First, we
found that the scattered light characteristics of the Red Channel had
changed significantly from 1993 to 1995, and could no longer be
modeled simply. Second, SSC had noted that the scattered-light correction
resulted in only tiny changes to their kinematic profiles, and that the
contribution to the systematic error budget was negligible
compared to those from sky subtraction and template mismatch. Finally,
NGC 3379 is much less centrally concentrated than NGC
1700, and therefore is much less prone to scattered-light
contamination since the galaxy is still fairly bright even at the ends of
the slit.
The 2-D galaxy spectra were binned into 1-D spectra with the bin width
chosen to preserve a signal-to-noise ratio $\gtrsim 50$ per pixel over most
of the slit length; $S/N$ decreases to around $30$ in the second-to-outermost
bins, and to $20$ in the last bins, which terminate at the end of the slit.
These last bins also suffer a slight degradation in focus, so that the
velocity dispersion
is likely to be overestimated there. The 1-D spectra were divided by
smooth continua fitted using moderate-order cubic splines.
Residual uncleaned cosmic rays and imperfectly subtracted sky lines
were replaced with linear interpolations plus Gaussian noise.
Spectra were tapered over the last 64 pixels at either end
and padded out to a length of 1300 pixels.
The velocity zero point was set using 5 spectra of the IAU radial velocity
standards HD 12029, HD 23169 (observed twice), HD 32963, and HD 114762.
The spectra were shifted to zero velocity and all 10 pairs were
cross-correlated as a consistency check. In only 3 cases were the derived
residual shifts greater than $0.01\mbox{${\rm\,km\,s}^{-1}$}$ and in no case were they greater
than $0.7\mbox{${\rm\,km\,s}^{-1}$}$. The velocities of the remaining 20 stars were then found by
averaging the results of cross-correlation against each of the standards,
and these stars were also shifted to zero velocity.
\subsection{LOSVD Extraction\label{s.extraction}}
Parametric extraction of the line-of-sight velocity distributions (LOSVDs)
was performed using
Statler's \markcite{Sta95}(1995) implementation of the cross-correlation
(XC) method, which follows from the relationship between the galaxy
spectrum $G(x)$, the observed spectral template $S(x)$, and the ``ideal
template'' $I(x)$---a zero-velocity composite spectrum of the actual mix of
stars in the galaxy. This relationship is given by
\begin{equation}\label{e.crosscorr}
G \circ S = (I \circ S) \otimes B,
\end{equation}
where $\otimes$ denotes convolution, $\circ$ denotes correlation,
and $B(x)$, the broadening function, is the LOSVD written as a
function of $v/c$. Since the ideal template is unknown, one
replaces $(I \circ S)$ with the template autocorrelation function
$A=S \circ S$, and then manipulates $B$ so that its convolution with $A$
fits the primary peak of the cross-correlation function
$X=G \circ S$. We adopted a Gauss-Hermite expansion for the LOSVD
(van der Marel \& Franx \markcite{vdMF93}1993):
\begin{equation}\label{e.losvd}
L(v) = {\gamma \over (2 \pi)^{1/2} \sigma}
\left[1 + h_3 {(2 w^3 - 3 w)\over 3^{1/2}} + h_4
{(4 w^4 - 12 w^2 + 3) \over 24^{1/2}} \right] e^{-w^2/2}, \qquad
w \equiv {v - V \over \sigma}.
\end{equation}
The expansion was truncated at $h_4$, and non-negativity of $L(v)$ was
enforced by cutting off the tails of the distribution beyond the first
zeros on either side of the center.
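The truncated Gauss-Hermite expansion above transcribes directly into code; the parameter values below are illustrative, and the tail truncation beyond the first zeros is omitted for brevity.

```python
import numpy as np

# Truncated Gauss-Hermite LOSVD (van der Marel & Franx 1993).
def losvd(v, V, sigma, h3, h4, gamma=1.0):
    w = (v - V) / sigma
    poly = (1.0 + h3 * (2*w**3 - 3*w) / np.sqrt(3.0)
                + h4 * (4*w**4 - 12*w**2 + 3) / np.sqrt(24.0))
    return gamma / (np.sqrt(2*np.pi) * sigma) * poly * np.exp(-w**2 / 2)

v = np.linspace(-1500.0, 1500.0, 3001)       # km/s grid (illustrative)
L = losvd(v, V=50.0, sigma=200.0, h3=0.05, h4=0.03)

# With h3 = h4 = 0 the profile reduces to a unit-area Gaussian.
L0 = losvd(v, V=50.0, sigma=200.0, h3=0.0, h4=0.0)
area = L0.sum() * (v[1] - v[0])
assert abs(area - 1.0) < 1e-3
```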
Because the XC method can be confused
by broad features in the spectra unrelated to Doppler broadening,
it was necessary to filter out low-frequency components before
cross-correlating. Our adopted filter was zero below a threshold wavenumber
$k_L$ (measured in inverse pixels), unity above $2k_L$, and joined by a
cosine taper in between. More conveniently we can quote the filter width in
Fourier-space pixels as a quantity $W_T=1300k_L$. Empirically we found our
results to be insensitive to $W_T$ over a range centered around
$W_T=15$, which value we adopted for all subsequent analysis.
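The low-cut filter can be written down explicitly; the cosine form of the taper below is one natural reading of the description, an assumption rather than the authors' exact code.

```python
import numpy as np

# Zero below k_L, unity above 2 k_L, cosine taper between;
# W_T = 1300 k_L is the filter width in Fourier-space pixels.
def lowcut_filter(k, WT=15.0, npix=1300):
    kL = WT / npix
    f = np.ones_like(k)
    f[k <= kL] = 0.0
    mid = (k > kL) & (k < 2 * kL)
    f[mid] = 0.5 * (1.0 - np.cos(np.pi * (k[mid] - kL) / kL))
    return f

k = np.linspace(0.0, 0.1, 1001)   # wavenumber grid, inverse pixels
f = lowcut_filter(k)
kL = 15.0 / 1300.0
assert f[0] == 0.0 and f[-1] == 1.0
assert abs(np.interp(1.5 * kL, k, f) - 0.5) < 0.01  # taper midpoint
```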
A non-parametric approach also rooted in equation (\ref{e.crosscorr})
is the Fourier Correlation Quotient method (Bender \markcite{Ben90}1990),
which operates in the Fourier domain. Denoting the Fourier
transform by $\tilde{\ }$, we have
\begin{equation}\label{e.fcq}
\tilde{B} = \tilde{X}/\tilde{A};
\end{equation}
thus $B$ can, in principle, be obtained directly. However, the FCQ method
requires that, to avoid amplifying noise, {\em high\/} frequency components
also be filtered out of the data. This is generally done using an optimal
filter, the construction of which is not an entirely objective procedure
when $S/N \lesssim 50$. We present results from the FCQ method in section
\ref{s.nonparametric}.
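In the noise-free limit the correlation quotient recovers the broadening function exactly, as the following synthetic example illustrates (no optimal filter is needed in the absence of noise; the spectra are random stand-ins, not real data).

```python
import numpy as np

# With G = I convolved with B and a perfect template S = I, the quotient
# B~ = X~/A~ of eq. (3) returns the broadening function exactly.
n = 256
rng = np.random.default_rng(0)
I = rng.standard_normal(n)                    # stand-in "ideal template"
B = np.zeros(n)
B[0:3] = [0.25, 0.5, 0.25]                    # simple broadening kernel
G = np.real(np.fft.ifft(np.fft.fft(I) * np.fft.fft(B)))  # broadened spectrum

Xf = np.fft.fft(G) * np.conj(np.fft.fft(I))   # transform of X = G o S
Af = np.fft.fft(I) * np.conj(np.fft.fft(I))   # transform of A = S o S
Brec = np.real(np.fft.ifft(Xf / Af))

assert np.allclose(Brec, B, atol=1e-6)
```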
\subsection{Composite Templates\label{s.templates}}
Sixteen stars with spectral types between G0 and M1 were available to be
used as templates. We first computed the kinematic profiles for the major
axis (PA 70) using all 16 templates, then set out to choose a set of 4
from which to construct composites. We found, as did SSC, that the
algebraic problem of fitting the galaxy spectrum with a set of very similar
stellar spectra becomes seriously ill-conditioned with more than 4 in
the library. Coefficients were calculated to optimize the fit to the
galaxy spectrum, using a random search of the parameter space as
described by SSC.
For the most part, kinematic profiles derived using different templates
had similar shapes but with different constant offsets, in agreement with
the results of SSC and others (e.g., Rix \& White
\markcite{RiW92}1992, van der Marel et al.\ \markcite{vdM94}1994).
A few templates could be discarded for giving wildly discrepant results.
In principle, a semi-objective criterion for choosing a
library ought to have been available from the requirement that the $h_3$
profile be antisymmetric across the center of the galaxy. However, every
template gave positive values of $h_3$ at $R=0$; we attribute this to
the well-documented discordancy between Mg and Fe line strengths in
ellipticals relative to population-synthesis models with solar Mg/Fe ratio
(Peletier \markcite{Pel89}1989, Gonzalez \markcite{Gon93}1993, Davies
\markcite{Dav96}1996 and references therein).
We therefore proceeded by trial and error, requiring that (1) weight be
distributed roughly evenly among the library spectra in the derived composites;
(2) the central $h_3$ values computed from the composites come out close to
zero; (3) the values of the line strength parameter $\gamma$ come out not
very far from unity; and (4) as wide a range as possible of spectral types and
metallicities be represented.
We found that acceptable composite templates could be constructed at all
positions in the galaxy, consistent with the above criteria, using the
following stars: HD 41636 (G9III), HD 145328 (K0III-IV), HD 132142 (K1V),
and HD 10380 (K3III). We constructed a separate composite at each radius and
position angle. The coefficients of the individual spectra varied, for the most
part, smoothly with radius, from average central values of
$(0.05,0.1,0.4,0.45)$ to roughly $(0.2,0.15,0.4,0.25)$
at large radii. The point-to-point scatter in the coefficients
exceeded $0.1$ outside of about $10\arcsec$ and $0.2$ beyond $30\arcsec$.
However, we saw no indication of this scatter inducing any systematic
effects in the kinematic results beyond those discussed in the next section.
All of the results presented in this paper use the template stars listed above;
however, the analysis was also carried through using an earlier,
unsatisfactory library in order to estimate the systematic error from residual
template mismatch in the composites.
\begin{figure}[t]{\hfill\epsfxsize=3.7in\epsfbox{systemplate.eps}\hfill}
\caption{\footnotesize
Systematic error due to template mismatch. Differences
in the kinematic parameters obtained using two different composite templates
are plotted against radius. Plotted points are the RMS over the four
position angles. Smooth curves indicate fitting functions given in
equation (\protect{\ref{e.systemplate}}).
\label {f.systemplate}}
\end{figure}
\begin{figure}[t]{\hfill\epsfxsize=3.7in\epsfbox{syssky.eps}\hfill}
\caption{\footnotesize
Systematic error due to sky subtraction. Differences in
the results obtained using two different sky subtractions are plotted
against radius, for PA 70 only. Smooth curves indicate the fitting
functions given in equation (\protect{\ref{e.syssky}}).
\label {f.syssky}}
\end{figure}
\begin{figure}[t]{\hfill\epsfxsize=4.1in\epsfbox{pa70.eps}\hfill}
\caption{\footnotesize
Kinematic profiles for NGC 3379. $V$, $\sigma$, $h_3$, and $h_4$ are
the parameters in the truncated Gauss-Hermite expansion for the line-of-sight
velocity distribution, equation (\protect{\ref{e.losvd}}). (a) PA 70
(major axis).
\label {f.kinematicprofiles}}
\end{figure}
\subsection{Systematic Errors\label{s.systematic}}
Formal uncertainties on the results presented in Sec.\ \ref{s.results}
below are obtained
from the covariance matrix returned by the XC algorithm. But
we also need to estimate the dominant systematic
errors, associated with sky subtraction and
template mismatch. To accomplish this, we carried out parallel reductions of
the data using the ``naive sky'' subtraction described in Sec.
\ref{s.initial}, and using composite templates generated from a different
set of library spectra.
Figure \ref{f.systemplate} shows the differences in the
kinematic parameters obtained using the different library spectra
for the composite templates, plotted against radius. The plotted points
give the root-mean-square differences, with the mean taken over the four
position angles. We have fitted these data by eye with the following
functions:\\
\parbox{5.9in}{
\begin{eqnarray*}
\Delta V_{\rm rms} &=& 0.037 |R| + 0.70, \\
\Delta \sigma_{\rm rms} &=& 0.0011 R^2 + 1.5, \\
\Delta h_{3,\rm rms} &=& 7.4 \times 10^{-6} R^2 + 0.008, \\
\Delta h_{4,\rm rms} &=& 2.0 \times 10^{-5} R^2 + 0.007,
\end{eqnarray*}}\hfil
\parbox{.3in}{
\begin{eqnarray}
\label{e.systemplate}
\end{eqnarray}}\\
where $\Delta V_{\rm rms}$ and $\Delta \sigma_{\rm rms}$ are given in
$\mbox{${\rm\,km\,s}^{-1}$}$ and $R$ is in arcseconds. These fits are plotted as the smooth
curves in Figure \ref{f.systemplate}.
The corresponding differences between the adopted sky subtraction and
the ``naive sky'' approach are shown in Figure \ref{f.syssky}. Here the
analysis has been repeated only for PA 70, so there is no averaging over
position angle. The smooth curves show the fitting functions, given by
\parbox{5.9in}{
\begin{eqnarray*}
\Delta V &=& 0.028 |R| + 0.10, \\
\Delta \sigma &=& 0.033 |R| + 0.08, \\
\Delta h_3 &=& 7.4 \times 10^{-6} R^2 + 0.0006, \\
\Delta h_4 &=& 9.9 \times 10^{-6} R^2 + 0.0004.
\end{eqnarray*}}\hfil
\parbox{.3in}{
\begin{eqnarray}
\label{e.syssky}
\end{eqnarray}}\\
Comparison of the figures shows that template mismatch dominates
sky subtraction in the systematic error budget by more than an order
of magnitude in the bright center of the galaxy, but by only factors
of order unity at the slit ends.
The final error bars given in Table 1 and the figures represent the
formal internal errors from the XC code added in quadrature with the
contributions from equations (\ref{e.systemplate}) and (\ref{e.syssky}).
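For concreteness, this error combination can be sketched as follows. This is an illustrative reconstruction rather than the actual reduction code: the function names are ours, and the fitting functions of equations (\ref{e.systemplate}) and (\ref{e.syssky}) are hard-coded ($V$ and $\sigma$ errors in $\rm km\,s^{-1}$, $R$ in arcseconds, $h_3$ and $h_4$ dimensionless).

```python
import math

def sys_template(R):
    """Template-mismatch systematics, eq. (e.systemplate); R in arcsec."""
    return {"V": 0.037 * abs(R) + 0.70,
            "sigma": 0.0011 * R**2 + 1.5,
            "h3": 7.4e-6 * R**2 + 0.008,
            "h4": 2.0e-5 * R**2 + 0.007}

def sys_sky(R):
    """Sky-subtraction systematics, eq. (e.syssky); R in arcsec."""
    return {"V": 0.028 * abs(R) + 0.10,
            "sigma": 0.033 * abs(R) + 0.08,
            "h3": 7.4e-6 * R**2 + 0.0006,
            "h4": 9.9e-6 * R**2 + 0.0004}

def total_error(R, formal):
    """Formal (XC) error added in quadrature with both systematic terms."""
    t, s = sys_template(R), sys_sky(R)
    return {k: math.hypot(formal[k], math.hypot(t[k], s[k])) for k in formal}
```

At the center, the template term dominates the sky term by more than an order of magnitude, as noted above.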
\section{Results\label{s.results}}
\subsection{Parametric Profiles\label{s.parametric}}
Kinematic profiles along the four sampled PAs are shown in Figure
\ref{f.kinematicprofiles}a--d. For each PA, we plot the
Gauss-Hermite parameters $V$, $\sigma$, $h_3$, and $h_4$, which are also
listed along with their uncertainties in columns 2 -- 9 of Table 1. Remember
that only when $h_3=h_4=0$ are $V$ and $\sigma$ equal to
the true mean and dispersion, $\langle v \rangle$ and
$(\langle v^2 \rangle - \langle v \rangle^2)^{1/2}$; we will recover the
latter quantities in Sec.\ \ref{s.corrected}. In the plotted
rotation curves, we have subtracted a systemic velocity $V_{\rm sys}
= 911.9 \pm 0.2 \mbox{${\rm\,km\,s}^{-1}$}$, which has been determined
from pure Gaussian fits to the broadening functions,
averaging pairs of points in the resulting $V$ profiles on opposite
sides of the center.
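Equation (\ref{e.losvd}) itself appears earlier in the paper and is not reproduced here; assuming the standard van der Marel \& Franx (1993) normalization of the Gauss-Hermite terms, a minimal sketch of the parametrized LOSVD is:

```python
import math

def losvd(v, V, sigma, h3, h4):
    """Truncated Gauss-Hermite LOSVD: a Gaussian in w = (v - V)/sigma
    modulated by the h3 (skewness-like) and h4 (kurtosis-like) terms,
    using the van der Marel & Franx (1993) normalization."""
    w = (v - V) / sigma
    H3 = (2.0 * math.sqrt(2.0) * w**3 - 3.0 * math.sqrt(2.0) * w) / math.sqrt(6.0)
    H4 = (4.0 * w**4 - 12.0 * w**2 + 3.0) / math.sqrt(24.0)
    gauss = math.exp(-0.5 * w**2) / (sigma * math.sqrt(2.0 * math.pi))
    return gauss * (1.0 + h3 * H3 + h4 * H4)
```

With $h_3 = h_4 = 0$ this reduces to a pure Gaussian; a positive $h_4$ raises the peak and the tails relative to the Gaussian, the "peaky" shape described below.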
\addtocounter{figure}{-1}
\begin{figure}[t]{\hfill\epsfxsize=4.1in\epsfbox{pa340.eps}\hfill}
\caption{\footnotesize
(b) As in (a), but for PA 340 (minor axis).}
\end{figure}
\addtocounter{figure}{-1}
\begin{figure}[t]{\hfill\epsfxsize=4.1in\epsfbox{pa25.eps}\hfill}
\caption{\footnotesize
(c) As in (a), but for PA 25.}
\end{figure}
\addtocounter{figure}{-1}
\begin{figure}[t]{\hfill\epsfxsize=4.1in\epsfbox{pa115.eps}\hfill}
\caption{\footnotesize
(d) As in (a), but for PA 115.}
\end{figure}
The issue of possible minor-axis rotation is settled fairly clearly
by Figure \ref{f.kinematicprofiles}b. PA 340 shows only very weak rotation,
at about the $3\mbox{${\rm\,km\,s}^{-1}$}$ level, inside $12\arcsec$. Doubling the radial
bin size and folding about the origin to improve $S/N$, we find no detectable
rotation above $6\mbox{${\rm\,km\,s}^{-1}$}$ from $12\arcsec$ to $50\arcsec$ or above $16\mbox{${\rm\,km\,s}^{-1}$}$ out
to $90\arcsec$ (95\% confidence limits). The maximum rotation speed of
approximately $60 \mbox{${\rm\,km\,s}^{-1}$}$ is found on the major axis, and intermediate
speeds are found on the diagonal PAs.
The most striking features of the kinematic profiles, however, are
the sharp bends in the major-axis rotation curve at $4\arcsec$ and
$17\arcsec$, and the comparably sharp inflections near $15\arcsec$ in all
of the $\sigma$ profiles. These kinks are invisible in the earlier data,
which have insufficient spatial resolution and kinematic accuracy to
reveal them (cf.\ Fig. \ref{f.otherdata} below).
Careful inspection of the $h_3$ and $h_4$ profiles suggests features
coincident with the kinks in
$V$ and $\sigma$, though this is difficult to see because the Gauss-Hermite
terms have proportionally larger
error bars. To improve the statistics, we have combined the data in Figure
\ref{f.kinematicprofiles} to create composite, azimuthally averaged
radial profiles. The mean $V$ profile is scaled to the major axis amplitude
by multiplying the PA 25 and PA 115 data by factors of $1.65$ and $1.41$,
respectively, before folding (antisymmetrizing) about the center and averaging;
the minor axis (PA 340) data are omitted from
the composite $V$ and $h_3$ profiles. Since we see no significant
differences with PA in the $\sigma$ and $h_4$ profiles, for these we simply
symmetrize and average all four PAs with no scaling.
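The folding-and-scaling procedure can be sketched as follows; this is an illustrative reconstruction (function and argument names are ours), with the scale factors quoted above ($1.65$ for PA 25, $1.41$ for PA 115) supplied by the caller.

```python
import numpy as np

def composite_profile(R, prof, scale=1.0, antisymmetric=True):
    """Fold a single-PA profile about the center: scale it, then average
    each point at radius r with its counterpart at -r, flipping the sign
    for antisymmetric quantities (V, h3) and not for symmetric ones
    (sigma, h4). Points with no sampled counterpart are kept as-is."""
    R = np.asarray(R, float)
    prof = scale * np.asarray(prof, float)
    sign = -1.0 if antisymmetric else 1.0
    folded = []
    for r, p in zip(R, prof):
        if r < 0:
            continue
        j = np.where(np.isclose(R, -r))[0]   # counterpart at -r, if sampled
        mate = sign * prof[j[0]] if len(j) else p
        folded.append((float(r), 0.5 * (p + mate)))
    return folded
```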
The resulting radial profiles are shown in Figure \ref{f.radialprofiles}.
The shapes of the $V$ and $\sigma$ profiles are clarified, particularly
the almost piecewise-linear form of the rotation curve and the sudden
transition in $\sigma(R)$ near $15\arcsec$. We also see a small bump
at $13\arcsec$ in the $h_3$ profile. $V$ and $h_3$ are of opposite sign
at all radii, consistent with the usual sense of skewness. The $h_4$
profile shows a clear positive gradient out to $18\arcsec$, where it
turns over, then gradually increases again beyond about $35\arcsec$.
Positive $h_4$ indicates an LOSVD that is more ``peaky'' and has longer
tails than a Gaussian.
The change of sign of $h_4$ in the inner $7\arcsec$ or so should not be
taken too literally, since a constant offset in $h_4$ is an expected artifact
of template mismatch.
\begin{figure}[t]{\hfill\epsfxsize=3.7in\epsfbox{radialprofiles.eps}\hfill}
\caption{\footnotesize
Composite radial kinematic profiles derived from all four slit PAs, as
described in Sec.\ \protect{\ref{s.parametric}}.
\label{f.radialprofiles}}
\end{figure}
The clustering of interesting kinematic features in the region from
$13\arcsec$ to $20\arcsec$ is intriguing. In order of increasing $R$, we see
a drop in the skewness of the LOSVD, an abrupt flattening in the dispersion
and rotation curves, and a local maximum in the LOSVD ``peakiness.''
This is, moreover, a photometrically interesting region. Capaccioli et
al.\ \markcite{Cap90}(1990) find residuals from the best-fitting $r^{1/4}$ law
as large as $0.2$ mag; the logarithmic slope of the $B$-band surface brightness
profile peaks at $18\arcsec$. Evidently, this range of radii marks a very
important transition in the galaxy.
\subsection{Corrected Mean Velocities and Dispersions\label{s.corrected}}
For dynamical models based on the low-order moment (continuity and Jeans)
equations, it is important to have the true mean and dispersion, rather than
the Gauss-Hermite parameters $V$ and $\sigma$. We can calculate these
quantities and their associated errors using equations (5)--(7)
of SSC, which are based on the treatment of van der Marel \& Franx
\markcite{vdMF93}(1993).
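SSC's equations (5)--(7) are not reproduced here; to first order in $h_3$ and $h_4$, however, the van der Marel \& Franx (1993) corrections reduce to $\langle v \rangle \approx V + \sqrt{3}\,\sigma h_3$ and $\sigma_{\rm true} \approx \sigma\,(1 + \sqrt{6}\,h_4)$. The sketch below implements this first-order approximation, not the exact SSC expressions:

```python
import math

def corrected_moments(V, sigma, h3, h4):
    """First-order Gauss-Hermite corrections to the Gaussian-fit mean and
    dispersion (van der Marel & Franx 1993); higher-order terms neglected."""
    v_mean = V + math.sqrt(3.0) * sigma * h3
    dispersion = sigma * (1.0 + math.sqrt(6.0) * h4)
    return v_mean, dispersion
```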
First, however, we must determine whether including
the $h_3$ and $h_4$ terms actually results in a statistically significant
improvement to the estimate of the moments $\langle v \rangle$ and
$\langle v^2 \rangle$ from the data. We found above that $h_3$ and $h_4$
are generally small, and since their error bars grow with radius, it
is not obvious {\em a priori\/} that correcting $\langle v \rangle$ and
$\langle v^2 \rangle$ for these terms---and increasing the error
bars accordingly---will necessarily give more robust estimates than
simply assuming a Gaussian LOSVD. Therefore, we examine the distribution
of chi-square values, $\chi^2_3$ and $\chi^2_5$, obtained from, respectively,
three-parameter (Gaussian) and five-parameter (Gauss-Hermite) fits to the
broadening functions. We find that the differences $\Delta\chi^2 = \chi^2_3 -
\chi^2_5$ are significant only for $R < 4\arcsec$. For the rest of the
data, the distribution $F(\chi^2_3)$ is completely consistent with a
chi-square distribution with the appropriate number of degrees of
freedom, if our original estimates for the noise in the galaxy spectra
are scaled up by a factor of $1.17$. The noise in each
spectrum is estimated by differencing the spectrum with a smoothed
version of itself; and it is certainly believable that this procedure
could underestimate the actual noise level by 17\%. All of
the results in this paper have been computed including this correction
to the noise.
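The noise-estimation step can be sketched as follows. The boxcar smoothing width is an assumed, illustrative choice (the actual smoothing used in the reduction is not specified here), and the factor of $1.17$ is the empirical rescaling derived above from the $\chi^2_3$ distribution:

```python
import numpy as np

def estimate_noise(spectrum, width=9):
    """Estimate the per-pixel noise by differencing a spectrum with a
    smoothed (boxcar) version of itself, then apply the factor of 1.17
    by which this procedure was found to underestimate the true noise."""
    kernel = np.ones(width) / width
    smooth = np.convolve(spectrum, kernel, mode="same")
    resid = spectrum - smooth
    return 1.17 * resid.std()
```

Note that for pure white noise the residual of a boxcar of width $w$ has variance $(w-1)/w$ times the true variance, one reason a correction factor of this kind is plausible.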
The adopted mean velocity and dispersion profiles are shown in Figure
\ref{f.correctedprofiles}a--b and listed in the last 4 columns of Table
1. For $R<4\arcsec$, we use the
results of the Gauss-Hermite fits, corrected for $h_3$ and $h_4$. To
avoid propagating the residual effects of template mismatch,
we have applied a constant offset to the $h_3$ profile on each PA so as to
shift the central value to zero.
For larger radii we adopt the $V$ and $\sigma$ values from pure-Gaussian
fits. We are not saying that the LOSVD {\em is\/} Gaussian
beyond $4\arcsec$, merely that the most reliable estimates of the mean and
dispersion come from the Gaussian fit. Since the corrections are all
small, the corrected rotation curves resemble the $V$ profiles in Figure
\ref{f.kinematicprofiles}, including the very weak minor-axis rotation,
the sharp kinks in the major-axis profile, and a slightly higher
rotation speed on PA 115 diagonal than on PA 25. The $h_4$ corrections
to the dispersion flatten out the central gradient slightly and have
little effect on the rest of the profiles.
\begin{figure}[t]{\hfill\epsfxsize=3.7in\epsfbox{vcorrected.eps}\hfill}
\caption{\footnotesize
(a) Mean velocity profiles, corrected for the
non-Gaussian terms in the LOSVD as described in
Sec.\ \protect{\ref{s.corrected}}.
\label{f.correctedprofiles}}
\end{figure}
\addtocounter{figure}{-1}
\begin{figure}[t]{\hfill\epsfxsize=3.7in\epsfbox{dispcorrected.eps}\hfill}
\caption{\footnotesize
(b) Velocity dispersion profiles, as in (a).}
\end{figure}
\subsection{Reconstructed Two-Dimensional Fields\label{s.twod}}
With multiple-PA sampling, we can create Fourier reconstructions
of the two-dimensional kinematic fields from the profiles in Figure
\ref{f.correctedprofiles}. Our $45\arcdeg$ spacing lends
itself to a representation of the form
\begin{equation}
f(R,\theta) = C_0 + \sum_{m=1}^4 \left(C_m \cos m \theta
+ S_m \sin m \theta\right),
\end{equation}
where the coefficients are all functions of $R$, and $S_4\equiv 0$ if we let
one of our sampled PAs correspond to
$\theta=0$. An explicit expression for the reconstructed velocity field in
terms of the measured velocities is given in equation (9) of SSC; the
corresponding expression for the dispersion field has the same form since no
particular symmetry is assumed. To reduce the noise in the 2-D fields, we
interpolate and smooth the 1-D profiles using a smoothing spline
(Green \& Silverman \markcite{GrS94}1994) before computing the reconstructions.
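With equally spaced samples the coefficients follow directly from a discrete Fourier transform. The sketch below (our own illustration, not the SSC expression) assumes one kinematic quantity sampled at the eight angles $\theta = 0\arcdeg, 45\arcdeg, \ldots, 315\arcdeg$ (four slits, two sides each) at a single radius; $S_4$ is identically zero because $\sin 4\theta$ vanishes at every sampled angle.

```python
import numpy as np

def fourier_field(values):
    """Given samples at theta = 0, 45, ..., 315 deg, return the Fourier
    coefficients (C0, C[1..4], S[1..3]); m = 4 is the Nyquist harmonic."""
    v = np.asarray(values, float)
    theta = np.deg2rad(45.0 * np.arange(8))
    C0 = v.mean()
    C = [2.0 * np.mean(v * np.cos(m * theta)) for m in range(1, 4)]
    C.append(np.mean(v * np.cos(4 * theta)))   # Nyquist term: no factor 2
    S = [2.0 * np.mean(v * np.sin(m * theta)) for m in range(1, 4)]
    return C0, C, S

def reconstruct(C0, C, S, theta):
    """Evaluate f = C0 + sum_m (C_m cos m*theta + S_m sin m*theta)."""
    f = C0
    for m in range(1, 5):
        f = f + C[m - 1] * np.cos(m * theta)
        if m < 4:
            f = f + S[m - 1] * np.sin(m * theta)
    return f
```

The reconstruction is exact at the sampled angles and interpolates smoothly between them.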
The resulting velocity and dispersion fields are shown in Figure
\ref{f.vfield}a--b. The plotted region is $56\arcsec$ in radius, which
omits only the outermost points on each PA. Black ellipses show
representative isophotes, as fitted by Peletier et al.\ \markcite{Pel90}(1990);
we have drawn the principal axes for two of the isophotes to
indicate the modest photometric twist in the galaxy. In Figure
\ref{f.vfield}a, note the rotation of the kinematic major axis away from the
photometric major axis for $R \gtrsim 30\arcsec$. This rotation is due
in roughly equal measure to the $4\arcdeg$ isophotal twist and to a $\sim
5\arcdeg$ kinematic twist of the velocity field in the opposite direction.
Figure \ref{f.vfield}b
nicely illustrates the steep central rise in the dispersion, as well
as the quite flat profile outside of $15\arcsec.$ The odd structure with
apparent 3-fold symmetry is almost certainly not real; however, the
azimuthally averaged profile does show a very weak ``hump'' near
$20\arcsec$, which would be consistent with a ring of slightly higher
dispersion at around this radius.
\begin{figure}[t]{\hfill\epsfxsize=3.0in\epsfbox{vfield.eps}\hfill}
\caption{\footnotesize
(a) Fourier reconstruction of the mean velocity field.
Ellipses show isophotes from Peletier et al.\ (1990); major and minor axes
drawn for two isophotes indicate the magnitude of the isophotal twist. The
plotted region is $56\arcsec$ in radius.
Notice the twist of the kinematic major axis (line joining the extreme
velocities at each radius) in the direction opposite to the isophotal
twist.
\label{f.vfield}}
\end{figure}
\addtocounter{figure}{-1}
\begin{figure}[t]{\hfill\epsfxsize=3.0in\epsfbox{sfield.eps}\hfill}
\caption{\footnotesize
(b) Fourier reconstruction of the velocity dispersion field, as in (a).}
\end{figure}
\subsection{Non-Parametric LOSVDs\label{s.nonparametric}}
Non-parametric LOSVDs derived by the FCQ method are plotted in Figure
\ref{f.fcqplot}a--b, at the appropriate positions on the sky. Representative
isophotes are shown for orientation; in each little profile, the vertical line
marks the systemic velocity. Consistent with the results of Sec.
\ref{s.parametric}, one can see that nowhere is the LOSVD strongly
non-Gaussian. Careful inspection, however, does show a very modest
skewness in the usual sense along the inner major axis, and a
tendency for the LOSVD to be slightly sharper-peaked at large radii.
\begin{figure}[t]{\hfill\epsfxsize=3.0in\epsfbox{fcqplot_a.eps}\hfil\epsfxsize=3.0in\epsfbox{fcqplot_b.eps}\hfill}
\caption{\footnotesize
LOSVDs obtained by Fourier Correlation Quotient method, plotted at the sampled
positions on the sky. Vertical lines indicate the systemic velocity.
Representative isophotes are shown for orientation. (a) Inner region; (b)
outer region. Square in (b) shows the area plotted in (a).
\label{f.fcqplot}}
\end{figure}
\pagebreak
\section{Discussion\label{s.discussion}}
\nopagebreak
\subsection{Comparison with Previous Work}
Kinematic data for NGC 3379 have been published previously by
Sargent et al. \markcite{Sar78}(1978), Davies \markcite{Dav81}(1981),
Davies \& Illingworth \markcite{DaI83}(1983), Davies \& Birkinshaw
\markcite{DaB88}(1988), Franx et al.\ \markcite{FIH89}(1989), and
Bender, Saglia, \& Gerhard
\markcite{BSG94}(1994). Major axis $V$ and $\sigma$ profiles from all but
the first of these studies are plotted in Figure \ref{f.otherdata}.
Comparison with the top two panels of Fig.\ \ref{f.kinematicprofiles}a
shows that the present data are largely consistent with the earlier results,
but reveal structure that could not be seen in the earlier data. With the
benefit of hindsight, one can discern a change of slope in
$\sigma(R)$ near $20\arcsec$, but this feature is quite murky except in
the Bender et al.\ data. A steep
decline in dispersion is noted by Davies \markcite{Dav81}(1981), though
his mean dispersion of $114\mbox{${\rm\,km\,s}^{-1}$}$ outside of $15\arcsec$ is not reproduced
in the later work. Davies \& Illingworth \markcite{DaI83}(1983)
conclude that the overall gradient is consistent with constant $M/L$.
Their data also show a shallow hump in $\sigma(R)$ beyond $20\arcsec$,
but at no more than $1\sigma$ significance. Bender et
al.\ \markcite{BSG94}(1994) obtain a $\sigma$ profile with a local minimum
of approximately $185\mbox{${\rm\,km\,s}^{-1}$}$ near $10\arcsec$, rising again to about
$200\mbox{${\rm\,km\,s}^{-1}$}$ at $27\arcsec$, their outermost data point.
This is somewhat inconsistent with our results, though not alarmingly so.
Their dispersions seem to be systematically $\sim 20\mbox{${\rm\,km\,s}^{-1}$}$ higher than ours,
which could easily be caused by their use of a single template star not matched
to the galactic spectrum.
They detect a sharp bend in the rotation curve at $4\arcsec$, but do not
have fine enough sampling to see a second bend farther out.
Bender et al.\ also derive $h_3$ and $h_4$, obtaining a
generally featureless $h_3$ profile with $\langle h_3 \rangle \approx -0.02$
on the positive-velocity side, and a weak positive gradient in $h_4$.
This again is consistent with our results, though at significantly
coarser resolution.
\begin{figure}[t]{\hfill\epsfxsize=5.0in\epsfbox{otherdata.eps}\hfill}
\caption{\footnotesize
Major-axis (or near-major-axis) $V$ and $\sigma$ profiles from previous
authors, for comparison with the top two panels of Fig.\
\protect{\ref{f.kinematicprofiles}}a. Error bars have been omitted for clarity.
\label{f.otherdata}}
\end{figure}
The minor axis velocity data from Davies \& Birkinshaw \markcite{DaB88}(1988)
and Franx et al.\ \markcite{FIH89}(1989) are compiled in Figure 4d of
Statler \markcite{Sta94}(1994). Those data show a scatter of $18\mbox{${\rm\,km\,s}^{-1}$}$ about
a mean of $3\mbox{${\rm\,km\,s}^{-1}$}$, with a possible increase in rotation beyond $30\arcsec$.
As discussed
in Sec. \ref{s.parametric}, we obtain 95\% confidence upper limits
of $6\mbox{${\rm\,km\,s}^{-1}$}$ for $12\arcsec < R < 50\arcsec$ and $16\mbox{${\rm\,km\,s}^{-1}$}$ for $50\arcsec < R <
90\arcsec$, with a marginal detection of $\sim 3\mbox{${\rm\,km\,s}^{-1}$}$ rotation on PA 340
interior to $12\arcsec$.
\subsection{Connection with Planetary Nebulae}
Ciardullo, Jacoby, \& Dejonghe \markcite{CJD93}(1993) have measured the
radial velocities of 29 planetary nebulae in NGC 3379, at radii between
$21\arcsec$ and $209\arcsec$. They find no evident rotation in the PN
population, but a clear signature of a negative radial gradient in the
RMS velocity. Breaking their sample into 3 radial bins, they obtain the
dispersion profile plotted as the large diamonds in Figure
\ref{f.pnebulae}. To compare with these data, we compute the RMS
velocity profile for the integrated starlight by computing composite
radial profiles for the Hermite-corrected mean velocity and dispersion
profiles, and adding them in quadrature. We make no correction to the
mean velocity for any assumed inclination. The results are plotted as the
squares in Figure \ref{f.pnebulae}. To within the errors, the profiles
join smoothly. (The upward jump in our last data point is not
statistically significant.) While it remains somewhat puzzling that the
PNe should show no rotation, at least from the dispersion profile it
would seem that they are representative of the general stellar
population at smaller radii.
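The quadrature combination, with first-order error propagation (our own sketch, not a transcription of the actual reduction), is simply:

```python
import math

def v_rms(v_mean, sigma, dv=0.0, dsig=0.0):
    """Line-of-sight RMS velocity: mean rotation and dispersion added in
    quadrature, with the propagated uncertainty. Assumes v_rms > 0."""
    vr = math.hypot(v_mean, sigma)
    err = math.sqrt((v_mean * dv) ** 2 + (sigma * dsig) ** 2) / vr
    return vr, err
```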
\begin{figure}[t]{\hfill\epsfxsize=3.0in\epsfbox{pnebulae.eps}\hfill}
\caption{\footnotesize
Kinematics at small and large radii. The composite line-of-sight RMS velocity
profile of the integrated stellar light ({\em squares\/}) and the velocity
dispersion profile of planetary nebulae measured by Ciardullo et al.\ (1993)
({\em diamonds\/}) join up smoothly, to within the errors.
\label{f.pnebulae}}
\end{figure}
\nopagebreak
\subsection{Implications for Dynamics and Structure}
The double-humped RMS velocity profile plotted in Fig. \ref{f.pnebulae}
provokes a strong impression of a two-component system. At the very
least, the sharp bends in the rotation curve (Fig.\ \ref{f.radialprofiles},
top) indicate that there are special radii within the galaxy where
abrupt---though perhaps subtle---changes in dynamical structure occur. This
sort of behavior is not generally thought of as being characteristic of
elliptical galaxies. But it has, in fact, been seen before in S0's,
most notably in NGC 3115. \markcite{Cap91}CVHL have argued that these
two systems share enough photometric characteristics that they could be near
twins, seen in different orientations. Here we show that they share a
kinematic kinship as well.
In Figure \ref{f.compare} we plot the major axis rotation curve of NGC
3115 from Fisher \markcite{Fis97}(1997), along with that of NGC 3379
on the same linear scale.
Both show a sharp inner kink $\sim 200\mbox{$\rm\,pc$}$ from the center
and an outer kink in the rough vicinity of $1\mbox{$\rm\,kpc$}$, outside of which the
rotation curve is basically flat. The similarity is striking, even
though the locations of the bends do not match exactly. NGC 3115's outer kink
coincides almost exactly with a photometric bump in the major axis
$B$-band brightness profile (Capaccioli et al.\ \markcite{CHN87}1987)
evidently related to structure in the disk. The dispersion also appears
to level off at about this radius (Bender et al.\ \markcite{BSG94}1994),
although the transition does not seem especially sharp.
\begin{figure}[t]{\hfill\epsfxsize=5.0in\epsfbox{compare3115.eps}\hfill}
\caption{\footnotesize
Major axis rotation curves of NGC 3379 and NGC 3115 (Fisher 1997)
plotted on the same linear scale. Both galaxies have nearly
piecewise-linear rotation curves, with sharp bends near $0.2\mbox{$\rm\,kpc$}$ and
$1\mbox{$\rm\,kpc$}$.
\label{f.compare}}
\end{figure}
In addition, there are similarities in the $h_3$ profiles. Fisher
\markcite{Fis97}(1997) finds a bump in $h_3$ at around $R=5\arcsec$
in NGC 3115, similar to the feature we see at $R=13\arcsec$ in NGC 3379.
Since they appear at rather different places relative to the kinks in the
rotation curves, these small bumps may be unrelated; however,
the correlation between $h_3$ and {\em local\/} $v/\sigma$ hints that
there may be a more subtle connection.
Fisher plots $h_3$ against $v/\sigma$ at the same projected radius for
a sample of 20 S0's, and finds that 10 show a distinctive N-shaped
profile through the center.
In 9 of those, $h_3$ changes sign in the legs of the N, reversing the
usual sense of skewness. This is quite different from ellipticals
(Bender et al.\ \markcite{BSG94}1994), which tend to show only a monotonic
anticorrelation (i.e., only the middle segment of the N). In NGC 3115,
$h_3$ does not change sign more than once, but the profile turns around again
past either end of the N-shaped segment. We plot the $h_3$ {\em vs.\/}
$v/\sigma$ profile for NGC 3115 as the dashed line in Figure
\ref{f.fisherplot}. We have taken the antisymmetric part to reduce the
noise, and plotted only the positive-velocity half, so that one sees
only the right half of the N, and the outer part ($v/\sigma >0.8$) where
the profile turns over.
\begin{figure}[t]{\hfill\epsfxsize=3.0in\epsfbox{fisherplot.eps}\hfill}
\caption{\footnotesize
Correlation between $h_3$ and local $v/\sigma$ at the same projected
radius, for NGC 3379 and NGC 3115. Curves have been folded
(antisymmetrized) about the center, and
the curve for NGC 3379 has been scaled to the central dispersion and
maximum rotation speed of NGC 3115. The small peaks in the $h_3$ profiles
occur at the same value of scaled $v/\sigma$.
\label{f.fisherplot}}
\end{figure}
To test whether NGC 3379 might plausibly be a scaled and reoriented
copy of NGC 3115, we have derived the corresponding curve for NGC 3379
from the composite radial profiles plotted in Fig. \ref{f.radialprofiles}.
We scale $\sigma$ up by a factor of $1.3$ so that the central dispersion
matches that of NGC 3115, and scale $v$ up by a factor of $4.3$ to match
the maximum speed in the flat part of NGC 3115's rotation curve. The
result is plotted as the solid line in Figure \ref{f.fisherplot}. In
terms of the scaled $v/\sigma$, the $h_3$ bump occurs in the same
place in the two galaxies.
Does this rather arbitrary scaling of $v$ and $\sigma$ correspond to
a sensible geometry? If, for simplicity, we assume an isotropic dispersion
tensor, so that the line-of-sight $\sigma$ is (at least to lowest order)
unaffected by orientation, the above scaling would require a trigonometric
factor $1.3/4.3 = \sin 18\arcdeg$ to dilute the rotation speed to the observed
value. At an inclination of $18\arcdeg$, an intrinsically E6 or E7 oblate
galaxy would be seen to have an ellipticity of $0.04$. This is a bit
rounder than the actual ellipticity of NGC 3379, which increases outward
from about $0.08$ to $0.13$ over the range of radii spanned by our data
(Peletier et al. \markcite{Pel90}1990). But the difference in apparent
shape could, in principle, be made up by a small triaxiality, so this low
an inclination is not entirely out of the question.
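The numbers quoted above can be checked with the standard projection formula for an oblate spheroid, whose apparent axis ratio at inclination $i$ (with $i = 90\arcdeg$ edge-on) satisfies $q_{\rm app}^2 = \cos^2 i + q^2 \sin^2 i$:

```python
import math

def apparent_ellipticity(q_true, incl_deg):
    """Apparent ellipticity of an oblate spheroid with intrinsic axis
    ratio q_true, viewed at inclination incl_deg (90 deg = edge-on)."""
    i = math.radians(incl_deg)
    q_app = math.sqrt(math.cos(i) ** 2 + (q_true * math.sin(i)) ** 2)
    return 1.0 - q_app

# Inclination implied by the velocity/dispersion scaling adopted above:
incl = math.degrees(math.asin(1.3 / 4.3))   # approximately 18 degrees
eps = apparent_ellipticity(0.4, incl)       # E6 (q = 0.4): roughly 0.04
```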
We would not go so far as to argue that the similarity in the $h_3$ {\em
vs.\/} $v/\sigma$ curves marks NGC 3379 as a twin of NGC 3115, or, for
that matter, as an S0 at all. There is no particular theoretical reason
to expect a bump in $h_3$ at $v/\sigma\approx 1$, no dynamical model that
predicts such a feature, and no indication that it is even present in most
S0's. But we can turn the argument around, and say that {\em if\/} it is
determined by other means that NGC 3379 is a low-inclination S0, then we
will have reason to ask what common aspect of the dynamical structure of
these two galaxies creates similar features in $h_3$ in corresponding
locations.
Heuristic arguments such as these, however, are no substitute for
dynamical modeling, which is the only proper way to determine
the true shape and dynamical structure of NGC 3379. While we leave a full
treatment of this issue to a future paper, some general discussion is
worthwhile. To lowest order, the mean velocity field of NGC 3379 is
characteristic of an oblate axisymmetric system: the highest measured
velocities are on the major axis, the minor axis rotation is near
zero, and the profiles on the diagonal slit positions are nearly the
same. Similarly, the closely aligned, almost exactly elliptical
isophotes are just what one would expect of an axisymmetric galaxy.
However, there are significant deviations, at the $\sim 10\%$ level,
from the pure axisymmetric signature, which appear as an isophotal twist
of roughly $5\arcdeg$ and a kinematic twist of about the same size
in the opposite direction.
Very approximately, the distortion to the velocity field
induced by a small triaxiality $T$ is $\delta V / V \sim T$, so
a $5\arcdeg$ kinematic twist might be characteristic of a weak
triaxiality $T\sim 0.1$. The photometric twist, if one assumes that
the true principal axes of the luminosity density surfaces are aligned,
signals a triaxiality {\em gradient\/}; but for small $T$, in order to observe
an isophotal twist of more than a degree or two requires a line of sight
for which the apparent ellipticity is small. Thus,
unless NGC 3379 is intrinsically twisted, the photometric and kinematic
data may well be indicating, completely independent of any arguments regarding
NGC 3115, a quite flattened, weakly triaxial system
seen in an orientation that makes it appear round.
\section{Conclusions\label{s.conclusions}}
We have measured the stellar kinematic profiles of NGC 3379 along four
position angles using the MMT. We have obtained mean velocities and dispersions
at excellent spatial resolution, with precisions better than $10\mbox{${\rm\,km\,s}^{-1}$}$
and frequently better than $5\mbox{${\rm\,km\,s}^{-1}$}$ out to $55\arcsec$, and at slightly lower
precision farther out. The $h_3$ and $h_4$ parameters are measured over
the entire slit length, and are found to be generally small. From a
Fourier reconstruction of the mean velocity field, we detect a $\sim
5\arcdeg$ twist of the kinematic major axis, over roughly the same range
of radii where the photometric major axis twists by $\sim 5\arcdeg$ in
the opposite direction. The most surprising aspect of our results
is the appearance of sharp features in the kinematic profiles.
There are sharp bends in the major-axis rotation curve, visible (though
less pronounced) on the diagonal position angles, which closely
resemble similar bends seen in the edge-on S0 NGC 3115.
Moreover, there is an abrupt flattening of the dispersion profile,
as well as local peaks in $h_3$ and $h_4$, all apparently
associated with the outer rotation curve bend near $17\arcsec$, and all
coinciding with a region where the surface photometry shows some of its
largest departures from an $r^{1/4}$ law.
The sharp kinematic transitions that we see in NGC 3379
are, as far as we know, unprecedented in any elliptical galaxy. But this is
much less a statement about galaxies than about data: no other elliptical
has been observed at this resolution over this large a range of radii.
The correspondence with kinematic features seen in NGC 3115
does not prove that NGC 3379 is an S0, since we do not know whether these
features are unique to S0's. Previously published data on NGC 3379 give the
impression of a gently rising rotation curve and a featureless, smoothly
falling dispersion profile. Except for the few systems identified
as having kinematically distinct cores, a cursory survey of the literature
gives a similar impression of most other ellipticals.
In the current standard conceptual picture, elliptical galaxies have smooth
and unremarkable rotation and dispersion profiles, except for a few
peculiar cases. Yet, one of the most ordinary ellipticals in the
sky, when examined with high enough precision, turns out to have far
richer dynamical structure
than expected. We should hardly be surprised to see this sort of thing
happen, though. Fifteen years ago it was also part of the standard
picture that elliptical galaxies had precisely elliptical isophotes,
except for a few peculiar cases. It was only after techniques had been
developed to measure departures from pure ellipses {\em at the 1\%
level\/} (Lauer \markcite{Lau85}1985, Jedrzejewski \markcite{Jed87}1987),
and after these measurements had been made for a respectable sample
of objects, that the distinction between the properties of
disky and boxy ellipticals (Bender et al.\ \markcite{Ben89}1989),
now regarded as fundamental, emerged. The potential of detailed
kinematic studies of elliptical galaxies to further elucidate their
structure and evolution remains, at this point, almost entirely unexplored.
\acknowledgments
TSS acknowledges support from NASA Astrophysical Theory Grant NAG5-3050
and NSF CAREER grant AST-9703036.
We thank the director and staff of the Multiple
Mirror Telescope Observatory for their generous assistance and allocations
of time to this project. Ralf Bender kindly provided additional details
on his published data, and the anonymous referee helped us to improve the
paper by catching a number of errors.
\begin{deluxetable}{rrrrrrrrrrrrr}
\scriptsize
\tablenum{1a}
\tablewidth{6.5in}
\tablecaption{Data for PA 70 (major axis)}
\tablehead{
\colhead{$R(\arcsec)$} &
\colhead{$V$} & \colhead{$\pm$} &
\colhead{$\sigma$} & \colhead{$\pm$} &
\colhead{$h_3$} & \colhead{$\pm$} &
\colhead{$h_4$} & \colhead{$\pm$} &
\colhead{Mean} & \colhead{$\pm$} &
\colhead{Disp.} & \colhead{$\pm$}}
\startdata
$-78.0$&$-26.0$&$15.8$&$183.1$&$17.8$&$0.153$&$0.133$&$-0.119$&$0.167$&$-23.9$&$14.6$&$183.4$&$22.4$\nl
$-55.2$&$-48.1$&$7.6$&$144.8$&$14.3$&$0.097$&$0.044$&$0.162$&$0.063$&$-41.7$&$7.3$&$149.0$&$10.4$\nl
$-41.4$&$-59.1$&$6.4$&$152.4$&$10.3$&$0.089$&$0.036$&$0.072$&$0.048$&$-53.6$&$6.0$&$154.0$&$8.6$\nl
$-33.6$&$-61.0$&$5.7$&$160.1$&$7.6$&$0.101$&$0.042$&$-0.003$&$0.055$&$-55.5$&$5.3$&$159.4$&$7.8$\nl
$-28.2$&$-51.4$&$5.1$&$160.6$&$6.5$&$0.060$&$0.038$&$-0.033$&$0.050$&$-48.0$&$5.0$&$160.5$&$7.4$\nl
$-24.0$&$-54.5$&$4.8$&$162.8$&$7.1$&$0.031$&$0.028$&$0.017$&$0.036$&$-52.7$&$4.8$&$162.4$&$7.0$\nl
$-21.0$&$-53.7$&$4.8$&$157.5$&$7.7$&$0.021$&$0.028$&$0.048$&$0.037$&$-52.1$&$4.7$&$159.3$&$6.8$\nl
$-18.6$&$-59.4$&$4.5$&$158.1$&$8.0$&$0.090$&$0.024$&$0.104$&$0.032$&$-53.2$&$4.2$&$160.7$&$6.1$\nl
$-16.8$&$-60.1$&$4.7$&$155.5$&$6.8$&$0.093$&$0.026$&$0.037$&$0.034$&$-54.1$&$4.2$&$156.1$&$6.1$\nl
$-15.6$&$-49.4$&$4.3$&$145.6$&$6.0$&$0.048$&$0.030$&$0.001$&$0.040$&$-46.0$&$4.0$&$144.0$&$5.9$\nl
$-14.4$&$-51.2$&$4.0$&$159.7$&$5.7$&$0.085$&$0.026$&$0.014$&$0.036$&$-46.5$&$3.7$&$156.2$&$5.5$\nl
$-13.2$&$-45.6$&$3.5$&$166.5$&$4.8$&$-0.022$&$0.020$&$-0.004$&$0.026$&$-46.9$&$3.5$&$166.4$&$5.1$\nl
$-12.0$&$-47.0$&$3.5$&$159.9$&$4.9$&$0.045$&$0.022$&$-0.001$&$0.030$&$-44.2$&$3.4$&$159.2$&$5.1$\nl
$-10.8$&$-47.4$&$3.0$&$172.4$&$4.0$&$0.030$&$0.017$&$-0.010$&$0.023$&$-45.7$&$3.0$&$172.0$&$4.5$\nl
$-9.6$&$-43.7$&$2.8$&$169.2$&$4.0$&$0.015$&$0.015$&$0.010$&$0.020$&$-42.9$&$2.8$&$169.3$&$4.1$\nl
$-8.4$&$-38.6$&$2.6$&$181.8$&$3.8$&$0.005$&$0.013$&$0.012$&$0.017$&$-38.4$&$2.7$&$182.0$&$3.9$\nl
$-7.2$&$-38.3$&$2.4$&$184.2$&$3.2$&$0.025$&$0.013$&$-0.014$&$0.017$&$-36.9$&$2.5$&$183.4$&$3.7$\nl
$-6.0$&$-35.1$&$2.2$&$188.8$&$2.8$&$0.023$&$0.012$&$-0.029$&$0.016$&$-33.9$&$2.3$&$188.0$&$3.5$\nl
$-4.8$&$-34.8$&$1.9$&$194.2$&$2.7$&$0.028$&$0.009$&$-0.007$&$0.012$&$-33.5$&$2.1$&$193.5$&$3.2$\nl
$-3.6$&$-34.3$&$1.8$&$196.1$&$2.5$&$0.041$&$0.009$&$-0.007$&$0.012$&$-25.7$&$3.8$&$193.3$&$6.1$\nl
$-2.4$&$-24.9$&$1.6$&$203.7$&$2.2$&$0.038$&$0.008$&$-0.012$&$0.010$&$-17.2$&$3.6$&$198.8$&$5.5$\nl
$-1.2$&$-13.4$&$1.5$&$214.1$&$2.1$&$0.031$&$0.007$&$-0.013$&$0.009$&$-7.7$&$3.7$&$208.1$&$5.5$\nl
$0.0$&$-1.4$&$1.5$&$218.1$&$2.1$&$0.015$&$0.006$&$-0.014$&$0.008$&$-1.4$&$3.7$&$211.2$&$5.5$\nl
$1.2$&$9.5$&$1.5$&$212.8$&$2.1$&$0.003$&$0.006$&$-0.016$&$0.008$&$5.7$&$3.6$&$205.6$&$5.2$\nl
$2.4$&$22.9$&$1.6$&$203.8$&$2.2$&$-0.009$&$0.007$&$-0.010$&$0.009$&$15.3$&$3.6$&$199.7$&$5.6$\nl
$3.6$&$32.5$&$1.7$&$199.4$&$2.4$&$-0.001$&$0.008$&$-0.006$&$0.010$&$27.3$&$3.9$&$196.4$&$6.4$\nl
$4.8$&$34.8$&$1.9$&$193.7$&$2.8$&$-0.007$&$0.009$&$0.011$&$0.012$&$34.5$&$2.1$&$193.7$&$3.1$\nl
$6.0$&$35.4$&$2.2$&$192.3$&$3.2$&$0.013$&$0.010$&$0.007$&$0.013$&$35.9$&$2.3$&$192.0$&$3.5$\nl
$7.2$&$38.5$&$2.5$&$188.0$&$3.3$&$-0.020$&$0.013$&$-0.012$&$0.016$&$37.7$&$2.6$&$188.4$&$3.8$\nl
$8.4$&$40.3$&$2.6$&$188.0$&$3.6$&$-0.018$&$0.013$&$-0.007$&$0.017$&$39.5$&$2.8$&$188.3$&$4.1$\nl
$9.6$&$45.3$&$2.8$&$177.1$&$3.7$&$-0.059$&$0.021$&$-0.037$&$0.027$&$42.9$&$2.9$&$178.1$&$4.3$\nl
$10.8$&$41.3$&$3.0$&$172.7$&$3.8$&$-0.036$&$0.019$&$-0.032$&$0.026$&$39.4$&$3.0$&$172.3$&$4.5$\nl
$12.0$&$47.8$&$3.2$&$164.6$&$4.7$&$-0.015$&$0.018$&$0.010$&$0.024$&$46.9$&$3.2$&$164.5$&$4.8$\nl
$13.2$&$47.0$&$3.6$&$166.0$&$5.4$&$-0.028$&$0.020$&$0.026$&$0.026$&$45.4$&$3.6$&$166.3$&$5.2$\nl
$14.4$&$59.6$&$3.9$&$168.2$&$5.8$&$-0.006$&$0.021$&$0.019$&$0.028$&$59.3$&$3.9$&$169.3$&$5.7$\nl
$15.6$&$56.8$&$4.2$&$165.7$&$5.7$&$-0.031$&$0.024$&$-0.003$&$0.032$&$54.9$&$4.1$&$165.0$&$6.0$\nl
$16.8$&$57.5$&$4.6$&$170.2$&$8.4$&$-0.003$&$0.024$&$0.076$&$0.031$&$58.5$&$4.7$&$168.0$&$6.7$\nl
$18.6$&$56.5$&$4.2$&$161.4$&$7.4$&$0.021$&$0.024$&$0.067$&$0.032$&$58.2$&$4.3$&$162.0$&$6.1$\nl
$21.0$&$60.0$&$4.9$&$154.8$&$6.8$&$-0.030$&$0.030$&$0.000$&$0.040$&$58.0$&$4.7$&$153.7$&$6.9$\nl
$24.0$&$56.2$&$4.8$&$168.3$&$6.1$&$-0.008$&$0.029$&$-0.028$&$0.041$&$55.7$&$4.8$&$167.9$&$6.9$\nl
$28.2$&$59.8$&$6.1$&$157.0$&$8.9$&$-0.153$&$0.079$&$-0.077$&$0.077$&$54.2$&$5.3$&$160.9$&$7.7$\nl
$33.6$&$53.9$&$5.4$&$165.2$&$8.2$&$0.004$&$0.030$&$0.028$&$0.039$&$53.8$&$5.5$&$165.8$&$7.9$\nl
$41.4$&$58.9$&$5.9$&$145.7$&$9.7$&$-0.026$&$0.037$&$0.056$&$0.050$&$56.7$&$5.9$&$149.2$&$8.5$\nl
$55.2$&$60.9$&$7.7$&$130.9$&$12.8$&$0.040$&$0.052$&$0.061$&$0.071$&$64.7$&$7.2$&$138.9$&$10.8$\nl
$78.0$&$19.4$&$16.1$&$103.1$&$21.5$&$0.209$&$0.132$&$0.289$&$0.196$&$43.5$&$13.4$&$137.4$&$20.6$\nl
\enddata
\end{deluxetable}
\begin{deluxetable}{rrrrrrrrrrrrr}
\scriptsize
\tablenum{1b}
\tablewidth{6.5in}
\tablecaption{Data for PA 340 (minor axis)}
\tablehead{
\colhead{$R(\arcsec)$} &
\colhead{$V$} & \colhead{$\pm$} &
\colhead{$\sigma$} & \colhead{$\pm$} &
\colhead{$h_3$} & \colhead{$\pm$} &
\colhead{$h_4$} & \colhead{$\pm$} &
\colhead{Mean} & \colhead{$\pm$} &
\colhead{Disp.} & \colhead{$\pm$}}
\startdata
$-78.0$&$7.2$&$16.4$&$152.2$&$33.3$&$-0.056$&$0.095$&$0.133$&$0.131$&$6.1$&$15.3$&$142.5$&$22.6$\nl
$-55.2$&$8.6$&$10.0$&$115.5$&$14.0$&$0.058$&$0.085$&$0.007$&$0.116$&$13.3$&$8.1$&$117.3$&$12.7$\nl
$-41.4$&$3.4$&$6.6$&$135.8$&$11.1$&$0.023$&$0.044$&$0.065$&$0.059$&$5.6$&$6.3$&$140.6$&$9.1$\nl
$-33.6$&$-9.4$&$5.7$&$157.9$&$7.8$&$-0.001$&$0.034$&$-0.004$&$0.044$&$-9.5$&$5.7$&$157.8$&$8.3$\nl
$-28.2$&$-6.8$&$5.8$&$166.0$&$6.7$&$0.011$&$0.049$&$-0.090$&$0.081$&$-6.5$&$5.7$&$163.5$&$8.4$\nl
$-24.0$&$-8.7$&$5.3$&$155.7$&$6.3$&$-0.134$&$0.064$&$-0.158$&$0.095$&$-10.4$&$5.0$&$159.7$&$7.4$\nl
$-21.0$&$4.1$&$5.3$&$152.3$&$6.4$&$-0.037$&$0.064$&$-0.147$&$0.108$&$2.7$&$5.2$&$151.0$&$7.7$\nl
$-18.6$&$7.2$&$4.5$&$180.6$&$5.7$&$0.025$&$0.027$&$-0.032$&$0.038$&$8.5$&$4.5$&$179.3$&$6.5$\nl
$-16.8$&$7.5$&$4.4$&$171.6$&$6.3$&$-0.005$&$0.024$&$0.009$&$0.031$&$7.1$&$4.4$&$172.0$&$6.4$\nl
$-15.6$&$3.8$&$4.2$&$158.1$&$5.3$&$-0.035$&$0.031$&$-0.032$&$0.043$&$2.1$&$4.0$&$158.1$&$5.9$\nl
$-14.4$&$1.2$&$4.2$&$155.1$&$5.7$&$0.015$&$0.026$&$-0.007$&$0.034$&$2.2$&$4.0$&$154.3$&$5.9$\nl
$-13.2$&$2.4$&$3.6$&$165.8$&$5.4$&$0.019$&$0.020$&$0.020$&$0.027$&$3.5$&$3.7$&$166.0$&$5.3$\nl
$-12.0$&$2.3$&$3.3$&$168.7$&$4.7$&$0.047$&$0.018$&$0.013$&$0.024$&$5.0$&$3.3$&$168.6$&$4.8$\nl
$-10.8$&$2.1$&$3.0$&$168.8$&$4.1$&$0.063$&$0.018$&$0.000$&$0.024$&$5.6$&$2.9$&$168.6$&$4.3$\nl
$-9.6$&$0.8$&$2.7$&$172.5$&$3.8$&$0.034$&$0.014$&$0.006$&$0.019$&$2.7$&$2.7$&$172.4$&$4.0$\nl
$-8.4$&$-0.1$&$2.6$&$178.3$&$3.7$&$0.032$&$0.013$&$0.007$&$0.017$&$1.5$&$2.7$&$177.8$&$3.9$\nl
$-7.2$&$-0.4$&$2.3$&$185.9$&$3.3$&$0.024$&$0.012$&$-0.004$&$0.015$&$0.8$&$2.5$&$185.4$&$3.7$\nl
$-6.0$&$2.1$&$2.2$&$189.4$&$3.0$&$0.014$&$0.011$&$-0.008$&$0.014$&$2.8$&$2.3$&$189.0$&$3.4$\nl
$-4.8$&$4.2$&$1.9$&$191.1$&$2.6$&$0.022$&$0.010$&$-0.011$&$0.013$&$5.2$&$2.1$&$190.6$&$3.2$\nl
$-3.6$&$2.8$&$1.7$&$193.9$&$2.4$&$0.016$&$0.008$&$-0.004$&$0.011$&$2.1$&$3.9$&$191.9$&$6.7$\nl
$-2.4$&$1.2$&$1.5$&$206.9$&$2.2$&$0.020$&$0.007$&$-0.002$&$0.009$&$1.7$&$3.9$&$206.0$&$6.5$\nl
$-1.2$&$0.7$&$1.4$&$208.7$&$2.0$&$0.014$&$0.006$&$-0.014$&$0.008$&$-0.7$&$3.5$&$202.0$&$5.2$\nl
$0.0$&$-1.9$&$1.4$&$210.8$&$1.9$&$0.018$&$0.006$&$-0.020$&$0.008$&$-1.9$&$3.3$&$201.9$&$4.5$\nl
$1.2$&$-1.6$&$1.4$&$212.8$&$1.9$&$0.023$&$0.006$&$-0.015$&$0.008$&$-0.0$&$3.5$&$205.8$&$5.2$\nl
$2.4$&$-1.8$&$1.5$&$205.0$&$2.0$&$0.014$&$0.007$&$-0.016$&$0.009$&$-3.3$&$3.5$&$197.7$&$5.1$\nl
$3.6$&$-1.1$&$1.7$&$195.4$&$2.4$&$0.028$&$0.008$&$-0.004$&$0.011$&$2.2$&$3.9$&$193.7$&$6.7$\nl
$4.8$&$0.0$&$1.9$&$192.3$&$2.6$&$0.014$&$0.009$&$-0.007$&$0.012$&$0.6$&$2.0$&$192.2$&$3.1$\nl
$6.0$&$-4.5$&$2.1$&$184.2$&$2.9$&$-0.005$&$0.010$&$0.000$&$0.014$&$-4.8$&$2.2$&$184.3$&$3.3$\nl
$7.2$&$-5.5$&$2.3$&$178.4$&$3.1$&$-0.009$&$0.012$&$-0.018$&$0.017$&$-6.0$&$2.4$&$177.9$&$3.6$\nl
$8.4$&$-1.3$&$2.6$&$172.1$&$3.7$&$-0.013$&$0.014$&$0.004$&$0.018$&$-2.0$&$2.7$&$172.2$&$4.0$\nl
$9.6$&$2.9$&$2.8$&$173.6$&$4.0$&$-0.022$&$0.015$&$0.007$&$0.019$&$1.6$&$2.9$&$174.1$&$4.2$\nl
$10.8$&$-5.5$&$3.0$&$170.2$&$3.8$&$0.038$&$0.020$&$-0.035$&$0.027$&$-3.5$&$3.0$&$168.8$&$4.5$\nl
$12.0$&$-8.8$&$3.2$&$163.5$&$4.8$&$0.020$&$0.018$&$0.020$&$0.024$&$-7.5$&$3.3$&$164.1$&$4.7$\nl
$13.2$&$-0.5$&$3.6$&$163.1$&$5.2$&$-0.012$&$0.021$&$0.013$&$0.027$&$-1.2$&$3.6$&$164.0$&$5.3$\nl
$14.4$&$-2.3$&$3.8$&$163.8$&$5.7$&$0.008$&$0.021$&$0.022$&$0.028$&$-1.7$&$3.8$&$164.9$&$5.5$\nl
$15.6$&$-2.4$&$4.2$&$163.3$&$5.8$&$0.015$&$0.024$&$-0.002$&$0.031$&$-1.5$&$4.1$&$163.1$&$6.0$\nl
$16.8$&$0.7$&$4.8$&$153.1$&$6.5$&$-0.013$&$0.029$&$-0.004$&$0.038$&$-0.1$&$4.6$&$153.1$&$6.7$\nl
$18.6$&$-0.7$&$4.5$&$148.3$&$7.9$&$0.061$&$0.027$&$0.080$&$0.037$&$4.0$&$4.4$&$156.2$&$6.2$\nl
$21.0$&$-3.2$&$4.9$&$145.9$&$6.9$&$0.082$&$0.035$&$0.014$&$0.048$&$2.0$&$4.5$&$147.7$&$6.6$\nl
$24.0$&$-2.6$&$5.0$&$152.9$&$7.8$&$0.056$&$0.029$&$0.046$&$0.039$&$1.0$&$4.8$&$156.0$&$7.0$\nl
$28.2$&$-3.7$&$5.5$&$162.6$&$6.4$&$0.113$&$0.054$&$-0.088$&$0.067$&$-0.1$&$5.2$&$163.5$&$7.6$\nl
$33.6$&$1.2$&$6.0$&$166.8$&$6.8$&$-0.000$&$0.051$&$-0.097$&$0.086$&$2.5$&$6.1$&$163.9$&$9.0$\nl
$41.4$&$10.4$&$7.2$&$160.9$&$9.1$&$-0.009$&$0.075$&$-0.109$&$0.130$&$9.7$&$7.2$&$158.9$&$10.7$\nl
$55.2$&$27.5$&$10.1$&$150.9$&$10.8$&$-0.093$&$0.124$&$-0.181$&$0.185$&$25.2$&$9.6$&$147.6$&$15.0$\nl
$78.0$&$-4.2$&$17.9$&$126.7$&$23.1$&$-0.047$&$0.464$&$-0.617$&$0.819$&$-6.1$&$19.2$&$133.7$&$29.5$\nl
\enddata
\end{deluxetable}
\begin{deluxetable}{rrrrrrrrrrrrr}
\scriptsize
\tablenum{1c}
\tablewidth{6.5in}
\tablecaption{Data for PA 25}
\tablehead{
\colhead{$R(\arcsec)$} &
\colhead{$V$} & \colhead{$\pm$} &
\colhead{$\sigma$} & \colhead{$\pm$} &
\colhead{$h_3$} & \colhead{$\pm$} &
\colhead{$h_4$} & \colhead{$\pm$} &
\colhead{Mean} & \colhead{$\pm$} &
\colhead{Disp.} & \colhead{$\pm$}}
\startdata
$-78.0$&$-16.7$&$14.7$&$170.2$&$17.4$&$0.150$&$0.157$&$-0.177$&$0.229$&$-18.0$&$14.0$&$180.6$&$21.6$\nl
$-55.2$&$-17.3$&$7.7$&$133.5$&$11.1$&$0.082$&$0.056$&$0.020$&$0.078$&$-11.4$&$6.8$&$134.3$&$10.4$\nl
$-41.4$&$-27.4$&$5.2$&$133.9$&$8.2$&$-0.008$&$0.035$&$0.044$&$0.047$&$-27.5$&$5.0$&$134.8$&$7.3$\nl
$-33.6$&$-34.1$&$5.7$&$153.9$&$6.1$&$0.188$&$0.061$&$-0.127$&$0.071$&$-29.1$&$4.9$&$159.5$&$7.5$\nl
$-28.2$&$-34.1$&$4.8$&$160.1$&$9.6$&$0.070$&$0.026$&$0.142$&$0.035$&$-28.1$&$4.8$&$163.8$&$6.8$\nl
$-24.0$&$-31.9$&$4.7$&$175.1$&$7.9$&$0.067$&$0.024$&$0.065$&$0.030$&$-27.0$&$4.8$&$180.6$&$6.7$\nl
$-21.0$&$-31.4$&$4.7$&$164.4$&$8.0$&$0.004$&$0.026$&$0.058$&$0.034$&$-31.5$&$4.7$&$165.6$&$6.8$\nl
$-18.6$&$-28.4$&$4.0$&$155.0$&$6.6$&$-0.002$&$0.023$&$0.053$&$0.031$&$-28.7$&$3.9$&$154.8$&$5.6$\nl
$-16.8$&$-27.6$&$4.3$&$164.2$&$6.1$&$-0.017$&$0.024$&$0.009$&$0.032$&$-28.6$&$4.3$&$164.7$&$6.1$\nl
$-15.6$&$-31.7$&$4.0$&$175.2$&$6.7$&$0.011$&$0.021$&$0.048$&$0.027$&$-30.9$&$4.2$&$175.6$&$5.9$\nl
$-14.4$&$-27.8$&$3.7$&$164.8$&$6.4$&$-0.004$&$0.020$&$0.064$&$0.027$&$-28.3$&$3.8$&$167.5$&$5.4$\nl
$-13.2$&$-23.4$&$3.3$&$173.0$&$5.2$&$-0.006$&$0.017$&$0.034$&$0.023$&$-23.9$&$3.4$&$174.2$&$4.9$\nl
$-12.0$&$-26.8$&$3.0$&$166.0$&$5.2$&$0.018$&$0.017$&$0.059$&$0.022$&$-25.7$&$3.2$&$169.5$&$4.5$\nl
$-10.8$&$-27.9$&$2.9$&$173.1$&$4.4$&$0.020$&$0.016$&$0.024$&$0.020$&$-26.8$&$3.0$&$173.7$&$4.4$\nl
$-9.6$&$-28.1$&$2.7$&$170.7$&$3.7$&$0.032$&$0.015$&$0.003$&$0.019$&$-26.2$&$2.7$&$170.9$&$4.0$\nl
$-8.4$&$-25.7$&$2.5$&$173.2$&$3.7$&$0.041$&$0.013$&$0.020$&$0.017$&$-23.2$&$2.5$&$174.0$&$3.8$\nl
$-7.2$&$-27.8$&$2.2$&$183.2$&$3.1$&$0.001$&$0.011$&$-0.000$&$0.015$&$-27.7$&$2.4$&$183.2$&$3.5$\nl
$-6.0$&$-25.0$&$2.0$&$193.8$&$2.8$&$0.028$&$0.009$&$0.002$&$0.012$&$-23.8$&$2.1$&$193.6$&$3.2$\nl
$-4.8$&$-25.9$&$1.9$&$189.8$&$2.6$&$0.033$&$0.009$&$-0.001$&$0.012$&$-24.4$&$2.0$&$189.6$&$3.0$\nl
$-3.6$&$-20.3$&$1.7$&$198.8$&$2.2$&$0.023$&$0.008$&$-0.022$&$0.011$&$-19.7$&$3.4$&$189.6$&$4.8$\nl
$-2.4$&$-16.6$&$1.5$&$205.5$&$2.1$&$0.027$&$0.007$&$-0.011$&$0.009$&$-14.5$&$3.7$&$200.2$&$5.8$\nl
$-1.2$&$-9.9$&$1.4$&$209.1$&$2.0$&$0.029$&$0.006$&$-0.012$&$0.008$&$-6.9$&$3.5$&$203.5$&$5.5$\nl
$0.0$&$-1.5$&$1.4$&$209.4$&$2.0$&$0.020$&$0.006$&$-0.012$&$0.008$&$-1.5$&$3.6$&$203.5$&$5.5$\nl
$1.2$&$6.4$&$1.4$&$209.1$&$1.9$&$0.007$&$0.006$&$-0.021$&$0.008$&$2.3$&$3.3$&$200.2$&$4.5$\nl
$2.4$&$11.4$&$1.5$&$204.8$&$2.1$&$0.002$&$0.007$&$-0.004$&$0.009$&$4.9$&$3.7$&$202.8$&$6.0$\nl
$3.6$&$16.5$&$1.6$&$198.4$&$2.4$&$0.007$&$0.008$&$0.005$&$0.010$&$11.9$&$3.9$&$200.8$&$6.8$\nl
$4.8$&$19.5$&$1.8$&$197.6$&$2.6$&$-0.007$&$0.009$&$-0.009$&$0.011$&$19.3$&$2.0$&$197.9$&$3.1$\nl
$6.0$&$20.3$&$2.2$&$192.3$&$3.0$&$-0.012$&$0.010$&$-0.005$&$0.014$&$19.8$&$2.3$&$192.4$&$3.5$\nl
$7.2$&$21.8$&$2.3$&$183.7$&$3.3$&$-0.007$&$0.012$&$-0.002$&$0.015$&$21.4$&$2.5$&$183.7$&$3.7$\nl
$8.4$&$24.9$&$2.5$&$174.8$&$3.4$&$-0.026$&$0.014$&$-0.006$&$0.018$&$23.4$&$2.6$&$174.6$&$3.8$\nl
$9.6$&$19.7$&$2.7$&$177.2$&$3.6$&$-0.025$&$0.015$&$-0.016$&$0.020$&$18.4$&$2.8$&$176.8$&$4.1$\nl
$10.8$&$28.2$&$2.7$&$175.3$&$3.8$&$-0.005$&$0.015$&$-0.001$&$0.019$&$27.9$&$2.8$&$175.2$&$4.2$\nl
$12.0$&$28.3$&$3.1$&$164.8$&$4.4$&$0.011$&$0.017$&$0.009$&$0.022$&$29.1$&$3.1$&$165.2$&$4.5$\nl
$13.2$&$26.0$&$3.5$&$171.5$&$4.9$&$0.024$&$0.019$&$0.009$&$0.024$&$27.3$&$3.5$&$171.7$&$5.1$\nl
$14.4$&$30.9$&$3.6$&$167.7$&$5.6$&$0.023$&$0.020$&$0.034$&$0.026$&$32.1$&$3.7$&$168.5$&$5.3$\nl
$15.6$&$33.3$&$4.2$&$134.3$&$8.1$&$-0.044$&$0.028$&$0.110$&$0.040$&$29.8$&$3.9$&$140.5$&$5.6$\nl
$16.8$&$32.6$&$4.6$&$163.8$&$7.7$&$-0.018$&$0.026$&$0.054$&$0.034$&$31.3$&$4.6$&$166.8$&$6.6$\nl
$18.6$&$30.4$&$4.0$&$164.0$&$6.8$&$-0.054$&$0.022$&$0.065$&$0.029$&$25.9$&$4.1$&$170.1$&$5.8$\nl
$21.0$&$34.8$&$4.4$&$176.8$&$6.5$&$0.011$&$0.023$&$0.016$&$0.030$&$35.4$&$4.6$&$177.4$&$6.5$\nl
$24.0$&$34.3$&$4.6$&$153.2$&$6.8$&$-0.020$&$0.028$&$0.019$&$0.037$&$32.9$&$4.6$&$154.0$&$6.6$\nl
$28.2$&$40.1$&$5.1$&$148.9$&$6.3$&$-0.058$&$0.043$&$-0.038$&$0.057$&$37.2$&$4.8$&$148.7$&$7.1$\nl
$33.6$&$40.5$&$5.3$&$144.0$&$9.0$&$-0.065$&$0.032$&$0.076$&$0.044$&$35.4$&$5.1$&$148.1$&$7.2$\nl
$41.4$&$31.3$&$5.5$&$152.3$&$8.4$&$-0.059$&$0.032$&$0.041$&$0.043$&$27.5$&$5.4$&$152.8$&$7.9$\nl
$55.2$&$36.9$&$7.7$&$113.5$&$14.1$&$-0.055$&$0.058$&$0.101$&$0.086$&$33.0$&$6.5$&$119.0$&$9.8$\nl
$78.0$&$37.0$&$14.4$&$178.0$&$15.8$&$-0.103$&$0.142$&$-0.214$&$0.230$&$38.5$&$14.8$&$183.7$&$22.6$\nl
\enddata
\end{deluxetable}
\begin{deluxetable}{rrrrrrrrrrrrr}
\scriptsize
\tablenum{1d}
\tablewidth{6.5in}
\tablecaption{Data for PA 115 }
\tablehead{
\colhead{$R(\arcsec)$} &
\colhead{$V$} & \colhead{$\pm$} &
\colhead{$\sigma$} & \colhead{$\pm$} &
\colhead{$h_3$} & \colhead{$\pm$} &
\colhead{$h_4$} & \colhead{$\pm$} &
\colhead{Mean} & \colhead{$\pm$} &
\colhead{Disp.} & \colhead{$\pm$}}
\startdata
$-78.0$&$-62.0$&$19.3$&$214.4$&$23.0$&$-0.042$&$0.099$&$-0.060$&$0.143$&$-63.7$&$19.2$&$208.9$&$28.3$\nl
$-55.2$&$-42.0$&$8.9$&$134.2$&$14.5$&$0.152$&$0.056$&$0.199$&$0.079$&$-30.2$&$8.1$&$147.1$&$11.8$\nl
$-41.4$&$-48.1$&$6.3$&$149.8$&$9.8$&$0.078$&$0.037$&$0.055$&$0.049$&$-43.9$&$5.9$&$150.4$&$8.6$\nl
$-33.6$&$-33.9$&$5.8$&$152.4$&$7.7$&$0.025$&$0.039$&$-0.015$&$0.052$&$-32.1$&$5.5$&$150.5$&$8.2$\nl
$-28.2$&$-33.7$&$5.0$&$164.4$&$7.4$&$-0.007$&$0.028$&$0.015$&$0.037$&$-34.1$&$5.1$&$164.6$&$7.4$\nl
$-24.0$&$-36.1$&$5.4$&$167.6$&$7.1$&$0.092$&$0.042$&$-0.017$&$0.052$&$-31.5$&$5.1$&$167.6$&$7.4$\nl
$-21.0$&$-39.0$&$5.4$&$167.2$&$8.6$&$0.123$&$0.027$&$0.089$&$0.034$&$-30.3$&$4.9$&$173.1$&$7.1$\nl
$-18.6$&$-31.4$&$4.6$&$165.3$&$6.2$&$0.142$&$0.052$&$-0.092$&$0.056$&$-27.2$&$4.2$&$169.7$&$6.1$\nl
$-16.8$&$-39.3$&$5.2$&$161.7$&$8.0$&$0.075$&$0.028$&$0.046$&$0.037$&$-34.7$&$4.9$&$163.6$&$7.0$\nl
$-15.6$&$-37.6$&$4.4$&$169.7$&$6.6$&$0.065$&$0.023$&$0.031$&$0.030$&$-34.0$&$4.3$&$171.7$&$6.2$\nl
$-14.4$&$-32.5$&$4.0$&$157.2$&$6.5$&$0.042$&$0.023$&$0.054$&$0.030$&$-29.2$&$3.9$&$160.8$&$5.6$\nl
$-13.2$&$-35.0$&$3.6$&$163.7$&$5.5$&$0.015$&$0.020$&$0.025$&$0.027$&$-34.0$&$3.6$&$164.5$&$5.3$\nl
$-12.0$&$-35.7$&$3.5$&$160.1$&$5.3$&$0.056$&$0.019$&$0.041$&$0.025$&$-31.7$&$3.4$&$163.3$&$4.9$\nl
$-10.8$&$-31.3$&$3.2$&$169.7$&$4.5$&$0.057$&$0.018$&$0.008$&$0.024$&$-27.8$&$3.1$&$170.7$&$4.6$\nl
$-9.6$&$-32.6$&$2.9$&$169.5$&$3.9$&$0.038$&$0.017$&$-0.006$&$0.022$&$-30.3$&$2.9$&$168.7$&$4.2$\nl
$-8.4$&$-29.2$&$2.6$&$176.0$&$3.9$&$0.007$&$0.014$&$0.010$&$0.018$&$-28.9$&$2.7$&$175.9$&$4.0$\nl
$-7.2$&$-28.5$&$2.5$&$189.2$&$3.5$&$0.017$&$0.012$&$-0.008$&$0.016$&$-27.7$&$2.6$&$189.2$&$3.9$\nl
$-6.0$&$-27.2$&$2.2$&$190.1$&$3.2$&$0.026$&$0.011$&$0.009$&$0.014$&$-26.2$&$2.3$&$189.6$&$3.5$\nl
$-4.8$&$-24.1$&$2.0$&$192.9$&$2.9$&$0.028$&$0.009$&$0.003$&$0.012$&$-22.9$&$2.1$&$192.7$&$3.2$\nl
$-3.6$&$-22.2$&$1.8$&$197.4$&$2.5$&$0.022$&$0.008$&$-0.008$&$0.011$&$-20.3$&$4.0$&$193.7$&$6.7$\nl
$-2.4$&$-19.3$&$1.6$&$203.7$&$2.3$&$0.031$&$0.007$&$-0.001$&$0.009$&$-13.8$&$3.9$&$203.1$&$6.4$\nl
$-1.2$&$-10.5$&$1.5$&$210.2$&$2.1$&$0.024$&$0.007$&$-0.009$&$0.009$&$-7.6$&$3.8$&$206.0$&$6.0$\nl
$0.0$&$3.3$&$1.5$&$211.6$&$2.1$&$0.016$&$0.006$&$-0.009$&$0.008$&$3.3$&$3.8$&$207.1$&$6.0$\nl
$1.2$&$9.0$&$1.5$&$204.4$&$2.0$&$0.004$&$0.007$&$-0.016$&$0.009$&$5.3$&$3.4$&$197.4$&$5.0$\nl
$2.4$&$17.9$&$1.6$&$200.7$&$2.2$&$-0.003$&$0.008$&$-0.014$&$0.010$&$11.8$&$3.6$&$194.8$&$5.4$\nl
$3.6$&$15.9$&$1.7$&$196.6$&$2.4$&$-0.002$&$0.008$&$-0.009$&$0.011$&$10.2$&$3.8$&$192.7$&$6.1$\nl
$4.8$&$24.8$&$1.9$&$192.3$&$2.7$&$0.007$&$0.009$&$-0.007$&$0.012$&$25.2$&$2.1$&$192.2$&$3.2$\nl
$6.0$&$25.6$&$2.2$&$189.8$&$3.2$&$-0.013$&$0.011$&$0.005$&$0.014$&$25.0$&$2.3$&$189.9$&$3.5$\nl
$7.2$&$29.3$&$2.6$&$186.1$&$3.6$&$-0.022$&$0.013$&$-0.003$&$0.017$&$28.3$&$2.7$&$186.2$&$4.0$\nl
$8.4$&$28.3$&$2.6$&$179.8$&$3.4$&$0.016$&$0.015$&$-0.026$&$0.020$&$29.1$&$2.7$&$179.2$&$4.0$\nl
$9.6$&$32.5$&$2.8$&$169.9$&$4.2$&$-0.008$&$0.015$&$0.026$&$0.019$&$31.9$&$2.9$&$170.9$&$4.2$\nl
$10.8$&$30.5$&$3.3$&$169.7$&$5.0$&$-0.027$&$0.018$&$0.030$&$0.023$&$28.8$&$3.3$&$170.8$&$4.8$\nl
$12.0$&$27.5$&$3.6$&$167.9$&$5.8$&$0.017$&$0.020$&$0.042$&$0.026$&$28.4$&$3.7$&$169.2$&$5.3$\nl
$13.2$&$32.7$&$3.7$&$159.6$&$5.8$&$-0.017$&$0.022$&$0.034$&$0.028$&$31.9$&$3.7$&$161.5$&$5.4$\nl
$14.4$&$31.6$&$4.0$&$142.3$&$6.9$&$-0.034$&$0.025$&$0.075$&$0.034$&$29.1$&$3.7$&$145.0$&$5.4$\nl
$15.6$&$40.8$&$4.5$&$147.8$&$7.5$&$-0.026$&$0.027$&$0.060$&$0.037$&$39.3$&$4.3$&$151.0$&$6.2$\nl
$16.8$&$42.3$&$5.2$&$141.6$&$10.1$&$-0.045$&$0.032$&$0.122$&$0.045$&$39.3$&$4.8$&$146.4$&$7.0$\nl
$18.6$&$35.2$&$4.3$&$160.1$&$6.9$&$-0.026$&$0.024$&$0.047$&$0.032$&$33.4$&$4.3$&$162.9$&$6.2$\nl
$21.0$&$32.1$&$4.9$&$151.4$&$7.5$&$-0.068$&$0.029$&$0.040$&$0.039$&$27.3$&$4.7$&$155.2$&$6.7$\nl
$24.0$&$41.2$&$5.3$&$153.9$&$6.6$&$-0.096$&$0.045$&$-0.030$&$0.057$&$36.0$&$4.9$&$154.1$&$7.2$\nl
$28.2$&$37.8$&$5.4$&$159.1$&$8.0$&$-0.061$&$0.030$&$0.034$&$0.040$&$33.7$&$5.3$&$162.5$&$7.5$\nl
$33.6$&$42.4$&$5.6$&$143.0$&$7.9$&$0.038$&$0.036$&$0.009$&$0.048$&$44.9$&$5.3$&$142.8$&$7.8$\nl
$41.4$&$36.2$&$6.0$&$143.0$&$12.2$&$0.014$&$0.037$&$0.123$&$0.053$&$36.6$&$6.1$&$151.8$&$8.6$\nl
$55.2$&$38.1$&$8.5$&$134.1$&$13.6$&$-0.068$&$0.056$&$0.057$&$0.075$&$32.9$&$7.7$&$137.8$&$11.6$\nl
$78.0$&$35.6$&$36.7$&$72.2$&$44.2$&$-0.241$&$0.721$&$0.048$&$1.027$&$22.6$&$10.4$&$66.7$&$18.1$\nl
\enddata
\end{deluxetable}
\newpage
\section{Introduction}
The observation of cosmic ray particles with energies higher than $10^{11}~GeV$
\cite{EHE,FE} poses a serious challenge to the known mechanisms of
acceleration.
Shock acceleration in different
astrophysical objects typically gives a maximal energy of accelerated protons
less than $(1-3)\cdot 10^{10}~GeV$ \cite{NMA}. Unipolar induction can
provide a maximal energy of $1\cdot 10^{11}~GeV$ only for extreme values
of the parameters \cite{BBDGP}. Much attention has recently been given to
acceleration by ultrarelativistic shocks \cite{Vie},\cite{Wax}. There
a particle can gain a tremendous increase in energy,
by a factor $\Gamma^2$, in a single reflection,
where $\Gamma$ is the Lorentz factor of the shock.
However, it is
known (see e.g. the simulation
for pulsar relativistic wind in \cite{Hosh}) that particles entering
the shock region are captured there or at least have a small probability
to escape (see the discussion relevant for UHECR in ref.\cite{B99}).
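The $\Gamma^2$ gain per reflection is easy to illustrate numerically; the sketch below uses assumed sample values (the initial proton energy and shock Lorentz factor are illustrative, not taken from the cited references):

```python
# Minimal sketch of the Gamma^2 energy gain at a single reflection off an
# ultrarelativistic shock: E_out = Gamma^2 * E_in. Illustration only.

def reflected_energy(e_in_gev, gamma):
    """Energy in GeV after one reflection off a shock with Lorentz factor gamma."""
    return gamma**2 * e_in_gev

# Assumed example: a 10^7 GeV proton and a shock with Gamma = 100 would give
# 10^11 GeV in one reflection -- provided the particle escapes the shock region.
e_out = reflected_energy(1.0e7, 100)  # 1e11 GeV
```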
{\em Topological defects} (for a review see \cite{Book}) can naturally
produce particles of ultrahigh energies (UHE). The pioneering observation
of this possibility was made by Hill, Schramm and Walker \cite{HS} (for
a general analysis of TD as UHE CR sources see \cite {BHSS} and for a
review \cite{Sigl}).
In many cases TD become unstable and decompose into their constituent fields,
superheavy gauge and Higgs bosons (X-particles), which then decay,
producing UHECR. This can happen, for example, when two segments of
an ordinary string, or a monopole and an antimonopole, touch each other,
when the electric current in a superconducting string reaches the critical
value, and in some other cases.
In most cases the problem with UHECR from TD is not the maximal energy,
but the fluxes. One very general reason
for the low fluxes is the large distance between TD. The natural
scale for this distance is the Hubble distance $H_0^{-1}$; only in some
rather exceptional cases is this scale multiplied by a small
dimensionless factor $r$. If the distance between TD is larger than the
UHE proton attenuation length (due to the GZK effect \cite{GZK}), then
the UHE flux has an exponential cutoff.
The other general restriction on the flux of UHECR comes from the
extragalactic gamma-ray flux observed by EGRET at energies between
$10~MeV$ and $100~GeV$. UHE photons and electrons produce
an e-m cascade in the Universe, colliding with photons of the background
radiation (most notably the microwave and radio backgrounds). The low-energy
part of this cascade extends down to the EGRET range. The cascade
flux is below the EGRET limit if the energy density of the cascade photons
satisfies $\omega_{cas}<2\cdot 10^{-6}~eV/cm^3$. For TD, $\omega_{cas}$
can be readily evaluated from the total energy release, assuming that
half of it is transferred to the e-m cascade. Since the same
energy release determines the UHECR flux, the latter is limited by
the value of $\omega_{cas}$ given above.
{\em Relic SH particles} have the important property of clustering in
the Galactic halo, and thus UHECR produced in their decays do not show
the GZK cutoff \cite{BKV}. Monopolonia share this property \cite{BBV}.
The relic SH particles produce UHECR through their decays. Their lifetime
must be larger (or much larger) than the age of the Universe. For particles
heavier than $10^{13} - 10^{14}~GeV$ this is a very restrictive condition:
even dimension-5 gravitational interactions make the lifetime of such
massive particles very short. A particle must be protected from this
fast decay by some symmetry which is broken extremely weakly, for example
by instanton effects \cite{KR} or wormhole effects \cite{BKV}.
{\em Production} of both relic SH particles and TD naturally occurs
at the end of inflation. Decays of inflatons most probably
start in the regime of broad parametric resonance \cite{KLS}, which is
accompanied by massive production of particles out of thermal equilibrium.
At this stage (``preheating'') TD can be born in phase transitions.
SH particles are produced in the varying gravitational field of
the oscillating inflaton and can now have critical or subcritical
density. At the end of the preheating phase the produced particles
thermalize, and the Universe enters the stage of reheating, when the
temperature is expected to be large, $T_r \sim \sqrt{\Gamma M_p}$, where
$M_p$ is the Planck mass and $\Gamma$ is the width of the inflaton decay in
the regime of broad parametric resonance. Due to the large $\Gamma$ the
reheating temperature is expected to be as high as $T_r \sim 10^{11} - 10^{12}~
GeV$ or even higher. Superheavy relic particles can also be born during
the reheating phase.
{\em Spectrum} of UHE particles produced in the decays of relic SH
particles, or of the heavy X particles to which TD decompose, is
basically the energy spectrum of a QCD cascade. This spectrum, in contrast to
that of accelerated particles, is essentially non-power-law: it
has a Gaussian peak, whose maximum determines the multiplicity
\cite{QCD}. For the large masses of interest, supersymmetric effects
become important; they considerably change the QCD
spectrum \cite{SUSY-QCD}.
The generic feature of the decays of SH particles is the dominance of
pion production over baryon production. This results in the dominance of
UHE photons over nucleons at production, roughly as $\gamma/N \sim 10$.
The {\em observational signature} of TD is the presence of UHE photons in
the primary radiation. At some energies this effect is suppressed by
strong absorption on the radio background \cite{radio}; for discussion and
references see \cite{BBV}. The GZK cutoff is present, but,
for the same spatial distribution of sources (e.g. for necklaces), it is
weaker than in the case of accelerator sources, due to
the shape of the QCD energy spectrum.
In the case of relic SH particles and monopolonia (both concentrated
in the halo) the signature is
the dominance, {\em at observation}, of UHE photons over nucleons. Another
signature is the anisotropy caused by the asymmetric position of the Sun
in the halo.
\section{Topological Defects}
The following TD have been discussed as potential sources of UHE particles:
superconducting strings \cite{HSW}, ordinary strings \cite{BR},
including the cusp radiation \cite{Bran}, networks of monopoles connected by
strings \cite{BMV}, necklaces \cite{BV}, magnetic monopoles, or more
precisely bound monopole-antimonopole pairs (monopolonium \cite{Hill,BS}),
and vortons. Monopolonia and vortons cluster in the Galactic halo,
so UHECR production is similar to the case of relic SH particles
considered in the next section.
(i) {\em Superconducting strings}\\
As was first noted by Witten\cite{Witten}, in a wide class of elementary
particle models, strings behave like superconducting wires. Moving through
cosmic magnetic fields, such strings develop electric currents.
Superconducting strings produce X particles when the electric current
in the strings reaches the critical value. In some scenarios,
e.g. \cite{OTW} where the current is induced by primordial magnetic field,
the critical current produces strong magnetic field, in which all high
energy particles degrade catastrophically in energy \cite{BeRu}.
However, for {\it ac} currents there are portions of the string with large
electric charge and small current, from which high energy particles can
escape.
Large {\it ac} currents can be induced in string loops as they oscillate in
galactic or extragalactic magnetic fields. Even if the string current
is typically well below critical, super-critical currents can be
reached in the vicinity of cusps, where the string shrinks by a large
factor and density of charge carriers is greatly enhanced. In this
case, X particles are emitted with large Lorentz factors.
Loops can also acquire {\it dc} currents at the time of formation, when they
are chopped off the infinite strings. As the loops lose their energy
by gravitational radiation, they shrink, the {\it dc} currents grow, and
eventually become overcritical. There could be a variety of
astrophysical mechanisms for
excitation of the electric current in superconducting strings, but for
all mechanisms considered so far the flux of
UHE particles is smaller than the observed flux \cite{BeVi}. However,
the number of possibilities to be explored here is very large, and
more work is needed to reach a definitive conclusion.
(ii) {\em Ordinary strings}\\
There are several mechanisms by which ordinary strings can produce UHE
particles.
For a special choice of initial conditions, an ordinary loop can collapse to a
double line, releasing its total energy in the form of X-particles\cite{BR}.
However, the probability of this mode of collapse is
extremely small, and its contribution to the overall flux of UHE
particles is negligible.
String loops can also
produce X-particles when they self-intersect (e.g. \cite{Shell}).
Each intersection, however, gives only a few
particles, and the corresponding flux is very small \cite{GK}.
Superheavy particles with large Lorentz factors can be produced in
the annihilation of cusps, when the two cusp segments overlap \cite{Bran}.
The energy released in a single cusp event can be quite large, but
again, the resulting flux of UHE particles is too small to account for
the observations \cite {Bhat89,GK}.
It has recently been argued \cite{Vincent} that long
strings lose most of
their energy not by production of closed loops, as is generally
believed, but by direct emission of heavy X-particles.
If correct, this claim would dramatically change
the standard picture of string evolution. It has also been
suggested that the decay products of particles produced in this
way can explain the observed flux of UHECR \cite{Vincent,ViHiSa}.
However, as argued in Ref.~\cite{BBV}, the numerical simulations described in
\cite{Vincent} allow an alternative interpretation not connected with
UHE particle production.
But even if the conclusions of \cite{Vincent} were correct, the
particle production mechanism suggested in that paper cannot explain
the observed flux of UHE particles. If particles are emitted directly
from long strings, then the distance between UHE particle sources $D$ is
of the order of the Hubble distance $H_0^{-1}$, $D \sim H_0^{-1} \gg R_p$,
where $R_p$ is the proton attenuation length in the microwave background
radiation. In this case the UHECR flux has an exponential cutoff at energy
$E \sim 3\cdot 10^{10}~GeV$. In the case of accidental proximity of a
string to the observer, the flux is strongly anisotropic. A fine-tuning
in the position of the observer is needed to reconcile both
requirements.
(iii){\em Network of monopoles connected by strings}.\\
The sequence of phase transitions
\begin{equation}
G\to H\times U(1)\to H\times Z_N
\label{eq:symm}
\end{equation}
results in the formation of monopole-string networks in which each monopole
is attached to N strings. Most of the monopoles and most of the strings belong
to one infinite network. The evolution of networks is expected to be
scale-invariant with a characteristic distance between monopoles
$d=\kappa t$, where $t$ is the age of the Universe and $\kappa=const$.
The production of UHE particles is considered in \cite{BMV}. Each
string attached
to a monopole pulls it with a force equal to the string tension, $\mu \sim
\eta_s^2$, where $\eta_s$ is the symmetry breaking vev of strings. The
monopoles then have a typical acceleration $a\sim \mu/m$, energy $E \sim \mu d$
and Lorentz factor $\Gamma_m \sim \mu d/m $, where $m$ is the mass of the
monopole. An accelerated monopole can, in principle, radiate
gauge quanta, such as photons, gluons and weak gauge bosons, if the
mass of the gauge quantum (or the virtuality $Q^2$ in the case of a gluon) is
smaller than the monopole acceleration. The typical energy of the radiated
quanta in this case is $\epsilon \sim \Gamma_m a$. This energy can be much
higher than what
is observed in UHECR. However, the predicted flux (see \cite{BBV}) is much
smaller than the observed one.
(iv){\em Vortons}.\\
Vortons are charge and current carrying loops of superconducting
string stabilized by their angular momentum \cite{dash}. Although
classically stable, vortons decay by gradually losing charge carriers
through quantum tunneling. Their lifetime, however, can be greater
than the present age of the universe, in which case the escaping
$X$-particles will produce a flux of cosmic rays. The $X$-particle
mass is set by the symmetry breaking scale $\eta_X$ of string
superconductivity.
The number density of vortons formed in the early universe is rather
uncertain. According to the analysis in Ref.\cite{BCDT}, vortons are
overproduced in models with $\eta_X > 10^9 GeV$, so all such models
have to be ruled out. In that case, vortons cannot contribute to the
flux of UHECR. However, an alternative analysis \cite{mash} suggests
that the excluded range is $10^9 GeV <\eta_X < 10^{12}GeV$, while for
$\eta_X \gg 10^{12}GeV$ vorton formation is strongly suppressed. This
allows a window for potentially interesting vorton densities
with\footnote{These numbers assume that strings are formed in a
first-order phase transition and that $\eta_X$ is comparable to the
string symmetry breaking scale $\eta_s$. For a second-order phase
transition, the forbidden range widens and the allowed window moves
towards higher energies \cite{mash}.}
$\eta_X \sim 10^{12}-10^{13}GeV$.
Like monopolonia and SH relic particles, vortons cluster in the
Galactic halo, and the UHECR production and spectra are similar in these
three cases.
(v){\em Necklaces}.\\
Necklaces are hybrid TD corresponding to the case $N=2$ in
Eq.(\ref{eq:symm}), i.e. to the case when each monopole is attached to two
strings. This system resembles ``ordinary'' cosmic strings,
except the strings look like necklaces with monopoles playing the role
of beads. The evolution of necklaces depends strongly on the parameter
\begin{equation}
r=m/\mu d,
\end{equation}
where $d$ is the average separation between monopoles and antimonopoles
along the strings.
As argued in Ref.~\cite{BV}, necklaces might evolve to
configurations with $r\gg 1$, though numerical simulations are needed to
confirm this conclusion.
Monopoles and antimonopoles trapped in the necklaces
inevitably annihilate in the end, producing first the heavy Higgs and
gauge bosons ($X$-particles) and then hadrons.
The rate of $X$-particle production can be estimated as \cite{BV}
\begin{equation}
\dot{n}_X \sim \frac{r^2\mu}{t^3m_X}.
\label{eq:xrate}
\end{equation}
The restriction from e-m cascade radiation demands that the cascade energy
density satisfy $\omega_{cas} \leq 2\cdot 10^{-6}~eV/cm^3$. The cascade energy density
produced by necklaces can be calculated as
\begin{equation}
\omega_{cas}=
\frac{1}{2}f_{\pi}r^2\mu \int_0 ^{t_0}\frac{dt}{t^3}
\frac{1}{(1+z)^4}=\frac{3}{4}f_{\pi}r^2\frac{\mu}{t_0^2},
\label{eq:n-cas}
\end{equation}
where $f_{\pi}\approx 0.5$ is the fraction of the total energy release
transferred to the cascade.
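The second equality in Eq.(\ref{eq:n-cas}) follows if one assumes a
matter-dominated expansion, $1+z=(t_0/t)^{2/3}$ (the standard assumption for
this estimate), since the integrand then reduces to a pure power of $t$:

```latex
\frac{1}{2}f_{\pi}r^2\mu \int_0^{t_0}\frac{dt}{t^3}\left(\frac{t}{t_0}\right)^{8/3}
 = \frac{1}{2}f_{\pi}r^2\mu\, t_0^{-8/3}\int_0^{t_0} t^{-1/3}\,dt
 = \frac{1}{2}f_{\pi}r^2\mu\, t_0^{-8/3}\cdot\frac{3}{2}\,t_0^{2/3}
 = \frac{3}{4}f_{\pi}r^2\frac{\mu}{t_0^2}.
```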
The separation between necklaces is given by \cite{BV}
$D \sim r^{-1/2}t_0$ for large $r$. Since $r^2\mu$ is limited by cascade
radiation, Eq.(\ref{eq:n-cas}), one can obtain a lower limit on the
separation $D$ between necklaces as
\begin{equation}
D \sim \left( \frac{3f_{\pi}\mu}{4t_0^2\omega_{cas}}\right)^{1/4}t_0
>10(\mu/10^6~GeV^2)^{1/4}~kpc.
\label{eq:xi}
\end{equation}
Thus, necklaces provide a realistic example of the case in which the
separation between sources is small and the Universe can be assumed to be
uniformly filled by the sources.
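As a rough numerical check of the lower limit in Eq.(\ref{eq:xi}) (our sketch,
not from the original reference; it assumes $t_0 \approx 13.7$~Gyr and the
conversion $1\,{\rm GeV}^{-1} = 1.973\cdot 10^{-14}$~cm, so the exact prefactor
depends on the adopted age of the Universe):

```python
# Rough numerical check of the lower limit on the necklace separation D.
# Assumptions: t0 ~ 13.7 Gyr; natural-units conversion 1 GeV^-1 = 1.973e-14 cm.

HBAR_C_CM = 1.973e-14                  # cm per GeV^-1
T0_CM = 13.7e9 * 3.156e7 * 2.998e10    # c * t0 in cm
KPC_CM = 3.086e21                      # cm per kpc
F_PI = 0.5                             # fraction of energy going into the cascade
OMEGA_CAS = 2e-6                       # cascade bound, eV/cm^3

def d_min_kpc(mu_gev2):
    """Lower limit on the separation D for a string tension mu (in GeV^2)."""
    mu_ev_per_cm = mu_gev2 / HBAR_C_CM * 1e9   # tension expressed as eV per cm
    d_cm = (3 * F_PI * mu_ev_per_cm / (4 * OMEGA_CAS)) ** 0.25 * T0_CM ** 0.5
    return d_cm / KPC_CM

print(d_min_kpc(1e6))   # of order 10 kpc for mu = 10^6 GeV^2, as quoted in the text
```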
The fluxes of UHE protons and photons are shown in Fig.1 according to
calculations of Ref.\cite{BBV}.
Due to absorption of UHE photons the
proton-induced EAS from necklaces strongly dominate over those induced by
photons at all
energies except $E> 3\cdot 10^{11}~GeV$ (see Fig.1), where photon-induced
showers can comprise an appreciable fraction of the total rate.
The dashed,
dotted and solid lines in Fig.1 correspond to the masses of X-particles
$10^{14}~GeV,\;\; 10^{15}~GeV$ and $10^{16}~GeV$, respectively. The values
of $r^2\mu$ used to fit these curves to the data are
$7.1\cdot 10^{27}~GeV^2,\;\;6.0\cdot 10^{27}~GeV^2$ and
$6.3\cdot 10^{27}~GeV^2$, respectively. They correspond to the cascade
density $\omega_{cas}$ equal to
$1.5\cdot 10^{-6}~eV/cm^3,\;\;
1.2\cdot 10^{-6}~eV/cm^3$ and $1.3 \cdot 10^{-6}~eV/cm^3$, respectively, all
less than the allowed cascade energy density for which we adopt the
conservative value $\omega_{cas}=2\cdot 10^{-6}~eV/cm^3$.
For energies lower than $1\cdot 10^{10}~GeV$, the presence of
another component with a cutoff at $E\sim 1\cdot 10^{10}~GeV$ is assumed. It
can be
generated, for example, by jets from AGN \cite{Bier}, which naturally
have a cutoff at this energy. \\
\vspace{-1cm}
\begin{figure}[h]
\epsfxsize=8truecm
\centerline{\epsffile{figure1.eps}}
\vspace{-1cm}
\caption{Proton and gamma-ray fluxes from necklaces. High ($\gamma$-high)
and low ($\gamma$-low) photon fluxes correspond to two extreme cases of
gamma-ray absorption. The fluxes are given for $m_X=1\cdot 10^{14}~GeV$
(dashed lines), $m_X=1\cdot 10^{15}~GeV$ (dotted lines) and
$m_X=1\cdot 10^{16}~GeV$ (solid lines).}
\end{figure}
\section{Relic Superheavy Particles}
{\em Production} of relic SH particles occurs in the time-varying gravitational
field of the oscillating inflaton \cite{Kolb,KuTk}. The SH particle must be
lighter than the inflaton; otherwise the relic density of SH particles is
exponentially suppressed. Since the inflaton has to be lighter than
$10^{13}~GeV$ to produce the required spectrum of primordial density
fluctuations, this scale gives the upper limit on the mass of the SH relic
particle. In this scenario SH
particles can provide the critical density of the Universe.
Several other plausible mechanisms were identified in \cite{BKV}, including
thermal production at the reheating stage, production through the decay of
inflaton field at the end of the preheating phase
and through the decay of hybrid topological
defects, such as monopoles connected by strings or walls bounded by
strings.
We shall start our short description with the non-equilibrium thermal
production.
For the thermal production, temperatures comparable to $m_X$ are needed.
In the case of a heavy decaying gravitino,
the reheating temperature $T_R$ (which is the highest temperature
relevant for the considered problem)
is severely limited to values below $10^8- 10^{10}$~GeV, depending
on the gravitino mass (see Ref. \cite{ellis} and references therein).
On the other hand,
in models with dynamically broken supersymmetry, the lightest
supersymmetric particle is the gravitino. Gravitinos with mass
$m_{3/2} \leq 1$~keV interact relatively strongly with the thermal bath,
thus decoupling relatively late, and can be the CDM particle \cite{grav}.
In this scenario all phenomenological
constraints on $T_R$ (including the decay of the second
lightest supersymmetric particle) disappear and one can assume
$T_R \sim 10^{11} - 10^{12}$~GeV. In this
range of temperatures, SH particles are not in thermal equilibrium.
If $T_R < m_X$, the density $n_X$ of $X$-particles produced during the
reheating phase at time $t_R$ due to $a+\bar{a} \to X+\bar{X}$ is easily
estimated as
\begin{equation}
n_X(t_R) \sim N_a n_a^2 \sigma_X t_R \exp(-2m_X/T_R),
\label{eq:dens}
\end{equation}
where $N_a$ is the number of flavors which participate in the production of
X-particles, $n_a$ is the density of $a$-particles and $\sigma_X$ is
the production cross-section. The density of $X$-particles at the
present epoch can be found by the standard procedure of calculating
the ratio $n_X/s$, where
$s$ is the entropy density. Then for $m_X = 1\cdot 10^{13}$~GeV
and $\xi_X$ in the wide range of values $10^{-8} - 10^{-4}$, the required
reheating temperature is $T_R \sim 3\cdot 10^{11}$~GeV.
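The exponential factor in Eq.(\ref{eq:dens}) makes the produced density
extremely sensitive to $T_R$, which is why the wide range of $\xi_X$ maps onto
a narrow range of reheating temperatures. A small illustration (the values of
$m_X$ and $T_R$ are those quoted in the text):

```python
import math

M_X = 1e13   # GeV, X-particle mass used in the text

# Boltzmann suppression factor exp(-2 m_X / T_R) entering Eq. (eq:dens):
# a factor ~3 change in T_R moves it by tens of orders of magnitude.
for t_r in (1e11, 3e11, 1e12):
    factor = math.exp(-2.0 * M_X / t_r)
    print(f"T_R = {t_r:.0e} GeV  ->  exp(-2 m_X/T_R) = {factor:.1e}")
```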
In the second scenario mentioned above, non-equilibrium inflaton decay,
$X$-particles are usually overproduced and a second period of
inflation is needed
to suppress their density.
Finally, $X$-particles could be produced by TD such as strings or textures.
Particle production occurs at string intersections or in collapsing texture
knots. The evolution of defects is scale invariant, and roughly a constant
number of particles $\nu$ is produced per horizon volume $t^3$ per Hubble
time $t$. ($\nu \sim 1$ for textures and $\nu \gg 1$ for strings.) The main
contribution to the X-particle density is given by the earliest epoch,
soon after defect formation, and we find
$\xi_X \sim 10^{-6} \nu (m_X/10^{13}~GeV)(T_f/10^{10}~GeV)^3$, where
$T_f$ is the defect formation temperature. Defects of energy scale
$\eta \geq m_X$ could be formed at a phase transition at or slightly
before the end of inflation. In the former case, $T_f \sim T_R$, while in
the latter case defects should be considered as ``formed'' when their typical
separation becomes smaller than $t$ (hence $T_f < T_R$).
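The scaling quoted above can be evaluated directly; the sample parameter
values below are illustrative only, chosen to show that the required range
$\xi_X \sim 10^{-8} - 10^{-4}$ is easily covered:

```python
def xi_x(nu, m_x_gev, t_f_gev):
    """xi_X ~ 1e-6 * nu * (m_X / 10^13 GeV) * (T_f / 10^10 GeV)^3."""
    return 1e-6 * nu * (m_x_gev / 1e13) * (t_f_gev / 1e10) ** 3

# A modest change in the defect formation temperature T_f moves xi_X
# across several decades, because of the cubic dependence.
print(xi_x(1.0, 1e13, 1e10))   # 1e-6
print(xi_x(1.0, 1e13, 1e9))    # 1e-9
print(xi_x(10.0, 1e13, 5e10))  # large-nu, high-T_f example
```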
X-particles can also be produced
by hybrid topological defects: monopoles connected by strings or walls
bound by strings. The required values of $n_X/s$ can be obtained for a wide
range of defect parameters.
The {\em lifetime} of the SH particle has to be larger (or much larger) than
the age of the Universe. Even gravitational interactions, if unsuppressed,
make the lifetime of an $X$-particle with mass $m_X \sim 10^{13} -
10^{14}~GeV$ much shorter. Some (weakly broken) symmetry is needed to protect
the X-particle from fast decay. Such symmetries indeed exist, e.g. discrete
gauge symmetries. If such a symmetry is very weakly broken, e.g. by wormhole
effects \cite{BKV}, or the decay is caused by instanton effects \cite{KR},
the X-particle can have the desired lifetime. A detailed analysis of discrete
gauge symmetries was recently performed in \cite{Yanagida}. Cases were found
in which the allowed non-renormalizable operators for the decay of the
X-particle are suppressed by a high power of the Planck mass. In this case
the lifetime of the X-particle can be larger than the age of the Universe.
A realistic example of a long-lived SH particle, the crypton, is given in
\cite{Benakli}. As in the case above, the decay of the crypton is suppressed
by a high power of the Planck mass.
The {\em energy spectrum} of the decay products from the DM halo has no GZK
cutoff \cite{BKV}, and photons dominate the flux. The energy spectrum was
calculated using the QCD Monte Carlo simulation ``Herwig'' \cite{Sarkar} and
as the limiting QCD spectrum with supersymmetric particles taken into account
\cite{BK}. The spectrum, as calculated in \cite{BBV}, is shown in Fig.2.
\begin{figure}[h]
\epsfxsize=8truecm
\centerline{\epsffile{figure2.eps}}
\vspace{-1cm}
\caption{Predicted fluxes from relic SH particles
($m_X=1\cdot 10^{14}~GeV$) or from monopolonia producing X-particles with
the same masses: nucleons from the halo (curves labelled ``protons''),
gamma-rays from the halo (curves labelled ``gammas'') and extragalactic protons.
The solid, dotted and dashed curves correspond to different models of
DM distribution in the halo.}
\end{figure}
\section{Signatures}
In contrast to astrophysical acceleration sources, the TD
production spectrum is enhanced in UHE photons. Though UHE photons are
absorbed more strongly than UHE protons (antiprotons), the photons can
dominate at some energies; at the least, the $\gamma/p$ ratio in the case of
TD is much larger than in the case of acceleration sources \cite{Aha,BBV}.
This signature can be discussed quantitatively for necklaces, probably
the only extragalactic TD which satisfy the observational constraints.
At large values of $r=m/(\mu d) >10^7$ necklaces have a small separation
$D < R_{\gamma}$, where $R_{\gamma}$ is an absorption length for UHE
photons. They are characterized by a small fraction of photon-induced
EAS at energies $10^{10} - 10^{11}~GeV$. However, this fraction increases
with energy and becomes considerable at the highest energies.
Relic SH particles, as well as monopolonia and vortons, have an enhanced
density in the Galactic halo. The signatures of these relics are the absence
of the GZK cutoff, the dominance of UHE gamma radiation in the observed flux,
and an anisotropy due to the non-central position of the Sun in the DM halo.
{\em Anisotropy} is the strongest signature of the DM halo model. It is
most noticeable as the difference in fluxes between the
directions to the Galactic Center and Anticenter. Since the
Galactic Center is not observed by any of the existing detectors,
the anisotropy for them is less pronounced, but it can be detected if the
halo component becomes dominant at $E \sim (1-3)\cdot 10^{19}~eV$. In case
the halo component is responsible only for the events at
$E\geq 1\cdot 10^{20}~eV$, as recent AGASA data suggest, the
statistics are too small to test the predicted anisotropy, and this problem
will be left to the Auger detector in the southern hemisphere
\cite{Cronin}.
{\em UHE photons} as primaries can also be tested by the existing
detectors.
The search for photon induced showers is not an easy experimental task.
It is known (see e.g. Ref.\cite{AK}) that the muon content of the
photon-induced showers at very high energies
is very similar to that in proton-induced showers. However,
some difference in the muon content between these two
cases is expected and may be used to distinguish between them
observationally.
The Fly's Eye detector is the most effective at distinguishing between
photon- and proton-induced showers. This detector is able to reconstruct the
development of the shower in the atmosphere \cite{FE,FE1}, which is
different for photon- and proton-induced showers. The analysis \cite{Halzen}
of the highest energy shower, $E \sim 3\cdot 10^{20}~eV$, detected by the
Fly's Eye detector disfavors a photon primary. The future HiRes detector
\cite{HiRes} will reliably distinguish photon- and proton-induced showers.
The Landau-Pomeranchuk-Migdal (LPM) effect \cite{LPM} and the absorption of
photons in the geomagnetic field are two other
important phenomena which affect the detection of UHE photons
\cite{AK,Kasa}; (see \cite{ps} for a recent discussion).
The LPM effect reduces the cross-sections
of electromagnetic interactions at very high energies. However, if a
primary photon approaches the Earth in a direction characterized by a large
perpendicular component of the geomagnetic field, the photon is likely to
convert into an electron-positron pair \cite{AK,Kasa}. Each of them emits
synchrotron photons,
and as a result a bunch of photons strikes the Earth's atmosphere. The LPM
effect, which strongly depends on energy, is thus suppressed. If, on the
other hand,
a photon moves along the magnetic field, it does not convert, and the LPM
effect makes the shower development in the atmosphere very slow. At extremely
high energies the maximum of the shower can be so close to the Earth's
surface that it becomes ``unobservable'' \cite{ps}.
\section{Conclusions}
Topological defects and relic quasistable SH particles are effectively
produced in the post-inflationary Universe and can at present produce UHE
particles (photons and (anti)nucleons) with energies higher than those
observed in UHECR.
The fluxes from most known TD are too small to explain the observations.
The plausible candidates are necklaces and monopolonia (the latter are
similar in observational properties to relic SH particles). The fluxes
from extragalactic TD are restricted by the e-m cascade radiation. The energy
spectrum of UHE (anti)protons from TD has a less pronounced GZK cutoff than
that from acceleration sources, because of the QCD production spectrum, which
is much different from a power-law energy spectrum. The signature of
extragalactic TD is the presence of UHE photons in the primary radiation.
Absorption of UHE photons on the radio background considerably diminishes
the fraction of photon-induced showers at observation.
Relic SH particles (and monopolonia) are concentrated in the Galactic halo,
and their energy spectrum does not exhibit the GZK cutoff. The UHE photon
flux is $\sim 10$ times higher than that of protons. A detectable anisotropy
is expected, especially as the difference in fluxes between the Galactic
Center and Anticenter.
Therefore, both sources, TD and relic SH particles, have very distinct
experimental predictions, which can be tested with the help of present
detectors, but most reliably with future detectors, such as
the Auger detector in the southern hemisphere \cite{Cronin} and
HiRes \cite{HiRes}.\\*[3mm]
This paper is based on a talk given at the 10th Int. Symposium on Very High
Energy Cosmic Ray Interactions, July 12-17, 1998. New publications
which have appeared since that time are not included in this review. The most
important of them is the new data of the
AGASA detector at energies higher than $1\cdot 10^{20}~eV$. As shown in
Fig.2 \cite{AGASA}, they can be
interpreted as evidence for two components of UHECR, one with the GZK
cutoff and another without it, extending to energies
$(2-3)\cdot 10^{20}~eV$. In this case the DM halo component (relic SH
particles) has to be responsible only for the $E> 1\cdot 10^{20}~eV$ part of
the spectrum.\\*[2mm]
{\bf Acknowledgements}
I am grateful to my collaborators P.Blasi, M.Kachelriess and A.Vilenkin
for the discussions and help. M.Hillas, P.Sokolsky and A.Watson are
thanked for stimulating conversations and correspondence.
\section{Introduction}
\object{VY\,CMa} (\object{HD\,58061}, \object{SAO\,173591}) is one of the most luminous red
supergiants in the Galaxy with $L \simeq 4\times 10^5\, L_\odot$ (Sopka et al. 1995,
Jura \& Kleinmann 1990) and is, therefore, an ideal candidate for the
study of the progenitor phases of a supernova.
Its distance is about 1500 pc, its
spectral type is M5~Iae, and it is variable with a period of $\sim$2200 days
(Jura \& Kleinmann \cite{jura}; Malyuto et al. \cite{malyuto}; Danchi et al.
\cite{danchi}; Knapp \& Morris \cite{knapp}; Imai et al. \cite{imai}).
Herbig (\cite{herbig}) found that \object{VY\,CMa} is embedded
in an optical nebula with a size of
$8^{\prime\prime}\times 12^{\prime\prime}$ at 650\, nm.
The first high-resolution observations of the
dust shell of \object{VY\,CMa} have been reported by
McCarthy \& Low (\cite{mccarthy1}),
McCarthy et al. (\cite{mccarthy2}) and
Danchi et al. (\cite{danchi}).
\object{VY\,CMa} is a source of H$_2$O, OH and SiO maser emission
(e.g. Masheder et al. \cite{masheder},
Bowers et al. \cite{bowers1},
Bowers et al. \cite{bowers2},
Imai et al. \cite{imai},
Humphreys et al. \cite{humphreys},
Richards et al. \cite{richards}).
The H$_2$O masers are distributed in an east-west
direction, whereas OH masers are distributed in a north-south direction,
possibly
indicating a disk and a polar outflow.
HST FOC images of the envelope of \object{VY\,CMa} were obtained by Kastner and Weintraub
(\cite{kastner2}). They show an asymmetric flux distribution in an
approximately east-west direction with a brighter core of pure
scattered light elongated from SE to NW.
In this {\it Letter} we present diffraction-limited $\sim$\,0.8\,$\mu$m,
1.28\,$\mu$m, and 2.17\,$\mu$m bispectrum speckle interferometry observations
of the mass-loss
envelope of \object{VY\,CMa}. The observations are presented in Section~2, and
the non-spherical shape of the envelope
is discussed in Section~3. Clues for \object{VY\,CMa} 's evolutionary
state are derived in Section~4.
\section{Observations}
The optical and NIR speckle interferograms were obtained with
the ESO 3.6\,m telescope at La Silla on February 6, 7, and 8, 1996. The optical
speckle interferograms were recorded through the edge filter RG\,780
(center wavelength of the filter/image intensifier combination:
$\sim$\,0.8\,$\mu$m; effective filter width: $\sim$\,0.07\,$\mu$m) with our
optical speckle camera described by Hofmann et al. (\cite{hofmann}). The near
infrared speckle interferograms were recorded with our
NICMOS\,3 camera
through interference filters with center wavelength/FWHM bandwidth of
1.28\,$\mu$m/0.012\,$\mu$m and 2.17\,$\mu$m/0.021\,$\mu$m. The observational
parameters (number of frames, exposure time per frame, pixel scale and seeing)
are listed in Table \ref{obs}. Diffraction-limited images were reconstructed
from the speckle interferograms by the bispectrum speckle interferometry method
(Weigelt \cite{weigelt1}; Lohmann et al. \cite{lohmann};
Weigelt \cite{weigelt2}). The power spectrum of \object{VY\,CMa} was determined with the
speckle interferometry method (Labeyrie 1970).
The atmospheric speckle transfer functions were derived from speckle
interferograms of the unresolved stars H42071 (RG\,780, 2000 frames),
IRC 30098 (1.28\,$\mu$m, 600 frames) and 1 Pup (2.17\,$\mu$m, 800 frames).\\
\begin{table}[t]
\caption{Observational parameters.}
\begin{tabular}{l l l l l}
filter & number & exposure & pixel scale & seeing\\
& of frames & time & & \\
\hline
RG\,780 & 2000 & $ 60\, {\rm ms}$ & $7.2\,{\rm mas} $ & 1\farcs 5\\
1.28\,$\mu$m & 800 & $150\, {\rm ms}$ & $ 23.8\,{\rm mas}$ & 1\farcs 0\\
2.17\,$\mu$m & 800 & $100\, {\rm ms}$ & $ 47.6\,{\rm mas}$ & 1\farcs 5\\ \hline
\end{tabular}
\label{obs}
\end{table}
Figure \ref{bilder} shows contour plots and intensity cuts of the
reconstructed bispectrum speckle interferometry images of \object{VY\,CMa}. The
resolutions of the 0.8\,$\mu$m, 1.28\,$\mu$m and 2.17\,$\mu$m images are
46\,mas, 70\,mas and 111\,mas, respectively. The envelope of \object{VY\,CMa} is
asymmetric at each of the three wavelengths.
The object parameters were determined by
two-dimensional model fits to the visibility functions.
The models consist of two-dimensional elliptical Gaussian flux distributions
plus an additional unresolved component.
The $\sim$\,0.8\,$\mu$m image is best described by two Gaussians while the
1.28\,$\mu$m and the 2.17\,$\mu$m images are best described by one
Gaussian and an additional unresolved component.
The best-fit parameters are listed in Table \ref{fits}.
We present the
azimuthally averaged visibility function of \object{VY\,CMa} together with the
corresponding azimuthally averaged two-dimensional fits in Fig. \ref{visis},
in order to show the wavelength-dependent relative flux contribution
of the unresolved component (dashed line).
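For reference, the shape of such an azimuthally averaged model can be sketched
as a circular Gaussian of FWHM $\theta$ carrying flux fraction $1-f_{\rm pt}$
plus a constant visibility floor $f_{\rm pt}$ from the unresolved component.
The function below is our simplified, circularly symmetric illustration, not
the authors' two-dimensional fitting code; the sample inputs are taken from
Table~\ref{fits}:

```python
import math

def model_visibility(q, fwhm_mas, f_point):
    """Azimuthally averaged model visibility: a circular Gaussian of FWHM
    `fwhm_mas` (in mas) with flux fraction (1 - f_point), plus an unresolved
    component with flux fraction f_point.  q is in cycles per arcsec."""
    theta_arcsec = fwhm_mas * 1e-3
    v_gauss = math.exp(-(math.pi * theta_arcsec * q) ** 2 / (4.0 * math.log(2.0)))
    return (1.0 - f_point) * v_gauss + f_point

# Sample inputs from the 2.17 um fit: 166 mas average FWHM, 50% unresolved flux.
# The visibility tends to the 0.5 floor at high spatial frequency.
for q in (0.0, 2.0, 4.0, 6.0, 8.0):
    print(f"q = {q:4.1f} cycles/arcsec  ->  V = {model_visibility(q, 166.0, 0.50):.3f}")
```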
\begin{figure*}
\begin{center}
\resizebox{0.75\hsize}{!}{\includegraphics{comp2.eps}}
\end{center}
\vspace{0.5cm}%
\caption{Top: Contour plots of the RG\,780 (left), 1.28\,$\mu$m (middle)
and 2.17\,$\mu$m filter (right) bispectrum speckle interferometry
reconstructions of \object{VY\,CMa} . The contours are plotted from 15\% to
100\% in 10 steps. North is at the top and east to the left.
Bottom: Intensity cuts through the centers of the reconstructed
images along the major and minor axes (solid lines). The position angles
of these axes (RG\,780: 153$^\circ$, 1.28\,$\mu$m: 176$^\circ$, 2.17\,$\mu$m:
160$^\circ$) are taken from the two-dimensional fits to the visibility
functions (see Tab. \protect\ref{fits}) and indicated in the contour plots.
The dashed curves are cuts through the reconstructed images of the reference
stars.}
\label{bilder}
\end{figure*}
\begin{table}[t]
\caption{\object{VY\,CMa} 's parameters derived from the model fits to the visibility
functions (one {\em two-dimensional} Gaussian flux distribution plus an
unresolved object for the 1.28\,$\mu$m and 2.17\,$\mu$m observations and two
{\em two-dimensional} Gaussian flux distributions for the RG\,780 data).
The parameters are the position angle of the major axis,
the axes ratio (major/minor axis), the FWHM of major and minor
axes, the azimuthally averaged FWHM diameter and the relative flux
contributions of the Gaussian and the unresolved object. We estimate
the errors of the position angles to $\sim\pm10^\circ$ and those of
the FWHM sizes to $\sim\pm15\%$.
}
\begin{tabular}{l l l l}
data set &RG\,780 & 1.28\,$\mu$m & 2.17\,$\mu$m \\
\hline
PA ($^\circ$) of the major axis & 153/120 & 176 & 160 \\
Axes ratio & 1.2/1.1 & 1.5 & 1.5 \\
Major axis (mas) & 83/360 & 116 & 205 \\
Minor axis (mas) & 67/280 & 80 & 138 \\
Average diameter (mas) & 74/320 & 96 & 166 \\
rel. flux of Gaussian & 0.75/0.25 & 0.91 & 0.50 \\
rel. flux of unres. comp. & 0.00 & 0.09 & 0.50 \\
\hline
\end{tabular}
\label{fits}
\end{table}
\begin{figure*}[btp]
\begin{center}
\resizebox{0.3\hsize}{!}{\includegraphics{VYCMa-RG780.vis.ps}}
\resizebox{0.3\hsize}{!}{\includegraphics{VYCMa-Pb.vis.ps}}
\resizebox{0.3\hsize}{!}{\includegraphics{VYCMa-Brg.vis.ps}}
\end{center}
\vspace{0.3cm}%
\caption{Azimuthally averaged visibility functions of \object{VY\,CMa} and of the
corresponding model flux distribution:
(left) filter RG\,780, (middle) 1.28\,$\mu$m filter and (right)
2.17\,$\mu$m filter.
The diamonds indicate the observations, the solid line the
azimuthally averaged fit curve of a two-dimensional Gaussian model flux
distribution plus an unresolved object, and the dashed line the constant
caused by just the unresolved component.
The RG\,780 visibility function is best described by two Gaussian flux
distributions.
For the fit parameters see Table \protect\ref{fits}. The visibility
errors are $\pm$ 0.1 up to 80\% of the telescope cut-off frequency and
$\pm$ 0.2 for larger frequencies.}
\label{visis}
\end{figure*}
\section{Interpretation}
Figures 1 and 2 and Tab.~\ref{fits} indicate that \object{VY\,CMa}
consists of both an unresolved component and a resolved asymmetric extended
component with an increasing average diameter at longer wavelengths.
The unresolved component is probably the star itself or an
additional compact circumstellar object. The relative intensity of this
unresolved component decreases towards shorter wavelengths.
The complete obscuration of the unresolved component close to the maximum of
the stellar spectral energy distribution indicates a high optical depth
of the circumstellar envelope, in agreement with the value from
Le~Sidaner and
Le~Bertre of $\tau_{\rm 10\,\mu m}=2.4$.
The visibility functions, the images and the best fit parameters
given in Tab.~\ref{fits} at the three different wavelengths
can be used as additional constraints in future two-dimensional radiation
transfer modeling of the dust distribution.
The resolved structures seen in Fig.~1 belong to the
circumstellar envelope of \object{VY\,CMa}. In fact, the size of the image is of the
order of the dust condensation radius $R_{\rm c}$ which has been
estimated by Le~Sidaner \& Le~Bertre within their spherically symmetric
model to $R_{\rm c} \simeq 12 R_{\star}$
with $R_{\star}\sim 4000\,R_\odot$.
They note, however, that their
model fit to the spectral energy distribution of \object{VY\,CMa} is not entirely
satisfactory and argue that the envelope of \object{VY\,CMa} may be non-spherical,
as indicated by the high level of polarisation (Kruszewski \& Coyne 1976).
Danchi et al. (1994) found $R_{\rm c} \simeq 5 R_{\star}$.
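As a rough consistency check (our numbers, not the authors'): converting the
two estimates of $R_{\rm c}$ into angular radii at the adopted distance of
1500\,pc gives scales comparable to the measured FWHM diameters of
Table~\ref{fits}:

```python
# Angular radius of the dust condensation zone, assuming R_star ~ 4000 R_sun
# and d ~ 1500 pc as quoted in the text.
R_SUN_CM = 6.96e10
PC_CM = 3.086e18
MAS_PER_RAD = 2.06265e8

def condensation_radius_mas(rc_over_rstar, rstar_rsun=4000.0, d_pc=1500.0):
    """Angular radius (mas) of the dust condensation zone at distance d_pc."""
    rc_cm = rc_over_rstar * rstar_rsun * R_SUN_CM
    return rc_cm / (d_pc * PC_CM) * MAS_PER_RAD

print(condensation_radius_mas(12.0))  # Le Sidaner & Le Bertre estimate, ~150 mas
print(condensation_radius_mas(5.0))   # Danchi et al. estimate, ~60 mas
```

Both values bracket the measured 74-166 mas average FWHM diameters, supporting
the statement that the resolved structures trace the inner dust envelope.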
Several interpretations for the non-sphericity seen in all three images
(Fig.\,1) are possible.
The position angle of the major axes of the approximately
elliptical shapes is similar, although
not identical for all three cases (153$^{\circ}$ to 176$^{\circ}$;
see Table~2).
This position angle is approximately perpendicular to the major axis of the
H$_2$O maser distribution (Richards et al. \cite{richards}) and
similar to the distribution of the OH masers (e.g. Masheder et al.
\cite{masheder}).
Accordingly, we can interpret the structure of the
circumstellar envelope of \object{VY\,CMa} as a bipolar outflow in a north-south direction
caused by an obscuring equatorial disk in an east-west direction.
The existence of an obscuring disk is supported by the obscuration of the
central star at optical wavelengths.
Such geometry was also discussed for \object{IRC\,+10\,216} by
Weigelt et al. (1998) and Haniff \& Buscher (1998) and has already been
proposed for \object{VY\,CMa} due to maser observations by
Richards et al. (\cite{richards}).
Furthermore, we cannot rule out that the unresolved component consists
of an optically thick torus hiding a close binary, as was proposed for the
Red\,Rectangle (e.g. Men'shchikov et al. 1998). This could lead to
an asymmetric outflow in a north-south direction.
The mass-loss mechanism of \object{VY\,CMa} could also be erratic or stochastic,
similar to the clumpy pulsation and dust-driven mass-loss events
recently detected in the prototype carbon star \object{IRC~+10\,216}
(see Weigelt et al. 1998, Haniff \& Buscher 1998).
Although the reason for this anisotropy is unknown, and the physics of
mass-loss in oxygen-rich red supergiants may differ from those in
carbon-rich stars, the common properties of \object{VY\,CMa} and \object{IRC~+10\,216}
--- both are pulsating cool luminous stars with extended convective
envelopes --- could result in similar mass-loss features.
But individual clumps are not observable because of the larger distance of
\object{VY\,CMa}, which leads to a regular asymmetric image elongated
in a north-south direction.
Finally, our results are also compatible with a more regular geometry
of \object{VY\,CMa} 's circumstellar envelope, for example, a disk-like envelope which
appears elongated in a north-south direction due to the projection angle
(discussed in more detail in Sec.\,4), which was proposed earlier for \object{VY\,CMa}
on the basis of optical, infrared (Herbig 1970, 1972; McCarthy 1979;
Efstathiou \& Rowan-Robinson 1990), and maser observations
(van Blerkom 1978, Morris \& Bowers 1980, Zhen-pu \& Kaifu 1984).
\section{Evolutionary status and conclusions}
With a distance of $\sim$\,1500\,pc, the luminosity of \object{VY\,CMa} amounts
to $\sim 4\times 10^5\, L_\odot$ (see Jura \& Kleinmann 1990).
Figure~3 compares this luminosity to a stellar evolutionary track
of a 40$\, M_\odot$ star in the HR diagram (see Langer 1991, for details
of the computational method). It constrains the initial mass of \object{VY\,CMa}
to the range $30 ... 40\, M_\odot$, in agreement with e.g. Meynet et al.
(1994), although models with rotation --- which are not yet available
for the post main sequence evolution at this mass ---
may lead to somewhat smaller values (cf. Heger et al. 1997).
Accordingly, it is likely that \object{VY\,CMa} will
transform into a Wolf-Rayet star during its further evolution.
This is supported by the very high
observed mass-loss rate of \object{VY\,CMa} of $\sim 10^{-4}\, M_\odot~{\rm yr}^{-1}$ (Jura \& Kleinmann 1990).
\begin{figure}[t]
\resizebox{0.85\hsize}{!}{\includegraphics{sn40.p4}}
\vspace{0.3cm}%
\caption{
Evolutionary track of a 40$\, M_\odot$ star of solar metallicity, from the
zero age main sequence to the red supergiant $\rightarrow$ Wolf-Rayet star
transition. The luminosity of VY CMa is indicated by an arrow.
}
\end{figure}
In order to obtain a disk-like structure (see Sec.\,3),
the most likely mechanisms
involve angular momentum.
There is no indication of a binary companion
to VY~CMa, either from the spectrum of VY~CMa (Humphreys et al. 1972) or
from high-resolution imaging (see above). While this only excludes
massive companions, it appears viable that the axial geometry is due to the
star's rotation.
Direct evidence for rotation being capable of producing a disk-like
structure around red supergiants --- possibly through the
Bjorkman-Cassinelli-mechanism of wind compression (see Ignace et al. 1996) ---
comes from bipolar structures found in the AGB star
\object{V~Hydrae} (Stanek et al. 1995), for which rapid rotation
($v \sin i \simeq 13\, {\rm km}\, {\rm s}^{-1}$) has been
directly inferred from photospheric line broadening (Barnbaum et al. 1995).
According to Heger \& Langer (1998), red supergiants drastically increase
their surface rotation rate shortly before and during
the evolution off the Hayashi line. Therefore,
a disk-like envelope surrounding \object{VY\,CMa}
may indicate that such a spin-up is currently in progress.
As strong mass-loss from a convective star
acts as a spin-down mechanism (Langer 1998, Heger \& Langer 1998),
a red supergiant must previously have lost the major part of its envelope
for the competing spin-up process to dominate.
This strongly supports the argument
that the remaining mass of \object{VY\,CMa} 's convective envelope
is in fact small, and that \object{VY\,CMa} is just about to leave the Hayashi line.
It is also consistent with the very long observed
pulsation period of \object{VY\,CMa} of about $6\,$yr
according to the pulsational analysis of Heger et al. (1997),
who showed that such large periods can be obtained in red supergiants
for small envelope masses (cf. their Fig.~2a).
With this interpretation, \object{VY\,CMa}
represents the immediate progenitor state of \object{IRC\,+10\,420},
currently a mid A~type supergiant evolving bluewards on human time scales
on its way from the red supergiant stage
to become a Wolf-Rayet star (Jones et al. 1993, Kastner \& Weintraub 1995).
A comparison with the 40$\, M_\odot$ model shown in Fig.~3
predicts a current mass of \object{VY\,CMa} of $\sim$15$\, M_\odot$ and a surface helium mass fraction
of $Y\simeq 0.40$.
It is remarkable in the
present context that bipolar outflows are observed in \object{IRC\,+10\,420}
(Oudmaijer et al. 1994, 1996, Humphreys et al. 1997).
A disk-like structure of VY~CMa's envelope could be the basis for
such flows, which occur when a fast wind originating from the star
in a post-red supergiant stage interacts with a previously formed disk,
according to hydrodynamic simulations of interacting wind flows
(e.g., Mellema 1997, Garc\'{\i}a-Segura et al. 1998).
\begin{acknowledgements}
This work has been
supported by the Deutsche Forschungsgemeinschaft through grants
La~587/15-1 and 16-1.
\end{acknowledgements}
\documentstyle[12pt]{article}
\setlength{\textwidth}{15.6cm} \setlength{\textheight}{23.12cm}
\hoffset -2cm \topmargin= -0.4cm \raggedbottom
\renewcommand{\baselinestretch}{1.0}
\newcommand{\nsection}[1]{\section{#1}\setcounter{equation}{0}}
\renewcommand{\theequation}{\arabic{section}.\arabic{equation}}
\newcommand{\no}{\noindent}
\newcommand{\CO}{{\cal O}} \newcommand{\CS}{{\cal S}} \newcommand{\CC}{{\cal C}}
\newcommand{\Nr}{{\bf r}} \newcommand{\Nx}{{\bf x}} \newcommand{\Ny}{{\bf y}}
\newcommand{\Nv}{{\bf v}} \newcommand{\Nw}{{\bf w}}
\newcommand{\ee}{{\rm e}} \newcommand{\tr}{\hbox{tr}}
\pagestyle{plain}
\begin{document}
\title{\bf{Phase transition in the passive scalar advection}}
\author{Krzysztof Gaw\c{e}dzki \\C.N.R.S., I.H.E.S.,
91440 Bures-sur-Yvette, France\\
\\
Massimo Vergassola \\C.N.R.S., Observatoire de la C\^ote d'Azur, B.P. 4229,\\
06304 Nice, France}
\date{ }
\maketitle
\vskip 0.2cm
\begin{abstract}
\vskip 0.2cm
\noindent The paper studies the behavior of the trajectories
of fluid particles in a compressible generalization of the Kraichnan
ensemble of turbulent velocities. We show that, depending on the
degree of compressibility, the trajectories either explosively
separate or implosively collapse. The two behaviors are shown
to result in drastically different statistical properties of scalar
quantities passively advected by the flow. At weak compressibility,
the explosive separation of trajectories induces a familiar direct
cascade of the energy of a scalar tracer with a short-distance
intermittency and dissipative anomaly. At strong compressibility,
the implosive collapse of trajectories leads to an inverse cascade
of the tracer energy with suppressed intermittency and with the energy
evacuated by large scale friction. A scalar density whose advection
preserves mass exhibits in the two regimes opposite cascades
of the total mass squared. We expect that the explosive separation
and collapse of Lagrangian trajectories occur also in more
realistic high Reynolds number velocity ensembles and that the
two phenomena play a crucial role in fully developed
turbulence.
\vskip 0.3cm
\noindent PACS: 47.27 - Turbulence, fluid dynamics
\hfill
\end{abstract}
\vskip 1cm
\nsection{Introduction}
One of the main characteristic features of the high Reynolds number
turbulent flows is a cascade-like transfer of the energy injected by
an external source. In three dimensional flows, the injected energy is
transferred to shorter and shorter scales and is eventually dissipated
by the viscous friction. This direct cascade is, to a first
approximation, described by the Kolmogorov 1941 scaling theory
\cite{K41} but the observed departures from scaling (intermittency)
remain to be explained from the first principles. As discovered by
R.H.~Kraichnan in \cite{Kr67}, in two dimensions, the injected energy
is transferred to longer and longer distances in an inverse cascade
whereas it is the enstrophy that is transferred to shorter and
shorter scales. Experiments \cite{Tab} and numerical simulations
\cite{SY,Betal} suggest the absence of intermittency in the inverse
2-dimensional cascade. In the present paper, we shall put forward
arguments indicating that the occurrence and the properties of direct
and inverse cascades of conserved quantities in hydrodynamical flows
are related to different typical behaviors of fluid particle
trajectories. \vskip 0.3cm
Let us start by drawing some simple analogies between fluid dynamics
and the theory of dynamical systems which studies solutions of the
ordinary differential equations
\begin{eqnarray}
{{dx}\over{dt}}\,=\,X(x)\,.
\label{ode}
\end{eqnarray}
Let $x_{_{s,y}}(t)$ denote the solution of Eq.\,\,(\ref{ode})
passing at time $s$ by point $y$. In dynamical systems, where the
attention is concentrated on regular functions $X$, one encounters
different types of behavior of solutions\footnote{the following is not
a statement about the genericity of the listed behaviors} \vskip
0.2cm
1). \ {\bf integrable motions} (more common in Hamiltonian systems),
where the nearby trajectories stay close together forever:
\begin{eqnarray}
\label{one}
\vert x_{_{s,y_1}}(t)-\,x_{_{s,y_2}}(t)\vert\ \sim\ \CO(\vert
y_1-y_2\vert)\,,
\end{eqnarray}
\vskip 0.1cm
2). \ {\bf chaotic motions}, where the distance between the nearby
trajectories grows exponentially, signaling a sensitive dependence on
the initial conditions:
\begin{eqnarray}
\label{two}
\vert x_{_{s,y_1}}(t)-\,x_{_{s,y_2}}(t)\vert\ \sim\
\CO(\ee^{\lambda\vert t-s\vert}\vert y_1-y_2\vert)\,,
\end{eqnarray}
with the Lyapunov exponent $\lambda>0$\,, \vskip 0.2cm
3). \ last but, by no means, least, {\bf dissipative motions}, where
\begin{eqnarray}
\label{three}
\vert x_{_{s,y_1}}(t)-\,x_{_{s,y_2}}(t)\vert\ \sim\
\CO(\ee^{\lambda\vert t-s\vert}\vert y_1-y_2\vert)\,,
\end{eqnarray}
with $\lambda<0$. \vskip 0.2cm
\noindent Several of these types of motion may appear in
the same system. \vskip 0.3cm
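The chaotic and dissipative behaviors (2) and (3) can be told apart numerically by estimating the Lyapunov exponent as the trajectory average of $\log\vert f'\vert$. The following sketch is only an illustration (not from the paper), using one-dimensional maps in place of the flow (\ref{ode}):

```python
import math

def lyapunov(f, df, x0, n=200000, burn=100):
    """Estimate the Lyapunov exponent of a 1-D map x -> f(x)
    as the trajectory average of log|f'(x)|."""
    x = x0
    for _ in range(burn):
        x = f(x)
    acc = 0.0
    for _ in range(n):
        acc += math.log(abs(df(x)))
        x = f(x)
    return acc / n

# chaotic motion: logistic map at r = 4, known exponent log 2 > 0
lam_chaotic = lyapunov(lambda x: 4 * x * (1 - x), lambda x: 4 - 8 * x, 0.3)

# dissipative motion: the contraction x -> x/2, exponent -log 2 < 0
lam_dissip = lyapunov(lambda x: 0.5 * x, lambda x: 0.5, 0.3)
```

The positive estimate for the logistic map converges near $\log 2\approx 0.69$, while the contraction gives exactly $-\log 2$, mirroring the exponential divergence and convergence in (\ref{two}) and (\ref{three}).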
Analogies between dynamical systems and hydrodynamical evolution
equations, for example the Navier-Stokes ones, are often drawn
by viewing the Eulerian evolution of velocities as a dynamical
system in infinite dimensions, see \cite{RuTak}. One
has, however, a more direct (although not unrelated) analogy between
Eq.\,\,(\ref{ode}) and the ordinary differential equation
\begin{eqnarray}
{{d\Nx}\over{dt}}\ =\ \Nv(t,\Nx)
\label{odeh}
\end{eqnarray}
for the Lagrangian trajectories of fluid particles in a given
velocity field $\Nv(t,\Nx)$. As before, we shall denote by
$\Nx_{_{s,\Nr}}(t)$ the solution passing by $\Nr$ at time $s$.
Clearly, the system (\ref{odeh}) is time-dependent and the velocity
field is itself a dynamical variable. Nevertheless, one may ask
how the solutions of Eq.\,\,(\ref{odeh}) behave for
``typical'' velocities. On the phenomenological level, such behavior
seems to be rather robust and to depend on few characteristics of the
velocity fields. One of them is the Reynolds number
$Re={L\,\vert\Delta_{_L}\Nv\vert\over\nu}$, where
$\vert\Delta_{_L}\Nv\vert$ is the (typical) velocity difference over
the distance $L$ of the order of the size of the system and $\nu$ is
the kinematic viscosity. Another important characteristic of velocity
fields is the degree of compressibility, measured, for example, by the
ratio of the mean values of $(\sum\limits_\alpha\nabla_\alpha
v^\alpha)^2\equiv(\nabla\cdot\Nv)^2$ and $\sum\limits_{\alpha,\beta}
(\nabla_\alpha v^\beta)^2\equiv(\nabla\Nv)^2$. \vskip 0.2cm
Reynolds numbers ranging up to $\CO(10^2)$ are the realm of
laminar flows and the onset of turbulence. Velocity fields in
(\ref{odeh}) are thus regular in space and the behaviors
(1) to (3) are observed for Lagrangian trajectories.
They seem to have limited bearing on the character of the Eulerian
evolution of velocities, see Chapter 8 of \cite{BJPV}.
This is a natural domain of applications of the theory of dynamical
systems to both Eulerian and Lagrangian evolutions.
When the Reynolds number is increased, however, fully
developed turbulent flows are produced in which the behavior of
trajectory separations becomes more dramatic. For incompressible
flows, for example, we claim (see also
\cite{slowm,fmv,nice}) that the regime of fully
developed turbulence is characterized by the
\vskip 0.2cm
2'). {\bf explosive separation of trajectories}:
\begin{eqnarray}
\qquad\qquad\vert\Nx_{_{s,\Nr_1}}(t)-\,\Nx_{_{s,\Nr_2}}(t)\vert \quad\
{\rm becomes\ }\CO(1)\ {\rm in\ finite\ time}\,.
\label{2prim}
\end{eqnarray}
More precisely, the time of separation of trajectories to an
$\CO(1)$ distance stays bounded when $\Nr_2$ approaches $\Nr_1$, provided
that the initial separation $\vert\Nr_1-\Nr_2\vert$ stays in the inertial
range where the viscous effects may be neglected. \vskip 0.3cm
Since the inertial range extends down to zero distance when
$Re\to\infty$, the fast separation of trajectories has a drastic
consequence in this limit: the very concept of individual Lagrangian
trajectories breaks down. Indeed, at $Re=\infty$, infinitesimally
close trajectories take finite time to separate\footnote{in contrast
to their behavior in the chaotic regime, see Sect.\,\,2.2 below}
and, as a result, there are many trajectories (in fact, a continuum)
satisfying a given initial condition. It should still be possible,
however, to give a statistical description of such ensembles of
trajectories in a fixed velocity field. Unlike for intermediate
Reynolds numbers, there seems to be a strong relation between
the behavior of the Lagrangian trajectories and the basic hydrodynamic
properties of developed turbulent flows: we expect the appearance
of non-unique trajectories for $Re\to\infty$ to be responsible
for the dissipative anomaly, the direct energy cascade,
the dissipation of higher conserved quantities and the pertinence
of weak solutions of hydrodynamical equations at $Re=\infty$.
\vskip 0.3cm
The breakdown of the Newton-Leibniz paradigm based on the uniqueness
of solutions of the initial value problem for ordinary differential
equations is made mathematically possible by the loss of small scale
smoothness of turbulent velocities when $Re\to\infty$. At $Re=\infty$,
the typical velocities are expected to be only H\"{o}lder continuous
in the space variables:
\begin{eqnarray}
\vert\Nv(t,\Nx)-\Nv(t,\Nx')\vert^2\ \sim\
\vert\Nx-\Nx'\vert^{\xi}\,,
\label{hoel}
\end{eqnarray}
with the H\"{o}lder exponent ${\xi\over2}$ close to the Kolmogorov
value ${1\over3}$ \cite{K41}. The uniqueness of solutions of the initial
value problem for Eq.\,\,(\ref{odeh}) requires, on the other hand,
the Lipschitz continuity of $\Nv(t,\Nx)$ in $\Nx$, i.e. the behavior
(\ref{hoel}) with $\xi=2$. It should be stressed that for large but
finite $Re$, the chaotic behavior (2) of trajectories may still
persist for short separations of the order of the dissipative scale
(where the viscosity makes the velocities smooth) and the behavior
(2') is observed only on distances longer than that. However, it is
the latter which seems responsible for much of the observed physics of
fully developed turbulence and, thus, setting $Re=\infty$ seems to
be the right idealization in this regime. \vskip 0.3cm
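The bounded separation time behind the breakdown of uniqueness can be made concrete with a toy ODE whose right-hand side is H\"older but not Lipschitz, ${d\delta/dt}=\delta^{\xi/2}$ with ${\xi/2}={1\over3}$ for the separation $\delta$: explicit integration gives the time ${3\over2}(1-\delta_0^{2/3})$ to reach $\delta=1$, bounded as $\delta_0\to0$ (and $\delta\equiv0$ is a second solution through $\delta_0=0$). A small numerical check, purely illustrative:

```python
def time_to_order_one(delta0, dt=1e-4):
    """Euler integration of d(delta)/dt = delta**(1/3)
    until the separation delta reaches O(1)."""
    t, d = 0.0, delta0
    while d < 1.0:
        d += dt * d ** (1.0 / 3.0)
        t += dt
    return t

# the separation time stays bounded (close to 3/2) as delta0 -> 0
times = [time_to_order_one(10.0 ** (-p)) for p in (3, 5, 7, 9)]
```

All four times cluster near $3/2$: infinitesimally close trajectories reach an $\CO(1)$ distance in a finite, nearly initial-condition-independent time, whereas in the Lipschitz case $\xi=2$ the separation time would diverge logarithmically as $\delta_0\to0$.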
For general velocity fields, one should expect that the poor spatial
regularity of velocities ${{\bf v}}} \newcommand{\Nw}{{{\bf w}}$ might lead to two opposite effects. On
one hand, the trajectories may branch at every time and coinciding
particles would split in a finite time as in (2'). Solving discretized
versions of Eq.\,\,(\ref{odeh}) randomly picks a branch of the
solution and generates some sort of a random walk whose average
reproduces the trajectory statistics. This is the effect previously
remarked \cite{slowm,fmv,nice} in the studies of the
incompressible Kraichnan model \cite{Kr94}. It should be dominant for
incompressible or weakly compressible flows. On the other hand, the
trajectories may tend to be trapped together. The most direct way to
highlight this phenomenon is to consider strongly compressible
velocity fields which are well known for depleting transport (see
\cite{VA}). An instance is provided by the one-dimensional equation
$\,{dx\over dt}\,=\,\beta(x)\,,\,$ for $\beta(x)$ a Brownian motion in
$x$, whose solutions are trapped in finite time at the zeros of
$\beta$ at the right (left) ends of the intervals where $\beta>0$
($\beta<0$) \cite{sinai2}.
\vskip 0.3cm
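The trapping is easy to observe numerically. In the following illustration (not from the paper) a Brownian-bridge sample path stands in for $\beta$, trajectories are clamped to the sampling interval, and every initial point is driven to a nearby stable zero of $\beta$, where it sticks:

```python
import random

random.seed(1)

# a Brownian-bridge sample path beta(x) on [0, 1], with beta(0) = beta(1) = 0
N, L = 1000, 1.0
dx = L / N
W = [0.0]
for _ in range(N):
    W.append(W[-1] + random.gauss(0.0, dx ** 0.5))
beta = [W[i] - (i * dx / L) * W[N] for i in range(N + 1)]

def beta_at(x):
    """Piecewise-linear interpolation of beta, clamped to [0, L]."""
    x = min(max(x, 0.0), L)
    i = min(int(x / dx), N - 1)
    t = x / dx - i
    return (1 - t) * beta[i] + t * beta[i + 1]

def trajectory(x0, T=50.0, dt=1e-3):
    """Euler integration of dx/dt = beta(x)."""
    x = x0
    for _ in range(int(T / dt)):
        x += beta_at(x) * dt
    return x

# different initial points all get stuck where the velocity vanishes
finals = [trajectory(x0) for x0 in (0.1, 0.3, 0.5, 0.7, 0.9)]
```

Since the interpolated $\beta$ is Lipschitz, the approach to the zeros is exponential rather than in finite time; the finite-time sticking of \cite{sinai2} is recovered in the limit of a true (non-smooth) Brownian $\beta$.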
What may then become typical in strongly compressible velocities is,
instead of the explosion (2'), the
\vskip 0.2cm
3'). {\bf implosive collapse of trajectories}:
\begin{eqnarray}
\qquad\qquad\vert\Nx_{_{s,\Nr_1}}(t)-\,\Nx_{_{s,\Nr_2}}(t)\vert\quad\
{\rm becomes\ equal\ to\ zero\ in\ finite\ time}\,,
\end{eqnarray}
with the
time of collapse depending on the initial positions. This type of
behavior should lead to a domination at infinite $Re$ and strong
compressibility of shock-wave-type solutions of hydrodynamical
equations, as in the 1-dimensional Burgers problem \cite{burg}. Again,
strictly speaking, we should expect behavior (3') only for $Re=\infty$
whereas for finite $Re$, on distances smaller than the dissipative
scale, the approach of typical trajectories should become exponential
with a negative Lyapunov exponent. In simple Gaussian ensembles
of smooth compressible velocities the latter behavior and its
consequences for the direction of the cascade of a conserved quantity
have been discovered and extensively discussed in \cite{CKV1} and
\cite{CKV2}. \vskip 0.3cm
It is the main purpose of the present paper to provide some support
for the above, largely conjectural, statements about typical behaviors
of fluid-particle trajectories at high Reynolds numbers and about
the impact of these behaviors on the physics of fully turbulent
hydrodynamical flows. We study only simple
synthetic random ensembles of velocities showing H\"{o}lder continuity
in spatial variables. Although this is certainly insufficient to make
firm general statements, it shows, however, that the behaviors (2')
and (3') are indeed possible and strongly affect hydrodynamical properties.
In realistic flows, both behaviors might coexist. For the ensemble of
flows considered here, they occur alternatively, leading to
two different phases at large (infinite) Reynolds numbers, depending
on the degree of compressibility and the space dimension. The
occurrence of the collapse (3') is reflected in the suppression
of the short-scale dissipation and the inverse cascade of certain
conserved quantities. The absence of a dissipative anomaly permits
an analytical understanding of the dynamics and a demonstration
that the inverse cascade is self-similar. This strengthens
the conjectures on the general lack of intermittency for inverse
cascades \cite{Tab,SY,Betal,Bisk}. A {\it caveat} comes however
from the consideration of friction effects, indicating that the role
of infrared cutoffs might be subtle and anomalies might reappear
in terms of them.
\vskip 0.3cm
Synthetic ensembles of velocities are often used to study the problems
of advection in hydrodynamical flows. These problems become then
simpler to understand than the advection in the Navier-Stokes flows,
but might still render well some physics of the latter, especially in
the case of passive advection when the advected quantities do not
modify the flows in an important way. The simplest of these problems
concerns the advection of scalar quantities. There are two types of
such quantities that one may consider. The first one, which we shall
call a tracer and shall denote $\theta(t,\Nr)$, undergoes the
evolution governed by the equation
\begin{eqnarray}
\partial_t\theta+\Nv\cdot\nabla\theta-\kappa\nabla^2\theta\,=\,f\,,
\label{ps}
\end{eqnarray}
where $\kappa$ denotes the diffusivity and $f(t,\Nr)$ describes
an external forcing (a source). The second scalar quantity is of a
density type, e.g. the density of a pollutant, and we shall denote it
by $\rho(t,\Nr)$. Its evolution equation is
\begin{eqnarray}
\partial_t\rho+\nabla\cdot(\Nv\,\rho)-\kappa\nabla^2\rho\,=\,f\,.
\label{psb}
\end{eqnarray}
For $f=0$ it has the form of the continuity equation so that,
without the source, the evolution of $\rho$ preserves the total mass
$\int\hspace{-0.08cm}\rho(t,\Nr)\,d\Nr$. The two equations coincide
for incompressible velocities but differ if the velocities
are compressible.
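The mass conservation by Eq.\,\,(\ref{psb}) survives discretization whenever the fluxes are differenced conservatively. A minimal finite-volume sketch (our illustration, with a deliberately compressible velocity and a small diffusivity to stabilize the centered fluxes):

```python
import math

# periodic 1-D grid for  rho_t + (v rho)_x - kappa rho_xx = 0
N = 200
dx = 2 * math.pi / N
dt = 1e-3
kappa = 0.01
v = [0.5 + 0.4 * math.sin(i * dx) for i in range(N)]    # compressible: v' != 0
rho = [1.0 + 0.5 * math.cos(i * dx) for i in range(N)]
rho_init = list(rho)

mass0 = sum(rho) * dx
for _ in range(2000):
    flux = [v[i] * rho[i] for i in range(N)]
    rho = [rho[i]
           - dt / (2 * dx) * (flux[(i + 1) % N] - flux[i - 1])
           + kappa * dt / dx ** 2 * (rho[(i + 1) % N] - 2 * rho[i] + rho[i - 1])
           for i in range(N)]
mass1 = sum(rho) * dx
# the centered differences telescope over the periodic grid, so the
# total mass is conserved to rounding accuracy even though rho itself
# is locally compressed and rarefied by the flow
```

Replacing the divergence form $\nabla\cdot(\Nv\rho)$ by the advective form $\Nv\cdot\nabla\theta$ of Eq.\,\,(\ref{ps}) would instead conserve the extrema of the field rather than its integral, which is the discrete counterpart of the tracer/density distinction for compressible velocities.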
\vskip 0.3cm
Much attention has recently been attracted by theoretical and
numerical studies of a model of passive scalar advection introduced
by R. H. Kraichnan in 1968 \cite{Kr68}. The essence of the Kraichnan
model is that it considers a synthetic Gaussian ensemble of velocities
decorrelated in time. The ensemble may be defined by specifying the
1-point and the 2-point functions of velocities. Following Kraichnan,
we assume that the mean velocity $\langle\,\Nv(t,\Nx)\,\rangle$
vanishes and that
\begin{eqnarray}
\langle v^\alpha(t,\Nr)\,v^\beta(t',\Nr')\rangle\,=\,\delta(t-t')\,[\,
d_0^{\alpha\beta}-d^{\alpha\beta}(\Nr-\Nr')\,]
\label{vc}
\end{eqnarray}
with constant $d_0^{\alpha\beta}$ and with $d^{\alpha\beta}(\Nr)$
proportional to $r^\xi$ at short distances. The latter property
mimics the scaling behavior of the equal-time velocity correlators of
realistic turbulent flows in the $Re\to\infty$ limit. It leads to the
behavior (\ref{hoel}) for the typical realizations of $\Nv$. The
parameter $\xi$ is taken between $0$ and $2$ so that the typical
velocities of the ensemble are not Lipschitz continuous.
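The spatial scaling in (\ref{vc}) is easy to emulate: in one dimension, a Gaussian field with spectrum $\propto k^{-(1+\xi)}$ has second-order structure function $\propto r^{\xi}$, i.e. H\"older exponent $\xi/2$. An illustrative spectral synthesis (ours, not part of the model's definition), checking the scaling exponent for the Kolmogorov-like value $\xi=2/3$:

```python
import numpy as np

rng = np.random.default_rng(0)
N, xi = 4096, 2.0 / 3.0          # xi/2 = 1/3: Kolmogorov-like Hoelder exponent

def sample_field():
    """One periodic Gaussian field with spectrum ~ k^-(1+xi)."""
    k = np.fft.rfftfreq(N, d=1.0 / N)            # integer wavenumbers 0..N/2
    amp = np.zeros_like(k)
    amp[1:] = k[1:] ** (-(1.0 + xi) / 2.0)       # |v_k| ~ k^-(1+xi)/2
    phase = rng.standard_normal(len(k)) + 1j * rng.standard_normal(len(k))
    return np.fft.irfft(amp * phase, n=N)

# second-order structure function S2(r) = <(v(x+r) - v(x))^2>,
# averaged over space and 50 independent realizations
rs = np.array([4, 8, 16, 32, 64])
S2 = np.zeros(len(rs))
for _ in range(50):
    v = sample_field()
    for j, r in enumerate(rs):
        S2[j] += np.mean((np.roll(v, -r) - v) ** 2)
S2 /= 50
slope = np.polyfit(np.log(rs), np.log(S2), 1)[0]   # should be close to xi
```

The fitted log-log slope comes out near $\xi$, confirming that such synthetic fields are H\"older rather than Lipschitz continuous in space.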
\vskip 0.3cm
We shall study the behavior of Lagrangian trajectories in the velocity
fields of a compressible version of the Kraichnan ensemble and the
effect of that behavior on the advection of the scalars. The time
decorrelation of the velocity ensemble is not a very physical
assumption. It makes, however, an analytic study of the model much
easier. We expect the temporal behavior of the velocities to have less
bearing on the behavior of Lagrangian trajectories than the
spatial one, but this might be the weakest point of our arguments.
Another related weak point is that the Kraichnan ensemble is
time-reversal invariant whereas realistic velocity ensembles are not,
so that the typical behaviors of the forward- and backward-in-time
solutions of Eq.\,\,(\ref{odeh}) for the Lagrangian trajectories may
be different. It should be also mentioned that in our conclusions
about the advection of scalars we let $Re\to\infty$ before sending the
diffusivity $\kappa$ to zero, i.e. we work at zero Schmidt or Prandtl
number $\nu\over\kappa$. The qualitative picture should not be
changed, however, if $Re$ becomes very large but $\nu\over\kappa$
stays bounded. On the other hand, the situation when
${\nu\over\kappa}\to\infty$ should be better described by the
$\xi\to2$ limit of the Kraichnan model where the velocities become
smooth. \vskip 0.3cm
The paper is organized as follows. In Sect.\,\,2 we discuss the
statistics of the Lagrangian trajectories in the Kraichnan ensemble
and discover two different phases with typical behaviors (2') and
(3'), occurring, respectively, in the case of weak and strong
compressibility. In Sects.\,\,3 and 4 we discuss the advection of a
scalar tracer in both phases and show that it exhibits cascades of the
mean tracer energy. In the weakly compressible phase, the cascade is
direct (i.e. towards short distances) and it is characterized by an
intermittent behavior of tracer correlations, signaled by their
anomalous scaling at short distances. On the other hand, in the
strongly compressible phase, the tracer energy cascade inverts its
direction. In the latter case, we compute exactly the probability
distribution functions of the tracer differences over long distances
and show that, although non-Gaussian, they have a scale-invariant
form. This indicates that the inverse cascade directed towards long
distances forgets the integral scale where the energy is injected.
Conversely, the scale where friction extracts energy from the system
is shown in Section~6 to lead to anomalous scaling of certain
observables. Finally, in Section 7, we discuss briefly the advection
of a scalar density. Here, in the weakly compressible phase, we find
a cascade of the mean mass squared towards long distances and, on
short distances, the scaling of correlation functions in agreement
with the predictions of \cite{russ}. The strongly compressible
shock-wave phase, however, exhibits a drastically different behavior
with the inversion of the direction of the cascade of mean mass
squared towards short distances. In Conclusions we briefly summarize
our results. Some of the more technical material is
assembled in five Appendices.
\vskip 0.3cm
\nsection{Lagrangian flow}
The assumptions of isotropy and of scaling behavior on all scales fix
the functions $d^{\alpha\beta}(\Nr)$ in the velocity 2-point function
(\ref{vc}) of the Kraichnan ensemble up to two parameters:
\begin{eqnarray}
d^{\alpha\beta}(\Nr)\,=\,[A+(d+\xi-1)B]\,\delta^{\alpha\beta}\,
r^\xi\,+\,[A-B]\,\xi\,r^\alpha r^\beta\,r^{\xi-2}\,,
\label{d}
\end{eqnarray}
with $A=0$ corresponding to the incompressible case where
$\nabla\cdot\Nv=0$ and $B=0$ to the purely potential one with
$\Nv=\nabla\phi$. Positivity of the covariance requires that
$A,B\geq 0$. It will be convenient to relabel the constants $A$ and
$B$ by $\CS^2=A+(d-1)B$ and $\CC^2=A$. $\CS^2$ and $\CC^2$ are
proportional to, respectively, $\langle(\nabla\Nv)^2\rangle$ and
$\langle(\nabla\cdot\Nv)^2\rangle$ and they satisfy the inequalities
$\CS^2\geq\CC^2\geq0$. In one dimension, $\CS^2=\CC^2\geq0$. The ratio
\begin{eqnarray}
\wp\,\equiv\,{\CC^2\over\CS^2}\,,\qquad 0\leq\wp\leq 1\,,
\label{iota}
\end{eqnarray}
characterizes the degree of compressibility. \vskip 0.3cm
The source $f$ in the evolution equations (\ref{ps}) and (\ref{psb})
for the scalars will also be taken random Gaussian, independent of
the velocity, with mean zero and 2-point function
\begin{eqnarray}
\langle f(t,\Nr)\,f(t',\Nr')\rangle\,=\,\delta(t-t')\,
\chi(\vert\Nr-\Nr'\vert)\,,
\label{fc}
\end{eqnarray}
where $\chi$ decays on the injection scale $L$. \vskip 0.3cm
In the absence of the forcing and diffusion terms in
Eq.\,\,(\ref{ps}), the tracer $\theta$ is carried by the flow:
$\theta(t,\Nr)\,=\,\theta(s,\Nx_{_{t,\Nr}}(s))$, and the density
$\rho$ is stretched by the Jacobian
$J={\partial(\Nx_{t,\Nr}(s))}/{\partial(\Nr)}$
of the map $\Nr\mapsto\Nx_{t,\Nr}(s)$:
$\rho(t,\Nr)\,=\,\rho(s,\Nx_{_{t,\Nr}}(s))\,J$.
Here, $\Nx_{t,\Nr}(s)$ is the fluid
(Lagrangian) trajectory obeying $d\Nx_{_{t,\Nr}}/ds=\Nv(s,\Nx_{_{t,\Nr}})$
and passing through the point $\Nr$ at time
$t$, i.e. $\Nx_{_{t,\Nr}}(t)=\Nr$. The flows of the scalars may be
rewritten as the relations
\begin{eqnarray}
\theta(t,\Nr)=\int\delta(\Nr'-\Nx_{_{t,\Nr}}(s))\,
\theta(s,\Nr')\,d\Nr'\,,\qquad
\rho(t,\Nr)=\int\delta(\Nr-\Nx_{_{s,\Nr'}}(t))\,\rho(s,\Nr')\,
d\Nr'\,,
\label{sc00}
\end{eqnarray}
which imply that the two flows are dual to each other:
$\,\int\theta(t,\Nr)\,\rho(t,\Nr)\,d\Nr$ does not change in time.
In the presence of forcing and diffusion, there are two slight
modifications. First, the sources create the scalars along the
Lagrangian trajectories. Second, the diffusion superposes Brownian
motions upon the trajectories. One has
\begin{eqnarray}
\theta(t,\Nr)\ =\ \overline{\theta(s,\Nx_{_{t,\Nr}}(s))\,+\,\smallint\limits_s^t
f(\tau,\Nx_{_{t,\Nr}}(\tau))\,d\tau}
\label{sc1}
\end{eqnarray}
for the tracer and
\begin{eqnarray}
\rho(t,\Nr)&=&\overline{\rho(s,\Nx_{_{t,\Nr}}(s))
\,{_{\partial(\Nx_{t,\Nr}(s))}\over^{\partial(\Nr)}}\,+\,\smallint\limits_s^t
f(\tau,\Nx_{_{t,\Nr}}(\tau))
\,{_{\partial(\Nx_{t,\Nr}(\tau))}\over^{\partial(\Nr)}}\,d\tau}
\label{sc2}
\end{eqnarray}
for the density, where
\begin{eqnarray}
{d\Nx_{_{t,\Nr}}\over
ds}\,=\,\Nv(s,\Nx_{_{t,\Nr}})\,+\,\sqrt{2\kappa}\,{d{\bf\beta}\over
ds}\,,\qquad\Nx_{_{t,\Nr}}(t)=\Nr\,,
\label{tr1}
\end{eqnarray}
with the overbar in Eqs.\,\,(\ref{sc1}) and (\ref{sc2}) denoting
the average over the $d$-dimensional Brownian motions
${\bf\beta}(s)$. \vskip 0.3cm
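Equation (\ref{tr1}) is a stochastic ODE, and its standard numerical discretization is the Euler--Maruyama scheme. In this sketch (ours, purely illustrative) the velocity is switched off so as to isolate the $\kappa$-term, whose Brownian perturbation should spread the particles diffusively, $\overline{x^2}=2\kappa t$ in one dimension:

```python
import random

random.seed(7)
kappa, T, dt, M = 0.05, 1.0, 1e-2, 4000

def v(s, x):
    # velocity switched off; a real field would be plugged in here
    return 0.0

finals = []
for _ in range(M):
    x, s = 0.0, 0.0
    for _ in range(int(T / dt)):
        # Euler-Maruyama step: dx = v ds + sqrt(2 kappa) d(beta)
        x += v(s, x) * dt + (2 * kappa) ** 0.5 * random.gauss(0.0, dt ** 0.5)
        s += dt
    finals.append(x)

var = sum(x * x for x in finals) / M   # should approach 2 * kappa * T = 0.1
```

With the velocity restored, the same scheme samples the perturbed trajectories whose statistics enter the overbars of Eqs.\,\,(\ref{sc1}) and (\ref{sc2}).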
Clearly, the statistics of the scalar fields reflects the statistics
of the Lagrangian trajectories or, for $\kappa>0$, of their
perturbations by the Brownian motions. In particular, it will be
important to look at the probability distribution functions (p.d.f.'s)
of the difference $\Nr'$ of the time $s$ positions of two Lagrangian
trajectories (perturbed by independent Brownian motions if
$\kappa>0$), given their time $t$ positions $\Nr_1$ and $\Nr_2$,
\begin{eqnarray}
P_2^{t,s}(\Nr_1-\Nr_2,\,\Nr')\,=\,
\langle\,\delta(\Nr'-\Nx_{_{t,\Nr_{1}}}(s)+\Nx_{_{t,\Nr_{2}}}(s))
\,\rangle\,.
\label{jdf2}
\end{eqnarray}
Note that $P_2^{t,s}$ is normalized to unity with respect to $\Nr'$
and the equivalent expressions
\begin{eqnarray}
P_2^{t,s}(\Nr_1-\Nr_2,\,\Nr')&=&
\int\langle\,\delta(\Nr'+\Nr-\Nx_{_{t,\Nr_{1}}}(s))
\,\,\delta(\Nr-\Nx_{_{t,\Nr_{2}}}(s))\,\rangle\,\,d\Nr \cr
&=&\int\langle\,\delta(\Nr'-\Nx_{_{t,\Nr_{1}-\Nr}}(s))
\,\,\delta(-\Nx_{_{t,\Nr_{2}-\Nr}}(s))\,\rangle\,\,d\Nr\,,
\label{alte}
\end{eqnarray}
where the last equality uses the homogeneity of the
velocities\footnote{$\Nx_{t,\Nr_i-\Nr}+\Nr\,$ coincides
with $\Nx_{t,\Nr_i}$ in the velocity field shifted in space by
$\Nr$}. \vskip 0.3cm
In the Kraichnan model, the p.d.f.'s $P_2^{t,s}$ may be easily
computed\footnote{the calculation goes back, essentially, to
\cite{Kr68}}. They are given by the heat kernels $\ee^{-|t-s|\,
M_2^\kappa}(\Nr,\,\Nr')$ of the $2^{\rm nd}$-order elliptic
differential operators $M_2^{\kappa}\,=\,-\,
d^{\alpha\beta}(\Nr)\,\nabla_{\alpha}\nabla_{\beta}\,
-\,2\kappa\nabla^2$. What this means is that the Lagrangian
trajectories undergo, in their relative motion, an effective diffusion
with the generator $M_2^{\kappa}$, i.e. with a space-dependent
diffusion coefficient proportional to the power $\xi$ of their
relative distance (for distances large enough that the contribution of the
$\kappa$-term to $M_2^{\kappa}$ may be neglected). Note that, due to
the stationarity and the time-reflection symmetry of the velocity
distribution,
\begin{eqnarray}
P_2^{t,s}(\Nr,\,\Nr')\,=\,P_2^{s,t}(\Nr,\,\Nr')
\label{tre2}
\end{eqnarray}
but that, in general,
$P_2^{t,s}(\Nr,\,\Nr')\not=P_2^{s,t}(\Nr',\,\Nr)$, except for the
incompressible case where the operator $M_2^{\kappa}$ becomes
symmetric. \vskip 0.3cm
\subsection{Statistics of inter-trajectory distances}
For many purposes, it will be enough to keep track only of the distances
between two Lagrangian trajectories. We shall then restrict the p.d.f.'s
$P_2^{t,s}$ to the isotropic sector by defining
\begin{eqnarray}} \newcommand{\de}{\bar\partial
P_2^{t,s}(r,r')\,=\,\int\limits_{SO(d)}P_2^{t,s}(\Lambda\Nr,\hspace{0.025cm}} \newcommand{\ch}{{\rm ch}\Nr')\,\hspace{0.025cm}} \newcommand{\ch}{{\rm ch}
d\Lambda\, =\,\,\int\limits_{SO(d)}
P_2^{t,s}(\Nr,\hspace{0.025cm}} \newcommand{\ch}{{\rm ch} \Lambda\Nr')\,\hspace{0.025cm}} \newcommand{\ch}{{\rm ch} d\Lambda\hspace{0.025cm}} \newcommand{\ch}{{\rm ch},
\qqq
where $d\Lambda$ stands for the normalized Haar measure on $SO(d)$.
$P_2^{t,s}(r,r')$ is the p.d.f. of the time $s$ distance $r'$ between
two Lagrangian trajectories, given their time $t$ distance $r$.
Clearly, \begin{eqnarray}} \newcommand{\de}{\bar\partial P_2^{t,s}(r,r')\ =\ \ee^{-|t-s|\hspace{0.025cm}} \newcommand{\ch}{{\rm ch} M_2^\kappa}(r,r') \qqq
with the operator $M_2^\kappa$ restricted to the isotropic sector. In
the action on rotationally invariant functions,
\begin{eqnarray}} \newcommand{\de}{\bar\partial
M_2^\kappa\,=\,-\hspace{0.025cm}} \newcommand{\ch}{{\rm ch}
Z\hspace{0.025cm}} \newcommand{\ch}{{\rm ch} r^{\xi-a}\partial} \newcommand{\ee}{{\rm e}_r r^a\partial} \newcommand{\ee}{{\rm e}_r\,-\,2\kappa\, r^{-d+1}\partial} \newcommand{\ee}{{\rm e}_r
r^{d-1}\partial} \newcommand{\ee}{{\rm e}_r\hspace{0.025cm}} \newcommand{\ch}{{\rm ch},
\label{m2ri}
\qqq
where
\begin{eqnarray}} \newcommand{\de}{\bar\partial Z\hspace{0.025cm}} \newcommand{\ch}{{\rm ch}=\hspace{0.025cm}} \newcommand{\ch}{{\rm ch}\CS^2+\xi\hspace{0.025cm}} \newcommand{\ch}{{\rm ch}\CC^2\qquad{\rm and}\qquad
a\,=\,[(d-1+\xi)\hspace{0.025cm}} \newcommand{\ch}{{\rm ch}\CS^2-\xi\hspace{0.025cm}} \newcommand{\ch}{{\rm ch}\CC^2]\hspace{0.025cm}} \newcommand{\ch}{{\rm ch} Z^{-1}\hspace{0.025cm}} \newcommand{\ch}{{\rm ch}.
\label{a}
\qqq
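For orientation, the coefficients $Z$ and $a$ are elementary to evaluate
numerically; a minimal sketch (the amplitudes $\CS^2$, $\CC^2$ and the
values $d=3$, $\xi=2/3$ are arbitrary illustrative choices), which also
checks the incompressible simplification $Z=\CS^2$, $a=d-1+\xi$ obtained
by setting $\CC^2=0$ in Eq.\,\,(\ref{a}):

```python
# Coefficients Z and a of the isotropic-sector operator M_2^kappa,
# following Eq. (a); S2 and C2 play the role of the amplitudes
# S^2 and C^2 (illustrative values, not taken from the text).
def Z_and_a(S2, C2, d, xi):
    Z = S2 + xi * C2
    a = ((d - 1 + xi) * S2 - xi * C2) / Z
    return Z, a

# Incompressible case C^2 = 0: the formulas reduce to Z = S^2
# and a = d - 1 + xi.
Z, a = Z_and_a(S2=1.0, C2=0.0, d=3, xi=2.0 / 3)
```

In the incompressible example one finds $a=d-1+\xi$, so that
$1+a-\xi=d>0$.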
The radial Laplacian which constitutes the $\kappa$-term
of $M_2^\kappa$ should be taken with the Neumann boundary
conditions at $r=0$ since the smooth rotationally invariant functions
on $\NR^d$ satisfy $\partial} \newcommand{\ee}{{\rm e}_rf(0)=0$. This is the term that dominates
at small $r$ and, consequently, we should choose the same
boundary condition\footnote{this corresponds to
the domination of the short-distance behavior of the perturbed
trajectories by the independent Brownian motions with the diffusion
constants $\kappa$} for the complete operator $M_2^\kappa$.
The adjoint operator $(M_2^\kappa)^*$ with respect to the $L^2$
scalar product $\Vert f\Vert^2=\smallint\limits_0^\infty
\vert f(r)\vert^2 d\mu_d(r)$, where
$d\mu_d(r)=S_{d-1}\hspace{0.025cm}} \newcommand{\ch}{{\rm ch} r^{d-1}\hspace{0.025cm}} \newcommand{\ch}{{\rm ch} dr$ with $S_{d-1}$ standing for the
volume of the unit sphere in $d$ dimensions, should be taken with the
adjoint boundary conditions which make the integration by parts
possible. The diagonalization of $M_2^\kappa$ (in the isotropic
sector), if possible, would then permit one to write \begin{eqnarray}} \newcommand{\de}{\bar\partial P_2^{t,s}(r,r')\,=
\,\int\ee^{-|t-s|\hspace{0.025cm}} \newcommand{\ch}{{\rm ch} E}\,\phi_E(r)\,\psi_E(r') \,\hspace{0.025cm}} \newcommand{\ch}{{\rm ch} d\nu(E)\hspace{0.025cm}} \newcommand{\ch}{{\rm ch},
\label{spd}
\qqq where $\phi_E$ and $\psi_E$ stand for the eigen-functions of the
operators $M_2^\kappa$ and $(M^\kappa_2)^*$, respectively, and
$d\nu(E)$ for the spectral measure. We could naively expect that the
same picture remains true for $\kappa=0$ when \begin{eqnarray}} \newcommand{\de}{\bar\partial
M_2\,\equiv\,M_2^0\,=\,-Z\hspace{0.025cm}} \newcommand{\ch}{{\rm ch} r^{\xi-a}\partial} \newcommand{\ee}{{\rm e}_r r^{a}\partial} \newcommand{\ee}{{\rm e}_r
\label{m2ri0}
\qqq in the rotationally invariant sector. The problem is that the
principal symbol of the operator $M_2$ vanishes at $r=0$ so
that the operator loses ellipticity there and more
care is required in the treatment of the boundary condition.
\vskip 0.3cm
We start with a mathematical treatment of the problem, whose physics we
shall discuss later. It will be convenient to introduce the new
variable $u=r^{2-\xi\over2}$ and to perform the transformation \begin{eqnarray}} \newcommand{\de}{\bar\partial
(Uf)(u)\,=\,({_{2\hspace{0.025cm}} \newcommand{\ch}{{\rm ch} S_{d-1}}\over^{2-\xi}})^{^{1\over2}}
\,u^{{d\over2-\xi}-{1\over2}}\, f(u^{2\over2-\xi})
\label{U}
\qqq mapping unitarily the space of square integrable rotationally
invariant functions on $\NR^d$ to $L^2(\NR_+, du)$. The
transformation $U$, together with a conjugation by a multiplication
operator, turns $M_2$ into the well known Schr\"{o}dinger operator on
the half-line: \begin{eqnarray}} \newcommand{\de}{\bar\partial N_2\,\equiv\, u^{-c}\, U\,M_2\, U^{-1}\hspace{0.025cm}} \newcommand{\ch}{{\rm ch} u^{c}\,
=\,{Z'}\,\hspace{0.025cm}} \newcommand{\ch}{{\rm ch}[\hspace{0.025cm}} \newcommand{\ch}{{\rm ch}-\partial} \newcommand{\ee}{{\rm e}_u^2\, +\,
{_{b^2-{_1\over^4}}\over^{u^{2}}}\hspace{0.025cm}} \newcommand{\ch}{{\rm ch}]\hspace{0.025cm}} \newcommand{\ch}{{\rm ch},
\label{n2}
\qqq where
\begin{eqnarray}} \newcommand{\de}{\bar\partial
Z'\,=\,{_{(2-\xi)^2}\over^4}\,Z\hspace{0.025cm}} \newcommand{\ch}{{\rm ch},\qquad
b\,=\,{_{1-a}\over^{2-\xi}}\qquad{\rm and}\qquad
c\,=\,b+{_d\over^{2-\xi}}-1\hspace{0.025cm}} \newcommand{\ch}{{\rm ch}.
\label{b}
\qqq $N_2$ becomes a positive self-adjoint operator in $L^2(\NR_+)$ if
we specify appropriately the boundary conditions at $u=0$. The theory
of such boundary conditions is a piece of rigorous mathematics
\cite{RS}. It says that for $|b|<1$ there is a one-parameter family
of choices of such conditions, among them two leading to the operators
$N_2^\mp$ with the (generalized) eigen-functions
\begin{eqnarray}} \newcommand{\de}{\bar\partial
\varphi^\mp_E(u)\,=\,u^{{_1\over^2}}\, J_{\mp{b}}
(\sqrt{E/Z'}\, u)
\label{egf}
\qqq
(for $b\not=0$) behaving at $u=0$ as $\CO(u^{{_1\over^2}\mp b})$,
respectively\footnote{the general boundary conditions are
$u^{|b|-{_1\over^2}}\varphi(u)|_{_{u=0}}=\lambda\, u^{1-2|b|}\hspace{0.025cm}} \newcommand{\ch}{{\rm ch}\partial} \newcommand{\ee}{{\rm e}_u\,
u^{|b|-{_1\over^2}}\hspace{0.025cm}} \newcommand{\ch}{{\rm ch}\varphi(u)|_{_{u=0}}$ with $0\leq\lambda\leq\infty$}.
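Both the unitarity of the map (\ref{U}) and the parameter values of
Eq.\,\,(\ref{b}) are easy to verify numerically; the following sketch
uses the illustrative choices $d=3$, $\xi=2/3$, $\CS^2=1$, $\CC^2=0$
and a Gaussian test function (none of these is taken from the text):

```python
from math import exp, gamma, pi

from scipy.integrate import quad

d, xi = 3, 2.0 / 3
S = 2 * pi ** (d / 2) / gamma(d / 2)     # volume S_{d-1} of the unit sphere

# Unitarity of U, Eq. (U): the squared norm with the radial measure
# S r^{d-1} dr on the R^d side must equal the plain L^2(R_+, du)
# norm of Uf.
f = lambda r: exp(-r ** 2)               # rotationally invariant test function
lhs, _ = quad(lambda r: f(r) ** 2 * S * r ** (d - 1), 0, 10)

Uf = lambda u: (2 * S / (2 - xi)) ** 0.5 * u ** (d / (2 - xi) - 0.5) \
    * f(u ** (2 / (2 - xi)))
rhs, _ = quad(lambda u: Uf(u) ** 2, 0, 10)

# Parameters of N_2, Eq. (b), for the incompressible example,
# where Z = 1 and a = d - 1 + xi:
Z, a = 1.0, d - 1 + xi
Zp = (2 - xi) ** 2 / 4 * Z
b = (1 - a) / (2 - xi)                   # here b = -5/4
c = b + d / (2 - xi) - 1
```

In this example $b\leq-1$, so the boundary condition at $u=0$ is
unique and it is $N_2^-$ that survives, as discussed below.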
We then obtain, fixing the spectral measure by dimensional
considerations and, e.g., by the action of $N_2^\mp$ on the functions $u^\mu$,
\begin{eqnarray}} \newcommand{\de}{\bar\partial \ee^{-|t-s|\hspace{0.025cm}} \newcommand{\ch}{{\rm ch} N^\mp_2}(u,u')\,=\,{_1\over^{2\hspace{0.025cm}} \newcommand{\ch}{{\rm ch} Z'}}
\int\limits_0^\infty \ee^{-|t-s|\hspace{0.025cm}} \newcommand{\ch}{{\rm ch} E}\,
\varphi^\mp_E(u)\,\hspace{0.025cm}} \newcommand{\ch}{{\rm ch}\varphi^\mp_E(u')\,\, dE\hspace{0.025cm}} \newcommand{\ch}{{\rm ch}. \qqq Note that the
flip of the sign of $b$ exchanges $N_2^-$ and $N_2^+$. Relating the
operators $N_2^\mp$ to $M_2^\mp$ by Eq.\,\,(\ref{n2}), we infer that
\begin{eqnarray}} \newcommand{\de}{\bar\partial
\ee^{-|t-s|\hspace{0.025cm}} \newcommand{\ch}{{\rm ch} M_2^\mp}(r,r')\hspace{0.025cm}} \newcommand{\ch}{{\rm ch}=\hspace{0.025cm}} \newcommand{\ch}{{\rm ch}{_1\over^{2\hspace{0.025cm}} \newcommand{\ch}{{\rm ch} Z'}}
\int\limits_0^\infty \ee^{-|t-s|\hspace{0.025cm}} \newcommand{\ch}{{\rm ch} E} \,\hspace{0.025cm}} \newcommand{\ch}{{\rm ch}
U^{-1}(u^{c}\varphi^\mp_E)(r)\,\hspace{0.025cm}} \newcommand{\ch}{{\rm ch} U^{-1}(u^{-c}\varphi^\mp_E)(r')\,\hspace{0.025cm}} \newcommand{\ch}{{\rm ch}
dE\cr =\,{_{1}\over^{(2-\xi)\hspace{0.025cm}} \newcommand{\ch}{{\rm ch} Z\hspace{0.025cm}} \newcommand{\ch}{{\rm ch} S_{d-1}}}\int\limits_0^\infty
\ee^{-|t-s|\hspace{0.025cm}} \newcommand{\ch}{{\rm ch} E}\,\, r^{1-a\over 2} \,\, J_{\mp{b}} (\sqrt{E/Z'}\,
{r^{2-\xi\over2}}) \,\hspace{1.2cm}\cr
\cdot\,\,J_{\mp{b}} (\sqrt{E/Z'}\,
{{r'}^{2-\xi\over2}})
\,\,{r'}^{-d+{3\over2}+{a\over 2}-\xi}\,\,dE\hspace{0.025cm}} \newcommand{\ch}{{\rm ch}.\hspace{1.2cm}
\label{expo}
\qqq These are explicit versions of the eigen-function
expansion (\ref{spd}) for $\kappa=0$.
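For $\kappa=0$, the $E$-integrals above may in fact be done in closed
form: the standard Bessel identity
$\int_0^\infty\ee^{-pk^2}J_\nu(ku)\,J_\nu(ku')\,k\,dk=
{1\over2p}\,\ee^{-{u^2+{u'}^2\over4p}}\,I_\nu({uu'\over2p})$
gives
$\ee^{-|t-s| N_2^\mp}(u,u')={\sqrt{uu'}\over2Z'|t-s|}\,
\ee^{-{u^2+{u'}^2\over4Z'|t-s|}}\,I_{\mp b}({uu'\over2Z'|t-s|})$.
A numerical cross-check of this closed form against the spectral
integral (all parameter values are arbitrary illustrative choices):

```python
from math import exp, sqrt

from scipy.integrate import quad
from scipy.special import iv, jv

# nu is the Bessel order (-b or +b for N_2^- or N_2^+); all values
# below are illustrative.
nu, Zp, t = 0.5, 1.0, 0.5
u, up = 1.0, 1.5

# Spectral representation: (1/(2Z')) int exp(-tE) phi_E(u) phi_E(u') dE
# with phi_E(u) = u^{1/2} J_nu(sqrt(E/Z') u), Eq. (egf).
phi = lambda E, x: sqrt(x) * jv(nu, sqrt(E / Zp) * x)
spectral, _ = quad(lambda E: exp(-t * E) * phi(E, u) * phi(E, up) / (2 * Zp),
                   0, 100)

# Closed heat-kernel form from the Bessel identity quoted above.
closed = sqrt(u * up) / (2 * Zp * t) \
    * exp(-(u ** 2 + up ** 2) / (4 * Zp * t)) \
    * iv(nu, u * up / (2 * Zp * t))
```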
\vskip 0.3cm
The eigen-functions of $M_2^\mp$ can be read off from the above
formula. They behave, respectively, as $\CO(1)$ and
$\CO(r^{1-a})$ at $r=0$. It is the first choice that corresponds to
the $\kappa\to0$ limit of the Neumann boundary condition for the
operator $M_2^\kappa$. In Appendix A, we analyze a simpler
problem, where the operator (\ref{m2ri}) is replaced by its $\kappa=0$
version (\ref{m2ri0}) made regular by considering it on the interval
$[r_0,\infty[$, with the Neumann boundary condition at $r_0>0$. We
show that, for $|b|<1$, it is the operator $M_2^-$ that emerges
in the limit $r_0\searrow0$. Cutting the interval at a non-zero
value has an effect similar to that of adding the $\kappa$-term to
$M_2$. We should then have the relation\footnote{the choices
of operators $M_2$ with the other boundary conditions
would describe the trajectories of particles with a tendency
to aggregate upon contact and may also have
applications in advection problems}
\begin{eqnarray}} \newcommand{\de}{\bar\partial
P_2^{t,s}(r,r')\,=\,\ee^{-|t-s|\hspace{0.025cm}} \newcommand{\ch}{{\rm ch} M_2^-}(r,r')
\label{pts}
\qqq in the $\kappa\to0$ limit, as long as $|b|<1$.
\vskip 0.3cm
For $|b|\geq 1$, there is only one way to make $N_2$ into a positive
self-adjoint operator. If $b\leq-1$, it is still the operator $N_2^-$
that survives and the relation (\ref{pts}) still holds in the
$\kappa\to0$ limit. For $b\geq1$, however, i.e. for the
compressibility degree $\wp\geq{d\over\xi^2}$, only the operator
$N_2^+$ survives. Its eigen-functions $\varphi^+_E$ behave as
$\CO(u^{b+{_1\over^2}})$ at $u=0$ which corresponds to the $\CO(r^{1-a})$
behavior of the eigen-functions of $M_2^+$. If we impose the Neumann
boundary condition for $M_2$ at $r=r_0$ then, as we show in Appendix
A, in the limit $r_0\searrow0$, the eigen-functions will still become
proportional to the ones obtained from $\varphi^+_E$, not to those
corresponding to $\varphi^-_E$ as it happens for $b<1$. The same
effect has to occur if we add and then turn off the diffusivity
$\kappa$. It seems then that the equality
$P_2^{t,s}(r,r')\,=\,\ee^{-|t-s|\hspace{0.025cm}} \newcommand{\ch}{{\rm ch} M_2^+}(r,r')$ has to hold in the
$\kappa\to0$ limit when $b\geq1$. \vskip 0.3cm
There is, however, one catch. A direct calculation, see
Eqs.\,\,(\ref{norm-}) and (\ref{norm+}) of Appendix B, shows that the
expression (\ref{pts}) is normalized to unity with respect to $r'$,
but that
\begin{eqnarray}} \newcommand{\de}{\bar\partial
\int\limits_0^\infty\ee^{-|t-s|\hspace{0.025cm}} \newcommand{\ch}{{\rm ch} M_2^+}(r,r')
\,\,d\mu_d({r'})\,=\,\gamma({b},\hspace{0.025cm}} \newcommand{\ch}{{\rm ch}{_{r^{2-\xi}} \over^{4\hspace{0.025cm}} \newcommand{\ch}{{\rm ch}
Z'\hspace{0.025cm}} \newcommand{\ch}{{\rm ch}|t-s|}})\,\,\Gamma({b})^{^{-1}}\ <\ 1\hspace{0.025cm}} \newcommand{\ch}{{\rm ch},
\label{nor+}
\qqq where $\gamma(b,x)=\smallint\limits_0^xy^{b-1}\hspace{0.025cm}} \newcommand{\ch}{{\rm ch}\ee^{-y}\hspace{0.025cm}} \newcommand{\ch}{{\rm ch} dy
={x^b\over b}\,\hspace{0.025cm}} \newcommand{\ch}{{\rm ch}{}_{_1}\hspace{-0.03cm}F_{_1}(b,1+b;-x)$ is the
incomplete gamma-function. An alternative, but more instructive, way
to reach the same conclusion is to observe that the time derivative of
$\ee^{-t\hspace{0.025cm}} \newcommand{\ch}{{\rm ch} M_2^\mp}(r,r')$ brings down the adjoint of $M_2^{\mp}$
acting on the $r'$ variable so that
\begin{eqnarray}} \newcommand{\de}{\bar\partial
{_{d}\over^{dt}}\,\hspace{0.025cm}} \newcommand{\ch}{{\rm ch}\int\limits_0^\infty\ee^{-t\hspace{0.025cm}} \newcommand{\ch}{{\rm ch} M_2^\mp}(r,r')
\,\,{r'}^{d-1}\hspace{0.025cm}} \newcommand{\ch}{{\rm ch} dr'&=&
Z\int\limits_0^\infty\partial} \newcommand{\ee}{{\rm e}_{r'}\hspace{0.025cm}} \newcommand{\ch}{{\rm ch}{r'}^a\partial} \newcommand{\ee}{{\rm e}_{r'}
\hspace{0.025cm}} \newcommand{\ch}{{\rm ch}{r'}^{d-1-a+\xi}\left(\ee^{-t\hspace{0.025cm}} \newcommand{\ch}{{\rm ch} M_2^\mp}(r,r')\right)\cr
&=&Z\,{r'}^a\partial} \newcommand{\ee}{{\rm e}_{r'} \hspace{0.025cm}} \newcommand{\ch}{{\rm ch}{r'}^{d-1-a+\xi}\left(\ee^{-t\hspace{0.025cm}} \newcommand{\ch}{{\rm ch}
M_2^\mp}(r,r')\right) \bigg\vert^{_{r'=\infty}}_{^{r'=0}}\hspace{0.025cm}} \newcommand{\ch}{{\rm ch}.
\qqq
The contribution from $r'=\infty$ vanishes. On the other hand,
$\,\ee^{-t\hspace{0.025cm}} \newcommand{\ch}{{\rm ch} M_2^-}(r,r')\propto{r'}^{-d+1+a-\xi}$ for small $r'$
whereas $\,\ee^{-t\hspace{0.025cm}} \newcommand{\ch}{{\rm ch} M_2^+}(r,r')\propto{r'}^{-d+2-\xi}\hspace{0.025cm}} \newcommand{\ch}{{\rm ch}$, \hspace{0.025cm}} \newcommand{\ch}{{\rm ch} with
the errors suppressed by an additional factor ${r'}^{2-\xi}$.
It follows that the contribution from $r'=0$ is zero
for $M_2^-$ if $1+a-\xi>0$, which is the same condition as
$b<1$ or $\wp<{d\over\xi^2}$, but it
is finite for $M_2^+$.
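The probability defect in Eq.\,\,(\ref{nor+}) is straightforward to
evaluate numerically; a sketch (the values of $b$ and of the scaling
variable $x=r^{2-\xi}/(4Z'|t-s|)$ are arbitrary), which also
cross-checks the confluent hypergeometric representation of the
incomplete gamma-function quoted below:

```python
from scipy.special import gammainc, hyp1f1
from scipy.special import gamma as Gamma

# Probability defect of Eq. (nor+): 1 - gamma(b, x)/Gamma(b), with
# x = r^{2-xi}/(4 Z' |t-s|).  scipy's gammainc(b, x) is exactly the
# regularized lower incomplete gamma function gamma(b, x)/Gamma(b).
def delta_weight(b, x):
    return 1.0 - gammainc(b, x)

# Cross-check of gamma(b, x) = (x^b / b) 1F1(b, 1+b; -x)
# (b and x are arbitrary illustrative values).
b, x = 1.5, 0.7
lower = x ** b / b * hyp1f1(b, 1 + b, -x)
```

Since $x$ decreases as $|t-s|$ grows, the defect tends to one at
large times, in agreement with the trapping picture discussed below.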
\vskip 0.3cm
The lack of normalization may seem strange since, when we add the
diffusivity $\kappa$ and fix the Neumann boundary conditions, the
normalization is assured by an argument similar to the one used for
$M_2^-$ above. The resolution of the paradox is that for $b\geq1$, the
$\kappa\to0$ convergence of $\ee^{-t\hspace{0.025cm}} \newcommand{\ch}{{\rm ch} M_2^\kappa}(r,r')$ to $\ee^{-t\hspace{0.025cm}} \newcommand{\ch}{{\rm ch}
M_2^+}(r,r')$ holds only for $r'\not=0$ and the defect of probability
concentrates at $r'=0$. For $\kappa=0$, we should then add to
$\ee^{-t\hspace{0.025cm}} \newcommand{\ch}{{\rm ch} M_2^+}(r,r')$ a delta-function term carrying the missing
probability. \vskip 0.3cm
We infer this way that \begin{eqnarray}} \newcommand{\de}{\bar\partial \lim\limits_{\kappa\to0}\ P^{t,s}_2(r,r')\
=\ \cases{\hbox to 7.9cm{$\ee^{-|t-s|\hspace{0.025cm}} \newcommand{\ch}{{\rm ch} M^-_2}(r,r')$\hfill} {\rm
for}\quad\wp<{d\over\xi^2}\hspace{0.025cm}} \newcommand{\ch}{{\rm ch},\cr\cr \ee^{-|t-s|\hspace{0.025cm}} \newcommand{\ch}{{\rm ch} M^+_2}(r,r')\ +\
[\,1\,-\, \gamma({b},\hspace{0.025cm}} \newcommand{\ch}{{\rm ch}{_{r^{2-\xi}}\over^{
4\hspace{0.025cm}} \newcommand{\ch}{{\rm ch} Z'\hspace{0.025cm}} \newcommand{\ch}{{\rm ch}|t-s|}})\,\,\Gamma({b})^{^{-1}}]\,\hspace{0.025cm}} \newcommand{\ch}{{\rm ch} \delta(\Nr')\cr
\hbox to 7.9cm{{}\hfill}{\rm for}\quad\wp\geq {d\over\xi^2}\hspace{0.025cm}} \newcommand{\ch}{{\rm ch}.}
\label{summ}
\qqq In both cases, the p.d.f.'s $P_2^{t,s}$ satisfy the evolution
equation \begin{eqnarray}} \newcommand{\de}{\bar\partial \partial} \newcommand{\ee}{{\rm e}_t\, P_2^{t,s}(r,r')\ =\ \mp\hspace{0.025cm}} \newcommand{\ch}{{\rm ch} M_2\, P_2^{t,s}(r,r')
\label{kzee}
\qqq where $M_2$, given by Eq.\,\,(\ref{m2ri0}), acts on the
$r$-variable and the sign $\mp$ corresponds to $t{_{>}\atop^{<}}s$. They
also have the composition property: \begin{eqnarray}} \newcommand{\de}{\bar\partial \int\limits_0^\infty
P_2^{t,t'}(r,r')\,\hspace{0.025cm}} \newcommand{\ch}{{\rm ch} P_2^{t',t''}(r',r'') \hspace{0.025cm}} \newcommand{\ch}{{\rm ch}\, d\mu_d(r')\ =\
P_2^{t,t''}(r,r'') \nonumber \qqq if $t<t'<t''$ or $t>t'>t''$. \vskip
0.3cm
It is instructive to note the long-time behavior of the averaged
powers of the distance between the Lagrangian trajectories. As
follows from Eqs.\,\,(\ref{norm-}) and (\ref{norm+}) of Appendix B,
for $\mu>0$, \begin{eqnarray}} \newcommand{\de}{\bar\partial \int\limits_0^\infty
P^{t,s}_2(r,r')\,\,{r'}^{\mu}\,d\mu_d({r'}) \ \ \sim\ \ \cases{\hbox
to 3.1cm{$\vert t -s\vert^{^{\mu\over2-\xi}}$ \hfill}\qquad{\rm
for}\quad\wp<{d\over\xi^2}\hspace{0.025cm}} \newcommand{\ch}{{\rm ch},\cr \hbox to 3.1cm{$\vert
t-s\vert^{^{{\mu\over2-\xi}-b}}\, r^{1-a} $\hfill}\qquad{\rm
for}\quad\wp\geq{d\over\xi^2}\hspace{0.025cm}} \newcommand{\ch}{{\rm ch}.}\quad
\label{ltmb}
\qqq \vskip 0.3cm
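Expanding Eq.\,\,(\ref{m2ri0}), $-M_2=Zr^\xi\partial} \newcommand{\ee}{{\rm e}_r^2+Za\,r^{\xi-1}\partial} \newcommand{\ee}{{\rm e}_r$
is the generator of the It\^{o} diffusion
$dr=Za\,r^{\xi-1}dt+\sqrt{2Zr^\xi}\,dW$, and applying it to
$r^{2-\xi}$ yields the exact moment identity
$\langle r(t)^{2-\xi}\rangle=r_0^{2-\xi}+(2-\xi)(1+a-\xi)Z\,t$,
consistent with the first line of Eq.\,\,(\ref{ltmb}) at $\mu=2-\xi$.
A Monte Carlo sketch with an Euler--Maruyama discretization (all
numerical values are illustrative; the trajectories stay away from the
origin here, so boundary effects are neglected):

```python
import numpy as np

# Relative distance of two Lagrangian trajectories in the isotropic
# sector: dr = Z a r^{xi-1} dt + sqrt(2 Z r^xi) dW (Ito), the
# diffusion generated by M_2 of Eq. (m2ri0).  Incompressible example.
d, xi = 3, 2.0 / 3
S2, C2 = 1.0, 0.0
Z = S2 + xi * C2
a = ((d - 1 + xi) * S2 - xi * C2) / Z

rng = np.random.default_rng(0)
n_paths, dt, n_steps = 40000, 1e-4, 1000
r = np.full(n_paths, 1.0)              # initial separation r_0 = 1
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)
    r = r + Z * a * r ** (xi - 1) * dt + np.sqrt(2 * Z * r ** xi) * dW
    r = np.maximum(r, 1e-8)            # guard against the origin

T = n_steps * dt
# Exact identity: <r(T)^{2-xi}> = r_0^{2-xi} + (2-xi)(1+a-xi) Z T
predicted = 1.0 + (2 - xi) * (1 + a - xi) * Z * T
estimate = np.mean(r ** (2 - xi))
```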
\subsection{Fully developed turbulence versus chaos}
The two cases $\wp<{d\over\xi^2}$ and $\wp\geq{d\over\xi^2}$
correspond to two physically very different regimes of the Kraichnan
model. Let us first note the completely different typical behavior of
the Lagrangian trajectories in the two cases. In the regime
$\wp<{d\over\xi^2}$, which includes the incompressible case
$\CC^2=0$ studied extensively before, see \cite{nice} and the
references therein, the p.d.f.'s $P_2^{t,s}(r,r')$ possess a
non-singular limit\footnote{more detailed information on how this
limit is attained is given by Eq.\,\,(\ref{A12}) in Appendix B}: \begin{eqnarray}} \newcommand{\de}{\bar\partial
\lim\limits_{r\to0}\,\, P_2^{t,s}(r,r')\,=\,
{_{2-\xi}\over^{S_{d-1}\hspace{0.025cm}} \newcommand{\ch}{{\rm ch}\Gamma(1-{b})\,\, (4\hspace{0.025cm}} \newcommand{\ch}{{\rm ch} Z'\hspace{0.025cm}} \newcommand{\ch}{{\rm ch} \vert t
-s\vert)^{1-{b}}}}\,\, r'^{-d+1+a-\xi}\,\,
\ee^{-{{r'}^{2-\xi}\over 4\hspace{0.025cm}} \newcommand{\ch}{{\rm ch} Z'\hspace{0.025cm}} \newcommand{\ch}{{\rm ch}\vert t-s\vert}}\hspace{0.025cm}} \newcommand{\ch}{{\rm ch}.
\label{toz-}
\qqq It follows that, when the time $t$ distance of the Lagrangian
trajectories tends to zero, the probability to find a non-zero
distance between the trajectories at time $s\not=t$ stays equal to
unity: {\bf infinitesimally close trajectories separate in finite time}.
This signals the ``fuzziness'' of the Lagrangian trajectories
\cite{slowm,nice} forming a stochastic Markov process already
in a fixed typical realization of the velocity field,
with the transition probabilities of the process propagating
weak solutions of the passive scalar equation $\partial} \newcommand{\ee}{{\rm e}_t\theta
+{{\bf v}}} \newcommand{\Nw}{{{\bf w}}\cdot\nabla\theta=0$ \cite{lejan}. Such appearance
of stochasticity at the fundamental level seems to be an
essential characteristic of fully developed turbulence in the
incompressible or weakly compressible fluids. It is due to the
roughness of typical turbulent velocities which are only H\"{o}lder
continuous with exponent ${\xi\over 2}<1$ (in the limit of infinite
Reynolds number $Re$). One should stress an important difference
between this type of stochasticity and the stochasticity of chaotic
behaviors. In chaotic systems, the trajectories are uniquely
determined by the initial conditions but depend sensitively on the
latter. The nearby trajectories separate exponentially in time at
the rate given by a positive Lyapunov exponent. The exponential
separation implies, however, that infinitesimally close
trajectories take infinite time to separate. This type of behavior
is observed in flows with intermediate Reynolds numbers, but for large
Reynolds numbers it occurs only within the very short dissipative
range which disappears in the limit $Re=\infty$. In the Kraichnan
model, the exponential separation of trajectories characterizes the
$\xi\to 2$ limit of the fuzzy regime $\wp<{d\over\xi^2}$
\cite{kol,slowm}. \vskip 0.3cm
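As a sanity check, the limiting p.d.f.\ (\ref{toz-}) can be verified
numerically to carry unit total mass, in agreement with the statement
that the separation probability stays equal to unity; a sketch with
the illustrative values $d=3$, $\xi=2/3$, $\CS^2=1$, $\CC^2=0$ and
$|t-s|=1$:

```python
from math import exp, gamma, pi

from scipy.integrate import quad

# The r -> 0 limit (toz-) of P_2^{t,s} should integrate to one
# against d mu_d(r') = S_{d-1} r'^{d-1} dr'.  Illustrative values.
d, xi = 3, 2.0 / 3
S2, C2 = 1.0, 0.0
Z = S2 + xi * C2
a = ((d - 1 + xi) * S2 - xi * C2) / Z
b = (1 - a) / (2 - xi)
Zp = (2 - xi) ** 2 / 4 * Z
Sd = 2 * pi ** (d / 2) / gamma(d / 2)
t = 1.0

def P_limit(rp):
    pref = (2 - xi) / (Sd * gamma(1 - b) * (4 * Zp * t) ** (1 - b))
    return pref * rp ** (-d + 1 + a - xi) * exp(-rp ** (2 - xi) / (4 * Zp * t))

mass, _ = quad(lambda rp: P_limit(rp) * Sd * rp ** (d - 1), 0, 60)
```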
{\bf In short}, fully developed turbulence and chaos are two different
things, although both lead to stochastic behaviors. In a metaphoric
sense, the difference between the two occurrences of stochasticity is
like that between the more fundamental stochasticity of quantum
mechanics and the stochasticity of statistical mechanics, imposed by
an imperfect knowledge of microscopic states. \vskip 0.3cm
\subsection{Shock wave regime}
Let us discuss now the second regime of our system with
$\wp\geq{d\over\xi^2}$. In that interval,
$\lim\limits_{r\to0}\,\ee^{-t\hspace{0.025cm}} \newcommand{\ch}{{\rm ch} M_2^+}(r,r')\,=\,0\,$ and \begin{eqnarray}} \newcommand{\de}{\bar\partial
\lim\limits_{r\to0}\,\, P_2^{t,s}(r,r')\,=\,\delta(\Nr')\hspace{0.025cm}} \newcommand{\ch}{{\rm ch},
\label{toz+}
\qqq see the second of Eqs.\,\,(\ref{summ}). Here the uniqueness of
the Lagrangian trajectories passing at time $t$ through a given point is
preserved (in probability). However, with positive probability
tending to one as $\vert t-s\vert \to\infty$, two trajectories at
non-zero distance at time $t$ collapse by time $s$ to zero distance,
as signaled by the presence of the term proportional to $\delta(\Nr')$
in $P^{t,s}_2(r,r')$. The {\bf collapse of trajectories} exhibits the
trapping effect of compressible velocities. A similar behavior is
known from the Burgers equation describing compressible velocities
whose Lagrangian trajectories are trapped by shocks and then move
along with them. The trapping effect is also signaled by the decrease
with time of the averages of low powers of the distance between
trajectories (powers $\mu<1-a$), see Eq.\,\,(\ref{ltmb}). Note, however, that
the averages of higher powers still increase with time
signaling the presence of large deviations from the typical
behavior. \vskip 0.3cm
Due to the inequalities $0\leq\wp\leq1$, the second regime,
characterized by the collapse of trajectories, is present only if
$\xi^2\geq d$, i.e. for $d\leq 4$. Its limiting case with $\xi=2$ and
$d\leq 4$ was first discovered and extensively discussed in
\cite{CKV1} and \cite{CKV2}. It appears when the largest Lyapunov
exponent of (spatially) smooth velocity fields becomes negative.
\vskip 0.3cm
\nsection{Advection of a tracer: direct versus inverse cascade}
\subsection{Free decay}
Let us study now the time $t$ correlation functions of the scalar
$\theta$ whose evolution is given by Eq.\,\,(\ref{ps}). Assume first
that we are given a homogeneous and isotropic distribution of
$\theta$ at time zero and we follow its free decay at later times. From
Eqs.\,\,(\ref{sc00}) and (\ref{alte}) we infer that \begin{eqnarray}} \newcommand{\de}{\bar\partial
F^\theta_2(t,r)&\equiv& \langle\,\theta(t,\Nr)\,\theta(t,{\bf
0})\,\rangle\ = \int\limits_0^\infty
P^{t,0}_2(r,r')\,\,F_2^\theta(0,r')\,\hspace{0.025cm}} \newcommand{\ch}{{\rm ch} d\mu_d({r'})\hspace{0.025cm}} \newcommand{\ch}{{\rm ch}.
\label{ind}
\qqq In particular, to calculate the mean ``energy'' density
$e_{_\theta}(t)\equiv\langle\hspace{0.025cm}} \newcommand{\ch}{{\rm ch}{_1\over^2}\hspace{0.025cm}} \newcommand{\ch}{{\rm ch}\theta(t,\Nr)^2\rangle= {_1\over^2}\hspace{0.025cm}} \newcommand{\ch}{{\rm ch}
F_2^\theta(t,\bf{0})$, the separation $r$ should be taken equal to
zero. For $\wp<{d\over\xi^2}$, the limit $P^{t,0}_2(0,r')$ is a
regular positive function and it stays such even for $\kappa=0$, see
Eq.\,\,(\ref{toz-}). Since $F_2^\theta(0,r')\leq F_2^\theta(0,0)$ (being
the Fourier transform of a positive measure), it follows that the total
energy diminishes with time: $e_{_\theta}(t)<e_{_\theta}(0)$. On the
other hand, for $\wp\geq{d\over\xi^2}$,
$P^{t,0}_2(0,r')=\delta(\Nr')$, see Eq.\,\,(\ref{toz+}), and the total
energy is conserved: $e_{_\theta}(t)=e_{_\theta}(0)$. The loss of
energy in the regime $\wp<{d\over\xi^2}$ is not due to
compressibility\footnote{in temporally decorrelated velocity
fields, the mean energy $e_{_\theta}$ is conserved also
in compressible flows, in the absence of
forcing and diffusion}, but to the non-uniqueness of the Lagrangian
trajectories responsible for the persistence of the short-distance
dissipation in the $\kappa\to0$ limit. As is well known, this
dissipative anomaly accompanies the direct cascade of energy
towards shorter and shorter scales in the (nearly) incompressible
flows. On the other hand, in the strongly compressible regime
$\wp\geq{d\over\xi^2}$, the scalar $\theta$ is transported
along unique trajectories and its energy is conserved in mean. The
short-distance dissipative effects disappear in the limit
$\kappa\to0$: there is no dissipative anomaly and no direct cascade of
energy. As we shall see, the energy injected by the source of $\theta$
is transferred instead to longer and longer scales in an inverse
cascade process. \vskip 0.3cm
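The free decay described above can be illustrated numerically:
propagating a (hypothetical) Gaussian initial 2-point function with
the $r\to0$ kernel (\ref{toz-}) through Eq.\,\,(\ref{ind}), the mean
energy $e_{_\theta}(t)$ decreases monotonically from $e_{_\theta}(0)$
(all parameter values and the initial profile are illustrative
choices):

```python
from math import exp, gamma, pi

from scipy.integrate import quad

# Free decay of e_theta(t) = F_2^theta(t, 0)/2 in the weakly
# compressible regime, using the limiting kernel (toz-) in Eq. (ind).
d, xi = 3, 2.0 / 3
S2, C2 = 1.0, 0.0
Z = S2 + xi * C2
a = ((d - 1 + xi) * S2 - xi * C2) / Z
b = (1 - a) / (2 - xi)
Zp = (2 - xi) ** 2 / 4 * Z
Sd = 2 * pi ** (d / 2) / gamma(d / 2)

F0 = lambda rp: exp(-rp ** 2 / 2)      # hypothetical initial F_2^theta(0, r')

def energy(t):
    pref = (2 - xi) / (Sd * gamma(1 - b) * (4 * Zp * t) ** (1 - b))
    P = lambda rp: pref * rp ** (-d + 1 + a - xi) \
        * exp(-rp ** (2 - xi) / (4 * Zp * t))
    val, _ = quad(lambda rp: P(rp) * F0(rp) * Sd * rp ** (d - 1), 0, 60)
    return 0.5 * val

e0 = 0.5                               # e_theta(0) = F_2^theta(0, 0)/2
e1, e2 = energy(1.0), energy(2.0)
```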
\subsection{Forced state for weak compressibility}
The direction of the energy cascade may be best observed if we keep
injecting energy into the system at a constant rate. Let us then
consider the advection of the tracer in the presence of stationary
forcing. From Eqs.\,\,(\ref{sc1}) and (\ref{fc}), assuming that
$\theta$ vanishes at time zero, we obtain, \begin{eqnarray}} \newcommand{\de}{\bar\partial
F^\theta_2(t,r)=\langle\,\smallint\limits_0^t f(s,{{\bf x}}} \newcommand{\Ny}{{{\bf y}}_{t,\Nr}(s))\,ds
\smallint\limits_0^t f(s',{{\bf x}}} \newcommand{\Ny}{{{\bf y}}_{t,0}(s'))\,ds'\,\rangle=
\int\limits_0^tds\int\limits_0^\infty P^{t,s}_2(r,r') \chi({r'})\,
d\mu_d({r'}),\quad
\label{dif}
\qqq which is a solution of the evolution equation
\begin{eqnarray}} \newcommand{\de}{\bar\partial
\partial} \newcommand{\ee}{{\rm e}_t
F^\theta_2\,=\,-M_2^{\kappa}\hspace{0.025cm}} \newcommand{\ch}{{\rm ch} F^\theta_2\,+\,\chi\quad
\label{diffe}
\qqq
with the operator $M_2^{\kappa}$ given by Eq.\,\,(\ref{m2ri}).
\vskip 0.3cm
When $\wp<{_{d-2+\xi}\over^{2\hspace{0.025cm}} \newcommand{\ch}{{\rm ch}\xi}}$ (i.e. for $a>1$ or $b<0$), which
implies that we are in the weakly compressible phase with
$\wp<{d\over\xi^2}$, and for $\kappa=0$,
\begin{eqnarray}} \newcommand{\de}{\bar\partial
F^\theta_2(t,r)&=&\ \smallint\limits_0^tds\smallint\limits_0^\infty
\ee^{-s\hspace{0.025cm}} \newcommand{\ch}{{\rm ch} M_2^-}(r,r')\,\hspace{0.025cm}} \newcommand{\ch}{{\rm ch}\chi({r'}) \,\hspace{0.025cm}} \newcommand{\ch}{{\rm ch} d\mu_d({r'})\ \
\mathop{\rightarrow}\limits_{t\to\infty}\ \
\smallint\limits_0^\infty(M_2^-)^{-1}(r,r')\,\hspace{0.025cm}} \newcommand{\ch}{{\rm ch}\chi({r'}) \,\hspace{0.025cm}} \newcommand{\ch}{{\rm ch}
d\mu_d({r'})\cr
&=&\hspace{0.025cm}} \newcommand{\ch}{{\rm ch}{_1\over^{(a-1)\hspace{0.025cm}} \newcommand{\ch}{{\rm ch}
Z}}\hspace{0.025cm}} \newcommand{\ch}{{\rm ch}\smallint\limits_r^\infty {r'}^{1-\xi}\,\chi({r'})\,
dr'\,+\,{_1\over^{(a-1)\hspace{0.025cm}} \newcommand{\ch}{{\rm ch} Z}}\,
r^{1-a}\smallint\limits_0^r{r'}^{a-\xi}\,\chi({r'})\, dr'
\ \,\equiv\ \,F_2^\theta(r)\hspace{0.025cm}} \newcommand{\ch}{{\rm ch},
\label{35}
\qqq see Eq.\,\,(\ref{a24}) of Appendix C. Thus for
$\wp<{_{d-2+\xi}\over^{2\hspace{0.025cm}} \newcommand{\ch}{{\rm ch}\xi}}$, when $t\to\infty$, the 2-point
function of $\theta$ attains a stationary limit $F^\theta_2(r)$
with a finite mean energy density $e_{_\theta}=\langle\hspace{0.025cm}} \newcommand{\ch}{{\rm ch}{_1\over^2}\hspace{0.025cm}} \newcommand{\ch}{{\rm ch}\theta^2\rangle
={_1\over^2}\hspace{0.025cm}} \newcommand{\ch}{{\rm ch} F^\theta_2(0)$.
The corresponding stationary 2-point structure function is \begin{eqnarray}} \newcommand{\de}{\bar\partial
&&S^\theta_2(r)\ =\ \langle\,(\theta(\Nr)-\theta({\bf 0}))^2\hspace{0.025cm}} \newcommand{\ch}{{\rm ch}\rangle
\,=\,2\hspace{0.025cm}} \newcommand{\ch}{{\rm ch}(F^\theta_2(0)-F^\theta_2(r))\,=\,{_2\over^Z}\hspace{0.025cm}} \newcommand{\ch}{{\rm ch}
\smallint\limits_0^r\zeta^{-a}\,d\zeta\smallint\limits_0^\zeta
{r'}^{a-\xi}\,\chi({r'})\, dr'\cr\cr&&\ \cong\ \cases{
\hbox to 10.2cm{${_{2\hspace{0.025cm}} \newcommand{\ch}{{\rm ch}\chi(0)}\over^{(2-\xi)(1+a-\xi)\hspace{0.025cm}} \newcommand{\ch}{{\rm ch} Z}}
\,r^{2-\xi}$\hfill}{\rm for\ }r\ {\rm small}\hspace{0.025cm}} \newcommand{\ch}{{\rm ch},\cr \hbox to
10.2cm{${_2\over^{(a-1)\hspace{0.025cm}} \newcommand{\ch}{{\rm ch} Z}}\,
\smallint\limits_0^\infty{r'}^{1-\xi}\,\chi({r'})\,
dr'\,-\,{_2\over^{(a-1)\hspace{0.025cm}} \newcommand{\ch}{{\rm ch} Z}}\, r^{-(a-1)}\,
\smallint\limits_0^\infty{r'}^{a-\xi}\,\chi({r'})\,
dr'$\hfill}{\rm for\ }r\ {\rm large}\hspace{0.025cm}} \newcommand{\ch}{{\rm ch}.}
\label{stf}
\qqq Thus $S^\theta_2(r)$ exhibits a normal scaling at $r$ much
smaller than the injection scale $L$ whereas at $r\gg L$ the approach
to the asymptotic value $2\langle\,\theta^2\,\rangle$ is controlled by
the scaling zero mode $r^{1-a}$ of the operator $M_2$.
In Appendix D, we give the explicit form of
the stationary 2-point function $F^\theta_2$ in the presence of
positive diffusivity $\kappa$. \vskip 0.3cm
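The small-$r$ asymptotics in Eq.\,\,(\ref{stf}) is easy to confirm
numerically from the double-integral representation of $S_2^\theta$; a
sketch with an illustrative Gaussian forcing shape $\chi$ and the
incompressible parameter choice $d=3$, $\xi=2/3$:

```python
from math import exp

from scipy.integrate import quad

# Small-r check of the normal scaling of the stationary structure
# function S_2^theta, Eq. (stf); chi and all parameter values are
# illustrative assumptions.
d, xi = 3, 2.0 / 3
S2, C2 = 1.0, 0.0
Z = S2 + xi * C2
a = ((d - 1 + xi) * S2 - xi * C2) / Z

chi = lambda rp: exp(-rp ** 2 / 2)     # forcing shape, injection scale ~ 1

def S2_theta(r):
    inner = lambda z: quad(lambda rp: rp ** (a - xi) * chi(rp), 0, z)[0]
    outer, _ = quad(lambda z: z ** (-a) * inner(z), 0, r)
    return 2.0 / Z * outer

r = 1e-3
# Leading small-r behavior 2 chi(0) r^{2-xi} / ((2-xi)(1+a-xi) Z)
asym = 2 * chi(0.0) / ((2 - xi) * (1 + a - xi) * Z) * r ** (2 - xi)
ratio = S2_theta(r) / asym
```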
\subsection{Dissipative anomaly}
Let us recall how the dissipative anomaly manifests itself in this
regime. The stationary 2-point function of the tracer solves the
stationary version of Eq.\,\,(\ref{diffe}). When we let $r\to0$ in the
latter for positive $\kappa$, only the contribution of the dissipative
term in $M_2^\kappa$ survives and we obtain the energy balance equation
$\epsilon_{_\theta}\,\equiv\,\kappa\,\langle\hspace{0.025cm}} \newcommand{\ch}{{\rm ch}(\nabla\theta)^2\rangle
\,=\,{_1\over^2}\hspace{0.025cm}} \newcommand{\ch}{{\rm ch}\chi(0)$, i.e. the equality of the mean dissipation rate
$\epsilon_{_\theta}$ and the mean energy injection rate
${_1\over^2}\hspace{0.025cm}} \newcommand{\ch}{{\rm ch}\chi(0)$. Taking first $\kappa\to0$ and $r\to0$ next, instead,
we obtain the analytic expression of the dissipative anomaly: \begin{eqnarray}} \newcommand{\de}{\bar\partial
\lim\limits_{\kappa\to0}\,\,\epsilon_{_\theta}\,
=\,{_1\over^2}\,\lim\limits_{r\to0}\,M_2\,
\lim\limits_{\kappa\to0}\,F^\theta_2(r) \,=\,{_1\over^2}\hspace{0.025cm}} \newcommand{\ch}{{\rm ch}\chi(0)\hspace{0.025cm}} \newcommand{\ch}{{\rm ch}.
\label{da}
\qqq Thus, in spite of the explicit factor $\kappa$ in its definition,
the mean dissipation rate does not vanish in the limit $\kappa\to0$,
which explains the name: anomaly. \vskip 0.3cm
For ${_{d-2+\xi}\over^{2\hspace{0.025cm}} \newcommand{\ch}{{\rm ch}\xi}}<\wp<{d\over\xi^2}$ (i.e.
for $0<b<1$, see Eq.\,\,(\ref{b})), the 2-point function
$F_2^\theta(t,r)$, still given for $\kappa=0$ by the left
hand side of the relation (\ref{35}),
diverges with time as $t^{b}$. More exactly, as we show in Appendix C,
it is the expression
\begin{eqnarray}} \newcommand{\de}{\bar\partial
&&F^\theta_2(t,r)\ -\ {_{(4\hspace{0.025cm}} \newcommand{\ch}{{\rm ch} Z')^{b}}\over^{(1-a)\hspace{0.025cm}} \newcommand{\ch}{{\rm ch} Z\,
\Gamma({1-b})}}\,\, t^{b}\,\hspace{0.025cm}} \newcommand{\ch}{{\rm ch}
\smallint\limits_0^\infty\hspace{0.025cm}} \newcommand{\ch}{{\rm ch}{r'}^{a-\xi}\,\chi({r'})\, dr'
\label{toA}
\qqq that tends to the right hand side of the relation (\ref{35}).
Finally, for $\wp={_{d-2+\xi}\over^{2\hspace{0.025cm}} \newcommand{\ch}{{\rm ch}\xi}}$, there is a
constant contribution to $F^\theta_2(t,r)$ logarithmically divergent
in time. For
${_{d-2+\xi}\over^{2\hspace{0.025cm}} \newcommand{\ch}{{\rm ch}\xi}}\leq\wp<{d\over\xi^2}$
the system still dissipates energy
at short distances with the rate $\epsilon_{_\theta}$ that becomes
equal to the injection rate asymptotically in time, but it also builds
up the energy $e_{_\theta}(t)$ in the constant mode with the rate
decreasing as $t^{-(1-b)}$. Note that in spite of the divergence
of the 2-point correlation function, the 2-point structure function
of the tracer still converges as $t\to\infty$ to a stationary form
given by Eq.\,\,(\ref{stf}). Now, however, $S^\theta_2(r)$
is dominated for large $r$ by the growing zero mode $\propto r^{1-a}$
of $M_2$.
\vskip 0.5cm
\subsection{Forced state for strong compressibility}
Let us discuss now what happens under steady forcing in the strongly
compressible regime $\wp\geq{d\over\xi^2}$ (i.e. for $1+a-\xi\leq0$
or $b\geq1$). Here the 2-point function (\ref{dif}), which still
evolves according to Eq.\,\,(\ref{diffe}), is for $\kappa=0$ given
by the relation
\begin{eqnarray}
F^\theta_2(t,r)&=&\smallint\limits_0^tds\smallint\limits_0^\infty
\ee^{-s\hspace{0.025cm} M_2^+}(r,r')\,\hspace{0.025cm}\chi({r'})\,\hspace{0.025cm} d\mu_d({r'})\cr
&+&\chi(0)\,\smallint\limits_0^t [\hspace{0.025cm} 1\,-\,
\gamma({b},\hspace{0.025cm}{_{r^{2-\xi}}\over^{4\hspace{0.025cm} Z'\hspace{0.025cm} s}})\,\,
\Gamma({b})^{^{-1}}]\hspace{0.025cm}\,ds\hspace{0.025cm}.
\label{2pf}
\qqq
When $t\to\infty$, the first term on the right hand side tends to
\begin{eqnarray}
&&\hspace{-0.7cm}\smallint\limits_0^\infty(M^+_2)^{-1}(r,r')\,\chi({r'})\,
d\mu_d({r'})\cr
&&\hspace{1.5cm}={_1\over^{(1-a)\hspace{0.025cm} Z}}\hspace{0.025cm} \smallint\limits_0^r{r'}^{1-\xi}
\,\chi({r'})\,dr'+{_1\over^{(1-a)\hspace{0.025cm} Z}}\,
r^{1-a}\smallint\limits_r^\infty
{r'}^{a-\xi}\,\chi({r'})\,dr'\quad\label{gf0}\\
&&\hspace{1.5cm}\cong\ \cases{\hbox to 7.4cm{$
-\hspace{0.025cm}{_{\chi(0)}\over^{(2-\xi)(1+a-\xi)\hspace{0.025cm} Z}}\, r^{2-\xi}
\,-\,{_{\smallint\limits_0^\infty {r'}^{1+a-\xi}\,\chi'({r'})\,
dr'} \over^{(1-a)(1+a-\xi)\hspace{0.025cm} Z}}\, r^{1-a}\hspace{0.025cm} $\hfill}
\quad{\rm for\ }r\ {\rm small}\hspace{0.025cm},\cr\cr
\hbox to 7.4cm{${_1\over^{(1-a)\hspace{0.025cm} Z}}\hspace{0.025cm}
\smallint\limits_0^\infty{r'}^{1-\xi}\,\chi({r'})\, dr'
$\hfill}\quad{\rm for\ }r\ {\rm large}\hspace{0.025cm},}
\label{gf}
\qqq where the asymptotic expressions hold for $1+a-\xi<0$, i.e.
for $\wp>{d\over\xi^2}$.
On the other hand \begin{eqnarray}
{_{\chi(0)}\over^{\Gamma({b})}}\,\smallint\limits_0^t
\gamma({b},\hspace{0.025cm}{_{r^{2-\xi}}\over^{4\hspace{0.025cm} Z'\hspace{0.025cm}(t-s)}})\hspace{0.025cm}\,ds\
\ \mathop{\rightarrow}\limits_{t\to\infty}
\ \ -\hspace{0.025cm}{_{\chi(0)}\over^{(2-\xi)(1+a-\xi)\hspace{0.025cm} Z}}\,\hspace{0.025cm} r^{2-\xi}\hspace{0.025cm},
\label{tin}
\qqq except for $\wp={d\over\xi^2}$ when it diverges as
${\chi(0)\,r^{2-\xi}\over 4\hspace{0.025cm} Z'}\,\ln t$. Hence, for
$\wp>{d\over\xi^2}$, the quantity $F^\theta_2(t,r)\,-\,\chi(0)\hspace{0.025cm} t$
converges when $t\to\infty$ and the limit is proportional
to the zero mode $r^{1-a}$ of $M_2$ for small $r$
(up to $\CO(r^{4-\xi})$ terms). As we see, the energy injected into
the system by the external source accumulates in the constant mode
at a constant rate equal to the injection rate ${_1\over^2}\hspace{0.025cm}\chi(0)$.
The dissipative anomaly is absent in this phase. Indeed,
\begin{eqnarray}
\epsilon_{_\theta}\,=\,{_1\over^2}\,\lim\limits_{r\to0}\,\lim\limits_{t\to\infty}
\,M_2\, F^\theta_2(t,r)\ =\ 0
\qqq
and the same is true at finite times since $F^\theta_2(t,r)$
becomes proportional to the zero modes of $M_2$ at short distances.
These are clear signals of the inverse cascade of energy towards large
distances, identified already in the $\xi\to2$ limit of the
$\wp\geq{d\over\xi^2}$ regime in \cite{CKV1,CKV2}.
\vskip 0.3cm
The 2-point structure function \begin{eqnarray}
S^\theta_2(t,r)\,=\,2\hspace{0.025cm}(F_2^\theta(t,0)-F_2^\theta(t,r))\,=\, 2\hspace{0.025cm}\chi(0)\hspace{0.025cm}
t\,-\,2\hspace{0.025cm} F_2^\theta(t,r)\hspace{0.025cm},
\label{2psf}
\qqq which satisfies the evolution equation \begin{eqnarray} \partial_t S^\theta_2\ =\
-\hspace{0.025cm} M_2\hspace{0.025cm} S^\theta_2\,+\, 2\hspace{0.025cm}(\chi(0)-\chi)\hspace{0.025cm},
\label{2pfe}
\qqq reaches for $\wp>{d\over\xi^2}$ a stationary limit
whereas it diverges logarithmically in time for
$\wp={d\over\xi^2}$. Note that it is now at large $r$ that
$S^\theta_2(r)$ scales normally $\propto r^{2-\xi}$ and at small $r$
that it becomes proportional to the zero mode $r^{1-a}$ of $M_2$.
\vskip 0.3cm
\nsection{Intermittency of the direct cascade}
The higher correlation functions of the convected scalars involve
simultaneous statistics of several Lagrangian trajectories.
To probe deeper into the statistical properties of the trajectories,
it is convenient to consider the joint p.d.f.'s
$P^{t,s}_{_N}(\Nr_1,\dots,\Nr_{_N};\hspace{0.025cm}\Nr'_1,\dots,\Nr'_{_N})$ of the
time $s$ differences of the positions $\Nr'_1,\dots,\Nr'_{_N}$ of $N$
Lagrangian trajectories passing at time $t$ through points
$\Nr_1,\dots,\Nr_{_N}$. In the notation of Section 2, \begin{eqnarray}
P^{t,s}_{_N}(\Nr_1,\dots,\Nr_{_N};\hspace{0.025cm}\Nr'_1,\dots,\Nr'_{_N}) \ =\
\int\langle\,\prod\limits_{n=1}^N
\delta(\Nr'_n-{{\bf x}}_{_{t,\Nr_n}}(s)+\Nr)\,\rangle\,\, d\Nr\hspace{0.025cm}.
\qqq
Clearly, the functions $P_N^{t,s}(\un{\Nr};\hspace{0.025cm}\un{\Nr'})$
are translation-invariant separately in the variables $\un{\Nr}=
(\Nr_1,\dots,\Nr_{_N})$ and in $\un{\Nr'}=(\Nr'_1,\dots,\Nr'_{_N})$.
In the Kraichnan model, the p.d.f.'s $P^{t,s}_{_N}$ are again
given by heat kernels of degree two differential operators
\cite{schlad}
\begin{eqnarray}
P^{t,s}_{_N}(\un{\Nr};\hspace{0.025cm}\un{\Nr'}) \ =\ \ee^{-\vert
t-s\vert\hspace{0.025cm} M^\kappa_{_N}}(\un{\Nr};\un{\Nr'})\hspace{0.025cm},
\qqq
where the operators
\begin{eqnarray}
M^\kappa_{_N}\ =\ \sum\limits_{1\leq n<m\leq N}
d^{\alpha\beta}(\Nr_n-\Nr_m)
\,\nabla_{r_n^\alpha}\nabla_{r_m^\beta}\ -\ \kappa \sum\limits_{1\leq
n\leq N}\nabla_{\Nr_n}^2
\qqq
should be restricted to the
translation-invariant sector, which enforces the separate
translation-invariance of their heat kernels. The relations
$P^{t,s}_{_N}(\un{\Nr};\hspace{0.025cm}\un{\Nr'}) =P^{s,t}_{_N}(\un{\Nr};\hspace{0.025cm}\un{\Nr'})$
generalize Eq.\,\,(\ref{tre2}). As for $N=2$,
$P^{t,s}_{_N}(\un{\Nr};\hspace{0.025cm}\un{\Nr'}) =P^{s,t}_{_N}(\un{\Nr'};\hspace{0.025cm}\un{\Nr})$
only in the incompressible case.
\vskip 0.3cm
Armed with the lesson we learned for two trajectories,
we expect completely different behavior of the p.d.f.'s
$P^{t,s}_{_N}(\un{\Nr};\hspace{0.025cm}\un{\Nr'})$ for $\Nr_n$'s close to each other
in the two phases, resulting in different short-distance
statistics of convected quantities. Let us start by discussing
the weakly compressible case $\wp<{d\over\xi^2}$. Here
we have little to add to the incompressible story,
see e.g. \cite{slowm,nice}. We expect the limit
$\lim\limits_{\un{\Nr}\to\un{\bf 0}}\ P^{t,s}_{_N}(\un{\Nr};
\hspace{0.025cm}\un{\Nr'})\ \equiv\ P^{t,s}_{_N}(\un{\bf 0};\hspace{0.025cm}\un{\Nr'})$ to exist
and to be a continuous function (except, possibly, at
$\un{\Nr'}=\un{\bf 0}$) decaying with $\vert t-s\vert$ and at large
distances, just as for $P^{t,s}_{2}$, see Eq.\,\,(\ref{toz-}). More
exactly, we expect \cite{slowm} an asymptotic expansion generalizing
the expansion (\ref{A12}) of Appendix B for $P^{t,s}_2$:
\begin{eqnarray}
P^{t,s}_{_N}(\lambda\un{\Nr};\hspace{0.025cm}\un{\Nr'})\ \ =\ \
\sum\limits_{{i\atop j=0,1,\dots}}\lambda^{\sigma_i+(2-\xi)j}
\,\,\phi_{i,j}(\un{\Nr})\,\,\overline{\psi_{i,j}(\vert t-s\vert,
\hspace{0.025cm}\un{\Nr'})}
\qqq
for $\lambda$ small, where $\phi_{i,0}$ are
scaling zero modes of the operator $M^0_{_N}\equiv M_{_N}$
with scaling dimensions
$\sigma_i\geq0$ and $\phi_{i,j}$ are ``slow modes'', of scaling
dimension $\sigma_i+(2-\xi)j$, satisfying the descent equations
$M_{_N}\phi_{i,j}= \phi_{i,j-1}$. The constant zero mode
$\phi_{0,0}=1$ (corresponding to $\overline{\psi_{0,0}}=P_{_N}(\un{\bf
0}; \hspace{0.025cm}\,\cdot\hspace{0.025cm}\,)$) gives the main contribution for small $\lambda$,
but drops out if
we consider combinations of $P^{t,s}_{_N}
(\lambda\un{\Nr};\hspace{0.025cm}\un{\Nr'})$ with different configurations
$\un{\Nr}$ which eliminate the terms that do not depend on all
(differences) of the $\Nr_n$'s. Such combinations are dominated by the
zero modes depending on all $\Nr_n$'s. For small $\xi$, there is one
such zero mode $\phi_{i_0,0}$ for each even $N$. A perturbative
calculation of its scaling dimension, done as in \cite{BKG} where the
incompressible case was treated, gives \begin{eqnarray} \sigma_{i_0}\ =\
N\,-\,(\hspace{0.025cm}{_N\over^2}\hspace{0.025cm}+\hspace{0.025cm}{_{N(N-2)\hspace{0.025cm}(1+2\wp)}
\over^{2(d+2)}}\hspace{0.025cm})\,\xi\ +\ \CO(\xi^2)\ \equiv \
{_N\over^2}(2-\xi)\,+\,\Delta_{_N}^\theta\hspace{0.025cm}.
\label{Ncf}
\qqq \vskip 0.3cm
In the absence of forcing, the $N$-point
correlation functions $\,F^\theta_{_N}(t,\un{\Nr})=\langle\hspace{0.025cm}\prod
\limits_{n=1}^N\theta(t,\Nr_n)\hspace{0.025cm}\rangle\,$ of the tracer
are propagated by the p.d.f.'s $P^{t,s}_{_N}\hspace{0.025cm}$:
\begin{eqnarray}
F^\theta_{_N}(t,\un{\Nr})\ =\ \int P^{t,s}_{_N}(\un{\Nr};
\hspace{0.025cm}\un{\Nr'})\,\,F^\theta_{_N}(s,\un{\Nr'})\,\,d'\un{\Nr'}
\label{fdn}
\qqq
where $d'\un{\Nr'}\equiv d\Nr'_2\cdots d\Nr'_{_N}$,
compare to Eq.\,\,(\ref{ind}).
In the presence of forcing, $F_{_N}^\theta$ obey
recursive evolution equations \cite{ss,schlad}.
If $F^\theta_{_N}$ vanish at time zero then
the odd correlation functions
vanish at all times and the even ones may be computed iteratively:
\begin{eqnarray}
F^\theta_{_N}(t,\un{\Nr})\ =\
\int\limits_0^t ds\int P^{t,s}_{_N}(\un{\Nr};\hspace{0.025cm}\un{\Nr'})
\,\sum\limits_{n<m}F^\theta_{_{N-2}}(s,\hspace{0.025cm}\Nr'_1,
\mathop{\dots\dots}\limits_{\hat{n}\ \ \hat{m}}, \Nr'_{_N})\
\chi({\vert \Nr'_n-\Nr'_m\vert})\ d'\un{\Nr'}\hspace{0.025cm}.
\label{Nc1}
\qqq
We expect that for small $\xi$ or weak compressibility,
$F^\theta_{_N}(t,\un{\Nr})$ tend at large times
to the stationary correlation functions
$F^\theta_{_N}(\un{\Nr})$ whose parts depending on all $\Nr_n$'s are
dominated at short distances by the zero modes of $M_{_N}$. In
particular, this scenario leads to the anomalous scaling of the
$N$-point structure functions $S^\theta_{_N}(r)=\langle\hspace{0.025cm}(\theta(\Nr)
-\theta({\bf 0}))^N\hspace{0.025cm}\rangle$ which pick up the contributions to
$F^\theta_{_N}$ dependent on all $\Nr_n$'s. Naively, one could
expect that $S^\theta_{_N}(r)$ scale for small $r$ with powers
${N\over2}(2-\xi)$, i.e. ${N\over2}$ times the 2-point function
exponent. Instead, they scale with smaller exponents, which signals
the small scale intermittency:
\begin{eqnarray}
S^\theta_{_N}(r)\ \ \propto\ \
r^{(2-\xi){_N\over^2} +\Delta^\theta_{_N}}
\label{ansc}
\qqq with the anomalous (part of the) exponent $\Delta^\theta_{_N}$
given for small $\xi$ by \begin{eqnarray} \Delta^\theta_{_N}\ =\
-\,{_{N(N-2)\hspace{0.025cm}(1+2\hspace{0.025cm}\wp)} \over^{2(d+2)}}\,\xi\ +\
\CO(\xi^2)\hspace{0.025cm},
\label{aex}
\qqq see Eq.\,\,(\ref{Ncf}). We infer that the {\bf direct cascade is
intermittent}. \vskip 0.3cm
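To make the first-order prediction concrete, the exponents of Eqs.\,\,(\ref{Ncf}) and (\ref{aex}) are easily tabulated. The following numerical sketch (the values of $d$, $\xi$ and $\wp$ are illustrative only) checks that the anomalous correction vanishes for $N=2$ and depresses the exponent below the normal value ${N\over2}(2-\xi)$ for $N>2$:

```python
# Perturbative exponents of Eqs. (Ncf)/(aex); sample parameters only.
def delta_N(N, d, xi, p):
    """Anomalous part of the exponent, to first order in xi."""
    return -N * (N - 2) * (1 + 2 * p) * xi / (2 * (d + 2))

def zeta_N(N, d, xi, p):
    """Full scaling exponent of the N-th structure function."""
    return N / 2 * (2 - xi) + delta_N(N, d, xi, p)

d, xi, p = 3, 0.1, 0.2          # weakly compressible sample point
for N in (2, 4, 6):
    normal = N / 2 * (2 - xi)
    print(N, zeta_N(N, d, xi, p), normal)
```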
\nsection{Absence of intermittency in the inverse cascade}
\subsection{Higher structure functions of the tracer}
For $\wp\geq{d\over\xi^2}$, i.e. in the strongly
compressible phase, we expect a completely different behavior of the
p.d.f.'s $P^{t,s}_{_N}(\un{\Nr}; \hspace{0.025cm}\un{\Nr'})$ when the points $\Nr_n$
become close to each other. The (differences of) Lagrangian
trajectories in a fixed realization of the velocity field are
uniquely determined in this phase if we specify their time $t$
positions. The p.d.f.'s for $N$ trajectories should then reduce to
those of $N-1$ trajectories if we let the time $t$ positions of two
of them coincide: \begin{eqnarray} \lim\limits_{\Nr_{_N}\to\Nr_{_{N-1}}} \,
P^{t,s}_{_N}(\un{\Nr};\hspace{0.025cm}\un{\Nr'}) \ =\
P^{t,s}_{_{N-1}}(\Nr_1,\dots,\Nr_{_{N-1}};\hspace{0.025cm}\Nr'_1,\dots,
\Nr'_{_{N-1}})\,\,\delta(\Nr'_{_{N-1}}-\Nr'_{_N})\hspace{0.025cm}.
\label{contr}
\qqq
Applying this relation $N-1$ times, we infer that
$\,P^{t,s}_{_N}(\Nr,\dots,\Nr;\hspace{0.025cm}\un{\Nr'})\ =\ \prod\limits_{n=2}^N
\delta(\Nr'_1-\Nr'_n)\hspace{0.025cm}.$ \,Since the p.d.f.'s $P_{_N}^{t,s}$ propagate
the $N$-point functions of the tracer in the free decay,
see Eq.\,\,(\ref{fdn}), it follows that, in the strongly compressible
phase, such a decay preserves all the higher mean quantities
$\langle\hspace{0.025cm}\theta(t,\Nr)^N\hspace{0.025cm}\rangle=F^\theta_N(t,\Nr,\dots,\Nr)$.
In the presence of forcing, however, all these quantities
are pumped by the source. Indeed, Eq.\,\,(\ref{Nc1}) implies now
that
\begin{eqnarray}
\langle\hspace{0.025cm}\theta(t,\Nr)^N\hspace{0.025cm}\rangle\,=\,{_{N(N-1)}\over^2}\,\chi(0)
\smallint\limits_0^t\langle\hspace{0.025cm}\theta(s,\Nr)^{N-2}\hspace{0.025cm}\rangle\,\hspace{0.025cm} ds\hspace{0.025cm},
\label{thn}
\qqq
from which it follows that, for even $N$, $\,\langle\hspace{0.025cm}\theta(t,\Nr)^N\hspace{0.025cm}
\rangle=(N-1)!!\,(\chi(0)\hspace{0.025cm} t)^{^{N\over2}}$.
\vskip 0.3cm
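The double-factorial solution of the recursion (\ref{thn}) is easily confirmed by iterating the recursion on monomials in $t$; the short sketch below (setting $\chi(0)=1$, so that $\langle\theta^N\rangle=c_N\,t^{N/2}$) recovers $c_N=(N-1)!!$:

```python
# Iterate <θ^N>(t) = N(N-1)/2 · χ(0) ∫_0^t <θ^{N-2}>(s) ds on monomials
# c t^k, with χ(0)=1; compare the resulting coefficient with (N-1)!!.
def moment_coeff(N):
    """Coefficient c_N in <θ^N>(t) = c_N t^{N/2}, from the recursion."""
    c, k = 1.0, 0.0                      # <θ^0> = 1 = c t^0
    for n in range(2, N + 1, 2):
        # ∫_0^t c s^k ds = c t^{k+1}/(k+1), multiplied by n(n-1)/2
        c, k = n * (n - 1) / 2 * c / (k + 1), k + 1
    return c

def double_fact(N):
    out = 1
    for m in range(N - 1, 0, -2):
        out *= m
    return out

for N in (2, 4, 6, 8):
    assert moment_coeff(N) == double_fact(N)
```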
The relation (\ref{contr}) also permits an effective calculation of the higher
structure functions $S^\theta_{_N}(t,r)$ in the strongly compressible
phase. We prove in Appendix E that for $N$ even,
\begin{eqnarray}
S^\theta_{_N}(t,r)\ =\ N(N-1)\,\smallint\limits_0^tds
\smallint\limits_0^\infty P_2^{t,s}(r,r')\,\,S^\theta_{_{N-2}}(s,r')
\,\,(\chi(0)-\chi({r'}))\,\,d\mu_d({r'})\hspace{0.025cm}.
\label{rrsn}
\qqq Note that $S^\theta_{_N}$ satisfies the evolution equation
\begin{eqnarray}
\partial_t\hspace{0.025cm} S^\theta_{_N}\ =\ -\hspace{0.025cm} M_2\hspace{0.025cm} S^\theta_{_N}\,+\,
N(N-1)\,S^\theta_{_{N-2}}\,\hspace{0.025cm}(\chi(0)-\chi)\hspace{0.025cm}.
\label{rrde}
\qqq
This is the same equation that would have been obtained directly
from Eq.\,\,(\ref{ps}) by neglecting the viscous term and averaging
with respect to the Gaussian fields ${{\bf v}}$ and $f$, e.g.
by integration by parts \cite{UF}.
The situation should be contrasted with that in the weakly
compressible case where the evolution equations for the structure
functions do not close due to the dissipative anomaly
which adds to Eq.\,\,(\ref{rrde}) terms that are not directly
expressible by the structure functions \cite{Kr94}, see also \cite{BKG}.
\vskip 0.3cm
Substituting into Eq.\,\,(\ref{rrsn}) the expression (\ref{summ}) for
$P^{t,s}_2$ in the strongly compressible phase, we obtain \begin{eqnarray}
S^\theta_{_N}(t,r)\ =\
N(N-1)\,\smallint\limits_0^tds\smallint\limits_0^\infty \ee^{-\vert
t-s\vert\hspace{0.025cm} M_2^+}(r,r')
\,\,S^\theta_{_{N-2}}(s,r')\,\,(\chi(0)-\chi({r'}))
\,\,d\mu_d({r'})\hspace{0.025cm}.\quad
\label{taf}
\qqq The above formula implies, by induction, that the $S^\theta_{_N}$
are positive functions (no surprise), growing in time. Suppose that
$S^\theta_{_{N-2}}(t,r)$ reaches a stationary form
$S^\theta_{_{N-2}}(r)$ which behaves proportionally to the zero mode
$r^{1-a}$ for small $r$ and which exhibits the normal scaling $\propto
r^{({_N\over^2}-1)(2-\xi)}$ for large $r$ ($S^\theta_2$ behaves this
way for $\wp>{d\over\xi^2}$, i.e. for $b>1$). Then \begin{eqnarray}
\smallint\limits_0^tds\smallint\limits_0^\infty \ee^{-\vert t-s\vert\hspace{0.025cm}
M_2^+}(r,r')
\,\,S^\theta_{_{N-2}}(s,r')\,\,\chi({r'})\,\,d\mu_d({r'}) \qqq
converges when $t\to\infty$ to a function bounded by \begin{eqnarray}
\smallint\limits_0^\infty \hspace{0.025cm}(M_2^+)^{-1}(r,r')
\,\,S^\theta_{_{N-2}}(r')\,\,\chi({r'})\,\,d\mu_d({r'})\hspace{0.025cm}, \qqq which
behaves as $r^{1-a}$ for small $r$ and as a constant for large $r$,
compare to the estimate (\ref{gf}). On the other hand, the dominant
contribution to the $\chi(0)$ term in Eq.\,\,(\ref{taf}) is
proportional to \begin{eqnarray} \smallint\limits_0^tds\smallint\limits_0^\infty
\ee^{-s\hspace{0.025cm} M_2^+}(r,r')\,\,{r'}^{({_N\over^2}-1) (2-\xi)}\,\,
d\mu_d({r'})\hspace{0.025cm}.
\label{mcon}
\qqq Since, by Eq.\,\,(\ref{norm+}) of Appendix B,
$\smallint\limits_0^\infty\ee^{-s\hspace{0.025cm} M_2^+}(r,r')
\,\,{r'}^{({_N\over^2}-1)(2-\xi)}\,\, d\mu_d({r'})$ vanishes at $s=0$
and behaves as $s^{{_N\over^2}-1-b}\hspace{0.025cm} r^{1-a}$ for large $s$, we infer
that the integral (\ref{mcon}) stabilizes when $t\to\infty$ only if
\begin{eqnarray} {_N\over^2}\,<\,b\,=\,{_{1-a}\over^{2-\xi}}\hspace{0.025cm}.
\label{stri}
\qqq This condition becomes more and more stringent with increasing
$N$. If it is not satisfied, then the contribution (\ref{mcon}), and,
consequently, $S^\theta_{_N}(t,r)$, diverge as $t^{{_N\over^2}-b}\hspace{0.025cm}
r^{1-a}$. If it is satisfied, the contribution (\ref{mcon}) reaches a
limit when $t\to\infty$ which is proportional to
$r^{{_N\over^2}(2-\xi)}$. It then dominates for large $r$ the
stationary $N$-point structure function $S^\theta_{_N}(r)$, which for
small $r$ behaves as $r^{1-a}$, reproducing our inductive assumptions.
\vskip 0.3cm
{\bf Summarizing}: The even $N$-point structure functions become
stationary at long times only if the conditions (\ref{stri}) are
satisfied and they exhibit then the normal scaling at distances much
larger than the injection scale $L$, i.e. in the inverse energy
cascade. At distances much shorter than $L$, however, the existing
stationary structure functions are very intermittent: they scale with
the fixed power $1-a$. \vskip 0.3cm
\subsection{Generating function and p.d.f. of scalar differences}
It is convenient to introduce the generating function for the
structure functions of the scalar defined by \begin{eqnarray}
{\cal Z}^\theta(\lambda,t,r)\ =\ \langle\,\ee^{\hspace{0.025cm} i\hspace{0.025cm}\lambda
\hspace{0.025cm}(\theta(t,\Nr)-\theta(t,{\bf 0}))}\hspace{0.025cm}\rangle\ =\
\sum\limits_{n=0}^\infty{_{(-1)^n\hspace{0.025cm}\lambda^{2n}}\over^{(2n)!}} \,\,
S^\theta_{2n}(t,r)\,.
\label{gfsd}
\qqq We shall take $\lambda$ real. Note that the evolution equation
(\ref{rrde}) implies that \begin{eqnarray} \partial_t\hspace{0.025cm} {\cal Z}^\theta\ =\
-\hspace{0.025cm}[M_2\hspace{0.025cm}+\hspace{0.025cm}\lambda^2\hspace{0.025cm}(\chi(0)-\chi)]\, {\cal Z}^\theta\hspace{0.025cm}.
\label{gfee}
\qqq At time zero, ${\cal Z}^\theta\equiv 1$. Since $M_2+\lambda^2
(\chi(0)-\chi)$ has a boundary condition problem at $r=0$ similar to
that of $M_2$, one should be careful writing down the solution of the
parabolic equation (\ref{gfee}). It is not difficult to see
that ${\cal Z}^\theta$ is given by a Feynman-Kac type formula:
\begin{eqnarray}
{\cal Z}^\theta(\lambda,t,r)\ =\ E_{_r}\Big(\ee^{-\lambda^2
\smallint\limits_0^t(\chi(0)-\chi(r(s)))\, ds}\Big),
\label{cz}
\qqq
where $E_{_r}$ is the expectation value w.r.t. the Markov process
$r(s),\ s\geq 0$, with transition probabilities
$P_2^{t,s}(r,r')$, starting at time zero at $r$. Due to the
delta-function term in the transition probabilities, almost all
realizations $r(s)$ of the process arrive in finite time at $r=0$
and then do not move. Note that ${\cal Z}^\theta(0,t,r)
={\cal Z}^\theta(\lambda,t,0)=1$ and that ${\cal Z}^\theta(\lambda,t,r)$
decreases in time. Moreover, ${\cal Z}^\theta(\lambda,s,r)$ for $s\geq t$
is bounded below by an expectation similar to that of Eq.\,\,(\ref{cz})
but with the additional restriction that $r(t)=0$ (and hence that
$r(s')=0$ for all $s'\geq t$). The latter is positive since the
probability that $r(t)=0$ is non-zero (it even tends to one when
$t\to\infty$). Thus a non-trivial limit $\lim\limits_{t\to\infty}\,
{\cal Z}^\theta(\lambda,t,r)\equiv{\cal Z}^\theta(\lambda,r)$ exists. It
satisfies the stationary version of Eq.\,\,(\ref{gfee}):
\begin{eqnarray}
[M_2\hspace{0.025cm}+\hspace{0.025cm}\lambda^2\hspace{0.025cm}(\chi(0)-\chi)]\,{\cal Z}^\theta\ =\ 0\hspace{0.025cm}.
\qqq
In particular, for $r$ large, for which we may drop $\chi(r)$,
${\cal Z}^\theta(\lambda,r)$ is an eigenfunction of the operator $M_2$
given by Eq.\,\,(\ref{m2ri0}) with the eigenvalue $-\lambda^2\chi(0)$.
This permits us to find the analytic form of the generating
function ${\cal Z}^\theta(\lambda,r)$ in this regime, a rather rare
situation in the study of models of turbulence. We have
\begin{eqnarray}
{\cal Z}^\theta(\lambda,r)\ \cong\ {_{2^{1-b}}\over^{\Gamma(b)}}
\hspace{0.025cm}\left(\sqrt{\chi(0)/Z'}\,\vert\lambda\vert \hspace{0.025cm}
r^{_{2-\xi}\over^2}\right)^{\hspace{-0.03cm}b}\,K_b
({\sqrt{\chi(0)/Z'}} \,\vert\lambda\vert\,
r^{{2-\xi}\over2})\,\equiv\,{\cal Z}^\theta_{\rm sc} (\lambda^2\hspace{0.025cm}
r^{2-\xi})\hspace{0.025cm}.\ \quad
\qqq
The Bessel function $K_b(z)$ decreases exponentially
at infinity. We have chosen the normalization so that
${\cal Z}^\theta(0,r)=1$. Since $z^b\hspace{0.025cm} K_b(z)$ has an expansion around
$z=0$ in terms of $z^{2n}$ and $z^{2b+2n}$ with $n\geq0$, it is
$N$-times differentiable at zero only if ${N\over2}<b$. Not
surprisingly, this is the same condition that we met above as the
requirement for the existence of stationary limits of the structure
functions $S^\theta_{_N}(t,r)$. \vskip 0.3cm
{\bf In short}: in the strongly compressible phase
$\wp\geq{d\over\xi^2}$, the generating function
${\cal Z}^\theta(\lambda,t,r)$ has a stationary limit ${\cal Z}^\theta(\lambda,r)$
which for large distances takes the scaling form ${\cal Z}^\theta_{\rm sc}
(\lambda^2 r^{2-\xi})$. Although ${\cal Z}^\theta(\lambda,r)$ is non-Gaussian
and not even a smooth function of $\lambda$, its normal scaling in
the large $r$ regime, responsible for the normal scaling of the
existing structure functions, signals the {\bf suppression
of intermittency in the inverse cascade}.
\vskip 0.3cm
The same behavior may be seen in the Fourier transform of the
generating function ${\cal Z}^\theta(\lambda,t,r)$ giving the p.d.f. of
scalar differences:
\begin{eqnarray}
\label{distribuzione}
{\cal P}^\theta(t,\vartheta,r)\ \equiv\ \langle\,\hspace{0.025cm}\delta(\vartheta
-\theta(t,\Nr)+\theta(t,{\bf 0}))\,\rangle\ =\ {_1\over^{2\pi}}\,
\int\ee^{-i\hspace{0.025cm}\lambda\hspace{0.025cm}\vartheta}\,{\cal Z}^\theta(\lambda,t,r)\,
d\lambda\hspace{0.025cm}.
\qqq
The $t\to\infty$ limit ${\cal P}^\theta(\vartheta,r)$ of the finite-time
p.d.f. satisfies the partial differential
equation $\,[\hspace{0.025cm} M_2\,-\,(\chi(0)-\chi)\,\partial_{\vartheta}^2\hspace{0.025cm}]\,
{\cal P}^\theta\,=\, 0\hspace{0.025cm}.$ \,For $r$ large, the latter reduces to the
ordinary differential equation
\begin{eqnarray}
\partial_x\hspace{0.025cm}[\hspace{0.025cm}(\chi(0)\hspace{0.025cm}+\hspace{0.025cm} Z'\hspace{0.025cm} x^2)\,\partial_x\,+
\,(2b+1)\hspace{0.025cm} Z'\hspace{0.025cm} x\hspace{0.025cm}]\hspace{0.025cm}\, p^\theta\ =\ 0\hspace{0.025cm},
\qqq
where $\,{\cal P}^\theta(\vartheta,r)=r^{-{_{2-\xi}\over^2}}\hspace{0.025cm}
p^\theta(r^{-{_{2-\xi}\over^2}}\hspace{0.025cm}\vartheta)\hspace{0.025cm}.$ \,The normalized
solution, smooth at $x=0$, is
\begin{eqnarray}
p^\theta(x)\ =\ {_{\sqrt{Z'}\,\chi(0)^b\, \Gamma(2b)}
\over^{2^{2b-1}\hspace{0.025cm}\Gamma(b)^2}}\ [\hspace{0.025cm}\chi(0)\hspace{0.025cm}
+\hspace{0.025cm} Z'\, x^2\hspace{0.025cm}]^{^{-b-{1\over2}}}\hspace{0.025cm}.
\label{ppp}
\qqq
It is the Fourier transform of the generating function
${\cal Z}^\theta_{\rm sc}(\lambda^2)$. Note that the condition ${N\over2}<b$
becomes now the condition for the existence of the $N$-th moment of
$p^\theta$. The slow decay of $p^\theta(x)$ at infinity renders most
of the moments divergent.
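The normalization chosen in Eq.\,\,(\ref{ppp}) can be checked against the standard Student-like density: by the Legendre duplication formula the prefactor reduces to $\sqrt{Z'}\,\chi(0)^b\,\Gamma(b+{1\over2})/(\sqrt{\pi}\,\Gamma(b))$, which is exactly the constant normalizing $[\chi(0)+Z'x^2]^{-b-{1\over2}}$ to unit mass. A short numerical sketch of this identity (the values of $b$ are arbitrary):

```python
import math

# Check that the prefactor of Eq. (ppp), Γ(2b)/(2^{2b-1} Γ(b)^2), equals
# Γ(b+1/2)/(√π Γ(b)), i.e. the normalization of the rescaled density
# (1+x²)^{-b-1/2}; the equality is Legendre's duplication formula.
for b in (1.1, 1.5, 2.3, 4.0):
    paper = math.gamma(2 * b) / (2 ** (2 * b - 1) * math.gamma(b) ** 2)
    student = math.gamma(b + 0.5) / (math.sqrt(math.pi) * math.gamma(b))
    assert abs(paper - student) < 1e-12 * student
```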
\nsection{Infrared cutoffs and the inverse cascade}
As shown in the previous section, in the strongly compressible phase,
the asymptotic behavior of the scalar $\theta$ is quasi-stationary:
due to the excitation of larger and larger scales, observables might
or might not reach a stationary form. It is therefore of fundamental
interest to analyze how the inverse cascade properties are affected
by the presence of an infrared cutoff at the very large scales.
This is also of practical importance, as such cutoffs are always present
in concrete situations in one form or another \cite{Tab,SY,Betal,Bisk}.
The simplest modification of the dynamics that introduces an infrared
cutoff is to add to (\ref{ps}) a friction term:
\begin{eqnarray}
\partial_t
\theta+{{\bf v}}\cdot\nabla\theta+\alpha\,\theta-\kappa\nabla^2\theta=f\hspace{0.025cm},
\label{friction}
\qqq
where $\alpha$ is a positive constant. We shall be interested in
studying the limit when $\alpha\to 0$. For flows smooth in space and
$\delta$-correlated in time, the case considered in \cite{MC}, the
advection and the friction terms have the same dimensions. For
non-smooth flows, this is not the case and friction and advection
balance at the friction scale
$\eta_{_f}\sim\CO\left(\alpha^{-1/(2-\xi)}\right)$ which becomes arbitrarily
large when $\alpha$ tends to zero. Roughly speaking, advection and
friction dominate at scales much smaller and much larger than $\eta_{_f}$,
respectively. The hierarchy of scales is therefore the mirror image
of the one for the direct cascade\,: the energy is injected at the
integral scale $L$, transferred upwards by the advection term
in the convective range of the inverse cascade and finally extracted
from the system at very large scales. We are interested in the influence
of the infrared cutoff scale on the inverse cascade convective range
properties and we shall therefore assume that $\wp\geq{d\over\xi^2}$
throughout this section.
\vskip 0.3cm Heuristically, it is {\it a priori} quite clear that
the friction will make the system reach a stationary state.
Specifically, the friction term in Eq.\,\,(\ref{friction}) is
simply taken into account by remarking that the field
$\widetilde\theta(t,\Nr)=\exp(\alpha\,t)\hspace{0.025cm}\theta(t,\Nr)$ satisfies the
equation (\ref{ps}) with the forcing $\ee^{\alpha\hspace{0.025cm} t}\hspace{0.025cm} f(t,\Nr)$. We can
then carry over the Lagrangian trajectory statistics from the previous
sections and we just have to calculate the averages with the
appropriate weights. For example, the recursive equation (\ref{Nc1})
for the $N$-point function in the presence of forcing becomes
\begin{eqnarray}
F^\theta_{_N}(t,\un{\Nr})\hspace{0.025cm} =
\int\limits_0^t ds\int\ee^{-(t-s)\hspace{0.025cm} N\alpha}\,
P^{t,s}_{_N}(\un{\Nr};\hspace{0.025cm}\un{\Nr'})
\sum\limits_{n<m}F^\theta_{_{N-2}}(s,\hspace{0.025cm}\Nr'_1,
\mathop{\dots\dots}\limits_{\hat{n}\ \ \hat{m}}, \Nr'_{_N})\
\chi({\vert \Nr'_n-\Nr'_m\vert})\ d'\un{\Nr'}.\ \quad
\label{Nc1a}
\qqq
Similarly, the expressions (\ref{thn}) and (\ref{rrsn})
for $\langle\,\theta(t,\Nr)^N\,\rangle$ and $S^\theta_{_N}(t,r)$
are modified by the inclusion of the factor $\ee^{-(t-s)\, N\alpha}$
under the time integrals. This renders them convergent
in the limit $t\to\infty$, in contrast with the $\alpha=0$ case.
As a result,
$\langle\,\theta(t,\Nr)^N\,\rangle$ and $S^\theta_{_N}(t,r)$
converge, as $t\to\infty$, to the solutions
of the stationary versions of the evolution equations
\begin{eqnarray}
&&\partial_t\,\langle\,\theta^N\rangle\,=\,{_{N(N-1)}\over^2}\,\chi(0)\,
\langle\,\theta^{N-2}\rangle\,-\, N\alpha\,\langle\,\theta^N\rangle\,,
\label{etha}\\
&&\partial_t\, S^\theta_{_N}\ =\ -\, M_2\, S^\theta_{_N}\,+\,
N(N-1)\,S^\theta_{_{N-2}}\,(\chi(0)-\chi)\,-\,N\alpha\, S^\theta_{_N}\,.
\label{rrda}
\qqq
We obtain then in the stationary state: $\langle\,\theta(\Nr)^N\rangle=
(N-1)!!\,({\chi(0)\over2\alpha})^{^{N\over2}}$ and
\begin{eqnarray}
\label{soluzione}
S^\theta_{_N}(r)\,=\,\left({_{\chi(0)}\over^\alpha}
\right)^{^{\hspace{-0.1cm}{N\over 2}}}(N-1)!!
\,\Big[1+{_N\over^{2^b\,\Gamma(b)}}\sum_{k=1}^{N\over 2}{_{(-1)^k}\over^k}
\left({{}_{{N\over 2}-1}\atop{}^{k-1}}\right)
z_k^b\,K_b\left(z_k\right)\Big],
\qqq where the variables $z_k$ are defined as
\begin{eqnarray}
\label{etaf}
z_k\equiv 2\, k^{{_1\over^2}}\,({_r\over^{\eta_{_f}}})^{^{2-\xi\over2}}\qquad
{\rm with}\qquad
\eta_{_f}\equiv ({_{2\, Z'}\over^{\alpha}})^{^{1\over2-\xi}}
\qqq
being the friction scale. For $r\gg\eta_{_f}$, all the Bessel
functions tend to zero and $S^\theta_{_N}$
reaches the constant asymptotic value $2^{^{{N\over 2}}}\hspace{-0.05cm}
(N-1)!!\,({_{\chi(0)}\over^{2\alpha}})
^{^{N\over 2}}$ which agrees with the stationary value of
$2^{^{{N\over 2}}}\hspace{-0.05cm}\langle\,\theta^N\,\rangle$.
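The stationary single-point moments can be checked independently of the closed formula: iterating the stationary balance of Eq.\,\,(\ref{etha}) must reproduce the Gaussian values $(N-1)!!\,({\chi(0)\over2\alpha})^{N\over2}$. A minimal numerical sketch (not part of the original text; the values of $\chi(0)$ and $\alpha$ are the illustrative ones of Fig.~1):

```python
from math import prod

def stationary_moment(N, chi0, alpha):
    """<theta^N> from the stationary balance of Eq. (etha):
    N*alpha*<theta^N> = (N(N-1)/2)*chi(0)*<theta^(N-2)>."""
    m = 1.0
    for n in range(2, N + 1, 2):
        m *= (n - 1) * chi0 / (2 * alpha)
    return m

def gaussian_moment(N, chi0, alpha):
    """(N-1)!! * (chi(0)/(2*alpha))**(N/2), the Gaussian prediction."""
    return prod(range(N - 1, 0, -2)) * (chi0 / (2 * alpha)) ** (N // 2)

# chi(0) = 1, alpha = 2 as in Fig. 1
print([stationary_moment(N, 1.0, 2.0) for N in (2, 4, 6)])  # -> [0.25, 0.1875, 0.234375]
```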
The expansion of $K_b$ for small
arguments gives the following dominant behaviors in
the inverse cascade convective range $L\ll r\ll \eta_{_f}$:
\begin{eqnarray}
\label{risposta}
S^\theta_{_N}(r)\ \cong\ \left\{ \begin{array}{ll}
c_1\,\,r^{^{{N\over2}(2-\xi)}} & \mbox{if\quad$b>{_N\over^2}$}\,,\\
c_2\,\,r^{^{{N\over2}(2-\xi)}}\log\left({\eta_{_f}\over r}\right)
& \mbox{if\quad$b={_N\over^2}$}\,,\\
c_3\,\,r^{^{{N\over2}(2-\xi)}} \left({\eta_{_f}\over r}\right)
^{(2-\xi)({N\over2}-b)}&
\mbox{if\quad$b<{N\over2}$}\,,
\end{array}\right.
\qqq
where the constants $c_i$ are given by
\begin{eqnarray}
\label{costanti}
\begin{array}{ll}
c_1=\left(\chi(0)\over 4\, Z'\right){N!\over(N/2)!}
{\Gamma(b-N/2)\over \Gamma(b)}\,,\quad\qquad
c_2=\left(\chi(0)\over 4\, Z'\right){N!\over(N/2)!}
{2-\xi \over \Gamma(b)}\,,\\
\\
c_3=\left(\chi(0)\over 4\, Z'\right){N!\over(N/2-1)!}
{\Gamma(1-b)\over \Gamma(1+b)}
\left(-\sum\limits_{k=1}^{N/2}{(-1)^k\over k}\left(
{{N/2-1}\atop{k-1}}\right)k^b\right).
\end{array}
\qqq
The threshold $b={N\over2}$ in Eq.\,\,(\ref{risposta}) is the
same as in the inequality (\ref{stri}), discriminating the
moments that do not converge at large times in the absence of
friction. The converging moments are not
modified by the presence of friction.
Conversely, those that were diverging now
tend to finite values, but they pick up an
anomalous scaling behavior in the cutoff scale
$\eta_{_f}$. Note that the moments with ${N\over2}>b$
all scale with the exponent $1-a$.
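The dominant behaviors may also be probed by evaluating Eq.\,\,(\ref{soluzione}) numerically. The sketch below (an independent cross-check, not part of the original derivation) computes $K_b$ from its standard integral representation $K_b(z)=\int_0^\infty\ee^{-z\cosh t}\cosh(bt)\,dt$ by Simpson's rule; the parameter values ($\xi=1$, $b=3$, $\chi(0)=\alpha=Z'=1$, hence $\eta_{_f}=2$ and $b>{N\over2}$ for $N=4$) are arbitrary illustrative choices:

```python
import math

def K(b, z, n=20000, tmax=18.0):
    """Modified Bessel function K_b(z) via Simpson's rule on the
    integral representation K_b(z) = int_0^inf exp(-z*cosh t)*cosh(b*t) dt."""
    h = tmax / n
    s = 0.0
    for i in range(n + 1):
        t = i * h
        w = 1 if i in (0, n) else (4 if i % 2 else 2)
        s += w * math.exp(-z * math.cosh(t)) * math.cosh(b * t)
    return s * h / 3

def S_theta(N, r, b, xi, chi0, alpha, Zp):
    """Structure function S^theta_N(r) of Eq. (soluzione)."""
    eta_f = (2 * Zp / alpha) ** (1 / (2 - xi))
    bracket = 1.0
    for k in range(1, N // 2 + 1):
        z = 2 * math.sqrt(k) * (r / eta_f) ** ((2 - xi) / 2)
        bracket += (N / (2 ** b * math.gamma(b))) * ((-1) ** k / k) \
                   * math.comb(N // 2 - 1, k - 1) * z ** b * K(b, z)
    return (chi0 / alpha) ** (N / 2) * math.prod(range(N - 1, 0, -2)) * bracket

# Deep in the convective range with b = 3 > N/2 = 2: normal scaling r^{N(2-xi)/2}
s1 = S_theta(4, 1e-3, 3.0, 1.0, 1.0, 1.0, 1.0)
s2 = S_theta(4, 2e-3, 3.0, 1.0, 1.0, 1.0, 1.0)
slope = math.log(s2 / s1) / math.log(2.0)   # should be close to N(2-xi)/2 = 2
```

For $r\gg\eta_{_f}$ the same routine reproduces the saturation value $2^{N\over2}(N-1)!!\,({\chi(0)\over2\alpha})^{N\over2}$.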
\begin{figure}
\begin{center}
\vspace{-0.6cm}
\mbox{\hspace{0.0cm}\psfig{file=pdf.ps,height=8cm,width=9cm}}
\end{center}
\vspace{-0.6cm}
\caption{{\it The probability distribution function of scalar differences with
and without friction (solid and dashed lines). The specific parameters
are $d=1$, $\xi=1$, ${\cal S}^2={\cal C}^2={_1\over^2}$, $\chi(0)=1$, $\alpha=2$,
and $r=0.01$.}}
\label{fig1}
\end{figure}
\vskip 0.3cm
It is interesting to look at this saturation
from the point of view of the p.d.f.
${\cal P}^\theta(\vartheta,r)$, defined
as in the relation (\ref{distribuzione}). The equation for
${\cal P}^\theta$ can be derived by the same
procedure as in the previous section. Its
stationary version reads
\begin{eqnarray}
\label{proba}
-M_2{\cal P}^\theta\,+\,\alpha\,\partial_{\vartheta}\left(\vartheta{\cal P}^\theta\right)
+\,(\chi(0)-\chi)\,\partial_{\vartheta}^2\, {\cal P}^\theta\ =\ 0\,.
\qqq
For the scales $r\gg L$ of interest to us here, $\chi$ can be neglected
with respect to $\chi(0)$.
The relevant information on the p.d.f. ${\cal P}^\theta$ is conveniently
extracted by expanding the function ${\cal P}^\theta$ in a series
\begin{eqnarray}
\label{espansione}
{\cal P}^\theta(\vartheta,r)=\sqrt{{_\alpha\over^{2\pi\,\chi(0)}}}\,
\ee^{-{\alpha\vartheta^2\over 2\chi(0)}}\sum_{k=0}^\infty d_{2k}(r)\,
H_{2k}\left(\sqrt{{_\alpha\over^{2\chi(0)}}}\,\vartheta\right)
\qqq
in the Hermite polynomials $H_{2k}$. The coefficients $d_{2k}$ can be
obtained by plugging the expansion (\ref{espansione}) into
Eq.\,\,(\ref{proba}) and using well-known properties of Hermite
functions. One obtains
\begin{eqnarray}
\label{coefficienti}
d_0(r)=1\,,\qquad d_{2k}(r)={_{(-1)^k}\over^{\Gamma(b)}}{_1\over^{k!\,
2^{b-1+2k}}}\,z_k^b\,K_b(z_k)\,,
\qqq where $z_k$ is defined in Eq.\,\,(\ref{etaf}). At
scales $r\gg\eta_{_f}$, friction dominates and the p.d.f. tends to a
Gaussian form. In the inverse cascade convective range $L\ll r\ll\eta_{_f}$,
the solution (\ref{ppp}) remains valid as long as $\vartheta^2\ll
{\chi(0)\over\alpha}$, while the power-law tails are cut by an
exponential fall-off. The situation is exemplified in Fig.~1. The
scaling behavior (\ref{risposta}) of the structure functions is then easy
to grasp: for ${N\over2}<b$, the dominant contribution comes from the
scale-invariant part of the p.d.f., resulting in the normal scaling, whereas for
${N\over2}\geq b$, the leading contribution comes from the region
around $\vartheta^2={\chi(0)\over\alpha}$, with the tails cut out by friction.
The dominant behavior can be captured by simply calculating the moments
with the scale-invariant p.d.f. (\ref{ppp}) cut at
$\vartheta^2={\chi(0)\over\alpha}$.
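The Gaussian limit at $r\gg\eta_{_f}$ can be verified directly: there the $M_2$ and $\chi$ terms of Eq.\,\,(\ref{proba}) are negligible, and the friction and forcing terms must cancel on the Gaussian weight $\ee^{-{\alpha\vartheta^2\over2\chi(0)}}$. A finite-difference sketch (an illustration added here, with arbitrary parameter values):

```python
import math

alpha, chi0 = 2.0, 1.0   # arbitrary illustrative values

def P(v):
    """Gaussian part of Eq. (espansione): sqrt(alpha/(2 pi chi0)) exp(-alpha v^2/(2 chi0))."""
    return math.sqrt(alpha / (2 * math.pi * chi0)) * math.exp(-alpha * v * v / (2 * chi0))

def residual(v, h=1e-4):
    """alpha * d/dv (v P) + chi(0) * d^2 P/dv^2, i.e. the friction and forcing
    terms of the stationary equation with chi neglected, by central differences."""
    d1 = ((v + h) * P(v + h) - (v - h) * P(v - h)) / (2 * h)
    d2 = (P(v + h) - 2 * P(v) + P(v - h)) / (h * h)
    return alpha * d1 + chi0 * d2

# the residual should vanish (up to discretization error) for all theta-differences
worst = max(abs(residual(-2.0 + 0.1 * i)) for i in range(41))
```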
\nsection{Advection of a density}
The 2-point function at time $t$ of the scalar $\rho$, whose advection is
governed by Eq.\,\,(\ref{psb}), may be studied similarly to that of the
tracer $\theta$, see \cite{russ,mazverg}. For the free decay
of the 2-point function, we obtain \begin{eqnarray} F^\rho_2(t,r)\equiv
\langle\,\rho(t,\Nr)\,\rho(t,{\bf 0})\,\rangle\ = \
\int\limits_0^\infty P^{0,t}_2(r',r) \,\,F_2^\rho(0,r')\,
d\mu_d({r'})\,,
\label{ind1}
\qqq where we have used Eqs.\,\,(\ref{sc00}) and (\ref{alte}), compare
to Eq.\,\,(\ref{ind}). The evolution (\ref{psb}) of the scalar $\rho$
preserves the total mass $\int\hspace{-0.08cm}\rho(t,\Nr)\, d\Nr$.
As a consequence,
the mean ``total mass squared'' per unit volume
$m^2_\rho(t)\equiv\int\langle\,\rho(t,\Nr)\, \rho(t,{\bf
0})\,\rangle\, d\Nr=\int\limits_0^\infty F^\rho_2(t,r) \, d\mu_d(r)$
does not change in time in either phase. \vskip 0.3cm
In the presence of the stationary forcing, the 2-point function of
$\rho$ computed with the use of Eqs.\,\,(\ref{sc2}) and (\ref{alte}) becomes
\begin{eqnarray}
F^\rho_2(t,r)\ =\
\int\limits_0^tds\int\limits_0^\infty P^{s,t}_2(r',r)
\,\chi({r'})\,\, d\mu_d({r'})\,,
\label{dif1}
\qqq
if $\rho=0$ at time zero. It evolves according to the equation
\begin{eqnarray}
\partial_t F^\rho_2\,=\,-(M_2^{\kappa})^*\, F^\rho_2\,+\,\chi\,,
\label{diffd}
\qqq
i.e. similarly to $F^\theta_2$, see Eq.\,\,(\ref{diffe}), but with
$M_2^{\kappa}$ exchanged for its adjoint
$(M_2^{\kappa})^*$, a signature of the duality between
the two scalars noticed before.
\vskip 0.3cm
For $\wp<{_{d-2+\xi}\over^{2\,\xi}}$, the 2-point function
$F^\rho_2(t,r)$ attains a stationary form given by
Eq.\,\,(\ref{2pkar}) of Appendix D for $\kappa>0$ and reducing for
$\kappa=0$ to the expression
\begin{eqnarray} F^\rho_2(r)\ =\ {_1\over^{(a-1)\,
Z}}\, r^{-d+1+a-\xi}\smallint\limits_r^\infty
{r'}^{d-a}\,\chi(r')\, dr'\,+\,{_1\over^{(a-1)\, Z}}
\,r^{-d+2-\xi}\smallint\limits_0^r{r'}^{d-1}\,\chi(r')
\, dr'\,.\ \quad
\label{36}
\qqq
In particular, $F_2^\rho(r)$ becomes proportional to $r^{-d+1+a-\xi}$
for small $r$ and diverges at $r=0$, except in the
incompressible case, in which the two scalars coincide. The small-$r$
behavior agrees with the result of \cite{russ} and, in one dimension,
with that of \cite{mazverg}. For large distances $r$,
the function $\,F_2^\rho(r)$ is proportional to $r^{-d+2-\xi}$.
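These asymptotics can be probed numerically from Eq.\,\,(\ref{36}). In the sketch below (an illustration added here, not from the original text) the choices $d=3$, $\xi=1$, $a=2.5$, $Z=1$, a Gaussian $\chi$ with $L\sim1$, and a cutoff at $r'=10$ standing in for the upper limit $\infty$ are all arbitrary; the fitted log-slope should then come out near $-d+1+a-\xi=-0.5$:

```python
import math

d, xi, a, Z = 3, 1.0, 2.5, 1.0        # illustrative weakly compressible values
chi = lambda r: math.exp(-r * r)       # model forcing correlation, L ~ 1

def simpson(f, lo, hi, n=4000):
    h = (hi - lo) / n
    return h / 3 * sum((1 if i in (0, n) else 4 if i % 2 else 2) * f(lo + i * h)
                       for i in range(n + 1))

def F2_rho(r):
    """Stationary 2-point function of Eq. (36) at kappa = 0
    (upper limit infinity replaced by 10, where chi is negligible)."""
    t1 = r ** (-d + 1 + a - xi) * simpson(lambda s: s ** (d - a) * chi(s), r, 10.0)
    t2 = r ** (-d + 2 - xi) * simpson(lambda s: s ** (d - 1) * chi(s), 0.0, r)
    return (t1 + t2) / ((a - 1) * Z)

# log-slope at small r: should approach -d+1+a-xi = -0.5
slope = math.log(F2_rho(2e-3) / F2_rho(1e-3)) / math.log(2.0)
```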
In the upper interval
${_{d-2+\xi}\over^{2\,\xi}}<\wp<{_d\over^{\xi^2}}$
of the weakly compressible phase, it is
$$F^\rho_2(t,r)\,-\, {_{(4\, Z')^{b}}\over^{(1-a)\,
Z\,\Gamma({1-b})}}\,\,t^{b}\,r^{-d+1+a-\xi}\smallint\limits_0^\infty
{r'}^{d-1}\,\chi(r')\, dr'$$
that reaches the stationary limit still given for $\kappa=0$
by the right hand side of Eq.\,\,(\ref{36}). Thus the 2-point function
$F_2^\rho(t,r)$ is pumped now into the zero mode $r^{-d+1+a-\xi}$
of the operator $M_2^*$.
\vskip 0.3cm
The higher order correlation functions of $\rho$,
$\,F^\rho_N(t,\un{\Nr})\,\equiv\,\langle\,\prod\limits_{n=1}^N
\rho(t,\Nr_n)\,\rangle$, \,are expected to converge for long times to
a stationary form for sufficiently small $\xi$ and/or compressibility
and to be dominated at short distances by a scaling zero mode $\psi_0$
of the operator $M_N^*$, \begin{eqnarray} \psi_0(\un{\Nr})\ =\
1\,-\,d\,{\wp}\, \xi\sum\limits_{1\leq n<m\leq N}
\ln{\vert\Nr_n-\Nr_m\vert}\ +\ \CO(\xi^2)\,.
\label{zmrho}
\qqq The zero mode $\psi_0$ becomes equal to 1 when $\xi\to0$. The
scaling dimension of $\psi_0$ may be easily calculated to first
order in $\xi$ by applying the dilation generator to the left hand
side of Eq.\,\,(\ref{zmrho}). It is equal to \begin{eqnarray} -\,{_{N(N-1)\, d}
\over^{2}}\, \wp\,\xi\ +\CO(\xi^2) \ \equiv\
{_N\over^2}(2-\xi)\,+\,\Delta^\rho_{_N}\,, \qqq which again agrees
with the result
with the result
$$\Delta^\rho_{_N}\,=\, -\, N\,
+\,{_{N\,(1-(N-1)\,\wp\, d)}\over^{2}}\,\xi\
+\ \CO(\xi^2)$$
of \cite{russ} and with the exact result $\,\Delta^\rho_2=a-1-d\,$
obtained above. Note the singular behavior of
$\psi_0(\un{\Nr})$ at the origin, at least for small $\xi$.
\vskip 0.4cm
Finally, in the strongly compressible phase
$\wp\geq{d\over\xi^2}$, where the second of the
expressions (\ref{summ}) has to be used for $P_2^{s,t}$, we obtain
\begin{eqnarray}
F^\rho_2(t,r)&=&\smallint\limits_0^tds\smallint\limits_0^\infty
\ee^{-s\, M^+_2}(r',r)\,\chi(r')\, d\mu_d(r')\cr &&+\
\delta(\Nr)\,\smallint\limits_0^tds\smallint\limits_0^\infty
[\m1\,-\,\gamma(b,\,{_{{r'}^{2-\xi}}\over^{4\, Z'\,
s}})\,\Gamma(b)^{-1}]\,\,\chi(r')\, d\mu_d(r')\,.
\qqq
For large times, $F_2^\rho(t,r)$ is pumped into
the singular mode proportional to the delta-function
at a constant rate\footnote{The pumping disappears if
$\smallint_0^\infty\chi(r)\, d\mu_d(r)=0$, which is the case
considered in \cite{mazverg,russ}, but even then $F_2^\rho$
picks up a singular contribution in the limit $\kappa\to0$.}:
\begin{eqnarray}
&&F^\rho_2(t,r)\ -\ \delta(\Nr)\,
t\,\smallint\limits_0^\infty \chi(r')\, d\mu_d(r')\ \
\mathop{\rightarrow}\limits_{t\to\infty} \ \ {_1\over^{(1-a)\, Z}}\,
r^{-d+1+a-\xi}\smallint\limits_0^r {r'}^{d-a}\,\chi(r')\, dr'\cr
&&+\ {_1\over^{(1-a)\, Z}}\,r^{-d+2-\xi}\smallint\limits_r^\infty
{r'}^{d-1}\,\chi(r')\, dr'\ +\ {_{S_{d-1}}\over^{(2-\xi)(1+a-\xi)\,
Z}}\,\delta(\Nr)\smallint\limits_0^\infty{r'}^{d+1-\xi}
\,\chi(r')\, dr'
\qqq
(except for $\wp={d\over\xi^2}$), compare to Eqs.\,\,(\ref{gf0})
and (\ref{tin}). Its non-singular part, however, stabilizes
and becomes proportional to $r^{-d+2-\xi}$ for small $r$ and
to $r^{-d+1+a-\xi}$ for large $r$. Note the inversion
of the powers as compared with the weakly compressible phase.
\vskip 0.3cm
The mean total mass squared of $\rho$ per unit volume, $m^2_\rho$,
exhibits in the presence of forcing a position-space cascade analogous
to the wavenumber-space cascade of the energy $e_{_\theta}$. Let us
localize $m^2_\rho$ in space by defining its amount between the radii
$r$ and $R$ as \begin{eqnarray} m^2_{\rho;\, r,R}(t)\ =\ \smallint\limits_r^R
F^\rho_2(t,r') \, d\mu_d(r')\,.
\label{eloc}
\qqq Integrating the evolution equation (\ref{diffd}) for
$F^\rho_2$ with the radial
measure $d\mu_d$ from $r$ to $R$, and using the explicit form of
$\,M_2^*=-\, Z\, r^{-d+1}\,\partial_r\, r^a\,\partial_r\,
r^{d-1-a+\xi}\,,$ \,we obtain the relation
\begin{eqnarray}
\partial_t\,m^2_{\rho;\, r,R}(t)\ =\ Z\, S_{d-1}\, {r'}^a\,\partial_{r'}\,
{r'}^{d-1-a+\xi}\,F^\rho_2(t,r')\Big\vert_{r}^{R}\
+\ \smallint\limits_r^R\chi(r')\, d\mu_d(r')
\qqq
expressing the local balance of the total mass
squared, provided that we interpret $\smallint\limits_r^R\chi(r')\,
d\mu_d(r')$ as the injection rate of $m^2_\rho$ in the radii between
$r$ and $R$ and $$Z\, S_{d-1}\, r^{a}\,\partial_r\, r^{d-1-a+\xi}\,
F^\rho_2(t,r)\,\equiv\,\Phi(r)$$
as the flux of $m^2_\rho$ into the radii $\leq r$.
In the weakly compressible phase $\wp<{_d\over^{\xi^2}}$
and in the stationary state, $\Phi(r)=-\,
\smallint\limits_0^r\chi(r')\, d\mu_d(r')$,
so that the flux is constant for $r$ much larger than the injection
scale $L$ and is directed towards larger radii. On the other hand, in
the strongly compressible phase $\wp>{_d\over^{\xi^2}}$,
one has the equality $\Phi(r)= \smallint\limits_r^\infty\chi(r')
\, d\mu_d(r')$ so that the flux is directed towards smaller
distances. It eventually feeds into the
singular mode all of $m^2_\rho$ injected by the source. As we see,
the two phases differ also in the direction of the cascade of the
total mass squared of $\rho$.
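The stationary flux relation can be tested against the explicit solution: inserting Eq.\,\,(\ref{36}) into the definition of $\Phi(r)$ should recover $\Phi(r)=-\smallint_0^r\chi(r')\,d\mu_d(r')$. A numerical sketch (an added illustration with arbitrary weakly compressible parameters $d=3$, $\xi=1$, $a=2.5$, a Gaussian $\chi$, and a central difference for $\partial_r$):

```python
import math

d, xi, a, Z = 3, 1.0, 2.5, 1.0        # illustrative weakly compressible values
S_dm1 = 4 * math.pi                    # area of the unit sphere for d = 3
chi = lambda r: math.exp(-r * r)       # model forcing correlation

def simpson(f, lo, hi, n=4000):
    h = (hi - lo) / n
    return h / 3 * sum((1 if i in (0, n) else 4 if i % 2 else 2) * f(lo + i * h)
                       for i in range(n + 1))

def F2_rho(r):
    """Stationary 2-point function of Eq. (36) at kappa = 0 (cutoff at 10)."""
    t1 = r ** (-d + 1 + a - xi) * simpson(lambda s: s ** (d - a) * chi(s), r, 10.0)
    t2 = r ** (-d + 2 - xi) * simpson(lambda s: s ** (d - 1) * chi(s), 0.0, r)
    return (t1 + t2) / ((a - 1) * Z)

def flux(r, h=1e-5):
    """Phi(r) = Z S_{d-1} r^a d/dr [ r^{d-1-a+xi} F2_rho(r) ]."""
    g = lambda s: s ** (d - 1 - a + xi) * F2_rho(s)
    return Z * S_dm1 * r ** a * (g(r + h) - g(r - h)) / (2 * h)

# stationary balance: Phi(r) = - (injection rate into radii < r)
injected = S_dm1 * simpson(lambda s: s ** (d - 1) * chi(s), 0.0, 0.7)
```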
\nsection{Conclusions}
We have studied the Gaussian ensemble of compressible $d$-dimensional
fluid velocities decorrelated in time and with spatial behavior
characterized by the fractional H\"{o}lder exponent ${\xi\over2}$. We
have shown that the Lagrangian trajectories of fluid particles in such
an ensemble exhibit very different behavior, depending on the degree
of compressibility $\wp$ of the velocity field. For
$\wp<{_d\over^{\xi^2}}$, i.e. $b$ defined in (\ref{b}) smaller than
unity, the infinitesimally close trajectories separate in finite
time, implying that the dissipation remains finite in the limit
when the molecular diffusivity $\kappa\to 0$ and that
the energy is transferred towards small scales
in a direct cascade process. The constancy of the
flux at small scales leads to a normal scaling behavior $r^{2-\xi}$ of
the second order structure function $S^\theta_2(r)$ for $r\ll L$
(the typical scale where the energy is injected). For $b$ negative
(which includes the incompressible case), as the system evolves,
the dissipation rate tends to the injection rate rapidly enough
to ensure that the energy $\langle\theta^2\rangle$ remains finite.
The non-constant zero mode $r^{(2-\xi)b}$ controls the decay
of $S^\theta_2(r)$ to its finite asymptotic value
$2\langle\theta^2\rangle$ at large $r$. Conversely, for $0\le b<1$,
the dissipation rate tends to the injection rate very slowly,
$\propto t^{-(1-b)}$, and the energy is thus increasing with time as
$t^b$. The structure function $S^\theta_2(r)$ grows now at large
distances as the zero mode $r^{(2-\xi)b}$. For $b\ge 1$, coinciding
particles do not separate and, in fact, separated particles collapse
in a finite time. The consequences are that the dissipative anomaly is
absent and that the energy is entirely transferred toward larger and
larger scales in an inverse cascade process. The threshold $b=1$
corresponds to the crossing of the exponents: $(2-\xi)b$
of the non-constant zero mode and $2-\xi$ of the constant-flux-of-energy
solution. The picture is the mirror image of the one for the direct
cascade, with the first exponent controlling now the small scale
behavior and the second one appearing at the large scales.
A sketch of the three possible situations is presented in Fig.~2.
\begin{figure}
\begin{center}
\vspace{-0.6cm}
\mbox{\hspace{0.0cm}\psfig{file=strf.ps,height=8cm,width=9cm}}
\end{center}
\vspace{-0.6cm}
\caption{{\it The second-order structure function $S^\theta_2(r)$ {\rm vs}
$r$ for $\xi=1.8$ and $d=3$ in the three different regimes $b=-2$
(dot-dashed line), $b=0.5$ (dashed line) and $b=2$ (solid line).}}
\label{fig2}
\end{figure}
\vskip 0.3cm
Concerning higher order correlations in the strongly compressible
phase $b\ge 1$, we have shown that the inverse energy cascade is
self-similar, i.e. without intermittency. The effects of a large-scale
friction reintroduce, however, an anomalous scaling of the
structure functions that do not thermalize without friction.
These effects were exhibited in the explicit expressions for
the p.d.f.'s of the tracer differences. As for
the scalar density, the different behaviors of the Lagrangian
trajectories were shown to result in the inverse or in the direct
cascade of the total mass squared in, respectively, the weakly and the
strongly compressible phase. As explained in the Introduction, we expect
the explosive separation of the Lagrangian trajectories and/or their
collapse to persist in more realistic ensembles of fully turbulent
velocities and to play a crucial role in the determination
of statistical properties of the flows at high
Reynolds numbers. \vskip 0.5cm
\noindent{\bf Acknowledgements}. \ M. V. was partially supported by
C.N.R.S. GdR ``M\'{e}canique des Fluides G\'{e}ophysiques et
Astrophysiques''.
\vskip 0.6cm
\nappendix{A}
\vskip 0.5cm
Let us consider the operator $M_2$ of Eq.\,\,(\ref{m2ri0}) on the
half-line $[r_0,\infty[$, $r_0>0$, with the Neumann boundary condition
$\partial_r\, f(r_0)=0$. By the relation (\ref{n2}), this means that we
have to consider the operator $N_2$ on $[u_0,\infty[$ with the
boundary condition \begin{eqnarray} \partial_u\, u^{b-{_1\over^2}}\,\varphi(u)|_{_{u=u_0}}=0
\label{cbc}
\qqq for $u_0=r_0^{2-\xi\over 2}$. For non-integer $b$, the
corresponding eigen-functions of $N_2$ are
\begin{eqnarray}
\varphi_{E,u_0}(u)\ =\
C_1(u_0)\, u^{_1\over^2}\, J_{-b}(\widetilde u)\,+\, C_2(u_0)\,
u^{_1\over^2}\, J_{b}(\widetilde u)
\qqq
with $\widetilde u\equiv{\sqrt{E/Z'}}\, u\,,$
see Eq.\,\,(\ref{egf}). Since \begin{eqnarray} J_{{b}}(z)\ =\
{_{1}\over^{2^b\,\Gamma(1+{b})}}\,z^b\,(1+\CO(z^2))\,,
\label{asex}
\qqq the boundary condition (\ref{cbc}) implies that \begin{eqnarray}
C_1(u_0)\,\CO(\widetilde u_0) \,+\, C_2(u_0)\,\CO({\widetilde u_0}^{2b-1})\ =\
0\,. \qqq As a result, for (non-integer) $b<1$, $\,\lim\limits_{u_0\to0}
\,{_{C_2(u_0)}\over^{C_1(u_0)}}=\,0\,$ so that in the limit we obtain
the eigen-functions of the operator $N_2^-$.
For (non-integer) $b>1$, however, $\,\lim\limits_{u_0\to0}\,
{_{C_1(u_0)}\over^{C_2(u_0)}}=\,0\,$ and the eigen-functions tend
to those of $N_2^+$. The extension to the case of integer $b$
is equally easy.
\vskip 0.3cm
\nappendix{B}
\vskip 0.5cm
Let us give here the explicit form of the integrals
of the kernels $\ee^{-t\hspace{0.025cm}} \newcommand{\ch}{{\rm ch} M^\mp_2}(r,r')$ against
powers of $r'$. A direct calculation shows that for $\mu\geq 0$
and, respectively, $b<1$ and $b>-1$,
\begin{eqnarray}
\int\limits_0^\infty\ee^{-t\, M_2^-}(r,r')
\,\,{r'}^{\mu}\,d\mu_d({r'})&=&
{_{\Gamma(1+{\mu\over2-\xi}-b)}
\over^{\Gamma(1-b)}}\,\left(4\, Z'\, t\right)^{\mu\over2-\xi}\cr
&&\cdot\,\,{}_{_1}F_{_1}(-{_\mu\over^{2-\xi}},
\, 1-b,\,-{_{r^{2-\xi}}\over^{4\, Z'\,
t}})\,,
\label{norm-}\\
\int\limits_0^\infty\ee^{-t\, M_2^+}(r,r')
\,\,{r'}^{\mu}\,d\mu_d({r'})&=&
{_{\Gamma(1+{\mu\over2-\xi})}
\over^{\Gamma(1+b)}}\,\left(4\, Z'\,
t\right)^{{\mu\over2-\xi}-b}\,\, r^{1-a}\cr
&&\cdot\,\,_{_1}F_{_1}(-{_{\mu}
\over^{2-\xi}}+b,\, 1+b,\,-{_{r^{2-\xi}}
\over^{4\, Z'\, t}})\,.
\label{norm+}
\qqq
A direct calculation also gives the asymptotic
expansion of the kernels $\ee^{-t\, M_2^\mp}(r,r')$
at small $r$:
\begin{eqnarray}
&&\ee^{-t\, M_2^{\mp}}(r,r')\ \ \ =\ \ \
{_{2-\xi}\over^{S_{d-1}\,\Gamma(1\mp b)}}\,\left\{\matrix{
{r'}^{-d+2-\xi}\cr r^{1-a}\,{r'}^{-d+1+a-\xi}}\right\}\cr
&&\hspace{2.5cm}\cdot\ \sum\limits_{j=0}^\infty {_{(-1)^j\,
r^{(2-\xi)j}}
\over^{j!\,(1\mp b)\cdots(j\mp b)\,(4\, Z'\,
t)^{1+j}}}\ \,{_{d^j}\over^{(dz)^j}}\,(\, z^{j\mp b}\,
\ee^{-z}\,)\bigg\vert_{\, z\,=\,{{r'}^{2-\xi}
\over 4\, Z'\, t}}\,.\hspace{1.5cm}
\label{A12}
\qqq
An expansion around $r'=0$ may be obtained similarly.
\vskip 0.3cm
\nappendix{C}
\vskip 0.5cm
Here we shall consider the long time behavior of the integral
of the heat kernel of the operator $M_2^-$:
\begin{eqnarray}
X(t,r,r')\ \equiv\ \smallint\limits_0^t
\ee^{-s\, M_2^-}(r,r')\, ds\,.
\qqq
Using the explicit form (\ref{expo}) of the heat kernel
$\ee^{-s\, M_2^-}(r,r')$, we may rewrite the last definition as
\begin{eqnarray}
X(t,r,r')\ =\ \smallint\limits_0^tds\smallint\limits_0^\infty
\ee^{-sE}\, E^{-{b}}\,G(E,r,r')\, dE\,,
\label{a21}
\qqq
where
\begin{eqnarray}
G(E,r,r')\,=\,
{_{1}\over^{(2-\xi)\, Z\, S_{d-1}}}\,r^{1-a\over 2}\,
E^{{b}}\, J_{-{b}}({\sqrt{E/Z'}}\,{r^{2-\xi\over2}})
\,\, J_{-{b}}({\sqrt{E/Z'}}\,{{r'}^{2-\xi\over2}})
\,\, {r'}^{-d+{3\over2}+{a\over 2}-\xi}\,.\ \quad
\qqq
Note that, by virtue of the relation (\ref{asex}),
$\,G(0,r,r')={_{(2-\xi)\,(4\, Z')^{{b}-1}}
\over^{\Gamma(1-{b})^2
\, S_{d-1}}}\,{r'}^{-d+1+a-\xi}\equiv G_0(r')\,$
and is independent of $r$. For finite times,
integration by parts, accompanied by the change
of variable $sE\leftrightarrow E$, gives the following identity:
\begin{eqnarray}
{b}\, X(t,r,r')\,=\, t^{b}\smallint\limits_0^\infty
\ee^{-E}\, E^{-{b}}\, G(t^{-1}E,r,r')\, dE\,+\,
\smallint\limits_0^tds\smallint\limits_0^\infty
\ee^{-sE}\, E^{1-{b}}\, \partial_{_E}G(E,r,r')\, dE\,.\quad
\label{intbp}
\qqq
For $b<0$, the $t\to\infty$ limit of $X(t,r,r')$
exists and defines the kernel $(M_2^-)^{-1}(r,r')$:
\begin{eqnarray}
\lim\limits_{t\to\infty}\,\,{b}\, X(t,r,r')&=&{b}
\smallint\limits_0^\infty E^{-{b}-1}\, G(E,r,r')\, dE\cr
&=&-\,{_1\over^{(2-\xi)\, Z\,S_{d-1}}}\cases{
\hbox to 4cm{${r'}^{-d+2-\xi}$\hfill}{\rm for\ }
r\leq r'\,,\cr
\hbox to 4cm{$r^{(2-\xi)b}\,{r'}^{-d+(2-\xi)(1-b)}$\hfill}
{\rm for\ }r\geq r'\,,}
\label{a24}
\qqq
compare to Eq.\,\,(\ref{35}). The last expression has been obtained
by direct integration and the condition $b<0$ was required
for the convergence at zero of the $E$-integral (the kernels
$(M_2^\pm)^{-1}(r,r')$ may also be found easily by gluing
the zero modes of $M_2^\pm$). Note that the right
hand side is a real analytic function of ${b}=
{{1-a}\over{2-\xi}}$. Now Eq.\,\,(\ref{intbp})
implies that
\begin{eqnarray}
\lim\limits_{t\to\infty}\,\left[\, {b}\,X(t,r,r')\,-\,
\Gamma(1-{b})\, G_0(r')\, t^{b}\,\right]
\ =\ \smallint\limits_0^\infty
E^{-{b}}\, \partial_{_E}G(E,r,r')\, dE
\qqq
exists for ${b}<1$ and is also real analytic in ${b}$.
On the other hand, it coincides with the long time limit in Eq.\,\,(\ref{a24})
for ${b}<0$ and, consequently, must be given by the right hand side
of this equation for all ${b}<1$. The same arguments work
after integrating the above expressions against
$\chi(r')\, d\mu_d(r')$, and hence establish the convergence of the expression
(\ref{toA}) when $t\to\infty$.
\vskip 0.3cm
\nappendix{D}
\vskip 0.5cm
It is easy to give the exact expressions for the stationary
2-point functions of the scalars $\theta$ and $\rho$
for $\wp<{_{d-2+\xi}\over^{2\,\xi}}$ in the presence
of positive diffusivity $\kappa$. They are
\begin{eqnarray}
F^\theta_2(r)&=&{_1\over^{Z\, S_{d-1}}}\smallint\limits_r^\infty
f(r')\, g(r')\,\chi(r')\, d\mu_d(r')
\,+\,{_1\over^{Z\, S_{d-1}}}\, g(r)\smallint\limits_0^r
f(r')\,\chi(r')\, d\mu_d(r')\,,\label{2pkat}\\
F^\rho_2(r)&=&{_1\over^{Z\, S_{d-1}}}\, f(r)
\smallint\limits_r^\infty g(r')\,\chi(r')\, d\mu_d(r')
\,+\,{_1\over^{Z\, S_{d-1}}}\, f(r)\, g(r)\smallint\limits_0^r
\chi(r')\, d\mu_d(r')\,,
\label{2pkar}
\qqq
where
\begin{eqnarray}
f(r)\,=\,(r^\xi+{_{2\,\kappa}\over^Z})^{^{-d+1+a-\xi\over\xi}}\qquad
{\rm and}\qquad g(r)\,=\,\smallint\limits_r^\infty f(\zeta)^{-1}
\,{_{\zeta^{-d+1}}\over^{\zeta^\xi+{2\,\kappa\over Z}}}\, d\zeta\,.
\qqq
In the limit $\kappa\to0$ these expressions pass into
Eqs.\,\,(\ref{35}) and (\ref{36}), respectively.
For ${_{d-2+\xi}\over^{2\,\xi}}<\wp<
{_d\over^{\xi^2}}$, similarly as at $\kappa=0$,
the 2-point functions $F_2^\theta(t,r)$ and $F_2^\rho(t,r)$
are pumped into the constant and the $f(r)$ zero modes of $M_2^\kappa$
and $(M_2^\kappa)^*$, respectively, and do not reach stationary limits,
although the 2-point structure function of the tracer does.
For $\wp\geq{d\over\xi^2}$ the pumping into
the zero modes reaches a constant rate. In the limit
$\kappa\to0$, the zero mode $f(r)$ into which $F_2^\rho$
is pumped goes to $r^{-d+1+a-\xi}$ for
$\wp<{d\over\xi^2}$ but becomes the delta-function
$\delta(\Nr)$ for $\wp>{d\over\xi^2}$.
\eject
\nappendix{E}
\vskip 0.5cm
Let us prove the explicit expression (\ref{rrsn}) for
the even structure functions in the strongly compressible
regime. Note that
\begin{eqnarray}
S^\theta_{_N}(t,r)\ =\ \sum\limits_{Q\subset\{1,\dots,N\}}
(-1)^{\vert Q^c\vert}\,\, F^\theta_{_N}(t,(\Nr)_{_{q\in Q}},
({\bf 0})_{_{q\in Q^c}})
\label{efsn}
\qqq
where $Q^c$ stands for the complement of $Q$.
In the strongly compressible phase,
by the multiple application of Eq.\,\,(\ref{contr}),
we infer that
\begin{eqnarray}
P^{t,s}_{_N}((\Nr)_{_{q\in Q}},({\bf 0})_{_{q\in Q^c}};\,
\un{\Nr'})&=&\int P^{t,s}_2(\Nr,{\bf 0};\,\Nr''_1,\Nr''_2)\cr
&&\cdot\ \prod\limits_{q\in Q}\delta(\Nr''_1-\Nr'_q)\,
\prod\limits_{q\in Q^c}\delta(\Nr''_2-\Nr'_q)\,\,d\Nr''_1\,
d\Nr''_2\,.
\label{mcontr}
\qqq
Substituting Eq.\,\,(\ref{Nc1}) into the expression
(\ref{efsn}) and using the relations (\ref{mcontr}), we
obtain
\begin{eqnarray}
S^\theta_{_N}(t,r)&=&\sum\limits_{1\leq n<m\leq N}
\int\limits_0^tds\int P_2^{t,s}(\Nr,{\bf 0};\,\Nr''_1,\Nr''_2)\cr
&&\cdot\,\bigg(\sum\limits_{\{n,m\}\subset Q\subset\{1,\dots,N\}}
(-1)^{\vert Q^c\vert}\ F^\theta_{_{N-2}}(s,
(\Nr''_1)_{_{q\in Q\setminus\{n,m\}}},
(\Nr''_2)_{_{q\in Q^c}})\ \chi(0)\cr
&&\ \ +\sum\limits_{n\in Q\subset\{1,\dots,N\}\setminus\{m\}}
(-1)^{\vert Q^c\vert}\ F^\theta_{_{N-2}}(s,
(\Nr''_1)_{_{q\in Q\setminus\{n\}}},
(\Nr''_2)_{_{q\in Q^c\setminus\{m\}}})
\ \chi({\vert\Nr''_1-\Nr''_2\vert})\cr
&&\ \ +\sum\limits_{m\in Q\subset\{1,\dots,N\}\setminus\{n\}}
(-1)^{\vert Q^c\vert}\ F^\theta_{_{N-2}}(s,
(\Nr''_1)_{_{q\in Q\setminus\{m\}}},
(\Nr''_2)_{_{q\in Q^c\setminus\{n\}}})
\ \chi({\vert\Nr''_1-\Nr''_2\vert})\cr
&&\ \ +\sum\limits_{Q\subset\{1,\dots,N\}\setminus\{n,m\}}
(-1)^{\vert Q^c\vert}\ F^\theta_{_{N-2}}(s,(\Nr''_1)_{_{q\in Q}},
(\Nr''_2)_{_{q\in Q^c\setminus\{n,m\}}})\ \chi(0)\bigg)\,d\Nr''_1\,
d\Nr''_2\cr
\cr\cr
&=&\ N(N-1)\, \smallint\limits_0^tds\smallint\limits_0^\infty
P_2^{t,s}(r,r')\,\,S^\theta_{_{N-2}}(s,r')\,\,(\chi(0)-\chi({r'}))\,\,
d\mu_d({r'})\,,
\qqq
which is the sought relation.
\vskip1.5cm
\section{INTRODUCTION}
\label{sec:intro}
We are measuring the peculiar motions of galaxy clusters in the
Hercules-Corona Borealis (HCB) and Perseus-Pisces-Cetus (PPC) regions at
distances between 6000 and 15000\hbox{\,km\,s$^{-1}$}\ using the global properties of
elliptical galaxies. This study (the EFAR project) has as primary goals:
(i)~characterising the intrinsic properties of elliptical galaxies in
clusters by compiling a large and homogeneous sample with high-quality
photometric and spectroscopic data; (ii)~testing possible systematic
errors, such as environmental dependence, in existing elliptical galaxy
distance estimators; (iii)~deriving improved distance estimators based
on a more comprehensive understanding of the properties of ellipticals
and how these are affected by the cluster environment; and
(iv)~determining the peculiar velocity field in regions that are
dynamically independent of the mass distribution within 5000\hbox{\,km\,s$^{-1}$}\ of our
Galaxy in order to test whether the large-amplitude coherent flows seen
locally are typical of bulk motions in the universe.
The background and motivation of this work are discussed in Paper~I of
this series (Wegner \hbox{et~al.}\ 1996), which also describes in detail the
choice of regions to study, the sample of clusters and groups, and the
selection procedure and selection functions of the programme galaxies.
In earlier papers we reported the photoelectric photometry for 352
programme galaxies which underpins the transformation of our CCD data to
the standard $R$ magnitude system (Colless \hbox{et~al.}\ 1993), and described
our technique for correcting for the effects of seeing on our estimates
of length scales and surface brightnesses (Saglia \hbox{et~al.}\ 1993). This
paper (Paper~II) describes the spectroscopic observations and gives
redshifts, velocity dispersions and linestrength indices for the
programme galaxies. The CCD imaging observations of these galaxies, and
their photometric parameters, are described in Paper~III (Saglia \hbox{et~al.}\
1997), while descriptions of the profile fitting techniques used to
determine these parameters (along with detailed simulations establishing
the uncertainties and characterising the systematic errors) are given in
Paper~IV (Saglia \hbox{et~al.}\ 1997). The \hbox{Mg--$\sigma$}\ relation and its implications
are discussed in Paper~V (Colless \hbox{et~al.}\ 1998). Subsequent papers in the
series will explore other intrinsic properties of the galaxies and their
dependence on environment, derive an optimal distance estimator, and
discuss the peculiar motions of the clusters in each of our survey
regions and their significance for models of the large-scale structure
of the universe.
The structure of the present paper is as follows. In \S\ref{sec:obsvns}
we describe the observations and reductions used in obtaining the 1319
spectra in our dataset (1250 spectra for 666 programme galaxies and 69
spectra for 48 calibration galaxies) and discuss the quality of the
data. We explain the techniques by which redshifts, velocity dispersions
and linestrength indices were estimated from the spectra in
\S\ref{sec:analysis}, including the various corrections applied to the
raw values. In \S\ref{sec:results} we describe the method used to
combine data from different runs and evaluate the internal precision of
our results using the large number of repeat measurements in our
dataset. We then give the final values of the spectroscopic parameters
for each galaxy in our sample: we have redshifts for 706 galaxies,
dispersions and \hbox{Mg$b$}\ linestrengths for 676 galaxies and \hbox{Mg$_2$}\
linestrengths for 582 galaxies. We compare our results to previous
studies in the literature to obtain external estimates of our random and
systematic errors. In \S\ref{sec:clusass} we combine our redshifts with
those from ZCAT in order to assign sample galaxies to physical clusters,
and to estimate the mean redshifts and velocity dispersions of these
clusters. Our conclusions are summarised in \S\ref{sec:conclude}.
This paper presents the largest and most homogeneous sample of velocity
dispersions and linestrengths for elliptical galaxies ever obtained. The
precision of our measurements is sufficiently good to achieve the goal
of measuring distances via the Fundamental Plane out to 15000\hbox{\,km\,s$^{-1}$}.
\section{OBSERVATIONS}
\label{sec:obsvns}
The spectroscopic observations for the EFAR project were obtained over a
period of seven years from 1986 to 1993 in a total of 33 observing runs
on 10 different telescopes. In this section we describe the
spectroscopic setups, the observing procedures, the quality of the
spectra and the data-reduction techniques. Further detail on these
points is given by Baggley (1996).
\subsection{Spectroscopic Setups}
\label{ssec:setups}
Table~\ref{tab:obsruns} gives the spectroscopic setup for each run,
including the run number, date, telescope, spectrograph and detector,
wavelength range, spectral dispersion (in \AA/pixel), effective
resolution (in \hbox{\,km\,s$^{-1}$}), and the effective aperture size. Note that two
runs (116 and 130) produced no useful data and are included in
Table~\ref{tab:obsruns} only for completeness. Three runs utilised fibre
spectrographs: runs 127 and 133 used Argus on the CTIO 4m and run 131
used MEFOS on the ESO 3.6m. All the other runs employed longslit
spectrographs, mostly on 2m-class telescopes (MDM Hiltner 2.4m, Isaac
Newton 2.5m, Kitt Peak 2.1m, Siding Spring 2.3m, Calar Alto 2.2m)
although some 4m-class telescopes were also used (Kitt Peak 4m, William
Herschel 4m, the MMT).
\begin{table*}
\centering
\caption{The Spectroscopic Observing Runs}
\label{tab:obsruns}
\begin{tabular}{lccllccclr}
Run & Date & Tele- $^a$ & Spectrograph & Detector &
$\lambda\lambda$ & $\Delta\lambda$ $^b$ &
$\sigma_i$ $^c$ & Aperture $^d$ & $N^e$ \\
~\# & & scope & + grating & & (\AA) & (\AA/pix) & (km/s) & (arcsec) &
\vspace*{6pt} \\
101 & 86/12 & MG24 & MarkIIIa+600B & GEC & 4912--6219 & 2.27 & 145 & 1.9$\times$10\% & 22 \\
102 & 87/03 & MG24 & MarkIIIa+600B & RCA & 4787--6360 & 3.10 & 145 & 1.9$\times$10\% & 58 \\
103 & 87/05 & MG24 & MarkIIIa+600B & RCA & 4809--6364 & 3.07 & 145 & 1.9$\times$10\% & 37 \\
104 & 88/04 & MG24 & MarkIIIa+600V & RCA & 5025--6500 & 2.90 & 125 & 1.9$\times$10\% & 12 \\
105 & 88/06 & MG24 & MarkIIIa+600V & RCA & 5055--6529 & 2.90 & 125 & 1.9$\times$10\% & 37 \\
106 & 88/09 & MG24 & MarkIIIa+600V & Thompson & 5041--6303 & 2.21 & 130 & 1.9$\times$10\% & 23 \\
107 & 88/10 & MG24 & MarkIIIa+600V & RCA & 5048--6522 & 2.90 & 130 & 1.9$\times$10\% & 27 \\
108 & 88/07 & MMTB & BigBlue+300B & Reticon & 3700--7200 & 1.14 & 135 & 2.5 & 10 \\
109 & 88/11 & KP4M & RC+UV-Fast+17B & TI2 & 4890--5738 & 1.07 & 100 & 2.0$\times$3.9 & 104 \\
110 & 88/11 & KP2M & GoldCam+\#240 & TI5 & 4760--5879 & 1.52 & 105 & 2.0$\times$3.9 & 72 \\
111 & 88/11 & MMTB & BigBlue+300B & Reticon & 3890--7500 & 1.34 & 135 & 2.5 & 20 \\
112 & 89/04 & MG24 & MarkIIIb+600V & RCA & 5066--6534 & 2.91 & 130 & 1.7$\times$10\% & 34 \\
113 & 89/06 & MG24 & MarkIIIb+600V & TI4849 & 5126--6393 & 2.18 & 150 & 2.4$\times$10\% & 23 \\
114 & 89/06 & MMTB & BigBlue+300B & Reticon & 3890--7500 & 1.34 & 135 & 2.5 & 12 \\
115 & 89/08 & WHT4 & ISIS-Blue+R600B & CCD-IPCS & 4330--4970 & 0.45 &\n95 & 2.0$\times$3.9 & 8 \\
116$^f$ & 89/10 & MMTR & RedChannel+600B & TI & 4750--5950 & 1.50 & 125 & 1.5$\times$10\% & 7 \\
117 & 89/10 & MG24 & MarkIIIb+600V & Thompson & 5031--6300 & 2.21 & 130 & 1.7$\times$10\% & 61 \\
118 & 89/11 & MG24 & MarkIIIb+600V & RCA & 5018--6499 & 2.89 & 170 & 1.7$\times$10\% & 14 \\
119 & 90/05 & MMTR & RedChannel+600B & TI & 4750--5950 & 1.50 & 125 & 1.5$\times$10\% & 17 \\
120 & 90/10 & IN25 & IDS+235mm+R632V & GEC6 & 4806--5606 & 1.46 & 100 & 1.9$\times$7.2 & 40 \\
121 & 91/05 & IN25 & IDS+235mm+R632V & GEC3 & 4806--5603 & 1.46 & 100 & 1.9$\times$7.2 & 87 \\
122 & 91/10 & MG24 & MarkIIIb+600V & Thompson & 5018--6278 & 2.20 & 125 & 1.7$\times$10\% & 43 \\
123 & 91/11 & IN25 & IDS+235mm+R632V & GEC3 & 4806--5603 & 1.46 & 100 & 1.9$\times$7.2 & 29 \\
124 & 92/01 & MG24 & MarkIIIb+600V & Thompson & 5038--6267 & 2.15 & 125 & 1.7$\times$10\% & 35 \\
125 & 92/06 & MG24 & MarkIIIb+600V & Loral & 4358--7033 & 1.31 & 125 & 1.7$\times$10\% & 57 \\
126 & 92/06 & CA22 & B\&C~spec+\#7 & TEK6 & 4800--6150 & 1.40 & 100 & 5.0$\times$4.2 & 39 \\
127$^g$ & 92/09 & CT4M & Argus+KPGL\#3 & Reticon II& 3877--6493 & 2.19 & 145 & 1.9 & 199 \\
128 & 93/05 & MG24 & MarkIIIb+600V & Loral & 5090--7050 & 1.40 & 105 & 1.7$\times$10\% & 24 \\
129 & 93/06 & MG24 & MarkIIIb+600V & TEK & 4358--6717 & 2.31 & 135 & 1.7$\times$10\% & 3 \\
130$^f$ & 93/06 & SS23 & DBS-blue+600B & PCA & 5015--5555 & 0.80 &\n85 & 2.0$\times$10\% & 0 \\
131$^g$ & 93/10 & ES36 & MEFOS+B\&C+\#26 & TEK512CB & 4850--5468 & 1.22 & 105 & 2.6 & 128 \\
132 & 93/10 & SS23 & DBS-blue+600B & Loral & 4820--5910 & 1.10 &\n80 & 2.0$\times$10\% & 14 \\
133$^g$ & 93/10 & CT4M & Argus+KPGL\#3 & Reticon II& 3879--6485 & 2.19 & 145 & 1.9 & 193 \\
\end{tabular}\vspace*{6pt}
\parbox{\textwidth}{
$^a$ Telescopes: MG24=MDM 2.4m; KP4M/KP2M=KPNO 4m/2m;
WHT4=William Herschel 4m (LPO); IN25=Isaac Newton 2.5m (LPO);
MMTB/MMTR=MMT (blue/red); CA22=Calar Alto 2.2m; CT4M=Cerro Tololo
4m; SS23=Siding Spring 2.3m; ES36=ESO 3.6m. \\
$^b$ Spectral dispersion in \AA/pixel. \\
$^c$ Instrumental resolution ($\sigma$, not FWHM) in \hbox{\,km\,s$^{-1}$}, as
determined from the cross-correlation analysis calibration curves (see
\S\ref{ssec:czsig}). \\
$^d$ The aperture over which the galaxy spectrum was extracted:
diameter for circular apertures and fibres, width$\times$length for
rectangular slits (10\% means the spectrum was extracted out to the
point where the luminosity had fallen to 10\% of its peak value). \\
$^e$ The number of spectra taken in the run. \\
$^f$ These runs produced no useful data. \\
$^g$ These runs used fibre spectrographs. }
\end{table*}
The spectra from almost all runs span at least the wavelength range
5126--5603\AA, encompassing the MgI\,$b$ 5174\AA\ band and the FeI
5207\AA\ and 5269\AA\ features in the restframe for galaxies over the
sample redshift range $cz$$\approx$6000--15000\hbox{\,km\,s$^{-1}$}. The exceptions are
the spectra from runs 115 and 131. Run 115 comprises 8 spectra obtained
at the WHT with the blue channel of the ISIS spectrograph which have a
red wavelength limit of 4970\AA\ (\hbox{i.e.}\ including H$\beta$ but not
\hbox{Mg$b$}). Since we have spectra for all these galaxies from other runs we
do {\em not} use the redshifts and dispersions from run 115. Run 131
comprises 128 spectra obtained at the ESO 3.6m with the MEFOS fibre
spectrograph to a red limit of 5468\AA, including \hbox{Mg$b$}\ and FeI 5207\AA\
over the redshift range of interest, but not FeI 5269\AA\ beyond
$cz$$\approx$11000\hbox{\,km\,s$^{-1}$}. For most of the runs the spectra also encompass
H$\beta$, and several span the whole range from CaI~H+K 3933+3969\AA\ to
NaI~D 5892\AA.
The effective instrumental resolution of the spectra, $\sigma_i$, was
measured from the autocorrelation of stellar template spectra (see
\S\ref{ssec:czsig} below), and ranged from 80 to 170\hbox{\,km\,s$^{-1}$}, with a median
value of 125\hbox{\,km\,s$^{-1}$}. Both longslit and circular entrance apertures were
used. Slits were typically 1.7--2.0~arcsec wide and the spectra were
extracted to the point where the galaxy fell to about 10\% of its peak
value. Circular apertures (in the fibre spectrographs and the MMT Big
Blue spectrograph) were between 1.9 and 2.6~arcsec in diameter. Further
details of the observing setup for each telescope/instrument combination
are given in Appendix~A.
\subsection{Observing Procedures}
\label{ssec:obsproc}
The total integration times on programme galaxies varied considerably
depending on telescope aperture, observing conditions and the magnitude
and surface brightness of the target (our programme galaxies have R band
total magnitudes in the range 10--16). On 2m-class telescopes (with
which the bulk of the spectroscopy was done), exposure times were
usually in the range 30--60~min, with a median of 40~min; on 4m-class
telescopes, exposure times were generally 15--20~min (up to 60~min for
the faintest galaxies) with single-object slit spectrographs, but 60 or
120~min with the fibre spectrographs (where the aim was high $S/N$ and
completeness). Slit orientations were not generally aligned with galaxy
axes. The nominal goal in all cases was to obtain around 500
photons/\AA\ at \hbox{Mg$b$}, corresponding to a $S/N$ per 100\hbox{\,km\,s$^{-1}$}\ resolution
element of about 30. In fact our spectra have a median of 370
photons/\AA\ at \hbox{Mg$b$}, corresponding to a $S/N$ per 100\hbox{\,km\,s$^{-1}$}\ of 26 (see
\S\ref{ssec:quality}).
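The quoted numbers follow directly from shot-noise statistics: a 100\hbox{\,km\,s$^{-1}$}\ resolution element at the observed wavelength of \hbox{Mg$b$}\ is about 1.8\AA\ wide, so $S/N\approx\sqrt{{\rm photons/\AA}\times1.8}$. A quick check (the mid-sample redshift assumed below is ours, purely for illustration):

```python
import math

C_KMS = 299792.458            # speed of light, km/s
LAM_MGB = 5174.0              # rest wavelength of the MgI b band, Angstrom
cz = 9000.0                   # an assumed mid-sample redshift, km/s

lam_obs = LAM_MGB * (1.0 + cz / C_KMS)
bin_width = lam_obs * 100.0 / C_KMS          # 100 km/s element, in Angstrom

for flux in (500.0, 370.0):                  # photons per Angstrom at Mg b
    snr = math.sqrt(flux * bin_width)        # shot-noise-limited S/N
    print(f"{flux:.0f} photons/A  ->  S/N per 100 km/s ~ {snr:.0f}")
```

This reproduces $S/N\approx30$ for 500 photons/\AA\ and $S/N\approx26$ for 370 photons/\AA.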
In each run several G8 to K5 giant stars with known heliocentric
velocities were observed. These `velocity standard stars' are used as
spectral templates for determining redshifts and velocity dispersions.
In observing these standards care was taken to ensure that the
illumination across the slit was uniform, in order both to remove
redshift zeropoint errors and to mimic the illumination produced by a
galaxy, thereby minimising systematic errors in velocity dispersion
estimates. This was achieved in various ways: by defocussing the
telescope slightly, by moving the star back and forth across the slit
several times, or by trailing it up and down the slit. Such procedures
were not necessary for standards obtained with fibre spectrographs, as
internal reflections in the fibres ensure even illumination of the
spectrograph for all sources. Spectra of very high $S/N$ (typically $>$10,000
photons/\AA) were obtained so that the stellar templates did not
contribute to the noise in the subsequent analysis.
The normal calibration exposures were also obtained: bias frames,
flatfields (using continuum lamps internal to the spectrographs or
illuminating the dome) and spectra of wavelength calibration lamps
before and/or after each galaxy or star exposure. In general we did not
make use of spectrophotometric standards as fluxed spectra were not
necessary and we wished to minimise overheads as much as possible.
The calibration procedures were slightly different for the three large
datasets taken using fibre-fed spectrographs at CTIO (runs 127 and 133)
and ESO (run 131). Because of the need to calibrate the relative
throughput of the fibres in order to perform sky subtraction, fibre
observations always included several twilight sky flatfield
exposures. Each velocity standard star was observed through several
fibres by moving the fibres sequentially to accept the starlight.
\subsection{Reductions}
\label{ssec:reduce}
The reductions of both the longslit and fibre observations followed
standard procedures as implemented in the IRAF\footnote{IRAF is
distributed by the National Optical Astronomy Observatories which is
operated by the Association of Universities for Research in Astronomy,
Inc. under contract with the National Science Foundation.}, MIDAS and
Starlink Figaro software packages. We briefly summarise the main steps
in the reduction of our longslit and fibre data below; further details
can be found in Baggley (1996).
The first stage of the reductions, common to all observations, was to
remove the CCD bias using a series of bias frames taken at the start or
end of the night. These frames were median-filtered and the result,
scaled to the mean level of the CCD overscan strip, was subtracted from
each frame in order to remove both the spatial structure in the bias
pedestal and temporal variations in its overall level. We also took long
dark exposures to check for dark current, but in no case did it prove
significant. Subsequent reductions differed somewhat for longslit and
fibre observations.
For longslit data, the next step was the removal of pixel-to-pixel
sensitivity variations in the CCD by dividing by a sensitivity map. This
map was produced by median-filtering the flatfield exposures (of an
internal calibration lamp or dome lamp) and dividing this by a smoothed
version of itself (achieved by direct smoothing or 2D surface fitting)
in order to remove illumination variations in the `flat' field. If
necessary (because of a long exposure time or a susceptible CCD), cosmic
ray events were identified and interpolated over in the two-dimensional
image using either algorithmic or manual methods (or both).
The transformation between wavelength and pixel position in longslit
data was mapped using the emission lines in the comparison lamp
spectra. The typical precision achieved in wavelength calibration, as
indicated by the residuals of the fit to the calibration line positions,
was $\ls$\,0.1\,pixel, corresponding to 0.1--0.3\AA\ or 5--15\hbox{\,km\,s$^{-1}$},
depending on the spectrograph setup (see Table~\ref{tab:obsruns}). The
spectra were then rebinned into equal intervals of $\log\lambda$ so that
each pixel corresponded to a fixed velocity interval, $\Delta v \equiv
c\Delta z = c(10^{\Delta\log\lambda}-1)$, chosen to preserve the full
velocity resolution of the data.
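The rebinning step can be sketched as follows. This is a simplified, interpolation-based version (a flux-conserving rebin would integrate over pixel boundaries instead), and the 60\hbox{\,km\,s$^{-1}$}\ pixel size is chosen purely for illustration:

```python
import numpy as np

def log_rebin(wave, flux, dv_kms):
    """Rebin a linearly sampled spectrum onto a log-wavelength grid in
    which every pixel spans the same velocity interval dv_kms, i.e.
    dv = c (10**dlog - 1). Simplified sketch using linear interpolation."""
    c = 299792.458
    dlog = np.log10(1.0 + dv_kms / c)
    n = int(np.floor((np.log10(wave[-1]) - np.log10(wave[0])) / dlog))
    logw = np.log10(wave[0]) + dlog * np.arange(n + 1)
    new_wave = 10.0**logw
    new_flux = np.interp(new_wave, wave, flux)
    return new_wave, new_flux

# e.g. a flat spectrum between 4800 and 5800 A, rebinned to 60 km/s pixels
w = np.linspace(4800.0, 5800.0, 1000)
nw, nf = log_rebin(w, np.ones_like(w), 60.0)
# consecutive pixel ratios are constant: nw[i+1]/nw[i] - 1 = dv/c
print(np.allclose(np.diff(nw) / nw[:-1], 60.0 / 299792.458))   # True
```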
The final steps in obtaining longslit spectra are sky-subtraction and
extraction. The sky level was measured from two or more regions along
the slit sufficiently far from the target object to be uncontaminated by
its light. To account for variations in transmission along the slit, the
sky under the object was interpolated using a low-order curve fitted to
the slit illumination profile. A galaxy spectrum was then extracted by
summing along the profile, usually over the range where the object's
luminosity was greater than $\sim$10\% of its peak value, but sometimes
over a fixed width in arcsec (see Table~\ref{tab:obsruns}). Standard
star spectra were simply summed over the range along the slit that they
had been trailed or defocussed to cover.
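The sky-interpolation and extraction scheme just described can be sketched as below; the fit order, the 10\% threshold, the choice of sky rows and the synthetic test frame are illustrative assumptions, not the actual pipeline parameters:

```python
import numpy as np

def extract_longslit(frame, sky_rows, order=1, threshold=0.1):
    """Sketch of longslit sky subtraction and extraction: frame is a 2-D
    spectrum (rows = slit position, columns = wavelength). The sky under
    the object is interpolated column by column with a low-order
    polynomial fitted to the designated sky rows, then the galaxy is
    summed over rows brighter than `threshold` of the profile peak."""
    rows = np.arange(frame.shape[0])
    sky = np.empty_like(frame, dtype=float)
    for j in range(frame.shape[1]):                 # each wavelength column
        coeffs = np.polyfit(sky_rows, frame[sky_rows, j], order)
        sky[:, j] = np.polyval(coeffs, rows)
    clean = frame - sky
    profile = clean.sum(axis=1)                     # slit illumination profile
    keep = profile > threshold * profile.max()
    return clean[keep].sum(axis=0)                  # 1-D extracted spectrum

# a toy frame: Gaussian object on a sloping sky background
ny, nx = 40, 100
r = np.arange(ny)
profile_col = 50.0 * np.exp(-0.5 * ((r - 20.0) / 2.0)**2) + 5.0 + 0.1 * r
frame = np.repeat(profile_col[:, None], nx, axis=1)
spec = extract_longslit(frame, sky_rows=np.r_[2:8, 33:39])
print(spec.shape)   # (100,)
```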
For the fibre runs the individual object and sky spectra were extracted
first, using a centroid-following algorithm to map the position of the
spectrum along the CCD. The extraction algorithm fitted the spatial
profile of the fibre, in order to remove cosmic ray events and pixel
defects, and then performed a weighted sum over this fit out to the
points where the flux fell to $\sim$5\% of the peak value. Next, the
dome-illumination flatfield spectra were median-combined and a
sensitivity map for each fibre constructed by dividing each fibre's
flatfield spectrum by the average over all fibres and normalising the
mean of the result to unity. The pixel-to-pixel variations in the CCD
response were then removed by dividing all other spectra from that fibre
by this sensitivity map. Wavelength calibration was accomplished using
the extracted comparison lamp spectra, giving similar precision to the
longslit calibrations, and the object spectra were rebinned to a
$\log\lambda$ scale. Using the total counts through each fibre from the
twilight sky flatfield to give the relative throughputs, the several sky
spectra obtained in each fibre exposure were median-combined (after
manually removing `sky' fibres which were inadvertently placed on faint
objects). The resulting high-$S/N$ sky spectrum, suitably normalised to
each fibre's throughput, was then subtracted from each galaxy or
standard star spectrum.
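The fibre throughput calibration and sky subtraction can be sketched as follows; the function name, array layout and toy spectra are assumptions for illustration:

```python
import numpy as np

def fibre_sky_subtract(spectra, twilight_counts, sky_fibres):
    """Relative fibre throughputs come from the total twilight-flat
    counts; the sky fibres are median-combined after throughput
    normalisation, and the resulting high-S/N sky spectrum is then
    subtracted from every throughput-normalised spectrum."""
    throughput = twilight_counts / np.mean(twilight_counts)
    normed = spectra / throughput[:, None]       # put fibres on a common scale
    master_sky = np.median(normed[sky_fibres], axis=0)
    return normed - master_sky[None, :]          # sky-subtracted, normalised

# toy setup: five fibres, one on an object, four on blank sky
lam = np.linspace(4850.0, 5470.0, 200)
sky = 10.0 + 5.0 * np.sin(lam / 30.0)
thr = np.array([1.0, 0.8, 1.2, 0.9, 1.1])        # relative fibre throughputs
obj = 20.0 * np.exp(-0.5 * ((lam - 5174.0) / 5.0)**2)
spectra = thr[:, None] * (sky[None, :]
                          + np.vstack([obj] + [np.zeros_like(lam)] * 4))
out = fibre_sky_subtract(spectra, thr * sky.sum(), sky_fibres=[1, 2, 3, 4])
```

After the call, the sky fibres are consistent with zero and the object fibre retains only the object spectrum.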
The final step in the reductions for both longslit and fibre data was to
manually clean all the one-dimensional spectra of remaining cosmic ray
events or residual sky lines (usually only the 5577\AA\ line) by
linearly interpolating over affected wavelengths.
\subsection{Spectrum Quality}
\label{ssec:quality}
We have two methods for characterising the quality of our spectra. One
is a classification of the spectra into 5 quality classes, based on our
experience in reducing and analysing such data. Classes A and B indicate
that both the redshift and the velocity dispersion are reliable (with
class A giving smaller errors than class B); class C spectra have
reliable redshifts and marginally reliable dispersions; class D spectra
have marginally reliable redshifts but unreliable dispersions; class E
spectra have neither redshifts nor dispersions. The second method is
based on the $S/N$ ratio per 100\hbox{\,km\,s$^{-1}$}\ bin, estimated approximately from
the mean flux over the restframe wavelength range used to determine the
redshifts and dispersions (see \S\ref{ssec:czsig}) under the assumption
that the spectrum is shot-noise dominated. These two measures of
spectral quality are complementary: the $S/N$ estimate is objective but
cannot take into account qualitative problems which are readily
incorporated in the subjective classifications. Figure~\ref{fig:egspec}
shows example spectra covering a range of quality classes and
instrumental resolutions.
\begin{figure*}
\plotfull{egspec.ps}
\caption{Example spectra covering a range of quality classes and
instrumental resolutions: the top, middle and bottom rows are spectra
with quality classes A, B and C respectively; the left, central and
right columns are spectra with resolutions 100, 125 and 145\hbox{\,km\,s$^{-1}$}\
respectively. The label for each spectrum gives the galaxy name, the
GINRUNSEQ spectrum identifier, the instrumental resolution, the S/N and
quality class of the spectrum, the redshift, the dispersion and its
estimated error. Note that the panels show relative flux and have a
false zero for viewing convenience.
\label{fig:egspec}}
\end{figure*}
Figure~\ref{fig:snrq} shows the $S/N$ distribution for the whole sample
and for each quality class individually, and gives the total number of
objects, the fraction of the sample and the median $S/N$ in each class.
For the whole sample, 39\% of the spectra have $S/N$$>$30, 70\% have
$S/N$$>$20, and 96\% have $S/N$$>$10. The two quality measures are
clearly correlated, in the sense that better-classed spectra tend to
have higher $S/N$. However there is also considerable overlap in the
$S/N$ range spanned by the different classes. This overlap has various
sources: (i)~factors other than $S/N$ which affect the quality of the
redshift and dispersion estimates, notably the available restframe
spectral range (which depends on both the spectrograph setup and the
redshift of the target) and whether the object has emission lines;
(ii)~errors in estimating the $S/N$ (\hbox{e.g.}\ due to sky subtraction errors,
the neglect of the sky contribution in computing the $S/N$ for fainter
galaxies, or uncertainties in the CCD gain affecting the conversion
from counts to photons); (iii)~subjective uncertainties in the quality
classification, particularly in determining the reliability of
dispersion estimates (\hbox{i.e.}\ between classes B and C). Both ways of
determining spectral quality are therefore needed in order to estimate
the reliability and precision of the spectroscopic parameters we
measure.
\begin{figure}
\plotone{snrq.ps}
\caption{The distribution of $S/N$ with quality class. For each class
the panels give the total number of spectra, the percentage of the
whole sample and the median $S/N$.
\label{fig:snrq}}
\end{figure}
\section{ANALYSIS}
\label{sec:analysis}
\subsection{Redshifts and Dispersions}
\label{ssec:czsig}
We derived redshifts and velocity dispersions from our spectra using the
{\tt fxcor} utility in IRAF, which is based on the cross-correlation
method of Tonry \& Davis (1979). We preferred this straightforward and
robust method to more elaborate techniques since it is well-suited to
the relatively modest $S/N$ of our spectra. We used a two-step
procedure, obtaining an initial estimate of the redshift using the whole
available spectrum and then using a fixed restframe wavelength range for
the final estimates of redshift and velocity dispersion. The procedure
was applied in a completely uniform manner to all the spectra in our
sample as far as differences in wavelength range and resolution would
allow.
The first step in the cross-correlation analysis is to fit and subtract
the continuum of each spectrum in order to avoid the numerical
difficulties associated with a dominant low-frequency spike in the
Fourier transform. In the first pass through {\tt fxcor} the continuum
shape was fitted with a cubic spline with the number of segments along
the spectrum chosen so that each segment corresponded to about
8000\hbox{\,km\,s$^{-1}$}. Each iteration of the fit excluded points more than
1.5$\sigma$ below or 3$\sigma$ above the previous fit. In this way we
achieved a good continuum fit without following broad spectral
features. We then apodised 10\% of the spectrum at each end with a
cosine bell before padding the spectrum to 2048 pixels with zeros.
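These preparation steps (iterated spline fit with asymmetric clipping, cosine-bell apodisation, zero-padding) can be sketched as below; the knot placement and iteration control are assumptions, since the internals of {\tt fxcor} are not reproduced here:

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

def prepare_spectrum(loglam, flux, seg_kms=8000.0, niter=5):
    """Continuum-subtract and apodise a log-lambda binned spectrum:
    an iterated cubic-spline fit with ~seg_kms-wide segments and
    asymmetric clipping (1.5 sigma below, 3 sigma above), followed by
    a 10% cosine bell at each end and zero-padding to 2048 pixels
    (assumes the input has at most 2048 pixels)."""
    c = 299792.458
    nseg = max(int(round((loglam[-1] - loglam[0]) * c * np.log(10.0)
                         / seg_kms)), 1)
    knots = loglam[0] + (loglam[-1] - loglam[0]) * np.arange(1, nseg) / nseg
    good = np.ones(flux.size, dtype=bool)
    for _ in range(niter):
        spl = LSQUnivariateSpline(loglam[good], flux[good], knots, k=3)
        resid = flux - spl(loglam)
        sigma = resid[good].std()
        good = (resid > -1.5 * sigma) & (resid < 3.0 * sigma)
    out = flux - spl(loglam)
    nape = int(0.1 * out.size)                      # 10% cosine-bell apodisation
    bell = 0.5 * (1.0 - np.cos(np.pi * np.arange(nape) / nape))
    out[:nape] *= bell
    out[-nape:] *= bell[::-1]
    return np.pad(out, (0, 2048 - out.size))

# toy spectrum: smooth continuum plus small noise
rng = np.random.default_rng(0)
loglam = np.log10(np.linspace(4800.0, 5800.0, 1000))
flux = 2.0 + 3.0 * (loglam - loglam[0]) + 0.01 * rng.standard_normal(1000)
out = prepare_spectrum(loglam, flux)
print(out.size)   # 2048
```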
This continuum-subtracted, apodised spectrum was then Fourier
transformed and a standard `ramp' filter applied. This filter is
described by 4 wavenumbers $(k_1,k_2,k_3,k_4)$, rising linearly from 0
to 1 between $k_1$ and $k_2$ and then falling linearly from 1 to 0
between $k_3$ and $k_4$. In the first pass these wavenumbers were chosen
to be $k_1$=4--8 and $k_2$=9--12 (tailored to remove residual power from
the continuum without affecting broad spectral features), and
$k_3$=$N_{pix}$/3 and $k_4$=$N_{pix}$/2 ($N_{pix}$ is the number of
pixels in the original spectrum before it is padded to 2048 pixels; these
choices attenuate high-frequency noise and eliminate power beyond the
Nyquist limit at $N_{pix}$/2). The same procedures were also applied to
the spectrum of the stellar velocity standard to be used as a
template. The cross-correlation of the galaxy and stellar template was
then computed, and the top 90\% of the highest cross-correlation peak
fitted with a Gaussian in order to obtain a redshift estimate.
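A minimal sketch of this filtered cross-correlation (after Tonry \& Davis 1979) is given below; it differs from the pipeline in refining the peak with a parabola rather than fitting a Gaussian to the top 90\%, and the filter wavenumbers are illustrative:

```python
import numpy as np

def ramp_filter(n, k1, k2, k3, k4):
    """The (k1,k2,k3,k4) ramp described in the text: zero below k1,
    rising linearly to 1 at k2, flat to k3, falling to zero at k4."""
    k = np.arange(n, dtype=float)
    f = np.clip((k - k1) / (k2 - k1), 0.0, 1.0)
    f *= np.clip((k4 - k) / (k4 - k3), 0.0, 1.0)
    return f

def xcor_shift(galaxy, template, k=(5, 10, 340, 512), n=2048):
    """Filtered cross-correlation of two log-lambda binned,
    continuum-subtracted spectra; returns the shift (in pixels) of the
    highest correlation peak, refined with a parabola. Multiply by the
    km/s per pixel of the log-lambda binning to get a velocity."""
    G = np.fft.rfft(galaxy, n)
    T = np.fft.rfft(template, n)
    filt = ramp_filter(G.size, *k)
    cc = np.fft.irfft(filt * G * np.conj(T), n)
    p = int(np.argmax(cc))
    y1, y2, y3 = cc[p - 1], cc[p], cc[(p + 1) % n]
    delta = 0.5 * (y1 - y3) / (y1 - 2.0 * y2 + y3)
    lag = p if p <= n // 2 else p - n
    return lag + delta

# recover a known integer-pixel shift between two toy absorption spectra
x = np.arange(1024, dtype=float)
template = (-np.exp(-0.5 * ((x - 300) / 4.0)**2)
            - 0.7 * np.exp(-0.5 * ((x - 600) / 5.0)**2))
galaxy = np.roll(template, 7)
print(round(xcor_shift(galaxy, template)))   # 7
```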
This procedure was repeated for every template from that run, and the
redshifts corrected to the heliocentric frame. Offsets in the velocity
zeropoint between templates, measured as the mean difference in the
redshifts measured with different templates for all the galaxies in the
run, were typically found to be $\ls$\,30\hbox{\,km\,s$^{-1}$}. These were brought into
relative agreement within each run by choosing the best-observed K0
template as defining the fiducial velocity zeropoint. Applying these
offsets brought the galaxy redshifts estimated from different templates
into agreement to within $\ls$\,3\hbox{\,km\,s$^{-1}$}. (The removal of run-to-run
velocity offsets is described below.) The mean over all templates then
gave the initial redshift estimate for the galaxy.
This initial redshift was then used to determine the wavelength range
corresponding to the restframe range $\lambda_{min}$=4770\AA\ to
$\lambda_{max}$=5770\AA. This range was chosen for use in the second
pass through {\tt fxcor} because: (i)~it contains the MgI\,$b$
5174\AA\ band, H$\beta$ 4861\AA\ and the FeI 5207\AA\ and 5269\AA\
lines, but excludes the NaI~D line at 5892\AA, which gives larger
velocity dispersions than the lines in the region of \hbox{Mg$b$}\ (Faber \&
Jackson 1976); (ii)~for redshifts up to our sample limit of
$cz$=15000\hbox{\,km\,s$^{-1}$}\ this restframe wavelength range is included in the
great majority of our spectra. The input for the second pass was thus
the available spectrum within the range corresponding to restframe
4770--5770\AA. All but two of our runs cover the restframe out to at
least 5330\AA\ for $cz$=15000\hbox{\,km\,s$^{-1}$}; the exceptions are run 115 (which
is not used for measuring dispersions) and run 131 (which reaches
restframe 5207\AA).
In the second pass through {\tt fxcor} we employed only minimal
continuum subtraction based on a 1- or 2-segment cubic spline fit,
preferring the better control over continuum suppression afforded by
more stringent filtering at low wavenumbers. After considerable
experimentation and simulation, we found that the best filter for
recovering velocity dispersions was a ramp with the same $k_3$ and
$k_4$ values as in the first pass, but with
$k_2$=$0.01(N_{max}-N_{min})$, where $N_{min}$ and $N_{max}$ are the
pixels corresponding to $\lambda_{min}$ and $\lambda_{max}$, and
$k_1$=0.75$k_2$. Again, the top 90\% of the highest cross-correlation
peak was fitted with a Gaussian. The position of this peak, corrected
for the motion of the template star and the heliocentric motion of the
earth relative to both the template and the galaxy, gave the final
redshift estimate.
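For reference, the second-pass ramp wavenumbers follow directly from the prescription above (a trivial sketch; the helper name is ours):

```python
def second_pass_wavenumbers(n_min, n_max, n_pix):
    """Second-pass ramp wavenumbers: k2 = 0.01(Nmax - Nmin), k1 = 0.75 k2,
    with the same high-frequency cutoffs as the first pass."""
    k2 = 0.01 * (n_max - n_min)
    k1 = 0.75 * k2
    k3 = n_pix / 3
    k4 = n_pix / 2
    return k1, k2, k3, k4
```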
The galaxy's velocity dispersion, $\sigma_g$, is in principle related to
the dispersion of the Gaussian fitted to the cross-correlation peak,
$\sigma_x$, by $\sigma_x^2 = \sigma_g^2 + 2\sigma_i^2$ (where $\sigma_i$
is the instrumental resolution; Tonry \& Davis 1979). In practice this
relationship needs to be calibrated empirically because of the imperfect
match between the spectra of a broadened stellar template and a galaxy
and the effects of the filter applied to both spectra. The calibration
relation between $\sigma_x$ and $\sigma_g$ for a typical case is shown
in Figure~\ref{fig:calib} (see caption for more details). We estimate
the instrumental resolution for a given run from the mean value of the
calibration curve intercepts for all the templates in the run
($\sigma_i\approx\sigma_x/\sqrt{2}$ when $\sigma_g$=0); these are the
values listed in Table~\ref{tab:obsruns}.
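In the idealised (unfiltered) case the relation can be inverted directly; in practice the empirical piecewise-linear calibration curves are used instead. A minimal sketch of the theoretical relation (function names are ours):

```python
import math

def sigma_galaxy(sigma_x, sigma_i):
    """Invert the theoretical relation sigma_x^2 = sigma_g^2 + 2*sigma_i^2
    (Tonry & Davis 1979); returns 0 when the peak width falls below the
    instrumental floor, where the true dispersion is unrecoverable."""
    val = sigma_x**2 - 2.0 * sigma_i**2
    return math.sqrt(val) if val > 0.0 else 0.0

def instrumental_resolution(sigma_x_intercept):
    """Instrumental resolution from the calibration-curve intercept at
    sigma_g = 0, where sigma_i ~ sigma_x / sqrt(2)."""
    return sigma_x_intercept / math.sqrt(2.0)
```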
\begin{figure}
\plotone{calib.ps}
\caption{A typical calibration curve showing the relation between the
width of the cross-correlation peak, $\sigma_x$, and the true velocity
dispersion of the galaxy, $\sigma_g$. The crosses are the individual
calibrations obtained by broadening each of the other templates in the
run and cross-correlating with the template being calibrated. The
solid curve is the calibration curve used, a series of linear segments
joining the median value of $\sigma_x$ at each calibrated value of
$\sigma_g$. The dashed curve is the theoretical relation when no
filtering is applied, $\sigma_x^2 = \sigma_g^2 + 2\sigma_i^2$, where
$\sigma_i$ is the instrumental resolution, in this case 145\hbox{\,km\,s$^{-1}$}. Note
that the calibration curve flattens for $\sigma_g<\sigma_i$,
indicating that the true dispersion becomes increasingly difficult to
recover as it drops below the instrumental resolution.
\label{fig:calib}}
\end{figure}
The values of heliocentric radial velocity and velocity dispersion were
determined in this second pass through {\tt fxcor} for each galaxy
spectrum using all the templates in the same run. The final step is then
to combine the redshift and dispersion estimates from each template, as
summarised below.
For the redshifts the steps involved were as follows: (i)~Cases where
the ratio of cross-correlation function peak height to noise (the $R$
parameter defined by Tonry \& Davis 1979) was less than 2 were rejected,
as were cases that differed from the median by more than a few hundred
\hbox{\,km\,s$^{-1}$}. (ii)~The mean offset between the redshifts from a fiducial K0
template and each other template was used to shift all the redshifts
from the other template to the velocity zeropoint of the fiducial. These
offsets were typically $\ls$\,50\hbox{\,km\,s$^{-1}$}. (iii)~A mean redshift for each
galaxy was then computed from all the unrejected cases using 2-pass
2$\sigma$ clipping. (iv)~Any template which gave consistently discrepant
results was rejected and the entire procedure repeated. The scatter in
the redshift estimates from different templates after this procedure was
typically a few \hbox{\,km\,s$^{-1}$}.
A very similar procedure was followed in combining velocity dispersions
except that a scale factor rather than an offset was applied between
templates: (i)~Cases with $R$$<$4 were rejected. (ii)~The mean ratio
between the dispersions from a fiducial K0 template and each other
template was used to scale all the dispersions from the other template
to the dispersion scale of the fiducial. These dispersion scales
differed by less than 5\% for 90\% of the templates. (iii)~A mean
dispersion for each galaxy was then computed from all the unrejected
cases using 2-pass 2$\sigma$ clipping. (iv)~Any template with a scale
differing by more than 10\% from the mean was rejected as being a poor
match to the programme galaxies and the entire procedure was then
repeated. (Note that no significant correlation was found between scale
factor and spectral type over the range G8 to K5 spanned by our
templates.) The scatter in the dispersion estimates from different
templates after this procedure was typically 3--4\%.
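The 2-pass 2$\sigma$ clipping used in step~(iii) of both procedures can be sketched as follows (our illustration; the actual pipeline scripts are not reproduced here):

```python
import numpy as np

def clipped_mean(values, nsigma=2.0, passes=2):
    """Mean after `passes` rounds of nsigma clipping about the
    current mean of the surviving values."""
    v = np.asarray(values, dtype=float)
    keep = np.ones(len(v), dtype=bool)
    for _ in range(passes):
        m, s = v[keep].mean(), v[keep].std()
        if s == 0:
            break  # all survivors identical; nothing left to clip
        keep = np.abs(v - m) <= nsigma * s
    return v[keep].mean()
```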
Two corrections need to be applied to the velocity dispersions before
they are fully calibrated: (i)~an aperture correction to account for
different effective apertures sampling different parts of the galaxy
velocity dispersion profile, and (ii)~a run correction to remove
systematic scale errors between different observing setups. The latter
type of correction is also applied to the redshifts to give them a
common zeropoint. These two corrections are discussed below at
\S\ref{ssec:apcorr} and \S\ref{ssec:combruns} respectively.
\subsection{Linestrength Indices}
\label{ssec:indices}
Once redshifts and velocity dispersions were determined, linestrength
indices could also be measured using the prescription given by
Gonz\'{a}lez (1993). This is a refinement of the original `Lick' system
in which a standard set of bands was defined for measuring linestrength
indices for 11 features in the spectra of spheroidal systems (Burstein
\hbox{et~al.}\ 1984). Gonz\'{a}lez (1993), Worthey (1993) and Worthey \hbox{et~al.}\
(1994) describe how this system has been updated and expanded to a set of
21 indices. Here we measure both the \hbox{Mg$b$}\ and \hbox{Mg$_2$}\ indices.
The feature bandpass for the \hbox{Mg$b$}\ index is 5160.1--5192.6\AA, encompassing
the Mg~I triplet with components at 5166.6\AA, 5172.0\AA\ and
5183.2\AA. The continuum on either side of the absorption feature is
defined in bands covering 5142.6--5161.4\AA\ and 5191.4--5206.4\AA.
\hbox{Mg$b$}\ is an {\em atomic} index, and so is defined as the equivalent
width of the feature in {\AA}ngstroms,
\begin{equation}
\hbox{Mg$b$} = \int\,\left(1-\frac{S(\lambda)}{C(\lambda)}\right)\,d\lambda ~,
\label{eqn:mgbdef}
\end{equation}
where the integral is over the feature bandpass, $S(\lambda)$ is the
object spectrum and $C(\lambda)$ is the linear pseudo-continuum, defined
by interpolating between two continuum estimates placed at the midpoints
of the blue and red continuum bands, each estimate being the mean value
of the observed spectrum in its band.
Closely related to \hbox{Mg$b$}\ is the \hbox{Mg$_2$}\ index, for which the feature
bandpass is 5154.1--5196.6\AA\ and the continuum bands are
4895.1--4957.6\AA\ and 5301.1--5366.1\AA. This index measures both the
Mg~I atomic absorption and the broader MgH molecular absorption
feature. \hbox{Mg$_2$}\ is a {\em molecular} index, and so is defined as the
mean ratio of flux to local continuum in magnitudes,
\begin{equation}
\hbox{Mg$_2$} = -2.5\log_{10}\left(\frac{\int\,S(\lambda)/C(\lambda)\,d\lambda}
{\Delta\lambda}\right) ~,
\label{eqn:mg2def}
\end{equation}
where the integral is over the \hbox{Mg$_2$}\ feature bandpass,
$\Delta\lambda$=42.5\AA\ is the width of that bandpass, and the
pseudo-continuum is interpolated from the \hbox{Mg$_2$}\ continuum bands.
In fact we will often find it convenient to express the \hbox{Mg$b$}\ index in
magnitudes rather than as an equivalent width. By analogy with the
\hbox{Mg$_2$}\ index, we therefore define \hbox{Mg$b^\prime$}\ to be
\begin{equation}
\hbox{Mg$b^\prime$} = -2.5\log_{10}\left(1-\frac{\hbox{Mg$b$}}{\Delta\lambda}\right) ~,
\label{eqn:mgbprime}
\end{equation}
where in this case $\Delta\lambda$=32.5\AA, the width of the \hbox{Mg$b$}\
feature bandpass.
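Equations~\ref{eqn:mgbdef}--\ref{eqn:mgbprime} translate directly into code. The following Python sketch (helper names are ours) illustrates the measurements on a wavelength-calibrated spectrum, using the band limits quoted above:

```python
import numpy as np

def _trapz(y, x):
    """Trapezoidal integral of samples y over abscissae x."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def pseudo_continuum(wave, flux, blue, red):
    """Linear pseudo-continuum through the band midpoints, at the mean
    flux of each continuum band."""
    points = []
    for lo, hi in (blue, red):
        sel = (wave >= lo) & (wave <= hi)
        points.append((0.5 * (lo + hi), flux[sel].mean()))
    (x1, y1), (x2, y2) = points
    return y1 + (y2 - y1) * (wave - x1) / (x2 - x1)

def mgb_index(wave, flux):
    """Atomic index: equivalent width (Angstroms) over the Mgb bandpass."""
    cont = pseudo_continuum(wave, flux, (5142.6, 5161.4), (5191.4, 5206.4))
    sel = (wave >= 5160.1) & (wave <= 5192.6)
    return _trapz(1.0 - flux[sel] / cont[sel], wave[sel])

def mg2_index(wave, flux):
    """Molecular index: mean flux/continuum over the bandpass, in mag."""
    cont = pseudo_continuum(wave, flux, (4895.1, 4957.6), (5301.1, 5366.1))
    sel = (wave >= 5154.1) & (wave <= 5196.6)
    ratio = _trapz(flux[sel] / cont[sel], wave[sel]) / 42.5
    return -2.5 * np.log10(ratio)

def mgb_prime(mgb, dlam=32.5):
    """Mgb re-expressed in magnitudes (the Mgb' definition)."""
    return -2.5 * np.log10(1.0 - mgb / dlam)
```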
In passing it should be noted that a different definition of
linestrength indices has sometimes been used (\hbox{e.g.}\ Worthey 1994,
equations~4 and~5) in which the integral of the ratio of the object
spectrum and the continuum in equations~\ref{eqn:mgbdef}
and~\ref{eqn:mg2def} is replaced by the ratio of the integrals. This
alternative definition has merits (such as simplifying the error
properties of measured indices), but it is not mathematically equivalent
to the standard definition. In practice, however, the two definitions
generally give linestrengths with negligibly different numerical values.
It is usual in studies of this sort to employ the \hbox{Mg$_2$}\ index as the
main indicator of metallicity and star-formation history. However we
find it useful for operational reasons to also measure the \hbox{Mg$b$}\
index. One problem is that the limited wavelength coverage of the
spectra from some runs means that in a number of cases we cannot measure
the \hbox{Mg$_2$}\ index (requiring as it does a wider wavelength range)
although we can measure the \hbox{Mg$b$}\ index. We obtain \hbox{Mg$b$}\ for 676 objects
(with 299 having repeat measurements) and \hbox{Mg$_2$}\ for 582 objects (with
206 having repeat measurements). Another problem with \hbox{Mg$_2$}\ is that
the widely-separated continuum bands make it more susceptible than \hbox{Mg$b$}\
to variations in the non-linear continuum shape of our unfluxed spectra,
which result from using a variety of different instruments and observing
galaxies over a wide range in redshift. We therefore present
measurements of both \hbox{Mg$b$}\ and \hbox{Mg$_2$}: the former because it is
better-determined and available for more sample galaxies, the latter for
comparison with previous work. As previously demonstrated (Gorgas \hbox{et~al.}\
1990, J{\o}rgensen 1997) and confirmed here, \hbox{Mg$b$}\ and \hbox{Mg$_2$}\ are
strongly correlated, and so can to some extent be used interchangeably.
Several corrections must be applied to obtain a linestrength measurement
that is calibrated to the standard Lick system. The first correction
allows for the fact that the measured linestrength depends on the
instrumental resolution. Since all our spectra were obtained at higher
resolution than the spectra on which the Lick system was defined, we
simply convolve our spectra with a Gaussian of dispersion
$(\sigma_{Lick}^2-\sigma_i^2)^{1/2}$ in order to broaden our
instrumental resolution $\sigma_i$ (see Table~\ref{tab:obsruns}) to the
Lick resolution of 200\hbox{\,km\,s$^{-1}$}.
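This resolution correction amounts to a single Gaussian convolution. A minimal sketch, assuming a spectrum binned at a constant velocity step per pixel (the binning value here is illustrative):

```python
import numpy as np

def broaden_to_lick(flux, sigma_i, sigma_lick=200.0, dv_pix=50.0):
    """Convolve with a Gaussian of dispersion sqrt(sigma_lick^2 - sigma_i^2)
    in km/s, for a spectrum with dv_pix km/s per pixel."""
    if sigma_i >= sigma_lick:
        return flux.copy()  # already at or below the Lick resolution
    sig_pix = np.sqrt(sigma_lick**2 - sigma_i**2) / dv_pix
    half = int(4 * sig_pix) + 1
    x = np.arange(-half, half + 1)
    kernel = np.exp(-0.5 * (x / sig_pix) ** 2)
    kernel /= kernel.sum()  # preserve total flux
    return np.convolve(flux, kernel, mode='same')
```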
The second correction allows for the fact that the measured linestrength
depends on the galaxy's internal velocity dispersion---a galaxy with
high enough velocity dispersion $\sigma_g$ will have features broadened
to the point that they extend outside their index bandpasses, and so
their linestrengths will be underestimated. Moreover, if an absorption
feature is broadened into the neighbouring continuum bands then the
estimated continuum will be depressed and the linestrength will be
further reduced. The `$\sigma$-correction' needed to calibrate out this
effect can be obtained either by measuring linestrength as a function of
velocity broadening for a set of suitable stellar spectra (such as the
templates obtained for measuring redshifts and dispersions) or by
modelling the feature in question.
Although most previous studies have adopted the former approach, we
prefer to use a model to calibrate our indices, since we observe a
dependence of the \hbox{Mg$b$}\ profile shape on $\sigma$ that is not taken into
account by simply broadening stellar templates. Our simple model assumes
\hbox{Mg$b$}\ to be composed of three Gaussians centred on the three Mg~I lines
at $\lambda_b$=5166.6\AA, $\lambda_c$=5172.0\AA\ and
$\lambda_r$=5183.2\AA\ with corresponding relative strengths varying
linearly with dispersion from 1.0:1.0:1.0 at $\sigma$=100\hbox{\,km\,s$^{-1}$}\ to
0.2:1.0:0.7 at $\sigma$=300\hbox{\,km\,s$^{-1}$}. This dependence on dispersion is
empirically determined and approximate (the relative strengths of the
individual lines are not tightly constrained), but it does significantly
improve the profile fits compared to assuming any fixed set of relative
weights. Such variation of the \hbox{Mg$b$}\ profile shape reflects changes, as
a function of velocity dispersion, in the stellar population mix and
relative abundances (particularly of Mg, C, Fe and Cr), which each
affect the profile in complex ways (Tripicco \& Bell 1995).
Using the estimated value of the index to normalise the model profile
and the effective dispersion $(\sigma_g^2+\sigma_{Lick}^2)^{1/2}$ to
give the broadening, we can estimate both the profile flux which is
broadened out of the feature bandpass and the resulting depression of
the continuum. Correcting for both these effects gives an improved
estimate for the linestrength. Iterating leads rapidly to convergence
and an accurate $\sigma$-correction for the \hbox{Mg$b$}\ and \hbox{Mg$_2$}\
indices. We find that the \hbox{Mg$b$}\ $\sigma$-correction is typically +4\% at
100\hbox{\,km\,s$^{-1}$}\ and increases approximately linearly to +16\% at 400\hbox{\,km\,s$^{-1}$}; the
\hbox{Mg$_2$}\ $\sigma$-correction is typically 0.000~mag up to 200\hbox{\,km\,s$^{-1}$}\ and
increases approximately linearly to 0.004~mag at 400\hbox{\,km\,s$^{-1}$}.
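The triple-Gaussian model can be sketched as follows (the linear interpolation of the relative strengths follows the description above; the normalisation and helper names are ours):

```python
import numpy as np

# Mg I triplet wavelengths (Angstroms)
LINES = (5166.6, 5172.0, 5183.2)

def mgb_weights(sigma):
    """Relative line strengths varying linearly from 1.0:1.0:1.0 at
    100 km/s to 0.2:1.0:0.7 at 300 km/s (clipped outside that range)."""
    f = np.clip((sigma - 100.0) / 200.0, 0.0, 1.0)
    w100 = np.array([1.0, 1.0, 1.0])
    w300 = np.array([0.2, 1.0, 0.7])
    return (1 - f) * w100 + f * w300

def mgb_model(wave, sigma, c=2.99792458e5):
    """Normalised Mgb absorption profile: three Gaussians broadened to
    dispersion sigma (km/s), weighted by mgb_weights(sigma)."""
    prof = np.zeros_like(wave, dtype=float)
    for lam, w in zip(LINES, mgb_weights(sigma)):
        sig_lam = lam * sigma / c  # velocity dispersion -> wavelength units
        prof += w * np.exp(-0.5 * ((wave - lam) / sig_lam) ** 2)
    return prof / prof.max()
```

Normalising this profile by the estimated index and integrating it outside the feature and continuum bands gives the flux-loss and continuum-depression terms of the iteration described above.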
Note that the usual method of determining the $\sigma$-correction by
broadening standard stars ignores the dependence of profile shape on
changes in the stellar population mix as a function of luminosity or
velocity dispersion. Our tests indicate that by doing so, the usual
method tends to overestimate \hbox{Mg$b$}\ for galaxies with large dispersions:
by 2\% at 200\hbox{\,km\,s$^{-1}$}, 6\% at 300\hbox{\,km\,s$^{-1}$}\ and 14\% at 400\hbox{\,km\,s$^{-1}$}. The two methods
give essentially identical results for \hbox{Mg$_2$}, since it has much smaller
$\sigma$-corrections due to its wider feature bandpass and
well-separated continuum bands.
The other corrections that need to be applied to the linestrength
estimates are: (i)~an aperture correction to account for different
effective apertures sampling different parts of the galaxy
(\S\ref{ssec:apcorr}); (ii)~a run correction to remove systematic scale
errors between different observing setups (\S\ref{ssec:combruns}); and
(iii)~an overall calibration to the Lick system determined by
comparisons with literature data (\S\ref{ssec:compare}).
\subsection{Error Estimates}
\label{ssec:errors}
Error estimates for our redshifts, velocity dispersions and
linestrengths come from detailed Monte Carlo simulations of the
measurement process for each observing run. By calibrating the errors
estimated from these simulations against the rms errors obtained from
the repeat measurements that are available for many of the objects (see
\S\ref{ssec:caliberr}), we can obtain precise and reliable error
estimates for each measurement of every object in our sample.
The procedure for estimating the uncertainties in our redshifts and
velocity dispersions was as follows. For each stellar template in each
observing run, we constructed a grid of simulated spectra with Gaussian
broadenings of 100--300\hbox{\,km\,s$^{-1}$}\ in 20\hbox{\,km\,s$^{-1}$}\ steps and continuum counts
corresponding to $S/N$ ratios of 10--90 in steps of 10. For each
spectrum in this grid we generated 16 realisations assuming Poisson
noise. These simulated spectra were then cross-correlated against all
the other templates from the run in order to derive redshifts and
velocity dispersions in the standard manner. The simulations do not
account for spectral mismatch between the galaxy spectra and the stellar
templates, but for well-chosen templates this effect is only significant
at higher $S/N$ than is typically found in our data.
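The grid of noisy realisations can be generated along these lines (a sketch only; the Gaussian broadening step is omitted, and equating continuum counts with $(S/N)^2$ assumes pure Poisson statistics):

```python
import numpy as np

rng = np.random.default_rng(42)

def poisson_realisations(template, snr, n=16):
    """Scale a positive template spectrum so the continuum counts give the
    target per-pixel S/N, then draw n Poisson realisations."""
    counts = snr**2  # for Poisson noise, S/N = sqrt(counts) at the continuum
    scaled = template / np.median(template) * counts
    return rng.poisson(scaled, size=(n, len(template)))
```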
\begin{figure*}
\plotfull{simerrs.ps}
\caption{Redshift and dispersion errors as functions of input dispersion
and $S/N$ (labelling the curves) from the simulations of four of the
larger runs. The top panel shows the random error in the redshift and
the centre and bottom panels show the systematic and random error in the
dispersion. The vertical dotted line indicates the instrumental
dispersion of each run.
\label{fig:simerrs}}
\end{figure*}
Figure~\ref{fig:simerrs} shows the random error in redshift and the
systematic and random errors in dispersion as functions of input
dispersion and $S/N$ for four of the larger runs. The systematic errors
in redshift are not shown as they are negligibly small ($\sim$1\hbox{\,km\,s$^{-1}$}),
although the simulations do not include possible zeropoint errors. The
systematic errors in dispersion are generally small (a few percent or
less) for $S/N$$>$20, but become rapidly larger at lower $S/N$. The
random errors in redshift increase for lower $S/N$ and higher
dispersion, while the random errors in dispersion increase for lower
$S/N$ but have a broad minimum at around twice the instrumental
dispersion. These curves have the general form predicted for the random
errors from the cross-correlation method (Tonry \& Davis 1979, Colless
1987).
Given the dispersion and $S/N$ measured for a spectrum, we interpolated
the error estimates from the simulation for that particular observing
run to obtain the systematic and random errors in each measured
quantity. We used the results of these simulations to correct the
systematic errors in the velocity dispersions and to estimate the
uncertainties in individual measurements of redshift and dispersion. For
quality class D measurements of redshifts, where the spectra are too
poor to estimate a dispersion and hence a reliable redshift error, we
take a conservative redshift error of 50\hbox{\,km\,s$^{-1}$}.
The linestrength error estimates were obtained by generating 50 Monte
Carlo realizations of the object spectrum with Poisson noise appropriate
to the spectrum's $S/N$ level. The \hbox{Mg$b$}\ and \hbox{Mg$_2$}\ linestrengths were
then measured for each of these realizations and the error estimated as
the rms error of these measurements about the observed value. The error
estimate obtained in this fashion thus takes into account the noise
level of the spectrum, but does not account for errors in the
linestrength due to errors in the redshift and dispersion estimates, nor
for systematic run-to-run differences in the underlying continuum shape.
The estimated errors in the spectroscopic parameters are compared with,
and calibrated to, the rms errors derived from repeat observations in
\S\ref{ssec:caliberr}.
\subsection{Aperture Corrections}
\label{ssec:apcorr}
The velocity dispersion measured for a galaxy is the luminosity-weighted
velocity dispersion integrated over the region of the galaxy covered by
the spectrograph aperture. It therefore depends on (i)~the velocity
dispersion profile; (ii)~the luminosity profile; (iii)~the distance of
the galaxy; (iv)~the size and shape of the spectrograph aperture; and
(v)~the seeing in which the observations were made. In order to
intercompare dispersion measurements it is therefore necessary to
convert them to a standard scale. The `aperture correction' this
requires has often been neglected because it depends in a complex manner
on a variety of quantities some of which are poorly known. The neglect
of such corrections may account in part for the difficulties often found
in reconciling dispersion measurements from different sources.
The aperture correction applied by Davies \hbox{et~al.}\ (1987) was derived by
measuring dispersions for a set of nearby galaxies through apertures of
4\arcsec$\times$4\arcsec and 16\arcsec$\times$16\arcsec. In this way
they used their nearby galaxies to define the velocity dispersion
profile and obtained a relation between the corrected value,
$\sigma_{cor}$, and the observed one, $\sigma_{obs}$. This turned out
to be an approximately linear relation amounting to a 5\% correction
over the distance range between Virgo and Coma.
More recently J{\o}rgensen \hbox{et~al.}\ (1995) have derived an aperture
correction from kinematic models based on data in the literature.
Published photometry and kinematics for 51 galaxies were used to
construct two-dimensional models of the surface brightness, velocity
dispersion, and rotational velocity projected on the sky. They found
that the position angle gave rise to variations of only 0.5\% in the
derived dispersions and could thus be ignored. They converted
rectangular apertures into an `equivalent circular aperture' of radius
$r_{ap}$ which the models predicted would give the same dispersion as
the rectangular slit. They found that to an accuracy of 4\% one could
take $r_{ap} = 1.025(xy/\pi)^{1/2}$, where $x$ and $y$ are the width and
length of the slit.
From their models they then calculated the correction factor from the
observed dispersion to the dispersion in some standard aperture. For a
standard {\em metric} aperture, they found this aperture correction to
be well approximated by a power law of the form
\begin{equation}
\frac{\sigma_{cor}}{\sigma_{obs}} =
\left[ \left(\frac{r_{ap}}{r_0}\right)
\left(\frac{cz}{cz_0}\right) \right]^{0.04} ~,
\label{eqn:apcor1}
\end{equation}
where $\sigma_{obs}$ and $\sigma_{cor}$ are the observed and corrected
dispersions, $r_0$ is a standard aperture radius, defined to be
1.7~arcsec, and $cz_0$ is a standard redshift, defined as the redshift
of Coma. The standard metric aperture is thus 0.54\hbox{\,h$^{-1}$\,kpc}\ in radius.
Alternatively, one can correct to a standard {\em relative} aperture
(defined to be $R_e/8$) using the same power law relation,
\begin{equation}
\frac{\sigma_{cor}}{\sigma_{obs}}=\left(\frac{r_{ap}}{R_e/8}\right)^{0.04} ~.
\label{eqn:apcor2}
\end{equation}
This power law approximates the true relation to within 1\% over the
observed range of effective apertures (compare the distribution of metric
aperture sizes in Figure~\ref{fig:apcor}a with Figure~4c of J{\o}rgensen
\hbox{et~al.}).
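Equation~\ref{eqn:apcor1} can be applied as follows (a sketch; the numerical value adopted for the redshift of Coma, $cz_0\approx7200$\hbox{\,km\,s$^{-1}$}, is our assumption and is not specified above):

```python
import math

def aperture_correction(sigma_obs, x, y, cz, r0=1.7, cz0=7200.0):
    """Correct an observed dispersion to the standard metric aperture
    (after Jorgensen et al. 1995): x and y are the slit width and length
    in arcsec, cz the galaxy redshift in km/s. cz0 is an assumed value
    for the redshift of Coma."""
    r_ap = 1.025 * math.sqrt(x * y / math.pi)  # equivalent circular aperture
    return sigma_obs * ((r_ap / r0) * (cz / cz0)) ** 0.04
```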
\begin{figure}
\plotone{apcor.ps}
\caption{The distribution of aperture corrections to a standard metric
aperture. (a)~The distribution of the ratio of the observed metric
apertures to the standard metric aperture (corresponding to 1.7~arcsec
at the redshift of Coma). (b)-(d)~The aperture corrections to this
standard metric aperture for the dispersion, \hbox{Mg$b$}\ and \hbox{Mg$_2$}\
measurements. Note that $\sigma^{cor}$ is the aperture-corrected
dispersion and $\sigma^{obs}$ is the raw observed dispersion; likewise
for the linestrengths.
\label{fig:apcor}}
\end{figure}
We also apply an aperture correction to our linestrengths. J{\o}rgensen
\hbox{et~al.}\ noted that the radial gradient in the \hbox{Mg$_2$}\ index is similar to
the radial gradient in $\log\sigma$, and so applied the same aperture
correction for \hbox{Mg$_2$}\ as for $\log\sigma$. We adopt this procedure for
\hbox{Mg$_2$}. For \hbox{Mg$b$}\ we convert to \hbox{Mg$b^\prime$}\ (Equation~\ref{eqn:mgbprime})
and, assuming that the radial profile of \hbox{Mg$b^\prime$}\ is similar to that of
\hbox{Mg$_2$}\ (and hence $\log\sigma$), we apply the $\log\sigma$ aperture
correction to \hbox{Mg$b^\prime$}\ before converting back to \hbox{Mg$b$}.
The distributions of corrections to the standard metric aperture for the
dispersions and linestrengths are shown in Figures~\ref{fig:apcor}b-d.
These corrections are generally positive, as most objects in our sample
are observed through effective apertures larger than the standard
aperture of J{\o}rgensen \hbox{et~al.}\ and lie beyond the standard redshift. The
corrections to standard relative apertures are quite similar, although
having slightly greater amplitude and range. We choose to adopt the
correction to a standard metric aperture in order to minimise the size
and range of the corrections and to facilitate comparisons with
dispersions and linestrengths in the literature.
\subsection{Combining Different Runs}
\label{ssec:combruns}
In comparing the redshifts, dispersions and linestrengths obtained from
different runs we found some significant systematic offsets. The origin
of these run-to-run offsets is not fully understood. For the redshifts,
the use of different velocity standard stars as the fiducials in
different runs clearly contributes some systematic errors. For the
dispersions, the calibration procedure we use should in principle remove
instrumental systematics; in practice, scale differences are common, as
is shown by the range of scale factors needed to reconcile velocity
dispersions from various sources in the compilation by McElroy (1995;
see Table~2).
\begin{table*}
\centering
\caption{Calibration of observing runs to a common system.}
\label{tab:runcorr}
\begin{tabular}{llrrlrrlrrlrr}
Run & \multicolumn{1}{c}{$\Delta cz$} & $N_{z}$ & $N^c_{z}$ &
\multicolumn{1}{c}{$\Delta\log\sigma$} & $N_{\sigma}$ & $N^c_{\sigma}$ &
\multicolumn{1}{c}{$\Delta$\hbox{Mg$b^\prime$}} & $N_{b}$ & $N^c_{b}$ &
\multicolumn{1}{c}{$\Delta$\hbox{Mg$_2$}} & $N_{2}$ & $N^c_{2}$ \\
& \multicolumn{1}{c}{(\hbox{\,km\,s$^{-1}$})} & & &
\multicolumn{1}{c}{(dex)} & & &
\multicolumn{1}{c}{(mag)} & & &
\multicolumn{1}{c}{(mag)} & & \vspace*{6pt} \\
101 & \phantom{0}\n$-$9 $\pm$ \n9 & 19 & 62 & $+$0.014 $\pm$ 0.016 & 18 & 65 & $+$0.009 $\pm$ 0.009 & 18 & 32 & $-$0.008 $\pm$ 0.007 & 16 & 30 \\
102 & \phantom{0}$+$62 $\pm$ 10 & 56 & 57 & $+$0.034 $\pm$ 0.023 & 56 & 56 & $+$0.012 $\pm$ 0.011 & 56 & 24 & $-$0.013 $\pm$ 0.010 & 56 & 12 \\
103 & \phantom{0}$-$43 $\pm$ 10 & 36 & 53 & $-$0.034 $\pm$ 0.028 & 34 & 48 & $-$0.002 $\pm$ 0.011 & 34 & 27 & $-$0.025 $\pm$ 0.009 & 34 & 20 \\
104$^b$ & \phantom{0}\n$+$0 & 11 & 0 & $+$0.000 & 11 & 0 & $+$0.000 & 11 & 0 & $+$0.000 & 11 & 0 \\
105 & \phantom{0}$+$16 $\pm$ \n9 & 36 & 54 & $+$0.018 $\pm$ 0.019 & 36 & 55 & $+$0.020 $\pm$ 0.007 & 36 & 37 & $+$0.008 $\pm$ 0.009 & 21 & 14 \\
106 & \phantom{0}$-$17 $\pm$ 14 & 23 & 61 & $+$0.024 $\pm$ 0.029 & 22 & 54 & $+$0.009 $\pm$ 0.007 & 22 & 41 & $+$0.001 $\pm$ 0.014 & 14 & 13 \\
107 & \phantom{0}$-$19 $\pm$ 10 & 27 & 61 & $+$0.060 $\pm$ 0.033 & 27 & 64 & $+$0.000 $\pm$ 0.009 & 27 & 33 & $-$0.003 $\pm$ 0.008 & 21 & 24 \\
108 & \phantom{0}$-$71 $\pm$ 23 & 9 & 35 & $-$0.055 $\pm$ 0.035 & 9 & 34 & $-$0.005 $\pm$ 0.008 & 9 & 12 & $+$0.032 $\pm$ 0.008 & 9 & 11 \\
109$^a$ & \phantom{0}\n$+$1 $\pm$ \n4 & 93 & 222 & $-$0.015 $\pm$ 0.005 & 92 & 220 & $+$0.000 $\pm$ 0.000 & 92 & 167 & $+$0.008 $\pm$ 0.002 & 92 & 126 \\
110 & \phantom{0}\n$+$3 $\pm$ \n6 & 71 & 186 & $-$0.010 $\pm$ 0.008 & 61 & 171 & $+$0.004 $\pm$ 0.004 & 61 & 78 & $-$0.009 $\pm$ 0.004 & 61 & 50 \\
111 & \phantom{0}$-$10 $\pm$ 14 & 19 & 72 & $-$0.024 $\pm$ 0.024 & 19 & 76 & $-$0.005 $\pm$ 0.006 & 19 & 33 & $+$0.046 $\pm$ 0.006 & 19 & 25 \\
112 & \phantom{0}$+$45 $\pm$ \n8 & 31 & 82 & $-$0.006 $\pm$ 0.008 & 31 & 103 & $+$0.001 $\pm$ 0.009 & 31 & 16 & $-$0.027 $\pm$ 0.013 & 16 & 7 \\
113 & $+$154 $\pm$ 15 & 20 & 9 & $+$0.041 $\pm$ 0.038 & 20 & 9 & $+$0.025 $\pm$ 0.012 & 20 & 7 & $-$0.015 $\pm$ 0.026 & 2 & 1 \\
114 & \phantom{0}\n$-$9 $\pm$ 11 & 12 & 22 & $-$0.059 $\pm$ 0.024 & 12 & 22 & $-$0.012 $\pm$ 0.009 & 12 & 20 & $+$0.032 $\pm$ 0.007 & 12 & 15 \\
115 & \phantom{0}\n$+$9 $\pm$ 10 & 8 & 24 & $+$0.024 $\pm$ 0.022 & 8 & 23 & $-$0.069 $\pm$ 0.026 & 1 & 2 & $-$0.087 $\pm$ 0.034 & 1 & 2 \\
116$^d$ & ~~~~~~~--- & --- & --- & ~~~~~~~~~~--- & --- & --- & ~~~~~~~~~~--- & --- & --- & ~~~~~~~~~~--- & --- & --- \\
117 & \phantom{0}\n$-$2 $\pm$ \n7 & 59 & 132 & $+$0.005 $\pm$ 0.021 & 55 & 121 & $+$0.016 $\pm$ 0.007 & 55 & 90 & $+$0.028 $\pm$ 0.007 & 41 & 51 \\
118$^c$ & $+$120 $\pm$ 22 & 14 & 4 & $-$0.018 $\pm$ 0.066 & 13 & 4 & $-$0.011 $\pm$ 0.029 & 13 & 3 & $+$0.000 & 5 & 0 \\
119 & \phantom{0}$-$20 $\pm$ \n9 & 17 & 20 & $-$0.004 $\pm$ 0.015 & 17 & 19 & $+$0.003 $\pm$ 0.007 & 17 & 19 & $-$0.013 $\pm$ 0.009 & 17 & 8 \\
120 & \phantom{0}$-$39 $\pm$ \n7 & 38 & 47 & $+$0.009 $\pm$ 0.011 & 34 & 44 & $+$0.005 $\pm$ 0.005 & 33 & 25 & $+$0.059 $\pm$ 0.008 & 26 & 14 \\
121 & \phantom{0}$-$66 $\pm$ \n8 & 86 & 177 & $+$0.038 $\pm$ 0.010 & 82 & 181 & $+$0.002 $\pm$ 0.008 & 82 & 23 & $+$0.008 $\pm$ 0.008 & 54 & 17 \\
122 & \phantom{0}$-$28 $\pm$ \n8 & 41 & 70 & $+$0.001 $\pm$ 0.012 & 37 & 58 & $+$0.020 $\pm$ 0.006 & 37 & 32 & $+$0.033 $\pm$ 0.008 & 31 & 24 \\
123 & \phantom{0}$-$22 $\pm$ \n8 & 22 & 41 & $+$0.022 $\pm$ 0.017 & 17 & 34 & $+$0.005 $\pm$ 0.006 & 16 & 26 & $+$0.010 $\pm$ 0.006 & 13 & 16 \\
124 & \phantom{0}$+$14 $\pm$ 17 & 22 & 49 & $+$0.020 $\pm$ 0.044 & 14 & 40 & $+$0.029 $\pm$ 0.012 & 14 & 8 & $+$0.012 $\pm$ 0.022 & 11 & 1 \\
125 & \phantom{0}$-$48 $\pm$ 14 & 57 & 62 & $+$0.037 $\pm$ 0.010 & 55 & 64 & $+$0.007 $\pm$ 0.008 & 55 & 8 & $-$0.018 $\pm$ 0.011 & 55 & 7 \\
126 & \phantom{0}$+$57 $\pm$ 14 & 36 & 43 & $-$0.004 $\pm$ 0.039 & 33 & 40 & $+$0.029 $\pm$ 0.023 & 33 & 9 & $+$0.008 $\pm$ 0.023 & 33 & 6 \\
127 & \phantom{0}\n$-$3 $\pm$ \n4 & 131 & 187 & $+$0.002 $\pm$ 0.007 & 127 & 167 & $-$0.007 $\pm$ 0.003 & 127 & 136 & $-$0.007 $\pm$ 0.003 & 127 & 83 \\
128 & \phantom{0}\n$+$6 $\pm$ 12 & 24 & 29 & $-$0.042 $\pm$ 0.010 & 23 & 29 & $+$0.010 $\pm$ 0.015 & 23 & 7 & $-$0.055 $\pm$ 0.013 & 9 & 4 \\
129$^b$ & \phantom{0}\n$+$0 & 3 & 0 & $+$0.000 & 3 & 0 & $+$0.000 & 3 & 0 & $+$0.000 & 3 & 0 \\
130$^d$ & ~~~~~~~--- & --- & --- & ~~~~~~~~~~--- & --- & --- & ~~~~~~~~~~--- & --- & --- & ~~~~~~~~~~--- & --- & --- \\
131$^e$ & \phantom{0}$+$35 $\pm$ \n5 & 123 & 174 & $-$0.078 $\pm$ 0.014 & 99 & 152 & $+$0.002 $\pm$ 0.004 & 98 & 136 & ~~~~~~~~~~--- & --- & --- \\
132 & \phantom{0}$-$24 $\pm$ 19 & 12 & 7 & $-$0.025 $\pm$ 0.026 & 12 & 6 & $-$0.032 $\pm$ 0.008 & 12 & 4 & $+$0.005 $\pm$ 0.016 & 12 & 4 \\
133 & \phantom{0}$-$11 $\pm$ \n4 & 128 & 247 & $-$0.009 $\pm$ 0.006 & 128 & 241 & $-$0.009 $\pm$ 0.002 & 128 & 183 & $-$0.014 $\pm$ 0.002 & 128 & 151 \\
\end{tabular}\vspace*{6pt}
\parbox{0.84\textwidth}{
$^a$ Run 109 is the fiducial run for \hbox{Mg$b^\prime$}, defined to have zero offset. \\
$^b$ Runs 104 and 129 have no objects in common with other runs. \\
$^c$ Run 118 has no \hbox{Mg$_2$}\ measurements in common with other runs. \\
$^d$ Runs 116 and 130 have no usable data. \\
$^e$ Run 131 has no \hbox{Mg$_2$}\ measurements. }
\end{table*}
We cannot directly calibrate the measurements from each run to the
system defined by a chosen fiducial run, as there is no run with objects
in common with all other runs to serve as the fiducial. Instead, we use
the mean offset, $\Delta$, between the measurements from any particular
run and {\em all} the other runs. To compute this offset we separately
compute, for each galaxy $i$, the error-weighted mean value of the
measurements obtained from the run in question, $x_{ij}$, and from all
other runs, $y_{ik}$:
\begin{equation}
\langle x_i \rangle = \frac{\sum_j x_{ij}/\delta_{ij}^2}
{\sum_j 1/\delta_{ij}^2} ~~,~~
\langle y_i \rangle = \frac{\sum_k y_{ik}/\delta_{ik}^2}
{\sum_k 1/\delta_{ik}^2} ~.
\label{eqn:meangal}
\end{equation}
Here $j$ runs over the $m_i$ observations of galaxy $i$ in the target
run and $k$ runs over the $n_i$ observations of galaxy $i$ in all other
runs; $\delta_{ij}$ and $\delta_{ik}$ are the estimated errors in
$x_{ij}$ and $y_{ik}$. We then take the average over all galaxies,
weighting by the number of comparison pairs, to arrive at an estimate
for the offset of the target run:
\begin{equation}
\Delta = \frac{\sum_i m_i n_i (\langle x_i \rangle - \langle y_i \rangle)}
{\sum_i m_i n_i}
\label{eqn:offset}
\end{equation}
Here $i$ runs over the $l$ galaxies in the sample. We can reject
outliers at this point by excluding galaxies for which the difference
$\langle x_i \rangle - \langle y_i \rangle$ is larger than some cutoff:
for $cz$, $\log\sigma$, \hbox{Mg$b^\prime$}\ and \hbox{Mg$_2$}\ we required differences less
than 300\hbox{\,km\,s$^{-1}$}, 0.2~dex, 0.1~mag and 0.1~mag respectively. The
uncertainty, $\epsilon$, in this estimate of the run offset is given by
\begin{equation}
\epsilon^2 = \frac{\sum_i (m_i n_i)^2
(\delta\langle x_i \rangle^2+\delta\langle y_i \rangle^2)}
{\bigl(\sum_i m_i n_i\bigr)^2}
\label{eqn:offerr}
\end{equation}
where $\delta\langle x_i \rangle$ and $\delta\langle y_i \rangle$ are
the error-weighted uncertainties in $\langle x_i \rangle$ and
$\langle y_i \rangle$ given by
\begin{equation}
\textstyle
\delta\langle x_i \rangle^2 = \bigl(\sum_j\delta_{ij}^{-2}\bigr)^{-1} ~~,~~
\delta\langle y_i \rangle^2 = \bigl(\sum_k\delta_{ik}^{-2}\bigr)^{-1} ~.
\label{eqn:galerr}
\end{equation}
We subtract the offset determined in this manner from each run and then
iterate the whole procedure until there are no runs with residual
offsets larger than $0.5\epsilon$. As a final step, we place the entire
dataset (now corrected to a common zeropoint) onto a fiducial system by
subtracting from all runs the offset of the fiducial system. Note that
the run corrections for dispersion and \hbox{Mg$b$}\ are determined in terms of
offsets in $\log\sigma$ and \hbox{Mg$b^\prime$}.
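As an illustration (not part of the original analysis pipeline), the offset estimator of Equations~\ref{eqn:meangal}--\ref{eqn:offerr} can be sketched in Python as follows; the function and variable names are ours, and the input layout is an assumption:

```python
import numpy as np

def run_offset(meas):
    """Estimate a run's zeropoint offset (Eq. offset) and its
    uncertainty (Eq. offerr) from per-galaxy comparisons.

    meas: list of (x, dx, y, dy) array tuples, one per galaxy, where
    x/dx are the target-run measurements and errors, and y/dy those
    from all other runs.
    """
    num = den = err_num = 0.0
    for x, dx, y, dy in meas:
        wx, wy = 1.0 / dx**2, 1.0 / dy**2
        xm = np.sum(x * wx) / np.sum(wx)      # error-weighted <x_i>
        ym = np.sum(y * wy) / np.sum(wy)      # error-weighted <y_i>
        m, n = len(x), len(y)                 # m_i, n_i observation counts
        num += m * n * (xm - ym)
        den += m * n
        # variance terms of Eq. offerr, using Eq. galerr for each mean
        err_num += (m * n)**2 * (1.0 / np.sum(wx) + 1.0 / np.sum(wy))
    return num / den, np.sqrt(err_num) / den
```

Outlier rejection (the $\langle x_i \rangle - \langle y_i \rangle$ cutoffs) would be applied to the galaxy list before calling such a function, and the whole procedure iterated until no residual offset exceeds $0.5\epsilon$.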
In order to maximise the number of objects with multiple measurements,
we included the dataset from the `Streaming Motions of Abell Clusters'
project (SMAC: M.J.Hudson, priv.comm.; see also Smith \hbox{et~al.}\ 1997) in
this analysis. There is a considerable overlap between the SMAC and EFAR
samples which significantly increases the number of comparison
observations and reduces the uncertainties in the run offsets. We chose
to use the `Lick' system of Davies \hbox{et~al.}\ (1987; included in the SMAC
dataset) as our fiducial, in order to bring the 7~Samurai, EFAR and SMAC
datasets onto a single common system. This is not possible with \hbox{Mg$b$},
which is not measured in most previous work or by SMAC. We therefore
chose run~109 (the Kitt Peak 4m run of November 1988) as the \hbox{Mg$b$}\
fiducial because it had a large number of high-quality observations and
the systematics of the slit spectrograph are believed to be well
understood.
We checked that this procedure gives relative run corrections consistent
with those obtained by directly comparing runs in those cases where
there {\em are} sufficient objects in common. We have also compared our
method with a slightly different method used by the SMAC collaboration
to determine the run corrections for their own data and found good
agreement (M.J.Hudson, priv.comm.). We carried out Monte Carlo
simulations of the whole procedure in order to check the uncertainties
in the offsets computed according to Equation~\ref{eqn:offerr}. We found
that this equation in general provides a good estimate of the
uncertainties, although when the number of comparisons is small, or
involves only a small number of other runs, it can under-estimate the
uncertainties by up to 30\%. Our final estimates of the uncertainties
are therefore derived as the rms of the offsets from 100 Monte Carlo
simulations.
Table~\ref{tab:runcorr} lists the offsets for each run computed
according to the above procedure, their uncertainties based on Monte
Carlo simulations, the number of individual measurements ($N$) and the
number of comparison pairs ($N^c$). Note that to correct our observed
measurements to the fiducial system we {\em subtract} the appropriate
run offset in Table~\ref{tab:runcorr} from each individual measurement.
Of the 31 spectroscopic runs with usable data, only runs 104 and 129
have no objects in common with other runs and hence no run corrections;
run 118 has no \hbox{Mg$_2$}\ measurements in common and so no run correction
for \hbox{Mg$_2$}.
Weighting by the number of individual measurements in each run, the mean
amplitude of the corrections and their uncertainties are 28$\pm$8\hbox{\,km\,s$^{-1}$}\
in $cz$, 0.023$\pm$0.015~dex in $\log\sigma$, 0.008$\pm$0.006~mag in
\hbox{Mg$b^\prime$}\ and 0.015$\pm$0.006~mag in \hbox{Mg$_2$}. The significance of the
individual run corrections (in terms of the ratio of the amplitude of
the offset to its uncertainty) varies; however over all runs the reduced
$\chi^2$ is highly significant: 15.7, 4.0, 3.3 and 11.4 for the
corrections to the redshifts, dispersions, \hbox{Mg$b$}\ and \hbox{Mg$_2$}\
respectively. Application of the run corrections reduces the median rms
error amongst those objects with repeat measurements from 18\hbox{\,km\,s$^{-1}$}\ to
14\hbox{\,km\,s$^{-1}$}\ in redshift, 6.3\% to 5.6\% in dispersion, 4.9\% to 4.4\% in
\hbox{Mg$b$}\ and 0.012~mag to 0.009~mag in \hbox{Mg$_2$}. We also checked to see
whether applying the run corrections reduced the scatter in external
comparisons between our data and measurements in the literature (see
\S\ref{ssec:compare}). We found that although the scatter is dominated by
the combined random errors, the corrections did reduce the scatter
slightly in all cases.
As another test of the run corrections for \hbox{Mg$b^\prime$}\ and \hbox{Mg$_2$}\ (and also,
more weakly, for $\log\sigma$), we compared the \hbox{\mgbp--$\sigma$}\ and \hbox{\mgtwo--$\sigma$}\
distributions for each run (after applying the run corrections) with the
global \hbox{\mgbp--$\sigma$}\ and \hbox{\mgtwo--$\sigma$}\ relations derived in Paper~V. Using the
$\chi^2$ goodness-of-fit statistic to account both for measurement
errors in the dispersions and linestrengths and for the intrinsic
scatter about the \hbox{Mg--$\sigma$}\ relations, we find that for \hbox{\mgbp--$\sigma$}\ there
were two runs (113 and 132) with reduced $\chi^2$ greater than 3, while
for \hbox{\mgtwo--$\sigma$}\ there was one such run (122). In all three cases the
removal of 1 or 2 obvious outliers decreased the reduced $\chi^2$ to a
non-significant level.
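The goodness-of-fit statistic used in this test can be sketched as follows; this is a standard form for a linear relation with errors in both coordinates and intrinsic scatter, and the exact statistic adopted in Paper~V may differ in detail:

```python
import numpy as np

def reduced_chi2(x, dx, y, dy, a, b, s_int):
    """Reduced chi-squared of points about a linear relation
    y = a + b*x, folding the measurement errors in both coordinates
    and the intrinsic scatter s_int into the denominator."""
    resid = y - (a + b * x)
    var = dy**2 + (b * dx)**2 + s_int**2     # total variance per point
    return np.sum(resid**2 / var) / (len(x) - 2)
```

A run whose calibrated \hbox{\mgbp--$\sigma$}\ or \hbox{\mgtwo--$\sigma$}\ points give a reduced $\chi^2$ well above unity is then inspected for outliers.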
\subsection{Calibrating the Estimated Errors}
\label{ssec:caliberr}
\begin{figure*}
\plotfull{errcomp.ps}
\caption{Comparison of the estimated errors derived from simulations and
the rms errors for galaxies with repeat measurements of redshift,
dispersion, \hbox{Mg$b$}\ and \hbox{Mg$_2$}. Each panel shows the differential and
cumulative distributions of the ratio of rms error to estimated error.
The stepped curves are the observed distributions, while the smooth
curves are the predicted distributions. The upper panels show the
comparisons using the original estimated errors; the lower panels show
the comparisons after correcting the estimated errors as described in
the text.
\label{fig:errcomp}}
\end{figure*}
Obtaining precise error estimates is particularly important because we
will make extensive use of them in applying maximum likelihood methods
to deriving the Fundamental Plane and relative cluster distances for our
sample. Although we have estimated the measurement errors as carefully
as possible, simulating the noise in the observations and the
measurement procedures, some sources of error are likely to remain
unaccounted-for and we may be systematically mis-estimating the errors.
We therefore auto-calibrate our errors by scaling the estimated errors
in the combined measurements (the {\em internal} error estimate, based
on the individual measurement errors derived from simulations; see
\S\ref{ssec:errors} and \S\ref{ssec:combmeas}) to match the rms errors
from objects with repeat measurements (an {\em external} error
estimate).
Figure~\ref{fig:errcomp} shows the differential and cumulative
distributions of the ratio of the rms error to the estimated error for
each galaxy with repeat measurements of redshift, dispersion, \hbox{Mg$b$}\ and
\hbox{Mg$_2$}. The smooth curves are the predicted differential and cumulative
distributions of this ratio assuming that the estimated errors are the
true errors. The top panel shows the comparison using the estimated
errors (including all the corrections discussed above). For the
redshifts and linestrengths, the estimated errors are generally
under-estimates of the true errors, since the ratio of rms to estimated
errors tends to be larger than predicted. For the dispersions the
estimated errors are generally over-estimates of the true errors, since
this ratio tends to be smaller than predicted. For all quantities the
assumption that the estimated errors are consistent with the true errors
is ruled out with high confidence by a Kolmogorov-Smirnov (KS) test
applied to the observed and predicted cumulative distributions. These
differences between the estimated errors from the simulations and the
rms errors from repeat measurements reflect the approximate nature of
the $S/N$ estimates and systematic measurement errors not accounted for
in the simulations.
In order to bring our estimated errors into line with the rms errors
from the repeat measurements, we found it necessary to add 15\hbox{\,km\,s$^{-1}$}\ in
quadrature to the estimated redshift errors, to scale the dispersion and
\hbox{Mg$b$}\ errors by factors of 0.85 and 1.15 respectively, and to add
0.005~mag to the \hbox{Mg$_2$}\ errors. These corrections were determined by
maximising the agreement of the observed and predicted distributions of
the ratio of rms to estimated errors under a KS test (excluding outliers
with values of this ratio $>$3.5). The corrections are quite well
determined: to within a couple of \hbox{\,km\,s$^{-1}$}\ for the redshift correction, a
few percent for the dispersion and \hbox{Mg$b$}\ corrections, and 0.001~mag for
the \hbox{Mg$_2$}\ correction. Applying these corrections and repeating the
comparison of rms and estimated errors gives the lower panels of
Figure~\ref{fig:errcomp}, which shows the good agreement between the rms
errors from repeat measurements and the calibrated error estimates for
the redshifts, dispersions and Mg linestrengths.
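The error calibration described above amounts to a simple transformation of the simulation-based estimates; a minimal sketch (our own code, with the correction values taken from the text) is:

```python
import numpy as np

def calibrate_errors(dz, dsigma, dmgb, dmg2):
    """Apply the empirical error calibrations to arrays of
    simulation-based error estimates."""
    dz_cal   = np.sqrt(dz**2 + 15.0**2)  # add 15 km/s in quadrature
    dsig_cal = 0.85 * dsigma             # dispersion errors over-estimated
    dmgb_cal = 1.15 * dmgb               # Mg b errors under-estimated
    dmg2_cal = dmg2 + 0.005              # add 0.005 mag to Mg2 errors
    return dz_cal, dsig_cal, dmgb_cal, dmg2_cal
```

The scale factors and additive terms were fixed by maximising the KS-test agreement between the observed and predicted distributions of the rms-to-estimated error ratio, as described in the text.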
The need for such a correction to the redshift errors may be due in part
to the residual zeropoint uncertainties in the redshifts and in part to
a tendency for the simulations to under-estimate the errors for high
$S/N$ spectra. The origin of the over-estimation of the dispersion
errors is uncertain, although it may result from slightly different
prescriptions for estimating the $S/N$ in the observations and the
simulations. The under-estimation of the linestrength errors may be due
to neglecting the effects of errors in the redshift and dispersion
estimates and the different continuum shapes of spectra from different
runs when measuring linestrengths.
\section{RESULTS}
\label{sec:results}
\subsection{Individual Measurements}
\label{ssec:measure}
The previous two sections describe the observations and analysis of our
spectroscopic data. Table~\ref{tab:spectab} lists the observational
details for each spectrum and the fully-corrected measurements of
redshift, dispersion, \hbox{Mg$b$}\ and \hbox{Mg$_2$}, together with their calibrated
error estimates. Note that these error estimates are the individual
measurement errors, and must be combined in quadrature with the run
correction uncertainties given in Table~\ref{tab:runcorr} to give the
total error estimate. We list the measurement errors rather than the
total errors because the total errors are not independent, being
correlated for objects in the same run. The version of the table
presented here is abridged; the full table will be available upon
publication from NASA's Astrophysical Data Center (ADC) and from the
Centre de Donn\'{e}es astronomiques de Strasbourg (CDS).
\begin{table*}
\centering
\caption{Individual spectroscopic measurements (abridged)}
\label{tab:spectab}
\begin{tabular}{clcclrrrrrrrrrl}
(1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) & (10) & (11) & (12) & (13) & (14) & (15) \\
{\scriptsize GINRUNSEQ} & Galaxy & Tele- & Obsvn & $Q$ & $S/N$ & \multicolumn{2}{c}{$cz$} &
\multicolumn{2}{c}{$\sigma$} & \multicolumn{2}{c}{\hbox{Mg$b$}} & \multicolumn{2}{c}{\hbox{Mg$_2$}} & {Notes} \\
& Name & scope & Date & & & \multicolumn{2}{c}{(\hbox{\,km\,s$^{-1}$})} & \multicolumn{2}{c}{(\hbox{\,km\,s$^{-1}$})} &
\multicolumn{2}{c}{(\AA)} & \multicolumn{2}{c}{(mag)} & \vspace*{6pt} \\
001106215 & A76~A & MG24 & 880914 & A & 21.4 & 11296 & 51 & 262 & 50 & 5.20 & 0.54 & 0.353 & 0.020 & \\
002106221 & A76~B & MG24 & 880914 & B & 18.8 & 11317 & 47 & 225 & 47 & 4.76 & 0.59 & 0.326 & 0.019 & \\
003106218 & A76~C & MG24 & 880914 & A & 17.8 & 11973 & 49 & 217 & 48 & 5.17 & 0.68 & 0.392 & 0.021 & \\
004107095 & A76~D & MG24 & 881011 & C & 14.8 & 12184 & 31 & 316 & 103 & 6.18 & 0.83 & 0.326 & 0.028 & \\
005120504 & A76~E & IN25 & 901016 & B & 33.2 & 12241 & 19 & 217 & 16 & 6.35 & 0.22 & 0.312 & 0.011 & \\
005120505 & A76~E & IN25 & 901017 & B & 32.8 & 12147 & 17 & 169 & 11 & 6.16 & 0.23 & 0.275 & 0.012 & \\
006107098 & A76~F & MG24 & 881011 & B & 17.6 & 12371 & 28 & 307 & 82 & 5.29 & 0.83 & 0.331 & 0.027 & \\
008122534 & A85~A & MG24 & 911016 & B & 18.7 & 16577 & 39 & 290 & 52 & 5.14 & 0.59 & 0.344 & 0.021 & \\
008123319 & A85~A & IN25 & 911201 & D & 21.4 & 16692 & 50 & --- & --- & --- & --- & --- & --- & \\
009120626 & A85~B & IN25 & 901017 & A & 37.6 & 17349 & 34 & 436 & 46 & 5.74 & 0.22 & --- & --- & \\
010120628 & A85~C & IN25 & 901017 & D & 33.9 & 22837 & 50 & --- & --- & --- & --- & --- & --- & \hbox{Mg$b$}\ at sky \\
011122540 & A85~1 & MG24 & 911016 & B & 25.8 & 15112 & 20 & 195 & 18 & 3.82 & 0.39 & 0.253 & 0.016 & \\
012132001 & A85~2 & SS2B & 931022 & B & 28.3 & 16264 & 25 & 294 & 32 & 5.86 & 0.31 & 0.337 & 0.011 & \\
013101059 & A119~A & MG24 & 861202 & B & 16.5 & 11516 & 49 & 299 & 48 & 6.06 & 0.51 & 0.294 & 0.022 & \\
013109339 & A119~A & KP4M & 881107 & A & 44.2 & 11457 & 19 & 289 & 18 & 5.00 & 0.20 & 0.320 & 0.010 & \\
013131330 & A119~A & ES36 & 931008 & A & 30.9 & 11451 & 24 & 320 & 29 & 4.86 & 0.26 & --- & --- & \\
014101063 & A119~B & MG24 & 861202 & C & 14.4 & 13205 & 63 & 323 & 65 & 5.90 & 0.72 & 0.320 & 0.025 & \\
014109339 & A119~B & KP4M & 881107 & A & 35.0 & 13345 & 21 & 276 & 21 & 4.86 & 0.22 & 0.361 & 0.010 & \\
015109343 & A119~C & KP4M & 881107 & A & 33.2 & 13508 & 19 & 250 & 19 & 5.51 & 0.22 & 0.295 & 0.011 & \\
015131330 & A119~C & ES36 & 931008 & B & 19.5 & 13484 & 28 & 265 & 35 & 5.88 & 0.48 & --- & --- & \\
016109346 & A119~D & KP4M & 881107 & C* & 20.8 & 14980 & 16 & 104 & 13 & 3.30 & 0.37 & 0.151 & 0.016 & H$\beta$ \\
016110611 & A119~D & KP2M & 881114 & D* & 15.6 & 15022 & 50 & --- & --- & --- & --- & --- & --- & H$\beta$ \\
016131330 & A119~D & ES36 & 931008 & D* & 16.1 & 14996 & 50 & --- & --- & --- & --- & --- & --- & H$\beta$ \\
017109347 & A119~E & KP4M & 881107 & A & 36.1 & 12807 & 19 & 251 & 17 & 5.18 & 0.18 & 0.326 & 0.010 & \\
017131330 & A119~E & ES36 & 931008 & A & 27.0 & 12788 & 21 & 243 & 22 & 6.01 & 0.32 & --- & --- & \\
018109342 & A119~F & KP4M & 881107 & B & 28.4 & 13034 & 18 & 193 & 15 & 4.95 & 0.32 & 0.245 & 0.013 & \\
018131330 & A119~F & ES36 & 931008 & C & 21.1 & 13006 & 16 & 112 & 20 & 4.69 & 0.37 & --- & --- & \\
019109342 & A119~G & KP4M & 881107 & E & 32.7 & --- & --- & --- & --- & --- & --- & --- & --- & mis-ID \\
019131330 & A119~G & ES36 & 931008 & A & 36.7 & 13457 & 19 & 267 & 19 & 4.78 & 0.20 & --- & --- & \\
021122654 & A119~I & MG24 & 911018 & A & 31.0 & 13271 & 20 & 225 & 19 & 4.85 & 0.38 & 0.326 & 0.014 & \\
022131330 & A119~J & ES36 & 931008 & B & 22.3 & 13520 & 21 & 219 & 24 & 4.55 & 0.33 & --- & --- & \\
023131330 & A119~1 & ES36 & 931008 & C* & 11.5 & 4127 & 18 & 92 & 38 & 0.50 & 0.78 & --- & --- & H$\beta$ \\
024122657 & A119~2 & MG24 & 911018 & E & 28.9 & --- & --- & --- & --- & --- & --- & --- & --- & mis-ID \\
024131330 & A119~2 & ES36 & 931008 & B & 20.4 & 12346 & 16 & 100 & 20 & 5.13 & 0.38 & --- & --- & \\
025107103 & J3~A & MG24 & 881011 & B & 15.2 & 14453 & 32 & 333 & 111 & 6.25 & 0.74 & 0.329 & 0.025 & \\
026120714 & J3~B & IN25 & 901018 & A & 39.5 & 13520 & 18 & 231 & 15 & 4.93 & 0.23 & --- & --- & \\
027107106 & J3~C & MG24 & 881011 & C & 15.8 & 13771 & 21 & 163 & 38 & 4.33 & 0.86 & 0.224 & 0.027 & \\
028120712 & J3~D & IN25 & 901017 & A & 36.3 & 14316 & 22 & 287 & 23 & 5.45 & 0.24 & --- & --- & double \\
028120713 & J3~D & IN25 & 901017 & B & 30.8 & 14837 & 19 & 207 & 15 & 4.87 & 0.26 & --- & --- & double \\
031107190 & J4~A & MG24 & 881012 & C & 13.2 & 12074 & 24 & 185 & 51 & 5.35 & 0.75 & 0.387 & 0.029 & \\
032107189 & J4~B & MG24 & 881012 & C & 13.3 & 12090 & 31 & 302 & 102 & 4.11 & 1.08 & 0.261 & 0.032 & \\
033132002 & J4~C & SS2B & 931021 & A & 30.0 & 17154 & 30 & 358 & 43 & 6.20 & 0.26 & 0.355 & 0.010 & \\
036107195 & A147~A & MG24 & 881012 & C & 12.2 & 12811 & 26 & 208 & 62 & 3.96 & 1.16 & 0.288 & 0.034 & \\
036117298 & A147~A & MG24 & 891015 & B & 19.0 & 12741 & 35 & 281 & 49 & 4.84 & 0.60 & 0.314 & 0.022 & \\
036133157 & A147~A & CT4M & 931019 & A & 61.4 & 12760 & 17 & 235 & 12 & 4.77 & 0.17 & 0.285 & 0.010 & \\
036133158 & A147~A & CT4M & 931019 & A & 62.3 & 12776 & 17 & 253 & 13 & 5.01 & 0.18 & 0.289 & 0.010 & \\
\end{tabular}
\vspace*{6pt} \\ \parbox{\textwidth}{This is an abridged version of this
table; the full table will be available upon publication from NASA's
Astrophysical Data Center (ADC) and from the Centre de Donn\'{e}es
astronomiques de Strasbourg (CDS). The columns of this table give:
(1)~observation identifier (GINRUNSEQ); (2)~galaxy name; (3)~telescope
used; (4)~date of observation; (5)~quality parameter; (6)~signal to
noise ratio; (7--8)~redshift and estimated error; (9--10)~velocity
dispersion and estimated error; (11--12)~\hbox{Mg$b$}\ linestrength and
estimated error; (13--14)~\hbox{Mg$_2$}\ linestrength and estimated error; and
(15) notes on each observation. In the notes, `double' means the EFAR
galaxy is double; `star' means the EFAR object is a star not a galaxy;
`mis-ID' means the spectrum is for some galaxy other than the nominated
EFAR object; `mis-ID*' means the spectrum is for a nearby star rather
than the EFAR object; `\hbox{Mg$b$}\ at sky' means the object is at a redshift
which puts \hbox{Mg$b$}\ on the 5577\AA\ sky line; `\#=\#' notes the duplicated
pairs in the EFAR sample (see Paper~I; only the first of the two GINs is
used); emission line objects (with an asterisk on $Q$) have the emission
features listed; `H$\beta$ abs' or `H$\beta$ abs, [OIII]' means the
redshift is based on the H$\beta$ absorption feature (and [OIII] if
present), as the spectrum stops short of \hbox{Mg$b$}\ (no dispersion or \hbox{Mg$b$}\
index is given for these objects). The objects for which we have no
spectrum have GINs: 7, 20, 29, 30, 34, 35, 55, 62, 64, 67, 82, 83, 91,
104, 121, 131, 133, 134, 161, 181, 191, 214, 225, 228, 231, 234, 256,
265, 309, 327, 391, 405, 407, 417, 434, 435, 442, 450, 451, 452, 458,
463, 464, 465, 470, 475, 477, 483, 484, 486, 494, 516, 520, 521, 522,
523, 526, 544, 551, 553, 567, 569, 570, 575, 576, 577, 587, 594, 597,
603, 605, 624, 625, 644, 671, 727, 760, 793, 798, 801, 901.}
\end{table*}
The entries in Table~\ref{tab:spectab} are as follows: Column~1 gives
GINRUNSEQ, a unique nine-digit identifier for each spectrum, composed of
the galaxy identification number (GIN) as given in the master list of
EFAR sample galaxies (Table~3 of Paper~I), the run number (RUN) as given
in Table~\ref{tab:obsruns}, and a sequence number (SEQ) which uniquely
specifies the observation within the run; column~2 gives the galaxy
name, as in the master list of Paper~I; column~3 is the telescope code,
as in Table~\ref{tab:obsruns}; column~4 is the UT date of the
observation; columns~5 \&~6 are the quality parameter (with an asterisk
if the spectrum shows emission features) and $S/N$ of the spectrum (see
\S\ref{ssec:quality}); columns~7 \&~8 are the fully-corrected
heliocentric redshift $cz$ (in \hbox{\,km\,s$^{-1}$}) and its measurement error;
columns~9 \&~10 are the fully-corrected velocity dispersion $\sigma$ (in
\hbox{\,km\,s$^{-1}$}) and its measurement error; columns~11 \&~12 are the
fully-corrected \hbox{Mg$b$}\ linestrength index and its measurement error (in
\AA); columns~13 \&~14 are the fully-corrected \hbox{Mg$_2$}\ linestrength
index and its measurement error (in mag); column~15 provides comments,
the full meanings of which are described in the notes to the table.
There are 1319 spectra in this table. Note that 81 objects from our
original sample do not have spectroscopic observations and do not appear
in the table (see the list of missing GINs in the table notes). Three of
these are the duplicate objects (GINs 55, 435, 476) and three are known
stars (GINs 131, 133, 191). Most of the others are objects which our
imaging showed are not early-type galaxies, although there are a few
early-type galaxies for which we did not get a spectrum. There are 34
spectra which are unusable ($Q$=E) either because the spectrum is too
poor (13 cases) or because the object was mis-identified (20 cases) or
is a known star (1 case, GIN 123). Of the 1285 usable spectra (for 706
different galaxies), there are 637 spectra with $Q$=A, 407 with $Q$=B,
161 with $Q$=C and 80 with $Q$=D.
The cumulative distributions of the total estimated errors in the
individual measurements (combining measurement errors and run correction
uncertainties in quadrature) are shown in Figure~\ref{fig:errdist1} for
quality classes A, B and C, and for all three classes together. The
error distributions can be quantitatively characterised by their 50\%
and 90\% points, which are listed in Table~\ref{tab:typerrs1}. The
overall median error in a single redshift measurement is 22\hbox{\,km\,s$^{-1}$}, the
median relative errors in single measurements of dispersion and \hbox{Mg$b$}\
are 10.5\% and 8.2\%, and the median error in a single measurement of
\hbox{Mg$_2$}\ is 0.015~mag.
\subsection{Combining Measurements}
\label{ssec:combmeas}
We use a weighting scheme to combine the individual measurements of each
quantity to obtain a best estimate (and its uncertainty) for each galaxy
in our sample. The weighting has three components:
\begin{table}
\centering
\caption{The distribution of estimated errors per measurement}
\label{tab:typerrs1}
\begin{tabular}{ccccccccc}
$Q$ & \multicolumn{2}{c}{$\Delta cz$ (km/s)} &
\multicolumn{2}{c}{$\Delta\sigma/\sigma$} &
\multicolumn{2}{c}{$\Delta$\hbox{Mg$b$}/\hbox{Mg$b$}} &
\multicolumn{2}{c}{$\Delta$\hbox{Mg$_2$}\ (mag)} \\
& 50\% & 90\% & 50\% & 90\% & 50\% & 90\% & 50\% & 90\% \\
All & 22 & 40 & 0.105 & 0.255 & 0.082 & 0.184 & 0.015 & 0.028 \\
A & 20 & 33 & 0.076 & 0.163 & 0.061 & 0.125 & 0.013 & 0.022 \\
B & 24 & 43 & 0.140 & 0.275 & 0.104 & 0.192 & 0.018 & 0.028 \\
C & 25 & 48 & 0.181 & 0.343 & 0.136 & 0.303 & 0.024 & 0.036 \\
\end{tabular}
\end{table}
\begin{figure}
\plotone{errdist1.ps}
\caption{The cumulative distributions of estimated errors for individual
measurements of redshift, velocity dispersion and \hbox{Mg$b$}\ linestrength.
The distributions for quality classes A, B and C are shown as full,
long-dashed and short-dashed lines respectively; the overall
distribution is the thick full line. (a)~The distribution of estimated
errors in $cz$; (b)~estimated relative errors in $\sigma$; (c)~estimated
relative errors in \hbox{Mg$b$}; (d)~estimated errors in \hbox{Mg$_2$}.
\label{fig:errdist1}}
\end{figure}
(i) {\em Error weighting:} For multiple measurements $X_i$ having
estimated total errors $\Delta_i$ (the measurement errors and run
correction uncertainties added in quadrature), we weight the values
inversely with their variances, \hbox{i.e.}\ by $\Delta_i^{-2}$.
(ii) {\em Quality weighting:} We apply a weighting $W_Q$ which
quantifies our degree of belief (over and above the estimated errors) in
measurements obtained from spectra with different quality parameters.
Following the discussion in \S\ref{ssec:quality}, for spectra with
$Q$=A,B,C,D,E we use $W_Q$=1,1,1,0.5,0 in computing redshifts,
$W_Q$=1,1,0.5,0,0 in computing dispersions, and $W_Q$=1,1,0.5,0,0 in
computing linestrengths.
(iii) {\em Run weighting:} we also apply a run-weighting $W_R$=0 to
exclude run 115, for reasons explained in \S\ref{ssec:setups}; all other
runs are given $W_R$=1.
The combined estimate $X$ is thus computed from the individual
measurements $X_i$ as the weighted mean
\begin{equation}
X = {\textstyle \sum_i} W_i X_i / {\textstyle \sum_i} W_i ~,
\label{eqn:combval}
\end{equation}
where $W_i = \Delta_i^{-2} W_{Qi} W_{Ri}$. The uncertainty in this
weighted mean is computed as
\begin{equation}
\Delta = ({\textstyle \sum_i} W_i)^{-1/2} ~.
\label{eqn:comberr}
\end{equation}
This procedure is used to obtain combined estimates of the redshift,
dispersion and linestrengths for each galaxy. We estimate the overall
quality $Q$ as the highest quality amongst the individual measurements
and obtain a combined estimate of the $S/N$ as
\begin{equation}
S/N = ({\textstyle \sum_i} (S/N)_i^{2} W_{Qi} W_{Ri})^{1/2} ~,
\label{eqn:combsnr}
\end{equation}
using the same weightings as for the dispersions (except when the
overall quality is $Q$=D, when these weightings are omitted).
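The combination scheme of Equations~\ref{eqn:combval}--\ref{eqn:combsnr} can be sketched as follows (an illustrative re-implementation, not the original code; the argument names are ours):

```python
import numpy as np

def combine(X, Delta, WQ, WR, SN):
    """Combine individual measurements of one quantity for one galaxy.
    X: measurements; Delta: total errors (measurement error and run
    correction uncertainty in quadrature); WQ, WR: quality and run
    weights; SN: per-spectrum S/N values."""
    W = WQ * WR / Delta**2                  # W_i = Delta_i^-2 W_Qi W_Ri
    Xbar = np.sum(W * X) / np.sum(W)        # weighted mean (Eq. combval)
    err = np.sum(W) ** -0.5                 # its uncertainty (Eq. comberr)
    SNc = np.sqrt(np.sum(SN**2 * WQ * WR))  # combined S/N (Eq. combsnr)
    return Xbar, err, SNc
```

In practice the quality weights $W_Q$ differ between redshifts, dispersions and linestrengths, so the function would be called separately for each quantity with the appropriate weight vector.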
Table~\ref{tab:galtab} gives the combined estimates of the spectroscopic
parameters for each galaxy in the EFAR sample. The version of the table
presented here is abridged; the full table will be available upon
publication from NASA's Astrophysical Data Center (ADC) and from the
Centre de Donn\'{e}es astronomiques de Strasbourg (CDS). The table
lists: galaxy identification number (GIN), galaxy name, cluster
assignment number (CAN; see \S\ref{sec:clusass}), and the number of
spectra, redshifts, dispersions and \hbox{Mg$b$}\ and \hbox{Mg$_2$}\ linestrengths
obtained for this object; then, for each of redshift, dispersion, \hbox{Mg$b$}\
and \hbox{Mg$_2$}: the combined estimate, its estimated total error ($\Delta$)
and the weighted rms error from any repeat observations ($\delta$);
finally, the combined $S/N$ estimate and the overall quality parameter
(with an asterisk if the galaxy possesses emission lines). Note that
only objects with useful measurements are included; hence the lowest
quality class present in this table is $Q$=D, and the 7 galaxies with
only $Q$=E spectra (GINs 123, 284, 389, 448, 599, 637, 679) in
Table~\ref{tab:spectab} are omitted.
\begin{table*}
\centering
\caption{Spectroscopic parameters for the EFAR galaxies (abridged)}
\label{tab:galtab}
\begin{tabular}{rlrcrrrrrrrrrrrrrl}
(1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) & (10) & (11) & (12) & (13) & (14) & (15) & (16) & (17) & (18) \\
GIN & Galaxy & CAN & $N$ & $cz$ & $\Delta cz$ & $\delta cz$ & $\sigma$ & $\Delta\sigma$ & $\delta\sigma$ &
\hbox{Mg$b$} & $\Delta$\hbox{Mg$b$} & $\delta$\hbox{Mg$b$} & \hbox{Mg$_2$} & $\Delta$\hbox{Mg$_2$} & $\delta$\hbox{Mg$_2$} & $S/N$ & $Q$ \\
& Name & & $s~z~\sigma~b~2$ & \multicolumn{3}{c}{\dotfill(\hbox{\,km\,s$^{-1}$})\dotfill} & \multicolumn{3}{c}{\dotfill(\hbox{\,km\,s$^{-1}$})\dotfill} &
\multicolumn{3}{c}{\dotfill(\AA)\dotfill} & \multicolumn{3}{c}{\dotfill(mag)\dotfill} & & \vspace*{6pt} \\
1 & A76~A & 1 & 1~1~1~1~1 & 11296 & 53 & --- & 262 & 53 & --- & 5.20 & 0.57 & --- & 0.353 & 0.024 & --- & 21.4 & A \\
2 & A76~B & 1 & 1~1~1~1~1 & 11317 & 49 & --- & 224 & 49 & --- & 4.76 & 0.62 & --- & 0.326 & 0.024 & --- & 18.8 & B \\
3 & A76~C & 1 & 1~1~1~1~1 & 11973 & 51 & --- & 217 & 50 & --- & 5.17 & 0.70 & --- & 0.392 & 0.025 & --- & 17.8 & A \\
4 & A76~D & 1 & 1~1~1~1~1 & 12184 & 33 & --- & 316 & 150 & --- & 6.18 & 1.21 & --- & 0.326 & 0.041 & --- & 10.5 & C \\
5 & A76~E & 1 & 2~2~2~2~2 & 12189 & 14 & 47 & 184 & 10 & 23 & 6.26 & 0.18 & 0.09 & 0.295 & 0.010 & 0.018 & 46.7 & B \\
6 & A76~F & 1 & 1~1~1~1~1 & 12371 & 30 & --- & 307 & 85 & --- & 5.29 & 0.86 & --- & 0.331 & 0.028 & --- & 17.6 & B \\
8 & A85~A & 2 & 2~2~1~1~1 & 16604 & 35 & 49 & 290 & 53 & --- & 5.14 & 0.61 & --- & 0.344 & 0.022 & --- & 18.7 & B \\
9 & A85~B & 2 & 1~1~1~1~0 & 17349 & 35 & --- & 436 & 47 & --- & 5.74 & 0.25 & --- & --- & --- & --- & 37.6 & A \\
10 & A85~C & 102 & 1~1~0~0~0 & 22837 & 71 & --- & --- & --- & --- & --- & --- & --- & --- & --- & --- & 33.9 & D \\
11 & A85~1 & 2 & 1~1~1~1~1 & 15112 & 22 & --- & 195 & 19 & --- & 3.82 & 0.42 & --- & 0.253 & 0.018 & --- & 25.8 & B \\
12 & A85~2 & 2 & 1~1~1~1~1 & 16264 & 31 & --- & 294 & 37 & --- & 5.86 & 0.37 & --- & 0.337 & 0.019 & --- & 28.3 & B \\
13 & A119~A & 3 & 3~3~3~3~2 & 11459 & 15 & 17 & 297 & 15 & 13 & 5.04 & 0.16 & 0.31 & 0.316 & 0.009 & 0.010 & 56.4 & A \\
14 & A119~B & 3 & 2~2~2~2~2 & 13330 & 20 & 42 & 278 & 21 & 10 & 4.90 & 0.22 & 0.21 & 0.358 & 0.010 & 0.011 & 36.5 & A \\
15 & A119~C & 3 & 2~2~2~2~1 & 13500 & 16 & 11 & 253 & 17 & 6 & 5.57 & 0.20 & 0.14 & 0.295 & 0.011 & --- & 38.5 & A \\
16 & A119~D & 3 & 3~3~1~1~1 & 14982 & 16 & 9 & 104 & 18 & --- & 3.30 & 0.52 & --- & 0.151 & 0.023 & --- & 14.7 & C* \\
17 & A119~E & 3 & 2~2~2~2~1 & 12798 & 14 & 9 & 248 & 14 & 4 & 5.37 & 0.16 & 0.35 & 0.326 & 0.010 & --- & 45.1 & A \\
18 & A119~F & 3 & 2~2~2~2~1 & 13018 & 12 & 14 & 175 & 13 & 33 & 4.88 & 0.28 & 0.11 & 0.245 & 0.013 & --- & 32.1 & B \\
19 & A119~G & 3 & 2~1~1~1~0 & 13457 & 20 & --- & 267 & 21 & --- & 4.78 & 0.22 & --- & --- & --- & --- & 36.7 & A \\
21 & A119~I & 3 & 1~1~1~1~1 & 13271 & 22 & --- & 225 & 20 & --- & 4.85 & 0.41 & --- & 0.326 & 0.016 & --- & 31.0 & A \\
22 & A119~J & 3 & 1~1~1~1~0 & 13520 & 22 & --- & 219 & 25 & --- & 4.55 & 0.35 & --- & --- & --- & --- & 22.3 & B \\
23 & A119~1 & 103 & 1~1~1~1~0 & 4127 & 19 & --- & 92 & 54 & --- & 0.50 & 1.12 & --- & --- & --- & --- & 8.1 & C* \\
24 & A119~2 & 3 & 2~1~1~1~0 & 12346 & 17 & --- & 100 & 20 & --- & 5.13 & 0.39 & --- & --- & --- & --- & 20.4 & B \\
25 & J3~A & 4 & 1~1~1~1~1 & 14453 & 34 & --- & 333 & 114 & --- & 6.25 & 0.77 & --- & 0.329 & 0.026 & --- & 15.2 & B \\
26 & J3~B & 4 & 1~1~1~1~0 & 13519 & 19 & --- & 231 & 16 & --- & 4.93 & 0.26 & --- & --- & --- & --- & 39.5 & A \\
27 & J3~C & 4 & 1~1~1~1~1 & 13770 & 23 & --- & 163 & 57 & --- & 4.33 & 1.26 & --- & 0.224 & 0.040 & --- & 11.2 & C \\
28 & J3~D & 4 & 2~2~2~2~0 & 14610 & 15 & 258 & 231 & 13 & 37 & 5.18 & 0.20 & 0.29 & --- & --- & --- & 47.6 & A \\
31 & J4~A & 5 & 1~1~1~1~1 & 12074 & 26 & --- & 185 & 75 & --- & 5.35 & 1.11 & --- & 0.387 & 0.043 & --- & 9.3 & C \\
32 & J4~B & 5 & 1~1~1~1~1 & 12090 & 33 & --- & 302 & 148 & --- & 4.11 & 1.56 & --- & 0.261 & 0.047 & --- & 9.4 & C \\
33 & J4~C & 105 & 1~1~1~1~1 & 17154 & 36 & --- & 358 & 48 & --- & 6.20 & 0.32 & --- & 0.355 & 0.019 & --- & 30.0 & A \\
36 & A147~A & 6 & 4~4~4~4~4 & 12771 & 11 & 19 & 244 & 9 & 12 & 4.88 & 0.13 & 0.14 & 0.289 & 0.007 & 0.008 & 89.9 & A \\
37 & A147~B & 6 & 4~4~4~4~4 & 13119 & 11 & 15 & 316 & 10 & 20 & 4.68 & 0.11 & 0.30 & 0.304 & 0.006 & 0.017 & 104.2 & A \\
38 & A147~C & 6 & 3~3~3~3~3 & 13156 & 13 & 8 & 247 & 12 & 7 & 5.22 & 0.15 & 0.22 & 0.305 & 0.008 & 0.007 & 66.8 & A \\
39 & A147~D & 6 & 3~3~3~3~3 & 13444 & 12 & 2 & 185 & 9 & 15 & 4.98 & 0.18 & 0.28 & 0.294 & 0.008 & 0.004 & 67.8 & A \\
40 & A147~E & 6 & 3~3~3~3~3 & 13049 & 10 & 10 & 176 & 8 & 7 & 4.58 & 0.14 & 0.12 & 0.267 & 0.007 & 0.002 & 76.1 & A \\
41 & A147~F & 6 & 2~2~2~2~2 & 11922 & 12 & 5 & 87 & 10 & 4 & 3.65 & 0.27 & 0.02 & 0.195 & 0.012 & 0.002 & 37.9 & B \\
42 & A147~1 & 6 & 3~3~3~3~3 & 12832 & 11 & 29 & 148 & 9 & 3 & 4.35 & 0.16 & 0.09 & 0.252 & 0.008 & 0.013 & 65.8 & A \\
43 & A160~A & 7 & 2~2~2~2~1 & 11401 & 15 & 3 & 181 & 17 & 19 & 3.86 & 0.26 & 0.04 & 0.250 & 0.017 & --- & 35.1 & A \\
44 & A160~B & 107 & 3~3~1~1~1 & 18258 & 24 & 17 & 192 & 21 & --- & 6.61 & 0.30 & --- & 0.289 & 0.022 & --- & 26.3 & B \\
45 & A160~C & 7 & 1~1~1~1~0 & 12380 & 34 & --- & 412 & 58 & --- & 4.61 & 0.37 & --- & --- & --- & --- & 27.8 & A \\
46 & A160~D & 107 & 3~3~0~0~0 & 18271 & 41 & 84 & --- & --- & --- & --- & --- & --- & --- & --- & --- & 36.1 & D \\
47 & A160~E & 7 & 4~4~4~4~2 & 14056 & 13 & 24 & 226 & 16 & 12 & 5.01 & 0.23 & 0.26 & 0.293 & 0.017 & 0.014 & 39.2 & A \\
48 & A160~F & 7 & 3~3~3~3~2 & 13657 & 14 & 16 & 176 & 18 & 27 & 5.18 & 0.24 & 0.43 & 0.307 & 0.013 & 0.009 & 33.9 & A \\
49 & A160~G & 7 & 4~4~4~4~2 & 13137 & 13 & 25 & 196 & 20 & 22 & 4.82 & 0.25 & 0.39 & 0.293 & 0.015 & 0.056 & 32.4 & B \\
50 & A160~H & 7 & 1~1~1~1~0 & 13589 & 21 & --- & 195 & 23 & --- & 5.18 & 0.30 & --- & --- & --- & --- & 21.6 & A \\
51 & A160~I & 107 & 3~3~0~0~0 & 18643 & 41 & 30 & --- & --- & --- & --- & --- & --- & --- & --- & --- & 43.9 & D \\
52 & A160~J & 7 & 4~4~4~4~1 & 11254 & 9 & 13 & 145 & 10 & 15 & 2.74 & 0.20 & 0.32 & 0.217 & 0.041 & --- & 46.1 & A* \\
53 & A160~1 & 107 & 2~2~0~0~0 & 18108 & 50 & 103 & --- & --- & --- & --- & --- & --- & --- & --- & --- & 27.3 & D* \\
54 & A160~2 & 107 & 1~1~0~0~0 & 18201 & 71 & --- & --- & --- & --- & --- & --- & --- & --- & --- & --- & 21.7 & D \\
56 & A168~A & 108 & 1~1~1~1~0 & 5299 & 19 & --- & 265 & 16 & --- & 4.35 & 0.26 & --- & --- & --- & --- & 51.7 & A \\
57 & A168~B & 108 & 1~1~1~1~0 & 5253 & 23 & --- & 310 & 25 & --- & 5.29 & 0.30 & --- & --- & --- & --- & 43.6 & A \\
\end{tabular}
\vspace*{6pt} \\ \parbox{\textwidth}{This is an abridged version of this
table; the full table will be available upon publication from NASA's
Astrophysical Data Center (ADC) and from the Centre de Donn\'{e}es
astronomiques de Strasbourg (CDS). The columns of this table give:
(1)~galaxy identification number (GIN); (2)~galaxy name; (3)~the cluster
assignment number (CAN); (4)~the number of spectra $N_s$, redshifts
$N_z$, dispersions $N_{\sigma}$, \hbox{Mg$b$}\ linestrengths $N_b$ and \hbox{Mg$_2$}\
linestrengths $N_2$ obtained for this object; then the combined
estimate, its estimated total error ($\Delta$) and the weighted rms
error from any repeat observations ($\delta$) for each of
(5--7)~redshift, (8--10)~dispersion, (11--13)~\hbox{Mg$b$}\ linestrength and
(14--16)~\hbox{Mg$_2$}\ linestrength; (17)~the combined $S/N$ estimate; and
(18)~the overall quality parameter (with an asterisk if the galaxy
possesses emission lines). Only objects with useful measurements are
included; hence the lowest quality class present in this table is $Q$=D,
and the 7 galaxies with only $Q$=E spectra (GINs 123, 284, 389, 448,
599, 637, 679) are omitted.}
\end{table*}
The cumulative distributions of uncertainties in the combined results
are shown in Figure~\ref{fig:errdist2}, both for the entire dataset and
for quality classes A, B and C separately. The error distributions can
be quantitatively characterised by their 50\% and 90\% points, which are
listed in Table~\ref{tab:typerrs2}. The overall median error in redshift
is 20\hbox{\,km\,s$^{-1}$}, the median relative errors in dispersion and \hbox{Mg$b$}\ are 9.1\%
and 7.2\%, and the median error in \hbox{Mg$_2$}\ is 0.015~mag. For the whole
sample, and for quality classes A and B, the median errors in the
combined measurements are smaller than the median errors in the
individual measurements, as one expects. However, for dispersion, \hbox{Mg$b$}\
and \hbox{Mg$_2$}\ the combined errors exceed the individual errors for
quality class C and at the 90th percentile; this results from assigning a
quality weighting of 0.5 to $Q$=C measurements of these quantities when
they are combined.
The uncertainties listed in Table~\ref{tab:galtab} represent the best
estimates of the total errors in the parameters for each galaxy. However
it must be emphasised that they are {\em not} independent of each other,
as the run correction errors are correlated across all measurements from
a run. To properly simulate the joint distribution of some parameter for
the whole dataset, one must first generate realisations of the run
correction errors (drawn from Gaussians with standard deviations given
by the uncertainties listed in Table~\ref{tab:runcorr}) and the
individual measurement errors (drawn from Gaussians with standard
deviations given by the uncertainties listed in Table~\ref{tab:spectab}).
For each individual measurement, one must add the realisation of its
measurement error and the realisation of the appropriate run correction
error (the same for all measurements in a given run) to the measured
value of the parameter. The resulting realisations of the individual
measurements are finally combined using the recipe described above to
yield a realisation of the value of the parameter for each galaxy in the
dataset.
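The resampling recipe above can be sketched as follows. This is a minimal
illustration only: the array values, the run assignments and the simple
inverse-variance weighting are hypothetical stand-ins for the paper's full
combining recipe (which also involves quality weightings).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inputs: one measured value per spectrum, its measurement
# error, the run each spectrum came from, and the run-correction errors.
measured = np.array([5.10, 5.30, 4.95])   # e.g. three measurements of one galaxy
meas_err = np.array([0.20, 0.25, 0.15])   # individual measurement errors
run_of   = np.array([0, 0, 1])            # run index of each measurement
run_err  = np.array([0.05, 0.08])         # run-correction uncertainties
weights  = 1.0 / meas_err**2              # stand-in for the combining weights

def realisation():
    """One realisation of the combined galaxy parameter.

    The run-correction draw is shared by all measurements from the same
    run, so those errors are correlated; the measurement draws are
    independent.
    """
    run_draw  = rng.normal(0.0, run_err)    # one draw per run
    meas_draw = rng.normal(0.0, meas_err)   # one draw per measurement
    values = measured + meas_draw + run_draw[run_of]
    # Combine the realised individual measurements into a per-galaxy value.
    return np.sum(weights * values) / np.sum(weights)

samples = np.array([realisation() for _ in range(10000)])
print(samples.mean(), samples.std())
```

The spread of `samples` then reflects both error components, including
the correlation induced by shared run corrections.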
\begin{table}
\centering
\caption{The distribution of errors per galaxy}
\label{tab:typerrs2}
\begin{tabular}{ccccccccc}
$Q$ & \multicolumn{2}{c}{$\Delta cz$ (km/s)} &
\multicolumn{2}{c}{$\Delta\sigma/\sigma$} &
\multicolumn{2}{c}{$\Delta$\hbox{Mg$b$}/\hbox{Mg$b$}} &
\multicolumn{2}{c}{$\Delta$\hbox{Mg$_2$}\ (mag)} \\
& 50\% & 90\% & 50\% & 90\% & 50\% & 90\% & 50\% & 90\% \\
All & 20 & 36 & 0.091 & 0.240 & 0.072 & 0.188 & 0.015 & 0.032 \\
A & 17 & 30 & 0.067 & 0.161 & 0.053 & 0.120 & 0.012 & 0.023 \\
B & 23 & 44 & 0.118 & 0.270 & 0.103 & 0.194 & 0.018 & 0.033 \\
C & 24 & 47 & 0.220 & 0.507 & 0.180 & 0.425 & 0.030 & 0.048 \\
\end{tabular}
\end{table}
\begin{figure}
\plotone{errdist2.ps}
\caption{The cumulative distributions of the total estimated errors in
the combined measurements of redshift, velocity dispersion, \hbox{Mg$b$}\ and
\hbox{Mg$_2$}\ for each galaxy. The distributions for quality classes A, B and
C are shown as full, long-dashed and short-dashed lines respectively;
the overall distribution is shown as the thick full line. (a)~The
distribution of combined errors in $cz$; (b)~combined relative errors in
$\sigma$; (c)~combined relative errors in \hbox{Mg$b$}; (d)~combined errors in
\hbox{Mg$_2$}.
\label{fig:errdist2}}
\end{figure}
The distributions of redshift, velocity dispersion, \hbox{Mg$b$}\ and \hbox{Mg$_2$}\
for the galaxies in the EFAR sample are displayed in
Figure~\ref{fig:specpar}. The galaxies for which we measured velocity
dispersions are only a subset of our sample of programme galaxies
(629/743), and represent a refinement of the sample selection criteria.
Figure~\ref{fig:dwsig} shows the fraction of programme galaxies with
measured dispersions as a function of the galaxy diameter $D_W$ on which
the selection function of the programme galaxy sample is defined. There
is a steady decline in the fraction of the sample for which usable
dispersions were measured, from 100\% for the largest galaxies (with
$D_W \gs 40$~arcsec) to about 75\% for the smallest (with 8~arcsec~$\ls
D_W \ls$~15~arcsec; there are only 3 programme galaxies with $D_W
<$~8~arcsec). This additional selection effect must be allowed for when
determining Fundamental Plane distances.
\begin{figure}
\plotone{specpar.ps}
\caption{The distributions of (a)~redshift, (b)~velocity dispersion,
(c)~\hbox{Mg$b$}\ linestrength, and (d)~\hbox{Mg$_2$}\ linestrength for the galaxies in
the EFAR sample.
\label{fig:specpar}}
\end{figure}
\begin{figure}
\plotone{dwsig.ps}
\caption{The fraction of programme objects for which we measured a
velocity dispersion as a function of the logarithm of the selection
diameter $D_W$ (in arcsec).
\label{fig:dwsig}}
\end{figure}
\subsection{Internal and External Comparisons}
\label{ssec:compare}
One of the strengths of our spectroscopic sample is the high fraction
of objects with repeat observations: there are 375 galaxies with a
single dispersion measurement, 160 with two measurements and 141 with
three or more measurements. Figure~\ref{fig:errdist3} shows the
cumulative distributions of rms errors in redshift, dispersion, \hbox{Mg$b$}\
and \hbox{Mg$_2$}\ obtained from these repeat observations. The detailed
internal comparisons made possible by these repeat measurements have
been used to establish the run corrections (\S\ref{ssec:combruns}) and
to calibrate the estimated errors (\S\ref{ssec:caliberr}). The latter
process ensured that the estimated errors were statistically
consistent with the rms errors of the repeat measurements.
\begin{figure}
\plotone{errdist3.ps}
\caption{The cumulative distributions of the rms errors from repeat
measurements of redshift, velocity dispersion, \hbox{Mg$b$}\ and \hbox{Mg$_2$}. The
distributions for spectral types A, B and C are shown as full,
long-dashed and short-dashed lines respectively; the overall
distribution is shown as the thick full line. (a)~The distribution of
rms errors in $cz$ in \hbox{\,km\,s$^{-1}$}; (b)~relative rms errors in $\sigma$;
(c)~relative rms errors in \hbox{Mg$b$}; (d)~rms errors in \hbox{Mg$_2$}.
\label{fig:errdist3}}
\end{figure}
We also make external comparisons of our measurements with the work of
other authors. The EFAR redshifts are compared in
Figure~\ref{fig:compcz} with redshifts given in the literature by the
7~Samurai (Davies \hbox{et~al.}\ 1987), Dressler \& Shectman (1988), Beers
\hbox{et~al.}\ (1991), Malumuth \hbox{et~al.}\ (1992), Zabludoff \hbox{et~al.}\ (1993), Colless
\& Dunn (1996) and Lucey \hbox{et~al.}\ (1997). Only 11 of the 256 comparisons
give redshift differences greater than 300\hbox{\,km\,s$^{-1}$}: in 6 cases the EFAR
redshift is confirmed either by repeat measurements or other published
measurements; in the remaining 5 cases the identification of the galaxy
in question is uncertain in the literature. For the 245 cases where the
redshift difference is less than 300\hbox{\,km\,s$^{-1}$}, there is no significant
velocity zeropoint error and the rms scatter is 85\hbox{\,km\,s$^{-1}$}. Since our repeat
measurements show much smaller errors (90\% are less than 36\hbox{\,km\,s$^{-1}$}), most
of this scatter must arise in the literature data, some of which were
taken at lower resolution or $S/N$ than our data.
\begin{figure}
\plotone{compcz.ps}
\caption{Differences between EFAR redshifts and those from various
sources in the literature.
\label{fig:compcz}}
\end{figure}
Figure~\ref{fig:compsig} compares the EFAR dispersions with published
dispersions from the work of the 7~Samurai (Davies \hbox{et~al.}\ 1987),
Guzm\'{a}n (1993), J{\o}rgensen \hbox{et~al.}\ (1995) and Lucey \hbox{et~al.}\ (1997),
and the compilation of earlier measurements by Whitmore \hbox{et~al.}\ (1985).
Note that we do not compare to the more recent compilation by McElroy
(1995), since its overlap with our sample is essentially just the sum of
the above sources. The mean differences,
$\Delta=\log\sigma_{EFAR}-\log\sigma_{lit}$, and their standard errors
are indicated on the figure; none of these scale differences is larger
than 6\% and in fact all five comparisons are consistent with zero scale
error at the 2$\sigma$ level or better. The rms scatter in these
comparisons is significantly greater than the errors in our dispersion
measurements, implying that in general the literature measurements have
larger errors and/or that there are unaccounted-for uncertainties in the
comparison.
\begin{figure}
\plotone{compsig.ps}
\caption{Comparisons of EFAR dispersions with those from various sources
in the literature: (a)~Davies \hbox{et~al.}\ (1987), (b)~Guzm\'{a}n (1993),
(c)~J{\o}rgensen (1997), (d) Lucey \hbox{et~al.}\ (1997), and (e)~Whitmore
\hbox{et~al.}\ (1985). In each case the mean difference,
$\Delta=\langle\log\sigma_{EFAR}-\log\sigma_{lit}\rangle$, and its
standard error are indicated, along with the rms scatter and the number
of galaxies in the comparison.
\label{fig:compsig}}
\end{figure}
We determine the zeropoint calibration of our linestrength measurements
with respect to the Lick system (see \S\ref{ssec:indices}) by comparing
our \hbox{Mg$b^\prime$}\ and \hbox{Mg$_2$}\ linestrengths to measurements for the same
galaxies given by Trager \hbox{et~al.}\ (1998). We find that slightly different
calibrations are needed for objects with different redshifts, the result
of slight variations in the non-linear continuum shape as the spectra
are redshifted with respect to the instrument response and the sky
background (see \S\ref{ssec:indices}). Good agreement with Trager \hbox{et~al.}\
is obtained if we use different zeropoints for galaxies with redshifts
above and below $cz$=3000\hbox{\,km\,s$^{-1}$}\ (although there are no objects in the
comparison at $cz$$>$10000\hbox{\,km\,s$^{-1}$}). Excluding a few outliers, we find
weighted mean differences between the EFAR and Trager \hbox{et~al.}\
linestrengths of $\langle\Delta\hbox{Mg$b^\prime$}\rangle$=$-0.022$~mag and
$\langle\Delta\hbox{Mg$_2$}\rangle$=$-0.083$~mag for $cz$$<$3000\hbox{\,km\,s$^{-1}$}, and
$\langle\Delta\hbox{Mg$b^\prime$}\rangle$=$-0.008$~mag and
$\langle\Delta\hbox{Mg$_2$}\rangle$=$-0.028$~mag for $cz$$>$3000\hbox{\,km\,s$^{-1}$}.
Subtracting these zeropoint corrections gives the final,
fully-corrected, linestrength measurements as listed in
Tables~\ref{tab:spectab} \&~\ref{tab:galtab}. Figures~\ref{fig:mgbcomp}
\&~\ref{fig:mg2comp} show the residual differences between the EFAR and
Trager \hbox{et~al.}\ linestrength measurements after applying these zeropoint
corrections. The rms scatter is 0.019~mag in \hbox{Mg$b^\prime$}\ for the 41 objects
in common, and 0.023~mag in \hbox{Mg$_2$}\ for the 24 objects in common. There
is no statistically significant trend with linestrength, velocity
dispersion or redshift remaining in the residuals after these zeropoint
corrections are applied.
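The redshift-dependent zeropoint correction described above can be
sketched as below; the function and dictionary names are ours, and the
offsets are the weighted mean differences quoted in the text.

```python
# Zeropoint offsets (EFAR minus Trager et al.) quoted in the text, in mag,
# for the two redshift regimes separated at cz = 3000 km/s.
OFFSETS = {
    "low":  {"Mgb_prime": -0.022, "Mg2": -0.083},   # cz < 3000 km/s
    "high": {"Mgb_prime": -0.008, "Mg2": -0.028},   # cz > 3000 km/s
}

def correct_linestrength(value, index, cz):
    """Subtract the redshift-dependent zeropoint offset from a raw index."""
    regime = "low" if cz < 3000.0 else "high"
    return value - OFFSETS[regime][index]

# A hypothetical Mg2 measurement of 0.300 mag at cz = 12000 km/s
# comes out approximately 0.028 mag higher after correction.
print(correct_linestrength(0.300, "Mg2", 12000.0))
```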
\begin{figure}
\plotone{mgbcomp.ps}
\caption{The residual differences in \hbox{Mg$b^\prime$}\ linestrengths from EFAR and
Trager \hbox{et~al.}\ (1998) {\em after} applying the zeropoint corrections
discussed in the text: (a)~the distribution of residuals; (b)~the
residuals as a function of \hbox{Mg$b^\prime$}; (c)~the residuals as a function of
$\log\sigma$; (d)~the residuals as a function of redshift. Outliers
excluded from the determination of the zeropoint corrections are shown
as crosses.
\label{fig:mgbcomp}}
\vspace*{0.5cm}
\end{figure}
\begin{figure}
\plotone{mg2comp.ps}
\caption{The residual differences in \hbox{Mg$_2$}\ linestrengths from EFAR and
Trager \hbox{et~al.}\ (1998) {\em after} applying the zeropoint corrections
discussed in the text: (a)~the distribution of residuals; (b)~the
residuals as a function of \hbox{Mg$_2$}; (c)~the residuals as a function of
$\log\sigma$; (d)~the residuals as a function of redshift. Outliers
excluded from the determination of the zeropoint corrections are shown
as crosses.
\label{fig:mg2comp}}
\vspace*{0.5cm}
\end{figure}
Figure~\ref{fig:compmg2} compares our calibrated \hbox{Mg$_2$}\ linestrengths
to those obtained in A2199, A2634 and Coma by Lucey \hbox{et~al.}\ (1997). The
overall agreement for the 36 objects in common is very good, with a
statistically non-significant zeropoint offset and an rms scatter of
0.029~mag, similar to that found in the comparison with Trager \hbox{et~al.}
\begin{figure}
\plotone{compmg2.ps}
\caption{Comparisons of \hbox{Mg$_2$}\ linestrengths obtained by EFAR and
Lucey \hbox{et~al.}\ (1997). The mean difference, $\Delta$ = \hbox{Mg$_2$}(EFAR) $-$
\hbox{Mg$_2$}(Lucey), and its standard error are indicated, along with the
rms scatter and the number of galaxies in the comparison.
\label{fig:compmg2}}
\vspace*{0.5cm}
\end{figure}
\begin{figure}
\vspace*{0.9cm}
\plotone{mgbmg2.ps}
\caption{The relation between \hbox{Mg$b^\prime$}\ and \hbox{Mg$_2$}\ and its maximum
likelihood fit. Ellipticals are marked by circles, E/S0s by squares, cDs
by asterisks and spirals by triangles. Typical estimated errors are
shown in the top left corner. The relation between \hbox{Mg$b^\prime$}\ and \hbox{Mg$_2$}\ as
a function of age and metallicity, as predicted by Worthey (1994), is
shown as the grid lying parallel to, but offset from, the data.
\label{fig:mgbmg2}}
\vspace*{0.5cm}
\end{figure}
The relation between the measured \hbox{Mg$b^\prime$}\ and \hbox{Mg$_2$}\ linestrengths for
all the galaxies in the EFAR sample is shown in Figure~\ref{fig:mgbmg2}.
We fit this relation using a maximum likelihood technique which accounts
for both measurement errors and selection effects (Saglia \hbox{et~al.}\ 1998,
in preparation; Paper~VI). We find
\begin{equation}
\hbox{Mg$_2$} = 1.94 \hbox{Mg$b^\prime$} - 0.05 ~,
\label{eqn:mgbmg2}
\end{equation}
with a perpendicular rms residual of 0.019~mag (corresponding to an rms
of 0.041~mag in \hbox{Mg$_2$}, or 0.021~mag in \hbox{Mg$b^\prime$}). The relation is the same
if we fit ellipticals, E/S0s, cDs or spirals separately. This relation
is similar to those derived by Burstein \hbox{et~al.}\ (1984) and J{\o}rgensen
(1997). We can therefore use \hbox{Mg$b^\prime$}\ as a predictor of \hbox{Mg$_2$}\ (albeit
with larger uncertainties) for those cases where \hbox{Mg$_2$}\ cannot be
measured directly.
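Using the fitted relation as a predictor is straightforward; a minimal
sketch (the function name and the example input value are ours):

```python
def mg2_from_mgb_prime(mgb_prime):
    """Predict Mg2 from Mgb' via the maximum-likelihood fit in the text:
    Mg2 = 1.94 Mgb' - 0.05."""
    return 1.94 * mgb_prime - 0.05

# The rms of 0.041 mag in Mg2 quoted in the text sets the typical
# uncertainty of such predictions.
MG2_PRED_RMS = 0.041

print(mg2_from_mgb_prime(0.180))  # hypothetical Mgb' value in mag
```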
Also shown in Figure~\ref{fig:mgbmg2} is the predicted relation between
\hbox{Mg$b^\prime$}\ and \hbox{Mg$_2$}\ as a function of age and metallicity given by Worthey
(1994). His models correctly predict the slope of the relation, but are
offset by $-0.025$~mag in \hbox{Mg$b^\prime$} (or by $+0.05$~mag in \hbox{Mg$_2$}),
indicating a difference in the model's zeropoint calibration for one or
both indices.
\section{CLUSTER ASSIGNMENTS}
\label{sec:clusass}
The correct assignment of galaxies to clusters (or groups) is crucial to
obtaining reliable redshifts and distances for the EFAR cluster
sample. We also need to increase the precision of the cluster redshifts
in order to minimise uncertainties in the clusters' peculiar velocities.
To achieve these goals we merged the EFAR redshifts with redshifts for
all galaxies in ZCAT (Huchra \hbox{et~al.}, 1992; version of 1997 May 29) which
lie within 3\hbox{\,h$^{-1}$\,Mpc}\ (2 Abell radii) of each nominal EFAR cluster centre
(see Table~1 of Paper~I). We then examined the redshift distributions of
the combined sample in order to distinguish groups, clusters and field
galaxies along the line of sight to a nominal EFAR `cluster'. We also
considered the distribution of galaxies on the sky before assigning the
EFAR galaxies to specific groupings.
\begin{figure*}
\plotfull{cluz1.ps}
\caption{The redshift distributions of galaxies within 3\hbox{\,h$^{-1}$\,Mpc}\ of each
nominal EFAR cluster using the EFAR and ZCAT data. Each distribution is
labelled at top right by the nominal cluster ID number (CID). The solid
histogram shows the distribution of EFAR galaxies; the open histogram
shows the extra ZCAT galaxies. The groupings adopted have boundaries in
redshift marked by dotted lines and are labelled by their cluster
assignment number (CAN). Clusters without numbers and boundaries contain
no EFAR galaxies.
\label{fig:cluz}}
\end{figure*}
\addtocounter{figure}{-1}
\begin{figure*}
\plotfull{cluz2.ps}
\caption{{\em (ctd)}}
\vspace*{1cm}
\end{figure*}
\addtocounter{figure}{-1}
\begin{figure*}
\plotfull{cluz3.ps}
\caption{{\em (ctd)}}
\vspace*{1cm}
\end{figure*}
\addtocounter{figure}{-1}
\begin{figure*}
\plotfull{cluz4.ps}
\caption{{\em (ctd)}}
\vspace*{1cm}
\end{figure*}
\begin{table*}
\centering
\caption{Cluster mean redshifts and velocity dispersions}
\label{tab:cluz}
\renewcommand{\arraystretch}{0.85}
\begin{tabular}{rrrrrrrrrrrrrrrr}
& \multicolumn{3}{c}{\dotfill EFAR\dotfill} &
\multicolumn{3}{c}{\dotfill EFAR+ZCAT\dotfill} & & &
& \multicolumn{3}{c}{\dotfill EFAR\dotfill} &
\multicolumn{3}{c}{\dotfill EFAR+ZCAT\dotfill} \\
~CAN &
$N$ & \multicolumn{1}{c}{$\langle cz \rangle$} & \multicolumn{1}{c}{$\sigma$} &
$N$ & \multicolumn{1}{c}{$\langle cz \rangle$} & \multicolumn{1}{c}{$\sigma$} & & &
~CAN &
$N$ & \multicolumn{1}{c}{$\langle cz \rangle$} & \multicolumn{1}{c}{$\sigma$} &
$N$ & \multicolumn{1}{c}{$\langle cz \rangle$} & \multicolumn{1}{c}{$\sigma$}\\
\multicolumn{16}{c}{~} \\
1 & 6 & 11888 $\pm$ 191 & 468 & 6 & 11888 $\pm$ 191 & 468 &&& 241 & 2 & 20440 $\pm$ \n94 & 134 & 2 & 20440 $\pm$ \n94 & 134 \\
2 & 4 & 16332 $\pm$ 466 & 931 & 19 & 16454 $\pm$ 227 & 991 &&& 42 & 6 & 8783 $\pm$ 114 & 280 & 11 & 8856 $\pm$ \n93 & 309 \\
102 & 1 & 22837 $\pm$ \n71 & --- & 11 & 23340 $\pm$ 134 & 444 &&& 142 & 4 & 15863 $\pm$ 146 & 292 & 5 & 15917 $\pm$ 125 & 280 \\
3 & 10 & 13168 $\pm$ 287 & 907 & 45 & 13280 $\pm$ 158 & 1060 &&& 43 & 9 & 13667 $\pm$ \n82 & 246 & 17 & 13536 $\pm$ \n81 & 332 \\
103 & 1 & 4127 $\pm$ \n18 & --- & 1 & 4127 $\pm$ \n18 & --- &&& 143 & 1 & 21438 $\pm$ \n20 & --- & 1 & 21438 $\pm$ \n20 & --- \\
4 & 4 & 14090 $\pm$ 263 & 527 & 4 & 14090 $\pm$ 263 & 527 &&& 44 & 8 & 16950 $\pm$ 191 & 539 & 11 & 17184 $\pm$ 191 & 633 \\
5 & 2 & 12082 $\pm$ \phantom{0}\n8 & 11 & 3 & 11994 $\pm$ \n88 & 153 &&& 144 & 2 & 13190 $\pm$ \n23 & 33 & 2 & 13190 $\pm$ \n23 & 33 \\
105 & 1 & 17154 $\pm$ \n30 & --- & 1 & 17154 $\pm$ \n30 & --- &&& 244 & 1 & 21199 $\pm$ \n64 & --- & 1 & 21199 $\pm$ \n64 & --- \\
6 & 7 & 12899 $\pm$ 183 & 484 & 11 & 13193 $\pm$ 177 & 588 &&& 45 & 5 & 11123 $\pm$ 143 & 320 & 5 & 11123 $\pm$ 143 & 320 \\
7 & 7 & 12782 $\pm$ 425 & 1123 & 8 & 12625 $\pm$ 400 & 1131 &&& 145 & 3 & 15757 $\pm$ 109 & 188 & 3 & 15757 $\pm$ 109 & 188 \\
107 & 5 & 18296 $\pm$ \n91 & 204 & 5 & 18296 $\pm$ \n91 & 204 &&& 245 & 1 & 14024 $\pm$ \n39 & --- & 1 & 14024 $\pm$ \n39 & --- \\
8 & 5 & 13466 $\pm$ \n49 & 109 & 12 & 13414 $\pm$ \n33 & 116 &&& 345 & 1 & 7789 $\pm$ \n36 & --- & 1 & 7789 $\pm$ \n36 & --- \\
108 & 2 & 5276 $\pm$ \n23 & 33 & 5 & 5000 $\pm$ 136 & 303 &&& 46 & 9 & 11136 $\pm$ 197 & 591 & 12 & 11321 $\pm$ 208 & 719 \\
9 & 2 & 5212 $\pm$ 376 & 531 & 19 & 5482 $\pm$ 103 & 447 &&& 146 & 1 & 23945 $\pm$ \n21 & --- & 2 & 24166 $\pm$ 221 & 313 \\
109 & 1 & 9599 $\pm$ \n22 & --- & 15 & 9473 $\pm$ 126 & 486 &&& 47 & 1 & 13876 $\pm$ \n26 & --- & 9 & 13576 $\pm$ 113 & 338 \\
10 & 7 & 15546 $\pm$ 270 & 715 & 7 & 15546 $\pm$ 270 & 715 &&& 48 & 7 & 13466 $\pm$ 114 & 302 & 22 & 13455 $\pm$ \n78 & 364 \\
110 & 1 & 12206 $\pm$ \n50 & --- & 1 & 12206 $\pm$ \n50 & --- &&& 148 & 1 & 23099 $\pm$ \n71 & --- & 3 & 23233 $\pm$ 134 & 231 \\
210 & 1 & 20957 $\pm$ \n12 & --- & 1 & 20957 $\pm$ \n12 & --- &&& 49 & 7 & 9944 $\pm$ 247 & 653 & 64 & 10528 $\pm$ \n80 & 640 \\
11 & 4 & 14747 $\pm$ 163 & 325 & 19 & 14499 $\pm$ 179 & 782 &&& 50 & 9 & 10427 $\pm$ 193 & 580 & 67 & 10548 $\pm$ \n89 & 730 \\
12 & 10 & 12342 $\pm$ 142 & 448 & 11 & 12315 $\pm$ 131 & 435 &&& 51 & 6 & 12593 $\pm$ 127 & 312 & 69 & 12353 $\pm$ \n84 & 699 \\
112 & 1 & 20993 $\pm$ \n41 & --- & 1 & 20993 $\pm$ \n41 & --- &&& 52 & 6 & 9982 $\pm$ 262 & 642 & 12 & 10215 $\pm$ 146 & 504 \\
13 & 8 & 11074 $\pm$ 213 & 603 & 10 & 10944 $\pm$ 191 & 605 &&& 53 & 16 & 10786 $\pm$ 149 & 598 & 49 & 10675 $\pm$ 131 & 917 \\
14 & 10 & 5145 $\pm$ 100 & 317 & 59 & 4935 $\pm$ \n61 & 471 &&& 54 & 1 & 4410 $\pm$ \n18 & --- & 16 & 4699 $\pm$ 113 & 452 \\
15 & 6 & 10725 $\pm$ 204 & 499 & 8 & 10655 $\pm$ 214 & 606 &&& 154 & 1 & 13830 $\pm$ \n20 & --- & 3 & 13818 $\pm$ 212 & 367 \\
16 & 9 & 9376 $\pm$ 161 & 484 & 9 & 9376 $\pm$ 161 & 484 &&& 254 & 1 & 26612 $\pm$ \n73 & --- & 1 & 26612 $\pm$ \n73 & --- \\
17 & 7 & 14417 $\pm$ 196 & 519 & 9 & 14355 $\pm$ 163 & 490 &&& 55 & 3 & 9635 $\pm$ \n39 & 68 & 7 & 9636 $\pm$ \n29 & 77 \\
18 & 7 & 8353 $\pm$ 169 & 448 & 11 & 8206 $\pm$ 232 & 770 &&& 155 & 1 & 20285 $\pm$ \n29 & --- & 1 & 20285 $\pm$ \n29 & --- \\
19 & 2 & 9256 $\pm$ 511 & 723 & 2 & 9256 $\pm$ 511 & 723 &&& 56 & 3 & 26280 $\pm$ 339 & 587 & 3 & 26280 $\pm$ 339 & 587 \\
20 & 8 & 9676 $\pm$ 172 & 486 & 14 & 9663 $\pm$ 126 & 472 &&& 156 & 1 & 7955 $\pm$ \n15 & --- & 1 & 7955 $\pm$ \n15 & --- \\
120 & 1 & 16841 $\pm$ \n15 & --- & 1 & 16841 $\pm$ \n15 & --- &&& 256 & 1 & 12730 $\pm$ \n18 & --- & 1 & 12730 $\pm$ \n18 & --- \\
21 & 13 & 7241 $\pm$ 210 & 757 & 86 & 7253 $\pm$ \n72 & 663 &&& 356 & 1 & 21558 $\pm$ \n71 & --- & 1 & 21558 $\pm$ \n71 & --- \\
22 & 3 & 8666 $\pm$ \n28 & 48 & 4 & 8649 $\pm$ \n26 & 53 &&& 57 & 6 & 9520 $\pm$ 107 & 263 & 22 & 9582 $\pm$ 132 & 617 \\
23 & 9 & 20400 $\pm$ 188 & 565 & 9 & 20400 $\pm$ 188 & 565 &&& 157 & 1 & 16075 $\pm$ \n45 & --- & 4 & 16237 $\pm$ 192 & 384 \\
123 & 2 & 11852 $\pm$ \n34 & 49 & 2 & 11852 $\pm$ \n34 & 49 &&& 58 & 16 & 10943 $\pm$ 197 & 789 & 99 & 11106 $\pm$ \n79 & 781 \\
24 & 6 & 9651 $\pm$ 204 & 499 & 137 & 9854 $\pm$ \n66 & 776 &&& 59 & 8 & 12864 $\pm$ 242 & 683 & 16 & 12693 $\pm$ 215 & 862 \\
25 & 10 & 10999 $\pm$ 204 & 646 & 12 & 11021 $\pm$ 172 & 596 &&& 60 & 5 & 13993 $\pm$ 142 & 318 & 11 & 13707 $\pm$ 119 & 393 \\
26 & 3 & 8677 $\pm$ 297 & 514 & 12 & 8937 $\pm$ 189 & 653 &&& 160 & 2 & 10474 $\pm$ 651 & 921 & 6 & 10730 $\pm$ 223 & 547 \\
27 & 5 & 4540 $\pm$ 188 & 420 & 30 & 4436 $\pm$ \n65 & 355 &&& 260 & 1 & 18545 $\pm$ \n26 & --- & 1 & 18545 $\pm$ \n26 & --- \\
28 & 5 & 10030 $\pm$ 363 & 813 & 17 & 9884 $\pm$ 208 & 857 &&& 61 & 2 & 4742 $\pm$ \n16 & 11 & 5 & 4878 $\pm$ \n90 & 202 \\
128 & 1 & 16920 $\pm$ \n27 & --- & 1 & 16920 $\pm$ \n27 & --- &&& 62 & 5 & 14918 $\pm$ 167 & 373 & 6 & 14991 $\pm$ 154 & 378 \\
29 & 5 & 9492 $\pm$ 339 & 759 & 18 & 9528 $\pm$ 181 & 769 &&& 63 & 3 & 9699 $\pm$ \n92 & 159 & 7 & 9777 $\pm$ 116 & 306 \\
129 & 1 & 16483 $\pm$ \n18 & --- & 1 & 16483 $\pm$ \n18 & --- &&& 163 & 5 & 15563 $\pm$ 273 & 610 & 5 & 15563 $\pm$ 273 & 610 \\
30 & 1 & 12711 $\pm$ \n19 & --- & 2 & 12576 $\pm$ 135 & 191 &&& 263 & 1 & 27486 $\pm$ \n71 & --- & 1 & 27486 $\pm$ \n71 & --- \\
130 & 1 & 16760 $\pm$ \n14 & --- & 1 & 16760 $\pm$ \n14 & --- &&& 64 & 5 & 9062 $\pm$ 154 & 345 & 28 & 9423 $\pm$ 113 & 598 \\
31 & 3 & 7288 $\pm$ \n47 & 81 & 5 & 7289 $\pm$ \n31 & 69 &&& 65 & 11 & 9223 $\pm$ 171 & 568 & 46 & 9137 $\pm$ \n97 & 659 \\
131 & 3 & 9384 $\pm$ 171 & 296 & 6 & 9299 $\pm$ 102 & 250 &&& 66 & 15 & 9156 $\pm$ 178 & 689 & 73 & 9014 $\pm$ \n93 & 796 \\
32 & 5 & 16416 $\pm$ \n94 & 210 & 5 & 16416 $\pm$ \n94 & 210 &&& 166 & 1 & 17795 $\pm$ \n44 & --- & 1 & 17795 $\pm$ \n44 & --- \\
132 & 3 & 13499 $\pm$ 144 & 250 & 3 & 13499 $\pm$ 144 & 250 &&& 67 & 7 & 13876 $\pm$ 215 & 570 & 7 & 13876 $\pm$ 215 & 570 \\
232 & 1 & 8926 $\pm$ \phantom{0}\n9 & --- & 1 & 8926 $\pm$ \phantom{0}\n9 & --- &&& 167 & 1 & 6086 $\pm$ \n12 & --- & 1 & 6086 $\pm$ \n12 & --- \\
332 & 1 & 4758 $\pm$ \phantom{0}\n9 & --- & 1 & 4758 $\pm$ \phantom{0}\n9 & --- &&& 68 & 8 & 11514 $\pm$ \n97 & 273 & 22 & 11547 $\pm$ \n67 & 316 \\
33 & 7 & 11652 $\pm$ 340 & 899 & 7 & 11652 $\pm$ 340 & 899 &&& 69 & 2 & 17344 $\pm$ 403 & 570 & 2 & 17344 $\pm$ 403 & 570 \\
34 & 8 & 14498 $\pm$ 265 & 749 & 9 & 14488 $\pm$ 234 & 702 &&& 70 & 13 & 10320 $\pm$ 109 & 392 & 23 & 10396 $\pm$ \n78 & 376 \\
35 & 27 & 11834 $\pm$ 133 & 694 & 66 & 11866 $\pm$ \n78 & 630 &&& 71 & 4 & 8465 $\pm$ 147 & 294 & 6 & 8442 $\pm$ 113 & 278 \\
36 & 6 & 12861 $\pm$ 309 & 757 & 69 & 12732 $\pm$ 101 & 837 &&& 72 & 5 & 10311 $\pm$ 136 & 305 & 5 & 10311 $\pm$ 136 & 305 \\
136 & 2 & 8764 $\pm$ 234 & 331 & 2 & 8764 $\pm$ 234 & 331 &&& 73 & 3 & 8121 $\pm$ 176 & 305 & 7 & 7935 $\pm$ 194 & 514 \\
37 & 6 & 8888 $\pm$ \n81 & 199 & 8 & 8866 $\pm$ \n82 & 231 &&& 74 & 4 & 14965 $\pm$ 267 & 534 & 4 & 14965 $\pm$ 267 & 534 \\
137 & 1 & 11399 $\pm$ \n18 & --- & 1 & 11399 $\pm$ \n18 & --- &&& 174 & 1 & 18187 $\pm$ \n36 & --- & 1 & 18187 $\pm$ \n36 & --- \\
38 & 4 & 12788 $\pm$ 280 & 559 & 6 & 12695 $\pm$ 252 & 616 &&& 75 & 5 & 9784 $\pm$ 233 & 521 & 5 & 9784 $\pm$ 233 & 521 \\
138 & 1 & 15405 $\pm$ \n16 & --- & 1 & 15405 $\pm$ \n16 & --- &&& 76 & 4 & 15365 $\pm$ 320 & 640 & 4 & 15365 $\pm$ 320 & 640 \\
238 & 1 & 17845 $\pm$ \n27 & --- & 1 & 17845 $\pm$ \n27 & --- &&& 77 & 9 & 7639 $\pm$ 124 & 373 & 20 & 7728 $\pm$ 112 & 502 \\
338 & 1 & 10873 $\pm$ \n35 & --- & 5 & 10438 $\pm$ 236 & 528 &&& 177 & 1 & 23926 $\pm$ \n12 & --- & 2 & 23992 $\pm$ \n66 & 93 \\
39 & 12 & 8882 $\pm$ \n69 & 238 & 18 & 8832 $\pm$ \n61 & 257 &&& 277 & 1 & 27282 $\pm$ \n13 & --- & 1 & 27282 $\pm$ \n13 & --- \\
139 & 3 & 19044 $\pm$ 183 & 317 & 4 & 19076 $\pm$ 133 & 266 &&& 78 & 6 & 11504 $\pm$ 133 & 327 & 9 & 11571 $\pm$ 136 & 408 \\
239 & 1 & 17438 $\pm$ \n55 & --- & 2 & 17443 $\pm$ \phantom{0}\n5 & 7 &&& 79 & 10 & 12692 $\pm$ 328 & 1036 & 40 & 12441 $\pm$ 149 & 944 \\
339 & 1 & 15651 $\pm$ \n30 & --- & 1 & 15651 $\pm$ \n30 & --- &&& 80 & 24 & 12331 $\pm$ 133 & 649 & 48 & 12399 $\pm$ 107 & 740 \\
40 & 3 & 11612 $\pm$ 217 & 376 & 4 & 11846 $\pm$ 280 & 560 &&& 180 & 1 & 27823 $\pm$ \phantom{0}\n8 & --- & 1 & 27823 $\pm$ \phantom{0}\n8 & --- \\
140 & 2 & 9806 $\pm$ 160 & 226 & 2 & 9806 $\pm$ 160 & 226 &&& 82 & 12 & 9771 $\pm$ 238 & 824 & 43 & 9573 $\pm$ 113 & 741 \\
240 & 1 & 17291 $\pm$ \n26 & --- & 1 & 17291 $\pm$ \n26 & --- &&& 83 & 9 & 12157 $\pm$ 253 & 759 & 23 & 12252 $\pm$ 156 & 748 \\
340 & 1 & 28973 $\pm$ \n28 & --- & 1 & 28973 $\pm$ \n28 & --- &&& 84 & 5 & 8345 $\pm$ 224 & 500 & 24 & 7962 $\pm$ \n90 & 442 \\
41 & 5 & 8973 $\pm$ 135 & 303 & 7 & 8848 $\pm$ 127 & 335 &&& 90 & 29 & 6663 $\pm$ 172 & 924 & 435 & 6942 $\pm$ \n50 & 1034 \\
141 & 2 & 13685 $\pm$ 405 & 573 & 3 & 13794 $\pm$ 258 & 447 \\
\end{tabular}
\end{table*}
The results of this process are shown in Figure~\ref{fig:cluz}, which
shows the redshift distributions of galaxies within 3\hbox{\,h$^{-1}$\,Mpc}\ around each
of the nominal EFAR clusters (labelled by their cluster ID number, CID;
see Paper~I) and the adopted groupings in redshift space. Note that
CID=81 (A2593-S) does not appear since it was merged with CID=80
(A2593-N)---see below. Each EFAR galaxy was assigned to one of these
groupings and given a cluster assignment number (CAN), listed in
Table~\ref{tab:galtab}. The main grouping along the line of sight has a
CAN which is simply the original two-digit CID; other groupings have
CANs with a distinguishing third leading digit. The groupings (which we
will hereafter call clusters regardless of their size) are labelled by
their CANs in Figure~\ref{fig:cluz}, which also shows the boundaries of
each cluster in redshift space. The last two digits of each galaxy's CAN
are its CID, apart from 41 galaxies which were reassigned to other
neighbouring clusters: two galaxies in CID=33 were reassigned to CAN=34
(GINs 254, 255); two galaxies in CID=34 were reassigned to CAN=33 (GINs
263, 264); five galaxies in CID=35 were reassigned to CAN=36 (GINs 270,
274, 275, 281, 282); fourteen galaxies in CID=36 were reassigned to
CAN=35 (GINs 285--292, 295--297, 299--301); one galaxy in CID=47 was
reassigned to CAN=50 (GIN 406); three galaxies in CID=59 and two in
CID=61 were reassigned to CAN=53 (GINs 514, 517, 527, 536, 537); five
galaxies with CID=69 were reassigned to CAN=70 (GINs 617, 618, 619, 622,
623); and all seven galaxies with CID=81 were reassigned to CAN=80 (GINs
709--715).
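The CAN numbering scheme described above can be summarised in a one-line
helper (a sketch; the function name is ours):

```python
def can_to_cid(can):
    """Recover the two-digit base cluster ID number (CID) from a CAN.

    The main line-of-sight grouping has CAN equal to the CID; secondary
    groupings carry a distinguishing third leading digit (e.g. CAN 107
    is a secondary grouping along the line of sight to CID 7). Note that
    for the 41 reassigned galaxies listed in the text the CAN does not
    encode the galaxy's original CID.
    """
    return can % 100

print(can_to_cid(7), can_to_cid(107), can_to_cid(240))  # 7 7 40
```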
Table~\ref{tab:cluz} lists, for each CAN, the number of EFAR galaxies,
the number of EFAR+ZCAT galaxies, and the mean redshift, its standard
error (taken to be the redshift error for clusters with only one
member) and the velocity dispersion. These quantities are computed both
from the EFAR sample and from the EFAR+ZCAT sample. In many of the
clusters the EFAR sample is greatly supplemented by the ZCAT galaxies,
leading to much-improved estimates of the mean cluster redshift: using
EFAR galaxies only the median uncertainty in the mean cluster redshift
(for clusters with more than one member) is 177\hbox{\,km\,s$^{-1}$}; with EFAR+ZCAT
galaxies the median uncertainty is reduced to 133\hbox{\,km\,s$^{-1}$}.
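The per-cluster quantities in Table~\ref{tab:cluz} can be sketched with
the standard estimators below; this is an illustration with hypothetical
redshifts, and the paper's actual estimators may differ in detail (e.g.
in the treatment of single-member clusters, whose standard error is taken
to be the redshift error of the lone member).

```python
import numpy as np

def cluster_stats(cz):
    """Mean redshift, its standard error, and the velocity dispersion.

    Uses the sample standard deviation (ddof=1) as the dispersion
    estimate and sigma/sqrt(N) as the standard error of the mean.
    Requires at least two members.
    """
    cz = np.asarray(cz, dtype=float)
    n = cz.size
    sigma = cz.std(ddof=1)                   # velocity dispersion (km/s)
    return cz.mean(), sigma / np.sqrt(n), sigma

# Hypothetical member redshifts (km/s) for one cluster.
mean_cz, se, disp = cluster_stats([11400, 12100, 11900, 12300, 11700, 11900])
print(mean_cz, se, disp)
```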
\section{CONCLUSIONS}
\label{sec:conclude}
We have described the observations, reductions, and analysis of 1319
spectra of 714 early-type galaxies studied as part of the EFAR project.
We have obtained redshifts for 706 galaxies, velocity dispersions and
\hbox{Mg$b$}\ linestrengths for 676 galaxies, and \hbox{Mg$_2$}\ linestrengths for 582
galaxies. Although the spectra were obtained in 33 observing runs
spanning seven years and ten different telescopes, we have applied uniform procedures to
derive the spectroscopic parameters and brought all the measurements of
each parameter onto a standard system which we ensure is internally
consistent through comparisons of the large numbers of repeat
measurements, and externally consistent through comparisons with
published data. We have performed detailed simulations to estimate
measurement errors and calibrated these error estimates using the repeat
observations.
The fully-corrected measurements of each parameter from the individual
spectra are given in Table~\ref{tab:spectab}; the final parameters for
706 galaxies, computed as the appropriately-weighted means of the
individual measurements, are listed in Table~\ref{tab:galtab}. The
median estimated errors in the combined measurements (including
measurement errors and run correction uncertainties) are $\Delta
cz$=20\hbox{\,km\,s$^{-1}$}, $\Delta\sigma/\sigma$=9.1\% (\hbox{i.e.}\
$\Delta\log\sigma$=0.040~dex), $\Delta\hbox{Mg$b$}/\hbox{Mg$b$}$=7.2\% (\hbox{i.e.}\
$\Delta\hbox{Mg$b^\prime$}$=0.013~mag) and $\Delta\hbox{Mg$_2$}$=0.015~mag. Comparisons with
redshifts and dispersions from the literature show no systematic errors.
The linestrengths required only small zeropoint corrections to bring
them onto the Lick system.
We have assigned galaxies to physical clusters (as opposed to apparent
projected clusters) by examining the line-of-sight velocity
distributions based on EFAR and ZCAT redshifts, together with the
projected distributions on the sky. We derive mean redshifts for these
physical clusters, which will be used in estimating distances and
peculiar velocities, and also velocity dispersions, which will be used
to test for trends in the galaxy population with cluster mass or local
environment.
The results presented here comprise the largest single set of velocity
dispersions and linestrengths for early-type galaxies published to
date. These data will be used in combination with the sample selection
criteria of Wegner \hbox{et~al.}\ (1996, Paper~I) and the photometric data of
Saglia \hbox{et~al.}\ (1997, Paper~III) to analyse the properties and peculiar
motions of early-type galaxies in the two distant regions studied by the
EFAR project.
\section*{Acknowledgements}
We gratefully acknowledge all the observatories which supported this
project: MMC, RKM, RLD and DB were Visiting Astronomers at Kitt Peak
National Observatory, while GB, RLD and RKM were Visiting Astronomers at
Cerro Tololo Inter-American Observatory---both observatories are
operated by AURA, Inc.\ for the National Science Foundation; GW and DB
used MDM Observatory, operated by the University of Michigan, Dartmouth
College and the Massachusetts Institute of Technology; DB and RKM used
the Multiple Mirror Telescope, jointly operated by the Smithsonian
Astrophysical Observatory and Steward Observatory; RPS used facilities
at Calar Alto (Centro Astron\'omico Hispano-Alem\'an) and La Silla (ESO);
MMC observed at Siding Spring (MSSSO); MMC, RLD, RPS and GB used the
telescopes of La Palma Observatory. We thank the many support staff at
these observatories who assisted us with our observations. We thank
S.~Sakai for doing one observing run. We also thank the SMAC team for
providing comparison data prior to publication, and Mike Hudson for
helpful discussions.
We also gratefully acknowledge the financial support provided by various
funding agencies: GW was supported by the SERC and Wadham College during
a year's stay in Oxford, and by the Alexander von Humboldt-Stiftung
during a visit to the Ruhr-Universit\"{a}t in Bochum; MMC acknowledges
the support of a Lindemann Fellowship, DIST Collaborative Research
Grants and an Australian Academy of Science/Royal Society Exchange
Program Fellowship; RPS was supported by DFG grants SFB 318 and 375.
This work was partially supported by NSF Grant AST90-16930 to DB,
AST90-17048 and AST93-47714 to GW, and AST90-20864 to RKM. The entire
collaboration benefitted from NATO Collaborative Research Grant 900159
and from the hospitality and financial support of Dartmouth College,
Oxford University, the University of Durham and Arizona State
University. Support was also received from PPARC visitors grants to
Oxford and Durham Universities and a PPARC rolling grant `Extragalactic
Astronomy and Cosmology in Durham 1994-98'.
\section{Introduction}
\label{sec:experiments}
Charles Darwin's theory of evolution was inspired by his observations
at the Galapagos Islands \cite{Darwin1859}.
These observations of small
differences between related species led him to identify the importance
of mutations and to create the theory of natural selection. The
speciation he saw was intimately related to spatial structure -- the
geographical separation of the islands -- and to temporal dynamics --
the spreading of birds and animals from island to island. The
geographical separation prevented the occupants of a new island from
being mixed back into the population from which they came. The
geographical separation soon turned into segregation of the population
into genetically different groups -- the first step towards speciation.
Aside from natural selection, another evolutionary ``force'' acting in
this case is genetic drift: the process by which, due to fluctuations in
a small population, a mutation can spread through the entire population
even if it is selectively neutral or slightly disadvantageous.
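As an aside, the fixation of a neutral mutation by drift can be illustrated with a minimal Wright-Fisher-type simulation (an illustrative sketch, not part of our models; the population size, trial count and random seed are arbitrary):

```python
import random

def wright_fisher(pop_size, mutant_freq, rng):
    """Evolve a neutral mutant until it fixes or is lost.

    Each generation resamples the population from the current mutant
    frequency -- pure sampling noise, with no selective advantage.
    Returns True if the mutant takes over the whole population.
    """
    count = round(mutant_freq * pop_size)
    while 0 < count < pop_size:
        freq = count / pop_size
        count = sum(1 for _ in range(pop_size) if rng.random() < freq)
    return count == pop_size

rng = random.Random(0)
# A single neutral mutant fixes with probability 1/N, so drift matters
# most in small, isolated subpopulations.
trials = 2000
fixed = sum(wright_fisher(20, 1 / 20, rng) for _ in range(trials))
print(fixed / trials)  # close to 1/20
```

The smaller the population, the larger the fixation probability of a single neutral mutant -- which is why geographical separation (or, below, separation into branches) makes drift effective.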
Recently, pattern formation in bacterial colonies became the focus of
attention
\cite{BL98,MKNIHY92,MS96,Kessler85,FM89,PK92a,BSST92,MHM93,BSTCCV94a,BCSALT95,WTMMBB95,BCCVG97,KW97,ES98}.
In this paper we study the subject of expression of
mutations, and especially segregation of populations, in the context of
expanding bacterial colonies. We take the bacterial colonies to be a
model system for the study of expression of mutations in a spreading
population, as well as an interesting and important subject by itself.
During the course of evolution, bacteria have developed sophisticated
cooperative behavior and intricate communication capabilities
\cite{SDA57,Shap88,BenJacob97,BCL98,LB98}. These include:
direct cell-cell physical interactions via extra-membrane polymers
\cite{Mend78,Devreotes89}, collective production of extracellular
``wetting'' fluid for movement on hard surfaces
\cite{MKNIHY92,Harshey94}, long range chemical signaling, such as
quorum sensing \cite{FWG94,LWFBSLW95,FWG96} and
chemotactic signaling\footnote{Chemotaxis is a bias of movement
according to the gradient of a chemical agent. Chemotactic signaling is
a chemotactic response to an agent emitted by the bacteria.}
\cite{BB91,BE95,BB95}, collective activation and deactivation of
genes \cite{ST91,SM93,MS96} and even exchange of genetic material
\cite{GR95,RPF95,Miller98}. Utilizing these capabilities, bacterial
colonies develop complex spatio-temporal patterns in response to
adverse growth conditions.
Fujikawa and Matsushita \cite{FM89,MF90,FM91} reported for the
first time\footnote{We refer to the first time that branching growth
was studied as such. Observations of branching colonies occurred long
ago \protect\cite{SC38,Henrici48}. } that bacterial colonies could
grow elaborate branching patterns of the type known from the study of
fractal formation in the process of diffusion-limited-aggregation (DLA)
\cite{WitSan81,Sander86,Vicsek89}. This work was done with \bsub*,
but was subsequently extended to other bacterial species such as {\it
Serratia marcescens} and {\it Salmonella anatum} \cite{MM95}. It
was shown explicitly that nutrient diffusion was the relevant dynamics
responsible for the growth instability.
Motivated by these observations, Ben-Jacob {\it et al. }
\cite{BSST92,BTSA94,BSTCCV94a} conducted new experiments to
see how adaptive bacterial colonies could be in the presence of external
``pressure'', here in the form of a limited nutrient supply and a hard
surface. The work was done with a newly identified species, \Tname*
\cite{TBG98}. This species is motile on the hard surface and its
colonies exhibit branching patterns (Fig. \ref{fig:physa1}).
There is a well-known (but rarely studied) phenomenon of
bursts of new sectors of mutants during the growth of bacterial
colonies (see for example Fig. \ref{fig:shapiro} and Refs
\cite{ST91,Shapiro95b}).
Actually, the phenomenon is more general. Fig. \ref{fig:boschke}
(taken from \cite{BB98}) shows
an emerging sector in a yeast colony.
If the mutants have the same
growth dynamics as the ``normal'', wild-type, bacteria they will usually go
unnoticed (unless some property such as coloring distinguishes them)
\footnote{Different coloring may result from different enzymatic activity
(natural coloring) or from a different response to a staining process
(artificial coloring). In both cases the mutation is not neutral in the
strictest sense, but it is neutral as far as the dynamics is concerned.}. If,
however, the mutants have different growth dynamics, a distinguished
sector with a different growth pattern might indicate their presence.
In a branching colony, the geometrical structure may aid the
bursting of a sector of ``neutral'' mutants; once a branch (or a cluster
of branches) is detached from its neighboring branches (detached in
the sense that bacteria cannot cross from one branch to the other), the
effective population is smaller than the colony's population. In such a
``reduced'' population, genetic drift is more probable and a neutral
mutant may take over the population in some branches. Sectors of
``neutral'' mutations usually go undetected -- by definition their growth dynamics
is identical to that of the wild-type (original) bacteria and no
geometrical feature highlights the sectors.
Sectors of advantageous mutation are much easier to detect, as they
usually grow in a somewhat different pattern. An advantage in this
context might be faster multiplication, higher motility or elevated
sensitivity to chemotactic materials. In all those cases the mutants have
an advantage in accessing food resources. In a pre-set geometry (or
without spatial structure) the mutants might starve the wild-type
bacteria and drive them to extinction. But in a spreading colony each
part of the colony is heading in a different direction, thus the two
populations can co-exist. The dynamic process of spreading aids the
segregation of the population.
The first analytical study of spatial spread of mutations was done by
Fisher \cite{Fisher37}. He studied the spread of advantageous
mutation in the limit of large, spatially uniform population, using the
Fisher-Kolmogorov equation. This equation describes the time
evolution of a field representing the fraction of the mutants in the local
population. The same equation can be taken to describe the spreading
of a population into an uninhabited space, in which case the field
represents the density of the bacteria. To study mutants by this
description one must extend the model to include two fields standing
for two different types of bacteria. Since these equations are expressed
in the continuous limit, they exclude a priori the effect of genetic drift.
As we discuss elsewhere \cite{GKCB98}, the Fisher equation has
other shortcomings that make it unsuitable for modeling bacterial
colonies.
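For reference, the standard one-dimensional form of the Fisher-Kolmogorov equation (with $u$ the mutant fraction or, equivalently, the scaled population density, $D$ a diffusion coefficient and $a$ a linear growth rate) is
\begin{equation}
\partderiv{u}{t} = D \frac{\partial^2 u}{\partial x^2} + a\, u (1-u) ~.
\end{equation}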
To study the sectors in the bacterial colonies we use generic models,
i.e. models that adhere as much as possible to biological data, but only
to details which are needed to understand the subject. The generic
models can be grouped into two main categories:
1. Continuous or reaction-diffusion models \cite{PS78,Mackay78}. In
these models the bacteria are represented by their 2D density, and a
reaction-diffusion equation of this density describes their time
evolution. This equation is coupled to the other reaction-diffusion
equations for fields of chemicals, such as nutrient.
2. Discrete models such as the Communicating Walkers model of
Ben-Jacob {\it et al. } \cite{BSTCCV94a,BCSCV95,BCCVG97} and the Bions
model of Kessler and Levine \cite{KL93,KLT97}. In this approach,
the microorganisms (bacteria in the first model and amoebae in the second)
are represented by discrete entities (walkers and bions, respectively)
which can consume nutrient, reproduce, perform random or biased
movement, and produce or respond to chemicals. The time evolution
of the nutrient and the chemicals is described by reaction-diffusion
equations. In the context of branching growth of bacterial colonies, the
continuous modeling approach has been pursued recently by Mimura
and Matsushita {\it et al. } \cite{Mimura97,MWIRMSM98}, Kawasaki {\it et al. }
\cite{KMMUS97} and Kitsunezaki \cite{Kitsunezaki97}. In
\cite{GKCB98} we present a summary and critique of this approach
(also see \cite{Rafols98}).
In the current study, we use both discrete and continuous models,
altered to include two bacterial types. In some cases (but not all) the
two bacterial types have different growth dynamics. We begin each run
with a uniform population. Mutation events are included as a finite
probability of the wild-type strain changing into a mutant
during the process of multiplication.
Representing mutations in the above two modeling schemes gives rise
to possible problems.
In a continuous model there is a problem representing a single
mutation because the equations deal with bacterial area density, not
with individual bacterium. In a previous paper we have studied
\cite{CGKB98} the inclusion of finite size effects in the continuous
model via a cutoff in the dynamics. For the study of mutations, we use
as our basic ``mutation unit'' the cutoff density (see below). The value
of this density is on the order of a single bacterium in an area
determined by the relevant diffusion length (the idea of using a cutoff
density to represent discrete entities was first raised by Kessler and
Levine \cite{KL98}).
In the discrete model, each "walker" represents not one bacterium, but
many bacteria ($10^4 - 10^6$) \cite{BSTCCV94a}. Thus, a mutation
of one walker means the collective mutation of all the bacteria it
represents.
Note that in this paper we do not discuss the origin of the mutations.
The common view in biology is that all mutations are random. Both
Darwin's original view \cite{Darwin1859a} and modern experiments in
microbiology \cite{COM88,Hall88,Hall91,Foster93} suggest the
possibility of mutations designed by the bacteria as a response to a
specific stressful condition. Since the stress in this case cannot be
accurately assessed by a single bacterium, another possibility is that the
colony as a whole designs the mutation in response to the
environmental conditions. It is not necessary that only the descendants
of a single cell will have the mutation; the bacteria have the means
\cite{GR95,RPF95,Miller98} to perform ``horizontal inheritance'', i.e.\ to transfer
genetic information from cell to cell. If such autocatalytic or
cooperative mutation occurs in the experiments, then a mutating
walker in the Communicating Walkers model might be an accurate
model after all.
In the next section we present the experimental observations of bacterial
colonies, mutations and sectors in bacterial colonies. In section
\ref{sec:models} we present the two models used in this study. Section
\ref{sec:results} presents the study itself; the simulated colonies, the
results, and comparison with the experimental observation. We conclude
(section \ref{sec:discussion}) with a short discussion of the results and
possible implications to other issues, such as growth of tumors and
diversification of populations.
\section{Observations of Colonial Development}
In this section, we focus on the phenomena observed during the growth
of \Tname.
\subsection{Growth Features}
As was mentioned, the typical growth pattern on semi-solid agar is a branching pattern,
as shown in Fig. \ref{fig:physa1}. The structure of the branching pattern
varies for different growth conditions, as is demonstrated in
Fig. \ref{fig:physa2}.
Under the microscope, bacterial cells are seen to perform a random-walk-like
movement in a layer of fluid on the agar surface (Fig. \ref{fig:finger}).
This wetting fluid is assumed to be excreted by the
cells and/or drawn by the cells from the agar \cite{BSTCCV94a,BSTCCV94b}.
The cellular movement is confined to this fluid; isolated cells spotted on
the agar surface do not move.
The fluid's boundary thus defines a local
boundary for the branch. Whenever the cells are active, the boundary
propagates slowly, as a result of the cellular movement and production of
wetting fluid.
The various colonial patterns can be grouped into several ``essential
patterns'' or morphologies \cite{BSTCCV94a,BTSA94}.
In order to explain the various growth morphologies, we have suggested
that bacteria use {\em chemotactic signaling} when confronted with
adverse growth conditions \cite{BSTCCV94a,CCB96,BSCTCV95}.
Chemotaxis means changes in the movement of the cell in response to
a gradient of certain chemical fields
\cite{Adler69,BP77,Lacki81,Berg93}. The movement is biased along
the gradient either in the gradient direction (attractive chemotaxis
towards, for example, food) or in the opposite direction (repulsive
chemotaxis away from, for example, oxidative toxins).
Usually chemotactic response means a response to an externally
produced field as in the case of chemotaxis towards food. However,
the chemotactic response can be also to a field produced directly or
indirectly by the bacterial cells. We will refer to this case as
chemotactic signaling.
At very low agar concentrations, $0.5 \% $ and below, the colonies
exhibit compact patterns, as shown in Fig. \ref{fig:sec11}. Microscopic
observations reveal that in this case, the bacteria swim within the
agar. Thus, there is no ``envelope'' to the colony, and hence no
branching pattern emerges (see \cite{GKCB98,KCGB98}).
\subsection{Observations of Sectors}
The bursting of sectors can be observed both during compact and
branching growth.
Examples of the first kind are shown in
Figs. \ref{fig:sec346}, while examples of
the latter are shown in
Figs.
\ref{fig:sectors12},\ref{fig:sec57}.
As we can see from the pictures, sectors emerging during branching
growth have a greater variety of structures and shapes than those
emerging from compact colonies.
This is demonstrated by Fig. \ref{fig:sectors12} depicting colonies at intermediate levels of
nutrients and agar, and Fig. \ref{fig:sec57}, showing colonies
grown at high nutrient level and at the presence of antibiotics. Note
that the sector on the left side of Fig. \ref{fig:sec57} is much more expanded than that
on the right, probably because it erupted at an earlier
stage of the colonial development \footnote{Such an early-erupting sector
might indicate a mixed population in the initial inoculum, and not a new mutant.}.
Throughout this paper, we use the term ``mutant'' to describe the
population in the emerging sectors. We have not verified the existence
of a genetic difference between the bacteria in the sector and those
in the rest of the colony. We have, however, verified that the
phenotypic difference between the two populations is inheritable,
using inoculation.
Below we shall see that it is
sometimes possible to relate the shape of the sector, and the
way it ``bursts'' out of the colony, with the specific kind of advantage
that the mutants possess over the original bacteria.
\section{Models}
\label{sec:models}
We now describe the two models we have used to study the development
of a colony consisting of two bacterial strains.
Both models are based on the ones originally used to study a single
strain colony \cite{GKCB98,BCL98,CGKB98,BSTCCV94a}.
\subsection{The Continuous Model}
A number of continuous models have been
proposed to describe colonial development \cite{KL98,Kitsunezaki97,MWIRMSM98,KMMUS97}.
Following \cite{GKCB98}, the model we take includes a linear growth
term and a non-linear diffusion of the bacteria (representing the
effect of a lubricating fluid \cite{KCGB98}). In the case of a single strain, the
time evolution of the
2D bacterial density $b({\bf x},t)$ is given by:
\begin{eqnarray}
\label{onespec-eqn}
\lefteqn{ \partderiv{b}{t} = \nabla (D(b) \nabla
b) + \varepsilon n b \Theta (b-\beta) - \mu b}
\end{eqnarray}
The first term on the RHS describes the bacterial movement,
with $D(b)=D_0 b^k$ ($D_0$ and $k>0$ constants) \cite{Kitsunezaki97}.
The second term is the population growth, which is proportional to
food consumption.
$n({\bf x},t)$ is the nutrient 2D density and
$\varepsilon$ the nutrient$\rightarrow$bacteria
conversion factor.
The growth term is multiplied by a step function $\Theta(b-\beta)$,
which sets it to zero if the bacterial density is smaller than a
threshold $\beta$. This threshold represents the discreteness of the bacteria
\cite{KL98}.
We have previously shown that the effect of the step function is
negligible for small $\beta$ \cite{CGKB98}, but we also use it when
implementing the modeling of mutations.
The third term describes bacterial transformation into a stationary, pre-spore state, with $\mu$ the sporulation rate.
In this model, the time development equations for the nutrient concentration $n({\bf x},t)$ and the
stationary bacteria concentration $s({\bf x},t)$ are given by:
\begin{eqnarray}
\label{onespec-eqn2}
\lefteqn{ \partderiv{n}{t} = D_n\nabla ^2 n - \varepsilon b n \Theta (b-\beta)}\\
\lefteqn{ \partderiv{s}{t} = \mu b}
\end{eqnarray}
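To make the dynamics concrete, a minimal explicit finite-difference sketch of this single-strain model in one dimension follows (a hypothetical illustration: the grid, time step and all parameter values are placeholders, not those used in our simulations):

```python
import numpy as np

# Illustrative parameters (not the paper's values)
D0, k, eps, mu, beta, Dn = 0.1, 1.0, 1.0, 0.15, 1e-4, 1.0
dx, dt, nx, steps = 0.5, 0.01, 200, 2000

b = np.zeros(nx); b[nx // 2 - 2: nx // 2 + 2] = 0.5   # inoculum
n = np.ones(nx)                                       # uniform nutrient
s = np.zeros(nx)                                      # pre-spore cells

def lap(f):
    """Discrete Laplacian with zero-flux (reflecting) boundaries."""
    g = np.pad(f, 1, mode="edge")
    return (g[2:] - 2 * g[1:-1] + g[:-2]) / dx ** 2

for _ in range(steps):
    theta = (b > beta).astype(float)          # discreteness cutoff
    growth = eps * n * b * theta
    # nabla.(D0 b^k nabla b) = D0/(k+1) nabla^2 b^(k+1)
    b_new = b + dt * (D0 / (k + 1) * lap(b ** (k + 1)) + growth - mu * b)
    n += dt * (Dn * lap(n) - growth)
    s += dt * mu * b
    b = b_new
```

The nonlinear diffusion term uses the identity $\nabla\cdot(b^k\nabla b)=\nabla^2 b^{k+1}/(k+1)$, which holds for a single density field.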
We include the effect of chemotaxis in the model using a
{\em chemotactic flux} $\vec{J}_{chem}$, which is written (for
the case of a chemorepellant) as:
\begin{equation}
{\vec{J}_{chem} = \zeta(b) \chi (r) \nabla r}
\label{j_chem}
\end{equation}
where $r({\bf x},t)$ is the concentration of the chemorepellent agent,
$\zeta(b) = b \cdot b^k = b^{k+1}$ is the bacterial response to
the chemotactic agent \cite{GKCB98},
and
$\chi(r)$ is the chemotactic sensitivity to the repellant, which is
negative for a chemorepellant.
In the case of a chemoattractant, e.g. a nutrient, the expression for
the flux will have an opposite sign.
In the case of the ``receptor law'', the sensitivity $\chi(r)$ takes
the form \cite{Murray89}:
\begin{equation}
{\chi(r) = \frac{\chi_0 K}{(K+r)^2}}
\end{equation}
with $K$ and $\chi_0$ constants.
The equation for $r({\bf x},t)$ is:
\begin{equation}
{\partderiv{r}{t} = D_r \nabla^2 r + \Gamma_r s - \Omega_r b r - \lambda_r r}
\label{r-eqn}
\end{equation}
where $D_r$ is the diffusion coefficient of the chemorepellent agent,
$\Gamma_r$ is the emission rate of repellant by the pre-spore cells,
$\Omega_r$ is the decomposition rate of the repellant
by active bacteria,
and $\lambda_r$ is the rate of spontaneous decomposition of the
repellant.
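In discrete form, the flux and its contribution to the bacterial equation can be sketched as follows (a hypothetical helper; the constants are illustrative, and `chi0 < 0` encodes repulsion):

```python
import numpy as np

def chemotactic_flux(b, r, dx, k=1.0, chi0=-1.0, K=1.0):
    """J_chem = zeta(b) * chi(r) * grad(r) with the receptor-law chi."""
    zeta = b ** (k + 1)                  # bacterial response zeta(b)
    chi = chi0 * K / (K + r) ** 2        # receptor-law sensitivity
    return zeta * chi * np.gradient(r, dx)

def chemotaxis_term(b, r, dx):
    """Contribution to db/dt: minus the divergence of the flux."""
    return -np.gradient(chemotactic_flux(b, r, dx), dx)

# With a repellent increasing to the right, the flux points left
# (bacteria drift away from high repellent concentration):
b = np.ones(16)
r = np.linspace(0.0, 1.0, 16)
print((chemotactic_flux(b, r, 0.5) < 0).all())  # True
```

Flipping the sign of `chi0` recovers the chemoattractant case mentioned in the text.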
In order to generalize this model to study mutants,
we must introduce two fields, for the densities of the wild-type
bacteria (``type 1'') and the mutants (``type 2''), and allow some
probability of transition from wild-type to mutants.
In the absence of chemotaxis, the equations for the bacterial density
of the two strains will be written
(with subscript denoting bacteria type):
\begin{eqnarray}
\label{twospec-eqn}
\lefteqn{ \partderiv{b_1}{t} = \nabla (D_1(b) \nabla b_1) +
\varepsilon_1 n b_1 \Theta (b-\beta) - \mu_1 b_1 - F_{12}}\\
\lefteqn{ \partderiv{b_2}{t} = \nabla (D_2(b) \nabla b_2) +
\varepsilon_2 n b_2 \Theta (b-\beta) - \mu_2 b_2 + F_{12}}
\end{eqnarray}
where $D_{1,2}(b)=D_{0_{1,2}} b^k$ ($b=b_1+b_2$).
Note that the mutant strain $b_2$ includes a ``source'' term $F_{12}$,
which is the rate of transition $b_1 \rightarrow b_2$, and is given by
the growth rate of $b_1$ multiplied by a constant mutation rate.
(For simplicity, we do not include the process of reverse mutations $F_{21}$.)
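A sketch of the two-strain system, again as a hypothetical 1D flux-form discretisation (the mutation rate `p_mut` and all other parameters are placeholders, not our simulation values):

```python
import numpy as np

def div_D_grad(b, bi, dx, D0=0.1, k=1.0):
    """Flux-form discretisation of nabla.(D0 b^k nabla bi), zero-flux ends."""
    D = D0 * b ** k
    Dface = 0.5 * (D[1:] + D[:-1])            # diffusivity at cell faces
    flux = Dface * (bi[1:] - bi[:-1]) / dx
    out = np.zeros_like(bi)
    out[:-1] += flux / dx
    out[1:] -= flux / dx
    return out

def step(b1, b2, n, dx=0.5, dt=0.01, eps=1.0, mu=0.15,
         beta=1e-4, p_mut=1e-3, Dn=1.0):
    """Advance both strains and the nutrient by one explicit time step."""
    b = b1 + b2
    theta = (b > beta).astype(float)
    g1 = eps * n * b1 * theta
    g2 = eps * n * b2 * theta
    F12 = p_mut * g1                          # wild-type -> mutant source
    g = np.pad(n, 1, mode="edge")
    lap_n = (g[2:] - 2 * g[1:-1] + g[:-2]) / dx ** 2
    b1 += dt * (div_D_grad(b, b1, dx) + g1 - mu * b1 - F12)
    b2 += dt * (div_D_grad(b, b2, dx) + g2 - mu * b2 + F12)
    n += dt * (Dn * lap_n - g1 - g2)

b1 = np.zeros(50); b1[23:27] = 0.5            # wild-type inoculum
b2 = np.zeros(50)                             # no mutants initially
n = np.ones(50)
for _ in range(500):
    step(b1, b2, n)
```

Note that both strains share the same density-dependent diffusivity $D(b_1+b_2)$, as in the equations above, and that the mutant field grows from zero solely through the source term $F_{12}$.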
\subsection{The Communicating Walkers Model}
\label{subsec:walker}
The Communicating Walkers model \cite{BSTCCV94a}
is a hybridization of the ``continuous'' and ``atomistic''
approaches used in the study of non-living systems. The diffusion of
the chemicals is handled by solving a continuous diffusion equation
(including sources and sinks) on a triangular lattice. The bacterial cells are represented by walkers
allowing a more detailed description. In a typical experiment there
are $10^9-10^{10}$ cells in a petri-dish at the end of the growth.
Hence it is impractical to incorporate into the model each and every
cell. Instead, each of the walkers represents about $10^4-10^5$ cells
so that we work with $10^4-10^6$ walkers in one numerical
``experiment''.
The walkers perform an off-lattice random walk on a plane within an
envelope representing the boundary of the wetting fluid. This
envelope is defined on the same triangular lattice where the
diffusion equations are solved. To incorporate the swimming of the
bacteria into the model, at each time step each of the active walkers
(motile and metabolizing, as described below) moves a step of size
$d$ at a random angle $\Theta $.
If this new position is outside the envelope, the walker does not
move. A counter on the segment of the envelope which would have been
crossed by the movement is
increased by one. When the segment counter reaches a specified number
of hits $N_c $, the envelope propagates one lattice step and an
additional lattice cell is added to the colony. This requirement of
$N_c $ hits represents the colony propagation through wetting of
unoccupied areas by the bacteria. Note that $ N_c $ is related to the
agar dryness, as more wetting fluid must be produced (more
``collisions'' are needed) to push the envelope on a harder
substrate.
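The envelope rule can be caricatured in one dimension (a toy sketch, not our simulation code: a single walker in the segment $[0,\mathrm{edge}]$, with the upper end playing the role of the envelope):

```python
import random

def propagate(steps, d=0.3, N_c=5, seed=0):
    """Random walker in [0, edge]; N_c blocked steps advance the edge."""
    rng = random.Random(seed)
    x, edge, hits = 0.0, 1.0, 0
    for _ in range(steps):
        x_new = x + d * rng.choice((-1.0, 1.0))
        if x_new < 0.0:
            continue                  # step outside rejected: walker stays
        if x_new > edge:
            hits += 1                 # "collision" with the envelope
            if hits == N_c:
                edge += 1.0           # enough wetting fluid: envelope moves
                hits = 0
            continue
        x = x_new
    return edge

# With N_c = 5 the envelope advances well beyond its initial position;
# a larger N_c (drier agar) would require more collisions per advance.
print(propagate(20000) > 1.0)  # True
```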
Motivated by the presence of a maximal growth rate of the bacteria
even for optimal conditions, each walker in the model consumes food
at a constant rate $\Omega_c $ if sufficient food is available.
We represent the metabolic state of the $i$-th walker by an 'internal
energy' $E_i $. The rate of change of the internal energy is given by
\begin{equation}
{\frac{d E_i}{d t }} = \kappa C_{consumed} - {\frac{E_m }{\tau_R}}~,
\end{equation}
where $\kappa $ is a conversion factor from food to internal energy
($\kappa \cong 5\cdot 10^3~{\rm cal/g}$) and $E_m$ represents the total
energy loss for all processes over the reproduction time $\tau_R$,
excluding energy loss for cell division. $C_{consumed}$ is
$
C_{consumed} \equiv \min \left( \Omega_C , \Omega_C^{\prime}\right)~,
$
where $\Omega_C^{\prime}$ is the maximal rate of food consumption as
limited by the locally available food.
When sufficient food is available, $E_i$ increases until it reaches a
threshold energy. Upon reaching this threshold, the walker divides
into two. When a walker is starved for a long interval of time, $E_i$
drops to zero and the walker ``freezes''. This ``freezing''
represents entering a pre-spore state.
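The metabolic bookkeeping of a single walker can be sketched as follows (hypothetical values for $\kappa$, $E_m$, $\tau_R$ and the division threshold; halving the energy on division is our assumption, not specified by the model):

```python
def update_energy(E, food_available, dt, kappa=1.0, omega_c=1.0,
                  E_m=1.0, tau_r=4.0, E_div=2.0):
    """One time step of a walker's internal energy; returns (E, state)."""
    # consumption is capped both by the fixed rate Omega_C and by the
    # locally available food (Omega_C' in the text)
    consumed = min(omega_c, food_available / dt)
    E += dt * (kappa * consumed - E_m / tau_r)
    if E >= E_div:
        return E_div / 2.0, "divide"   # threshold reached: walker splits
    if E <= 0.0:
        return 0.0, "freeze"           # prolonged starvation: pre-spore
    return E, "active"
```

With ample food the energy rises steadily to the division threshold; with no food it decays at the fixed rate $E_m/\tau_R$ until the walker freezes.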
We represent the diffusion of nutrients by solving the diffusion
equation for a single agent whose concentration is denoted by
$n(\vec{r},t)$:
\begin{equation}
{\frac{\partial n}{\partial t}}=D_n \nabla^2 n - b\, C_{consumed}~,
\end{equation}
where the last term includes the consumption of food by the walkers
($b $ is their density). The equation is solved on the triangular
lattice. The simulations are started with inoculum of walkers at the
center and a uniform distribution of the nutrient.
When modeling chemotaxis performed by walkers, it is possible to
modulate the periods between tumbling (without changing the speed) in
the same way the bacteria do. It can be shown that step length
modulation has the same mean effect as keeping the step length
constant and biasing the direction of the steps (higher probability
to move in the preferred direction). As this latter approach is
numerically simpler, this is the one implemented in the Communicating
Walkers model.
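The direction-biasing scheme can be sketched as (illustrative: the bias probability and angular spread are arbitrary, not calibrated to the bacterial response):

```python
import math
import random

def biased_angle(pref_angle, bias, rng):
    """Fixed step length, biased direction: with probability `bias` the
    step is roughly aligned with the preferred direction, otherwise it
    is a plain uniform random-walk step."""
    if rng.random() < bias:
        return pref_angle + rng.gauss(0.0, 0.5)
    return rng.uniform(0.0, 2.0 * math.pi)

# Mean displacement along the preferred direction is positive:
rng = random.Random(1)
mean_x = sum(math.cos(biased_angle(0.0, 0.5, rng))
             for _ in range(4000)) / 4000
print(mean_x > 0.2)  # True
```

Setting `bias = 0` recovers the unbiased random walk; increasing it strengthens the drift without changing the step length, as described above.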
As in the aforementioned continuous model, an additional equation is
written for the time evolution of the chemorepellant.
When dealing with two bacterial strains, each walker in the simulation
belongs to either ``type 1'' (wild-type) or ``type 2'' (mutant), which
may differ in their various biological parameters, such as step length
(i.e. motility) or sensitivity to
chemotaxis. The colony is initialized with an inoculation of wild-type
walkers, which -- when multiplying -- have some finite probability of
giving birth to a mutant. The two populations then co-evolve
according to the dynamics described above.
\section{Results}
\label{sec:results}
\subsection{Compact growth}
We start by examining mutations in colonies grown on soft agar, where
growth is compact. We expect that in
this case a neutral mutation will not form a segregated sector. The
mutant does, however, increase its share of the
total population in a sector of the colony (figure \ref{fig:4}).
In other words, due to the expansion of the colony, an initially tiny
number of mutants gradually
becomes a significant part of the total population in a specific area.
Experimentally, if the mutant has some distinguishable feature --
e.g. color -- a sector will be observed.
Next we study the more interesting case of superior
mutants. In this case one observes a sector which grows faster than the
rest of the colony (see figure \ref{fig:sec346}).
In the simulations, a sharp segregation is obtained when the mutant is endowed
with a higher growth rate $\varepsilon$ (figure \ref{fig:41}) or a
higher motility (larger $D_0$, figure \ref{fig:42}).
Figure \ref{fig:22} depicts the results obtained by simulations of the
Communicating Walker model (with the mutant having a larger step
length, which is equivalent to a higher diffusion coefficient).
As can be seen, both models exhibit a fan-like sector of mutants, very
similar to the one observed in the experiments. The ``mixing area'',
where both strains are present, is narrow, and its width is related to
the width of the propagating front of the colony (the area where most
bacteria are alive, $b>s$).
\subsection{Branching growth}
We now turn to the case of sectoring during branching
growth.
As seen in figure \ref{fig:5}, in this case there is a slow process of segregation
even for a neutral mutation.
This results from the fact that a
particular branch may stem from a small number of bacteria, thus
allowing an initially insignificant number of mutants quickly to
become the majority in some branches, and therefore in some area of
the colony (genetic drift in small populations).
Mutants superior in motility (figure \ref{fig:6}) or growth rate
(figure \ref{fig:40}) form segregated fan-like sectors which burst out
of the relatively slowly advancing colony.
\subsection{The effect of chemotaxis}
As we have mentioned earlier, an additional important feature of the
bacterial movement is chemotaxis. We start by considering neutral
mutations in the case of a colony which employs repulsive chemotactic signaling.
As seen in figure \ref{fig:7}, the chemotactic response enhances the segregation of
neutral mutants. This results from
branches being thinner in the presence of repulsive chemotaxis, and the reduced
mixing of bacteria because of the directed motion.
In the case of mutants with superior motility (figure \ref{fig:1}), a segregated sector is
formed
which is not fan-like (as opposed to the case without
chemotaxis),
probably because of the biased, radially-oriented motion of the
bacteria, coming from the long range repulsive chemotaxis.
A fan-like sector {\em does} appear when the mutant has a higher
sensitivity to the chemotactic signals (figure \ref{fig:45}). In this
case, however, the sector is composed of a mixture of the
``wild-type'' and the mutants.
Figure \ref{fig:11} displays the result of Communicating Walker
simulation for this case\footnote{The
discrete model shows a higher tendency for segregation
here.}.
Note the similarity of both models' results to the experimental observations (figure \ref{fig:sectors12}).
The influence of food chemotaxis on the sectors (figures
\ref{fig:66},\ref{fig:2},\ref{fig:3}) is similar to that of repulsive
chemotaxis.
Thus, the shape of the emerging sector (e.g. fan-like or not), and the
difference between the branches in the original colony and in the
sector, might testify to the nature of the advantage possessed by the mutant.
\section{Discussion}
\label{sec:discussion}
In this paper we presented our study of the appearance of segregated
sectors of mutants in expanding bacterial colonies. After reviewing
the experimental observations of this phenomenon, we showed the
results of simulations performed using two different models, the
discrete ``Communicating Walker'' model, and a continuous
reaction-diffusion model. Using these models as an aid to analytical
reasoning, we are able to understand what factors -- geometrical,
regulatory and others -- favor the
segregation of the mutant population. These factors include:
\begin{enumerate}
\item Expansion of the colony -- in the form of a finite front propagating
away from areas of depleted nutrient, and towards areas of high
nutrient concentration.
\item Branching patterns, where the population in each branch is much
smaller than the colony's population, making genetic drift more
probable, so that a mutant can take over the whole population in a
sector of the colony.
\item Chemotaxis: Food chemotaxis and repulsive chemotactic signaling
cause the bacterial motion to become less random and more directed
(outward and towards nutrients), thus lowering mixing of populations.
\item An advantageous mutant, having e.g. a higher motility or a
faster reproduction rate, will probably conquer a sector of its own
and quickly become segregated. This sector will usually be fan-like,
bursting out of the colony, owing to the faster expansion of the
mutants, as compared to that of the wild-type population.
\end{enumerate}
As in every modeling endeavour, one must note the model's limitations
along with its success in reproducing and predicting biological
phenomena.
As mentioned above, care must be taken when modeling discrete entities
(i.e. bacteria) using a continuous model (for more on that, see
\cite{KL98,GKCB98,CGKB98}).
This point gains even more importance when we deal with the process
of mutation, which is a ``single bacterium event''. The discrete Walker
model is not free of this shortcoming either, because in this model
each walker represents not one bacterium, but many \cite{BSTCCV94a}.
The observed segregation of mutant populations raises some interesting
evolutionary questions. Faster movement (for example) is
an advantage for the bacteria, as the bursting sectors show. Why, then,
does this mutation not take over the general population and
become the wild-type?
In other words, why were there any wild-type bacteria for us to isolate
in the first place?
One possible answer is that this advantage might turn out to be a
{\em dis}advantage at different environmental conditions (e.g.
inability to remain confined to some small toxin-free oasis). Another
possibility is that this mutant, though possessing some superior
biological feature, is lacking in another feature, essential to its
long-term survival (e.g. wasting too much energy on movement when it
is not advantageous).
Beyond the study of sectoring in bacterial colonies,
intuition about the basic mechanisms of spatial
segregation of populations might be useful for other problems.
Such problems may include the important issues of growth of
tumors and the diversification of populations on a macroscopic scale.
Both may employ similar geometrical features and communication
capabilities, leading to segregation.
Notwithstanding the above mentioned reservations, we believe this
study demonstrates once more the capability of generic models to serve
as a theoretical research tool, not only to study the basic patterns
created by bacterial colonies, but also to gain deeper understanding
of more general phenomena, such as the segregation of mutants in an
expanding colony.
\section*{Acknowledgments}
We thank J. A. Shapiro and E. Boschke for allowing us to use
photographs of their colonies.
Identifications of the
\Tname and genetic studies are carried out in collaboration with the group of
D.\ Gutnick.
We are most thankful to I.\ Brains for technical assistance.
The presented studies are supported in part by the
Israeli Academy of Sciences (grant no.\ 593/95) and by the
Israeli-US Binational Science Foundation (BSF grant no.\ 00410-95).
\bibliographystyle{unsrt}
\section{Introduction}
A most exciting aspect of inflationary models is the possibility of
constraining them through observations of cosmological perturbations.
Generally, the power spectrum of primordial fluctuations can be easily related
to the inflation potential \cite{LL}. In more complicated models,
like multiple-stage inflation \cite{KLS,RS,ST},
slow-roll conditions may be violated; then, the primordial
spectrum has to be derived from first principles, and exhibits
a non-trivial feature at some scale.
These scenarios should not be regarded as unrealistic:
inflation has to take place within the framework
of a high-energy theory, with many scalar fields.
So, the real question is not whether multiple-stage inflation
can occur, but whether it is predictive, or
just speculative. Multiple-stage inflation has already
been invoked for many purposes:
generation of features in the power spectrum
at observable scales, setting of initial conditions,
description of the end of inflation. Here we are interested in the
first possibility. The range of cosmological perturbations observable today
in the linear or quasi-linear regime has been generated during approximately
8 {\it e}-folds, 50 {\it e}-folds before the end of inflation.
So, a feature in the primordial power spectrum could very well be observable,
without any fine-tuning;
only observations can rule out this possibility.
In the radiation and matter dominated Universe, primordial metric
perturbations couple to all components, leading to the formation
of Cosmic Microwave Background (CMB) anisotropies and Large Scale Structure
(LSS). It is well established that a pure Cold Dark Matter (CDM) scenario,
in a flat universe, and with a scale-invariant or even a tilted primordial
spectrum, is in conflict with LSS data. So, many variants of this scenario
have been considered, including a Broken Scale Invariant (BSI) power spectrum.
Indeed, standard CDM predictions are improved when {\it less}
power is predicted on small scales.
Specific cases have been compared accurately with both
LSS data and recent CMB experiments,
including the double-inflationary spectrum of \cite{P94}, and
the steplike spectrum of \cite{S92}.
However, even with such spectra, it has been shown \cite{LP,LPS1,LPS2}
that standard CDM cannot be made compatible with all observations.
Independently, recent constraints from Ia supernovae \cite{PGRW}
strongly favour the $\Lambda$CDM scenario, with a cosmological constant
$\Lambda \simeq 0.6-0.7$ and a Hubble parameter $h=0.6-0.7$.
In this framework, a reasonable fit to all CMB and LSS data can be obtained
with a flat or a slightly tilted spectrum. So, $\Lambda$CDM is very promising
and should be probably considered as a current standard picture,
from which possible
deviations can be tested. In this respect, in spite of large error bars,
some data indicate a possible sharp feature in the primordial power
spectrum around the scale $k\simeq 0.05 h$Mpc$^{-1}$.
First, APM redshift survey data seem to point out
the existence of a step in the present matter power spectrum \cite{GB,RS}.
Second, Abell galaxy cluster data exhibit a similar feature
\cite{E} (at first sight, it looks
more like a spike than like a step, but in fact a steplike primordial spectrum
multiplied with the $\Lambda$CDM transfer function reproduces this shape
\cite{LPS1}).
Third, this scale corresponds to $l \sim 300$, {\it i.e.} to the
first acoustic peak in the CMB anisotropies, and increasing power
at this scale, through a bump or a step, would lead to a better agreement
with Saskatoon. In the coming years, observations will either
rule out this possibility
(which will then be attributed to underestimated errorbars),
or confirm the existence of a feature, and
precisely constrain its shape.
In \cite{LPS1,LPS2}, we compared a specific BSI $\Lambda$CDM model
with CMB and LSS data.
The primordial spectrum was taken from a toy model proposed by Starobinsky
\cite{S92}, in which
the derivative of the inflaton potential changes very rapidly in a narrow
region. In this case, the primordial spectrum
consists of two approximately flat plateaus, separated by a step. This
model improves the fits to CMB and LSS data (even when the cluster data
are not taken into account). It also reproduces fairly well the feature of
Einasto et al. \cite{E} when the step is inverted with respect to the
usual picture, {\it i.e.}, with {\it more} power on small scales.
Independently, in a preliminary work \cite{GAS} (motivated
by the inflationary model of \cite{BAFO}),
a primordial spectrum with a bump centered
at $k\simeq 0.06 h$Mpc$^{-1}$ was compared with CMB
and LSS data, including more redshift surveys than in \cite{LPS1},
and not taking Einasto et al. \cite{E} data into account.
It is remarkable that among many cosmological scenarios,
this BSI spectrum combined with $\Lambda$CDM yields one of the best fits.
So, we have good motivations for searching inflationary models based on
realistic high-energy physics, that predict a bump or an inverted
step at some scale, and approximately scale-invariant plateaus far from it.
Successful comparison with the data requires the deviation
from scale invariance to be concentrated in a narrow region
$k_1 \leq k \leq k_2$; roughly, $k_2/k_1$ should be in the range 2--10.
This is quite challenging, because
double inflationary models studied so far predict systematically
{\it less} power on small scale, with
a logarithmic decrease on large scale rather than a plateau.
However, these models were based on the general framework of chaotic
inflation. Today, the best theoretically motivated framework
is hybrid inflation \cite{LLL}. Indeed, hybrid inflation has many attractive
properties, and appears naturally in supersymmetric models:
the inflaton field(s) follow(s) one of the flat directions of the potential,
and the approximately constant potential energy density is provided by the
susy-breaking F or D-term. When the F-term does not vanish,
conditions for inflation are generally spoiled by supergravity
corrections, unless interesting but rather complicated variants are considered
(for a very good review, see \cite{LR}). On the contrary, the simple D-term
inflation mechanism proposed by Bin\'etruy \& Dvali \cite{BD} and Halyo
\cite{HA} is naturally
protected against supergravity corrections, and can be easily implemented
in realistic particle physics models, like the GUT model of reference
\cite{DR}, without
invoking {\it ad hoc} additional sectors.
If supergravity is considered as an effective theory from superstrings,
D-term inflation is hardly compatible with heterotic
string theory \cite{LR,ERR}, but consistent with type I string theory
\cite{HA2}.
Our goal is to show that in this framework, a simple mechanism
can generate a steplike power spectrum with {\it more} power on small scales.
This mechanism is based on a variant of D-term inflation, with two
Fayet-Iliopoulos terms. However, the fact that it is D-term inflation
is not crucial for our purpose:
a similar lagrangian could be obtained with two non-vanishing F-terms,
or one D-term plus one F-term, like in \cite{ST}. We will not consider the
link between the model of this paper and string theory, putting by hand
the Fayet-Iliopoulos terms from the beginning.
In a very interesting paper, Tetradis \& Sakellariadou \cite{ST}
studied a supersymmetric double inflationary model with a quite similar
lagrangian. However, the motivation of these authors is
to save standard CDM. So, they are pushed to
regions in parameter space quite different from us, and do not
consider a steplike spectrum with flat plateaus, but rather
a power-law spectrum with $n=1$ on large scales and $n<1$ on small scales.
\section {The model}
We consider a supersymmetric model with two gauge symmetries
$U(1)_A \times U(1)_B$, and two associated
Fayet-Iliopoulos positive terms $\xi_A$ and $\xi_B$
(there is no motivation from string theory to do so, at least at the moment,
but SUSY and SUGRA allow an arbitrary number of Fayet-Iliopoulos
terms to be put by hand in the lagrangian).
In this framework, the most simple workable model involves six
scalar fields: two singlets $A$ and $B$, and four charged fields
$A_{\pm}$, $B_{\pm}$, with charges $(\pm 1,0)$, $(\pm 1,\pm 1)$.
Let us comment on this particular choice. First,
the presence of two singlets is crucial. With only one singlet
coupling to both $A_\pm$ and $B_\pm$, we would still have double-inflation,
but the second stage would be driven by both F and D-terms,
and no sharp feature would be predicted in the primordial spectrum.
Second, each charged field could be charged under one symmetry only;
then, a steplike spectrum would be generated, but with necessarily
{\it less} power
on small scales. Here, the fact that $B_-$ has a charge $-1$
under both symmetries is directly responsible for the inverted step,
as will become clear later. Finally,
both global susy and supergravity versions of this model can be studied:
supergravity corrections would change the details
of the scenario described thereafter, but not its main features.
We consider the superpotential:
$
W=\alpha A A_+ A_- + \beta B B_+ B_-
$
(with a redefinition of $A$ and $B$, we have suppressed
terms in $B A_+ A_-$, $A B_+ B_-$, and made ($\alpha$, $\beta$) real
and positive).
In global susy, the corresponding tree-level scalar potential is:
\begin{eqnarray}
&V&= \alpha^2 |A|^2 (|A_+|^2 \!\!\! + \! |A_-|^2) + \alpha^2 |A_+ A_-|^2
+ \frac{g_A^2}{2} (|A_+|^2 \!\!\! - \! |A_-|^2 \!\!\! + \! |B_+|^2 \!\!\!
- \! |B_-|^2 \!\!\! + \xi_A)^2
\nonumber \\
&+& \beta^2 |B|^2 (|B_+|^2 \!\!\! + \! |B_-|^2) + \beta^2 |B_+ B_-|^2
+ \frac{g_B^2}{2} (|B_+|^2 \!\!\! - \! |B_-|^2 \!\!\! + \xi_B)^2, \label{vtree}
\end{eqnarray}
with a global supersymmetric vacuum in which all fields are at the origin,
excepted $|B_-|=\sqrt{\xi_B}$, and, depending on the sign of $(\xi_A-\xi_B)$,
$|A_-|$ or $|A_+|=\sqrt{|\xi_A-\xi_B|}$.
\section{The two slow-roll inflationary stages}
\subsection{Inflationary effective potential}
There will be generically two stages of inflation,
provided that the initial conditions for $(A,B)$ obey:
\begin{equation} \label{condition}
|A|^2 \geq \frac{g_A^2 \xi_A}{\alpha^2}, \qquad
|B|^2 \geq \frac{g_A^2 \xi_A + g_B^2 \xi_B}{\beta^2}.
\end{equation}
Then, charged fields have positive squared masses
and stand in their (local) minimum
$A_\pm=B_\pm=0$ (for a discussion of the charged fields initial conditions,
see for instance \cite{LT,BCT}).
$A$ and $B$ have a constant phase, while their moduli
$\tilde{A} \equiv |A|/\sqrt{2}$ and $\tilde{B} \equiv |B|/\sqrt{2}$ behave
like two real inflaton fields and roll to the origin,
until one inequality in (\ref{condition})
breaks down. We assume that the condition on $B$ breaks first.
During this first stage, the field evolution is driven by the effective
potential:
$V_1= (g_A^2 \xi_A^2 + g_B^2 \xi_B^2)/2 + \Delta V_1$.
The one-loop correction $\Delta V_1$ is small
($\Delta V_1 \ll V_1$) \cite{HA2},
but crucial for the field evolution. It consists in two terms with a
logarithmic dependence on $A$ and $B$. The former takes a simple form
following from \cite{DSS},
because we can assume $g_A^2 \xi_A \ll \alpha^2 |A|^2 $.
The latter is more complicated because the dimensionless quantity
$b \equiv (g_A^2 \xi_A + g_B^2 \xi_B)/(\beta^2 |B|^2)$
goes to one when $B$ reaches its critical value.
The exact expressions are:
\begin{eqnarray}
&\Delta V_1& =
\frac{g_A^4 \xi_A^2}{32 \pi^2} \left( \ln \frac{\alpha^2 |A|^2}{\Lambda^2}
+ \frac{3}{2} \right) \\
&+& \frac{(g_A^2 \xi_A + g_B^2 \xi_B)^2}{32 \pi^2} \left(
\ln \frac{\beta^2 |B|^2}{\Lambda^2} + \frac{(1+b)^2 \ln (1+b)
+ (1-b)^2 \ln (1-b)}{2b^2} \right) \nonumber
\end{eqnarray}
($\Lambda$ is the renormalization energy scale at which $g_A$ and $g_B$ must
be evaluated).
When $b \ll 1$, the term involving $b$ tends to $\frac{3}{2}$.
Even at $b=1$, this term only
increases the derivative $(\partial V_1 / \partial \tilde{B})$
by a factor $2 \ln 2$: so, it can be
neglected in qualitative studies. In this approximation, it is easy to
calculate the trajectories of $A$ and $B$, and to note that $B$ reaches its
critical value before $A$ only if the initial field values
obey:
\begin{equation} \label{ini}
\frac{|B|_0}{|A|_0} < 1+\frac{g_B^2 \xi_B}{g_A^2 \xi_A}.
\end{equation}
This condition is natural, in the sense that it allows
$|A|_0$ and $|B|_0$ to be of the same order of magnitude, whatever the values
of $g_{A,B}$ and $\xi_{A,B}$.
At the end of the first stage, ($B$, $B_-$) evolve to another false vacuum:
$B=0$, $|B_-|^2=(g_A^2 \xi_A + g_B^2 \xi_B)/(g_A^2 + g_B^2)$.
During this transition,
the charged fields $B_+$, $A_\pm$ remain automatically stable
if we impose $\xi_B \leq 2 \xi_A$.
Afterwards, a second stage occurs: $A$ rolls to the origin, driven
by the potential:
\begin{equation}
V_2 = \frac{g_A^2 g_B^2 (\xi_A-\xi_B)^2}{2(g_A^2 + g_B^2)} \left( 1 +
\frac{g_A^2 g_B^2}{16 \pi^2 (g_A^2 + g_B^2)}
\left( \ln \frac{\alpha^2 |A|^2}{\Lambda^2} + \frac{3}{2} \right) \right),
\end{equation}
until $|A_+|$ or $|A_-|$ becomes unstable, and quickly drives the fields
to the supersymmetric minimum.
\subsection{Second stage of single-field inflation}
Let us focus first on the second stage of inflation,
in order to find the small scale primordial power spectrum ({\it i.e.}, if
the second stage starts at $t=t_2$, and $k_2 \equiv a(t_2) H(t_2)$,
the power spectrum at scales $k>k_2$). This stage should
last approximately $N \simeq 50$ {\it e}-folds, so that the transition
takes place when scales observable today cross the Hubble radius.
A standard calculation shows that for
$\alpha$ of order one, the second slow-roll condition breaks
before $A$ reaches its critical value (which is given by
$\alpha^2 |A_{C}|^2 = \frac{g_A^2 g_B^2}{g_A^2+g_B^2}
|\xi_A-\xi_B|$). Integrating back in time,
we find that $N$ {\it e}-folds before the end of inflation,
\begin{equation} \label{abreak}
|A|= \sqrt{\frac{N}{2 \pi^2}} \frac{g_A g_B}
{\sqrt{g_A^2 + g_B^2}} {\mathrm M}_P
\end{equation}
(we are using the reduced Planck mass
${\mathrm M}_P \equiv (8 \pi G)^{-1/2} = 2.4 \times 10^{18}~\mathrm{GeV}$).
Then, the primordial spectrum can be easily calculated, using the single-field
slow-roll expression \cite{LL}:
\begin{equation} \label{secspec}
\delta_H^2 = \frac{1}{75 \pi^2 {\mathrm M}_P^6}
\frac{V_2^3}{(d V_2 / d \tilde{A})^2} =
\frac{16 \pi^2}{75 {\mathrm M}_P^6}
\frac{g_A^2 + g_B^2}{g_A^2 g_B^2}
(\xi_A - \xi_B)^2 |A|^2.
\end{equation}
To normalize precisely this spectrum (\ref{secspec})
to COBE, it would be necessary to
calculate the contribution of cosmic strings generated by symmetry breaking
\cite{J}, to make assumptions about the geometry and matter content of the
universe, and to fix the amplitude of the step in the primordial
power spectrum (between COBE scale and $k_2$). However, if
perturbations generated by cosmic strings are not dominant, and if
the primordial spectrum is approximately scale-invariant
as required by observation, we can estimate the order of magnitude of the
primordial power spectrum at all observable scales: $\delta_H^2
\sim 5 \times 10^{-10}$. So, at $k=k_2$, inserting
(\ref{abreak}) into (\ref{secspec}), we find the constraint:
\begin{equation} \label{norm}
\sqrt{|\xi_A-\xi_B|} \sim 3 \times 10^{-3} {\mathrm M}_P
\sim 10^{15-16}~\mathrm{GeV}.
\end{equation}
At first sight, this constraint
could be satisfied when ($\sqrt{\xi_A}$, $\sqrt{\xi_B}$) are both much
greater than $10^{-3} {\mathrm M}_P$, and very close to each other.
Then, however, the amplitude of the large-scale plateau would violate the
COBE normalization, as can be seen from the following.
Also, there is no reason to believe that one term is much smaller than the
other: this would raise a fine-tuning problem. So, we will go on assuming
that both Fayet-Iliopoulos terms have an order of magnitude
$\sqrt{\xi_A}\sim \sqrt{\xi_B} \sim 10^{-3} {\mathrm M}_P$, just as
in single D-term inflation.
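As a quick numerical cross-check (our sketch, not part of the original derivation), note that inserting (\ref{abreak}) into (\ref{secspec}) makes the gauge couplings cancel, leaving $\delta_H^2(k_2) = (8N/75)\,(\xi_A-\xi_B)^2/{\mathrm M}_P^4$; the estimate $\delta_H^2 \sim 5 \times 10^{-10}$ then fixes $\sqrt{|\xi_A-\xi_B|}$ regardless of $g_A$ and $g_B$:

```python
import math

# Sketch in reduced Planck units (M_P = 1); coupling values are illustrative.
N = 50                    # e-folds of the second inflationary stage
delta_H2_target = 5e-10   # rough amplitude quoted in the text

def delta_H2_k2(g_A, g_B, dxi):
    """delta_H^2 at k = k_2: eq. (secspec) with |A|^2 taken from eq. (abreak)."""
    A2 = (N / (2 * math.pi**2)) * g_A**2 * g_B**2 / (g_A**2 + g_B**2)
    return (16 * math.pi**2 / 75) * (g_A**2 + g_B**2) / (g_A**2 * g_B**2) * dxi**2 * A2

# Invert delta_H^2 = (8 N / 75) (xi_A - xi_B)^2 for the Fayet-Iliopoulos terms:
dxi = math.sqrt(delta_H2_target * 75 / (8 * N))
print(math.sqrt(dxi))               # ~3e-3 M_P, as in eq. (norm)
print(delta_H2_k2(0.1, 0.3, dxi))   # ~5e-10, independent of g_A and g_B
```

Since the couplings drop out of the combination above, constraint (\ref{norm}) is insensitive to the choice of $g_A$ and $g_B$.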
The spectrum tilt $n_S$ at $k=k_2$ can be deduced from the
slow-roll parameters ($\epsilon$, $\eta$) \cite{LL}.
Like in any model of single-field
D-term inflation, $\epsilon$ can be neglected, and
$n_S(k_2)=1+2\eta(k_2)=1-1/N \simeq 0.98$. So, the spectrum on small scales
is approximately scale-invariant.
\subsection{First stage of two-field inflation}
During the first inflation,
the primordial spectrum calculation must be done carefully.
If slow-roll conditions were to hold during
the transition between both inflationary stages, the evolution of metric
perturbations (for modes outside the Hubble radius)
would be described at first order by the well-known slow-roll solution
(see for instance \cite{PS}):
\begin{equation} \label{phisr}
\Phi = - C_1 \frac{\dot{H}}{H^2}-
H \frac{d}{dt} \left( \frac{d_A V_A + d_B V_B}{V_A + V_B} \right),
\end{equation}
where $\Phi$ is the gravitational potential in the longitudinal gauge.
Here, $C_1$ is the time-independent coefficient of the growing
adiabatic mode, while $d_A$ and $d_B$ are coefficients related to
the non-decaying
isocurvature mode (in fact, only $d_A-d_B$ is meaningful \cite{PS}).
The expression of $V_A$ and $V_B$ at a given time can be calculated
only if the whole field evolution is known, between
the first Hubble crossing and the end of inflation.
Formally, in the general case of multiple fields $\phi_i$, the $V_i$'s can be
found by integrating $d V_i = (\partial V / \partial \phi_i) d \phi_i$
back in time, starting from the end of inflation. This just means here
that during the second slow-roll, $V_A=V_2$, $V_B=0$, and during the first
slow-roll:
\begin{eqnarray}
V_A &=& \frac{g_A^2 g_B^2 (\xi_A - \xi_B)^2}{2 (g_A^2+g_B^2)}
+\frac{g_A^4 \xi_A^2}{32 \pi^2} \left( \ln \frac{\alpha^2 |A|^2}{\Lambda^2}
+ \frac{3}{2} \right) , \nonumber \\
V_B &=& \frac{( g_A^2 \xi_A + g_B^2 \xi_B )^2}{2 (g_A^2 + g_B^2)}
+\frac{(g_A^2 \xi_A + g_B^2 \xi_B)^2}{32 \pi^2} \left(
\ln \frac{\beta^2 |B|^2}{\Lambda^2} + \frac{3}{2} \right).
\end{eqnarray}
We see immediately from (\ref{phisr}) with $V_B=0$ that the
isocurvature mode is suppressed during the second inflationary stage.
On the other hand, it must be taken into account
during the first stage. This leads to the well-known expression
\cite{PS}:
\begin{equation} \label{lsp}
\delta_H^2 (k) = \frac{V}{75 \pi^2 {\mathrm M}_P^6}
\left( \frac{V_{A}^2}{(d V_A / d \tilde{A})^2} +
\frac{V_{B}^2}{(d V_B / d \tilde{B})^2} \right)_k,
\end{equation}
where the subscript $k$ indicates that quantities are evaluated at the
time of Hubble radius crossing during the first stage.
If slow-roll is briefly disrupted during the transition (this is the
interesting case if we want to generate a narrow step or bump),
the solution (\ref{phisr}) does not hold at all times, but we still
have a more general exact solution, describing the adiabatic mode
(in the long-wavelength regime):
\begin{equation} \label{phinsr}
\Phi = C_1 \left( 1-\frac{H}{a} \int_0^t a dt \right) +
({\rm other~modes}).
\end{equation}
If, during the transition, the other modes are not dominant (as usually
expected), the matching of the three solutions: (\ref{phisr}) before the
transition, (\ref{phinsr}) during the transition, and (\ref{phisr}) afterwards,
shows that $C_1$ is really the same number during all stages:
the slow-roll disruption doesn't leave a signature on
the large-scale power spectrum (\ref{lsp}).
On the other hand, if a specific phenomenon amplifies significantly
isocurvature modes during the transition, the same matching
shows that during the second stage, there will be an additional term
contributing to the adiabatic mode. This role may be played by
parametric amplification of metric perturbations, caused by
oscillations of $B_-$. Generally speaking, the possibility
that parametric resonance could affect modes well outside the Hubble
radius is still unclear, and might be important in multi-field
inflation \cite{PARA}.
In our case, this problem would require a careful numerical
integration, and would crucially depend on the details of the
one-loop effective potential during the transition. So, we leave this issue
for another publication, and go on under the standard assumption that
expression (\ref{lsp}) can be applied to
any mode that is well outside the Hubble radius during the transition.
Let us apply this assumption to our model, and find
the large scale primordial power spectrum ({\it i.e.}, if
the first stage ends at $t=t_1$, and $k_1 \equiv a(t_1) H(t_1)$,
the power spectrum at scales $k<k_1$). The contribution to $\delta_H$ arising
from perturbations in $A$ reads:
\begin{equation} \label{dha}
\delta_H^2 |_A \equiv \frac{V}{75 \pi^2 {\mathrm M}_P^6}
\frac{V_A^2}{(d V_A / d \tilde{A})^2}
= \frac{16 \pi^2}{75 {\mathrm M}_P^6}
\frac{g_B^4 (g_A^2 \xi_A^2 + g_B^2 \xi_B^2) (\xi_A - \xi_B)^4}
{g_A^4 (g_A^2 + g_B^2)^2 \xi_A^4} |A|^2.
\end{equation}
The transition lasts approximately 1 {\it e}-fold (indeed, during this
stage, the evolution
is governed by second-order differential equations with damping terms
$+3H \dot{\phi_i}$, and $(B, B_-)$ stabilize within a time-scale
$\Delta t \sim H^{-1}$). During that time, $A$ is still in slow-roll
and remains approximately constant. So, using eqs. (\ref{secspec}) and
(\ref{dha}),
it is straightforward to estimate the amplitude of the step in the
primordial spectrum, under the assumption that $\delta_H (k_2) |_B$ is
negligible:
\begin{equation} \label{p}
p^2 \equiv \frac{\delta_H^2 (k_1) |_A}{\delta_H^2 (k_2)}
= \frac{(1-\xi_B/\xi_A)^2 (1+g_B^2 \xi_B^2/g_A^2 \xi_A^2)}{(1+g_A^2/g_B^2)^3}.
\end{equation}
Since we already imposed $\xi_B \leq 2\xi_A$, $p$ can easily be smaller
than one, so that we obtain, as desired, more power on small scales.
The simple explanation is that with $B_-$ charged under both symmetries, the
transition affects not only the dynamics of $(B, B_-)$, but also the
one-loop correction proportional to $(\ln |A|^2)$, in such a way that the
slope $(\partial V / \partial \tilde{A})$ can decrease by the above factor $p$.
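To make this concrete, the step amplitude (\ref{p}) can be evaluated numerically at a sample point of our own choosing, $g_B = g_A$ and $\xi_B = 2\xi_A$, which saturates the stability bound $\xi_B \leq 2 \xi_A$:

```python
import math

# Sketch: evaluate the step amplitude of eq. (p) at an illustrative point.
def step_amplitude(g_A, g_B, xi_A, xi_B):
    """p = delta_H(k_1)|_A / delta_H(k_2); only ratios of the parameters enter."""
    p2 = ((1 - xi_B / xi_A)**2
          * (1 + g_B**2 * xi_B**2 / (g_A**2 * xi_A**2))
          / (1 + g_A**2 / g_B**2)**3)
    return math.sqrt(p2)

p = step_amplitude(g_A=0.1, g_B=0.1, xi_A=1.0, xi_B=2.0)
print(p)   # p^2 = 5/8, i.e. p ~ 0.79: an inverted step (more power on small scales)
```

This particular point happens to fall inside the observationally preferred window $0.75 \leq p \leq 0.85$ discussed below.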
However, perturbations in $B$ must also be taken into account.
Their contribution to the large scale primordial spectrum reads:
\begin{equation}
\delta_H^2|_B \equiv \frac{V}{75 \pi^2 {\mathrm M}_P^6}
\frac{V_B^2}{(d V_B/ d \tilde{B})^2} = \frac{16 \pi^2}{75 {\mathrm M}_P^6}
\frac{(g_A^2 \xi_A^2 + g_B^2 \xi_B^2)}
{(g_A^2+g_B^2)^2} |B|^2.
\end{equation}
At the end of the first stage, the value of $|B|$ is roughly given by
${\mathrm M}_P^2 (\partial^2 V_1 / \partial \tilde{B}^2) = V_1$,
since after the breaking
of this slow-roll condition, eq. (\ref{lsp}) is not valid anymore,
and $B$ quickly rolls to its critical value. This yields:
\begin{equation}
|B|=\frac{g_A^2 \xi_A + g_B^2 \xi_B}{2 \pi
\sqrt{g_A^2 \xi_A^2 + g_B^2 \xi_B^2}} {\mathrm M}_P.
\end{equation}
We can now compare $\delta_H^2 |_A$ and $\delta_H^2 |_B$ at the end of the
first stage. It appears that for a natural choice of free
parameters ($g_A \sim g_B$, $\xi_A \sim \xi_B \sim |\xi_A - \xi_B|$),
both terms can give a dominant contribution to the global primordial
spectrum.
If $\delta_H^2 |_B$ dominates, we expect $p>1$, and a
tilted large-scale power spectrum
(due to the decrease of $|B|$, which is at the end of its
slow-roll stage). On the other hand, if $\delta_H^2 |_B$ is
negligible, the step amplitude is directly
given by (\ref{p}), and the large
scale plateau is approximately flat, with $n_S \simeq 0.98$
like in single-field D-term inflation. As we said in the introduction,
this latter case is the most interesting one
in the framework of $\Lambda$CDM models.
A numerical study shows that $\delta_H^2 |_A \gg \delta_H^2 |_B$
holds in a wide region in parameter space.
Indeed, we explored systematically the region
in which $0.1 \leq g_B/g_A \leq 10$, and $(\sqrt{\xi_A}, \sqrt{\xi_B})$
are in the range $ (0.1-10) \times 10^{-3} {\mathrm M}_P$.
We find that $\delta_H^2 |_A \geq 10~\delta_H^2 |_B$ whenever:
\begin{equation}
(g_B \geq 0.8 g_A,~\sqrt{\xi_A} \geq 1.1 \sqrt{\xi_B}) \quad
{\rm or} \quad
(\forall g_A, \forall g_B,~\sqrt{\xi_B} \geq 1.1 \sqrt{ \xi_A}).
\end{equation}
So, inside these two regions, the primordial
spectrum has two approximately scale-invariant plateaus $(n_S \simeq 0.98)$,
and the step amplitude is given by (\ref{p}).
Further, a good agreement with observations requires a small inverted step,
$0.75 \leq p \leq 0.85$ \cite{LPS1}, and of course, a correct order of
magnitude for the amplitude,
$\delta_H^2(k_1) \sim {\cal O} (10^{-10})$. These additional constraints
single out two regions in parameter space:
\begin{eqnarray}
&& \left( 2.2 g_A \leq g_B \leq 4 g_A,
~13(\sqrt{\xi_A}-4.5 \! \times \! 10^{-3})
\leq 10^3 \xi_B
\leq 8(\sqrt{\xi_A}-2.5 \! \times \! 10^{-3}) \right)
\nonumber \\
&{\rm or}&
\quad
\left( g_B \leq 1.5 g_A,
~150\xi_A-2.5 \! \times \! 10^{-3}
\leq \sqrt{\xi_B}
\leq 90\xi_A-4.3 \! \times \! 10^{-3} \right).
\end{eqnarray}
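A minimal numerical sketch of the dominance check (our illustrative point, not the paper's full scan): take $g_B = g_A$ and $\xi_B = 2\xi_A$, with $|A|$ from (\ref{abreak}) at $N = 50$ (assuming, as in the text, that it stays roughly constant through the transition) and $|B|$ from the slow-roll breaking estimate above; the overall scale of the Fayet-Iliopoulos terms cancels, so only $\xi_B/\xi_A$ matters:

```python
import math

# Sketch: ratio delta_H^2|_A / delta_H^2|_B at the end of the first stage,
# built from eq. (dha) and the corresponding |_B expression.  Units: M_P = 1;
# the scale of the Fayet-Iliopoulos terms drops out, so xi_A is set to 1.
N = 50

def dominance_ratio(g_A, g_B, xi_A, xi_B):
    A2 = (N / (2 * math.pi**2)) * g_A**2 * g_B**2 / (g_A**2 + g_B**2)   # eq. (abreak)
    B2 = ((g_A**2 * xi_A + g_B**2 * xi_B)
          / (2 * math.pi * math.sqrt(g_A**2 * xi_A**2 + g_B**2 * xi_B**2)))**2
    prefactor = g_B**4 * (xi_A - xi_B)**4 / (g_A**4 * xi_A**4)
    return prefactor * A2 / B2

r = dominance_ratio(g_A=0.1, g_B=0.1, xi_A=1.0, xi_B=2.0)
print(r)   # ~28, so delta_H^2|_A >= 10 delta_H^2|_B holds at this point
```

This is consistent with the second region above, since here $\sqrt{\xi_B} = \sqrt{2}\,\sqrt{\xi_A} \geq 1.1 \sqrt{\xi_A}$.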
In this section, we only studied the primordial spectrum of
adiabatic perturbations. Indeed, it is easy to show that tensor
contributions to large-scale CMB anisotropies are negligible in this model,
like in usual single-field D-term inflation (a significant
tensor contribution would require $g_{A,B} \sim {\cal O} (10)$
or greater, while the consistency of the underlying SUSY or SUGRA theory
requires $g_{A,B} \sim {\cal O} (10^{-1})$ or smaller \cite{HA2}).
\subsection{Transition between slow-roll inflationary stages}
We will briefly discuss the issue of primordial spectrum calculation
for $k_1 < k < k_2$. During this stage, $A$ is still slow-rolling, but
$B$ and $B_-$ obey the second-order differential equations of global
susy:
\begin{equation} \label{motion}
\ddot{\tilde{B}}_{(-)}+3H \dot{\tilde{B}}_{(-)}+
\frac{\partial V}{\partial \tilde{B}_{(-)}}=0.
\end{equation}
The derivatives of the potential are given by the tree-level
expression (\ref{vtree}),
plus complicated one-loop corrections.
In supergravity, the tree-level potential will be different from
(\ref{vtree}), even with a minimal gauge kinetic function
and K\"ahler potential. With a non-minimal K\"ahler potential, an additional
factor will also multiply both inertial and damping terms
$\ddot{\tilde{B}}_{(-)}+3H \dot{\tilde{B}}_{(-)}$.
The evolution of background quantities has been
already studied in a similar situation in \cite{LT}.
As far as perturbations are concerned, the simplest possibility would be to
recover the generic primordial power spectrum introduced by Starobinsky
\cite{S92}, also in the case of a transition between
two slow-roll inflationary stages, with a jump in the potential derivative.
This would be possible if:
(i) $H$ was approximately constant in the vicinity of the
transition; (ii) the number of {\it e}-folds
separating both slow-roll regimes
was much smaller than one (in other terms, the time-scale of the transition
must be much smaller than $H^{-1}$). However, in the model under
consideration, (i) is
not a good approximation, because during the transition there is a jump
in the potential itself. Moreover, (ii) is in contradiction with the
equations of motion (\ref{motion}): $B$ and $B_-$ only stabilize after
a time-scale $\delta t \sim H^{-1}$, due to
the damping factor $+3H$. This statement is very general, and holds
even in supergravity with a non-minimal K\"ahler potential. So, it seems
that only a first-order phase transition can reproduce the exact power
spectrum of \cite{S92},
as previously noticed by Starobinsky himself \cite{S98}.
A model of inflation with a first-order phase transition has been proposed
in \cite{BAFO}.
Further, we do not believe that the power spectrum
in our model can be approximated by the numerical solution of
double chaotic inflation \cite{P94}.
Indeed, in chaotic double inflation, the field masses are typically of the
same order as the Hubble parameter, so the background fields do not undergo
a strong oscillatory regime. In our case, estimating the effective mass of
$B_-$ from the tree-level potential, and comparing it with
$V$ (which gives a lower bound on $H$),
we see that before stabilization $B_-$
will undergo $\sim {\mathrm M}_P/\sqrt{\xi_B} \sim 10^3$
oscillations. These oscillations are relevant for the primordial spectrum
calculation, because $B_-$ perturbations and metric perturbations will
strongly couple.
So, for each particular model, a
numerical integration of the background and perturbation equations should
be performed in order to find the shape, and even the width, of the
transition in the primordial power spectrum.
The result could be either a step or a bump, and since
the fields stabilize in $\delta t \sim H^{-1}$, it is reasonable to
think that the width of the
feature will not be too large, $(k_2/k_1)\leq 10$. So, in any case, the
primordial spectrum should be in good agreement with current observations,
but precise comparison with future data requires numerical work.
\section{Conclusion}
We introduced a model of double D-term inflation, with two $U(1)$ gauge
symmetries, and two associated Fayet-Iliopoulos terms.
A phase of instability for two fields $(B, B_-)$ separates two slow-roll
inflationary stages. During this transition, $B_-$ partially cancels the
Fayet-Iliopoulos terms, causing a jump in the potential; also, since
it is charged under both symmetries, it affects the one-loop corrections
in such way that the potential can become less steep in the direction of one
inflaton field, $\tilde{A}$. As a result, for a wide window in the space
of free parameters
(the Fayet-Iliopoulos terms $\xi_{A,B}$ and the gauge
coupling constants $g_{A,B}$),
the primordial spectrum of adiabatic perturbations consists
of two approximately scale-invariant plateaus, separated by a feature of
unknown shape, presumably of small width, and with {\it more} power on small scales.
The amplitude of the step, $p$, is given by a simple function of parameters
(\ref{p}). In the framework of $\Lambda$CDM, spectra with
$0.75 \leq p \leq 0.85 $ are likely to
fit current LSS and CMB data very well, as argued in \cite{LPS1,LPS2}.
However, for a detailed comparison of our model
with observations, we need the shape of the primordial power spectrum
between the plateaus. This issue requires a numerical
integration, and the result will be model-dependent, in contrast with
predictions for the slow-roll plateaus.
Finally, before any precise comparison with observations,
one should also consider
the production of local cosmic strings, which is a typical
feature of D-term inflation \cite{J}. Indeed, CMB anisotropies
and LSS may result from both local cosmic strings and inflationary
perturbations \cite{CHM}.
\section*{Acknowledgements}
I would like to thank D.~Polarski, A.~Starobinsky and N.~Tetradis for
illuminating discussions. G.~Dvali, E.~Gawiser and R.~Jeannerot also
provided very useful comments on this work. I am supported by the
European Community under TMR network contract No. FMRX-CT96-0090.
\section{Introduction}
In the present paper we consider a generalized Hubbard model including
correlated hopping~\cite{1,2}. In such a model the hopping integrals
describing the hopping of holes and of doublons are different. Both also
differ from the hopping integral connected with the processes of paired
creation and destruction of holes and doublons. In recent years similar
models have been studied intensively~\cite{2a}--\cite{12}. In particular,
some of these models~\cite{2a,2b,4,4a,5} have been solved exactly under the
condition that the number of doubly occupied sites is conserved.
An important puzzle arising in the investigation of such generalized models
is the metal-insulator transition problem.
In papers~\cite{2,13} a new mean-field approximation (MFA)
which leads to the correct description of metal-insulator transition (MIT)
has been proposed.
In the present paper we use this MFA to study MIT in a generalized Hubbard
model with correlated hopping at half-filling and zero temperature.
In Sec.~2 we
introduce the Hamiltonian of the narrow-band model. The decoupling scheme
for the Green functions is described in Sec.~3, where the single-particle
Green function and the energy spectrum of the electron system are also
calculated. In Sec.~4 the
expression for the energy gap width is found. With the help of this formula and
the obtained expression for the concentration of polar states, the MIT is studied.
Finally, Sec.~5 is devoted to the conclusions.
\section{Hamiltonian of narrow-band model}
\setcounter{equation}{0}
Theoretical analysis, on the one hand, and
available experimental data, on the other hand, point out the necessity of
generalizing the Hubbard model~\cite{14} by taking into account
interelectron interactions that describe intersite hopping of electrons
(correlated hopping)~\cite{1,14a,14b}. The characteristic property of these
correlated hopping integrals is their dependence on the occupation of sites
by electrons.
So, we start from the following generalization of the Hubbard model
including correlated hopping~\cite{1,2}:
\begin{eqnarray}
H=&&-\mu \sum_{i\sigma}a_{i\sigma}^{+}a_{i\sigma}+
{\sum \limits_{ij\sigma}}'a_{i\sigma}^{+}\left(t_0+\sum_k J(ikjk)n_k\right)
a_{j\sigma}
\nonumber\\
&&+U \sum_{i}n_{i\uparrow}n_{i\downarrow},
\end{eqnarray}
where $a_{i\sigma}^{+}, a_{i\sigma}$ are the operators of creation and
destruction for the electrons with spin $\sigma$ ($\sigma=\uparrow,
\downarrow$) on $i$-site, $n_{i\sigma}=a_{i\sigma}^{+}a_{i\sigma}$,
$n_i=n_{i\uparrow}+n_{i\downarrow}$; $\mu$ is the chemical potential;
$U$ is the intra-atomic Coulomb repulsion;
$t_0$ is the matrix element describing the hopping of electrons
between nearest-neighbor sites of the lattice as a consequence of the
electron-ion interaction,
\begin{eqnarray}
J(ikjk)=\int \int \phi^*({\bf r}-{\bf R}_i)\phi({\bf r}-{\bf R}_j)
{e^2\over |{\bf r}-{\bf r'}|}|\phi({\bf r'}-{\bf R}_k)|^2{\bf drdr'},
\end{eqnarray}
($\phi$-function is the Wannier function).
The prime on the second sum in Eq.~(2.1) signifies that $i\neq{j}$.
In Hamiltonian~(2.1) we rewrite the sum
$\sum'_{ij\sigma k}J(ikjk)a^{+}_{i\sigma}n_ka_{j\sigma}$
in the form
\begin{eqnarray}
{\sum_{ij\sigma}}'\sum_{\stackrel{k\neq{i}}{k\neq{j}}}J(ikjk)a^{+}_{i\sigma}
n_ka_{j\sigma}+{\sum_{ij\sigma}}'\left(J(iiij)a^{+}_{i\sigma}a_{j\sigma}
n_{i{\bar \sigma}}+h.c.\right)
\end{eqnarray}
(${\bar \sigma}$ denotes the spin projection opposite to $\sigma$);
here we have used that $J(iiji)=J(jiii)=J(iiij)$ as a consequence of the
symmetry of the matrix elements. Let us suppose (as in papers~\cite{1,2}) that
\begin{eqnarray}
{\sum_{ij\sigma}}'\sum_{\stackrel{k\neq{i}}{k\neq{j}}}J(ikjk)a^{+}_{i\sigma}
n_ka_{j\sigma}=T_1{\sum_{ij\sigma}}'a^{+}_{i\sigma}a_{j\sigma}
\end{eqnarray}
with $T_1=n\sum_{\stackrel{k\neq{i}}{k\neq{j}}}J(ikjk)$
and $n=\langle n_{i\uparrow}+n_{i\downarrow}\rangle$ (sites $i$ and $j$ are
nearest neighbors); it should be noted that this supposition is exact in
the homeopolar limit ($n_i=1$).
Thus at half-filling ($n=1$) we can write Hamiltonian~(2.1) in the form
\begin{eqnarray}
H=&&-\mu \sum_{i\sigma}a_{i\sigma}^{+}a_{i\sigma}+
t{\sum \limits_{ij\sigma}}'a_{i\sigma}^{+}a_{j\sigma}+
T_2{\sum \limits_{ij\sigma}}' \left(a_{i\sigma}^{+}a_{j\sigma}n_{i\bar \sigma}+h.c.\right)
\nonumber\\
&&+U \sum_{i}n_{i\uparrow}n_{i\downarrow},
\end{eqnarray}
where $t=t_0+T_1$ and $T_2=J(iiij)$.
Note that at $n\neq 1$ the model is also described by Hamiltonian~(2.5),
but with $t=t_0+nT_1$; i.e., taking the correlated hopping $T_1$ into account
leads to a concentration dependence of the hopping integral $t$. This
distinguishes the present model from similar
models~\cite{2a,2b,4,4a,14a,14b}. In the case $n=1$,
taking the correlated hopping $T_1$ into account leads only to a
renormalization of the hopping integral $t_0$.
We rewrite Hamiltonian~(2.5) in terms of the $X_i^{kl}$ Hubbard
operators~\cite{15} using the formulae~\cite{16}
\begin{eqnarray*}
a_{i\uparrow}^{+}=X_i^{\uparrow 0}-X_i^{2\downarrow}, \ \
a_{i\uparrow}=X_i^{0\uparrow}-X_i^{\downarrow 2},
\\
a_{i\downarrow}^{+}=X_i^{2\uparrow}+X_i^{\downarrow 0}, \ \
a_{i\downarrow}=X_i^{\uparrow2}+X_i^{0\downarrow},
\end{eqnarray*}
where $X_i^{kl}$ is the operator of the transition of site $i$ from state
$|l\rangle$ to state $|k\rangle$; $|0\rangle$ denotes a site not occupied
by an electron (a hole), $|\sigma \rangle\equiv a_{i\sigma}^{+}|0\rangle$
denotes a site singly occupied by an electron with spin $\sigma$, and
$|2\rangle\equiv a_{i\uparrow}^{+}a_{i\downarrow}^{+}|0\rangle$ denotes a
site doubly occupied by two electrons with opposite spins (a doublon).
In terms of $X_i^{kl}$-operators Hamiltonian~(2.5) takes the following
form:
\begin{eqnarray}
H=H_0+H_1+H'_1,
\end{eqnarray}
with
\begin{eqnarray*}
&&H_0=-\mu \sum_i \left(X_i^{\uparrow}+X_i^{\downarrow}+2X_i^2\right)+
U\sum_{i}X_i^2,\\
&&H_1=t{\sum \limits_{ij\sigma}}'X_i^{\sigma 0} X_j^{0\sigma} +
\tilde{t}{\sum \limits_{ij\sigma}}'X_i^{2\sigma}X_j^{\sigma 2},
\\
&&H'_1=t'{\sum \limits_{ij\sigma}}' \left(\eta_{\sigma}X_i^{\sigma 0}
X_j^{\bar{\sigma} 2}+h.c.\right),
\end{eqnarray*}
where $X_i^k=X_i^{kl}X_i^{lk}$ is the operator of the number of
$|k\rangle$-states on $i$-site, $\eta_{\uparrow}=-1,\ \eta_{\downarrow}=1$;
\begin{eqnarray}
\tilde{t}=t+2T_2,\quad
t'=t+T_2.
\end{eqnarray}
The single-particle Green function
\begin{eqnarray}
G_{pp'}^{\sigma}(E)=\langle\langle a_{p\sigma}|a_{p'\sigma}^{+}\rangle\rangle
\end{eqnarray}
in terms of Hubbard operators is written
\begin{eqnarray}
G_{pp'}^{\sigma}(E)=&&\langle\langle
X_p^{0\sigma}|X_{p'}^{\sigma 0}\rangle\rangle +\eta_{\sigma}\langle\langle
X_p^{0\sigma}|X_{p'}^{2\bar{\sigma}}\rangle\rangle +\eta_{\sigma}\langle\langle
X_p^{\bar{\sigma} 2}|X_{p'}^{\sigma 0}\rangle\rangle \nonumber\\
&&+\langle\langle
X_p^{\bar{\sigma} 2}|X_{p'}^{2\bar{\sigma}}\rangle\rangle.
\end{eqnarray}
$H_0$ describes the atomic limit of narrow-band models.
$H_1$ describes the translational hopping of holes and doublons. In the
present model (in contrast to narrow-band models of the Hubbard type) the
hopping integrals of holes, $t$, and of doublons, $\tilde{t}$, are different.
As a consequence of this difference between the hopping integrals describing
the translational hopping of current carriers within the lower (hole) and
upper (doublon) bands, the energy width of the upper band can be much
smaller, and the effective mass of current carriers within this band much
larger, than in the lower band.
Thus, within the proposed model the notions of ``wide'' and ``narrow''
subbands and of ``light'' and ``heavy'' current carriers arise as a result
of electron-electron interactions.
$H'_1$ describes the processes of paired creation and destruction of holes
and doublons.
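As a small numerical illustration of this band asymmetry, one can compare the hopping integrals $\tilde{t}=t+2T_2$ and $t'=t+T_2$ for sample parameter values (in units of $|t_0|$, chosen only for illustration), taking the tight-binding effective mass to be inversely proportional to the hopping integral:

```python
# Sample (illustrative) parameters, in units of |t_0|:
t0, T1, T2 = -1.0, 0.1, 0.2

t = t0 + T1          # hole-band hopping integral at n = 1
tt = t + 2.0 * T2    # doublon-band hopping integral \tilde{t}
tp = t + T2          # pair creation/destruction integral t'

# Ratio of effective masses (upper/lower band) ~ |t| / |\tilde{t}|:
mass_ratio = abs(t) / abs(tt)
```

For these sample values the doublon-band carriers come out 1.8 times heavier than the hole-band carriers.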
\section{Energy spectrum of electron system: MFA}
\setcounter{equation}{0}
Here we use MFA proposed in papers~\cite{2,13} to calculate
the energy spectrum of electron system described by Hamiltonian~(2.7).
The Green function $\langle\langle X_p^{0\sigma}|X_{p'}^{\sigma 0}\rangle\rangle$
is given by the equation
\begin{eqnarray}
(E+\mu)\langle\langle X_p^{0\sigma}|X_{p'}^{\sigma 0}\rangle\rangle=&&
{\delta_{pp'}\over 2\pi}\langle X_p^{\sigma}+X_p^{0}\rangle+
\langle\langle\left[X_p^{0\sigma}, H_1\right]|X_{p'}^{\sigma 0}\rangle\rangle
\nonumber\\
&&+\langle\langle\left[X_p^{0\sigma}, H'_1\right]|X_{p'}^{\sigma 0}\rangle
\rangle,
\end{eqnarray}
with $[A, B]=AB-BA$,
\begin{eqnarray}
\left[X_p^{0\sigma}, H_1\right]=t\sum_j\left((X_p^{\sigma}+X_p^{0})
X_j^{0\sigma}+X_p^{\bar{\sigma} \sigma}X_j^{0\bar{\sigma}}\right)-\tilde{t}
\sum_jX_p^{02}X_j^{2\sigma},
\end{eqnarray}
\begin{eqnarray}
\left[X_p^{0\sigma}, H'_1\right]=&&-t'\sum_jX_p^{02}X_j^{\bar{\sigma}0}+
t'\sum_jX_p^{\bar{\sigma} \sigma}X_j^{\sigma2}\nonumber\\
&&-t'\sum_j(X_p^{\sigma}+X_p^{0})X_j^{\bar{\sigma}2}.
\end{eqnarray}
To truncate the chain of equations for the Green functions according to the
generalized Hartree-Fock approximation~\cite{17}, we suppose that
\begin{eqnarray}
\left[X_p^{0\sigma}, H_1\right]=\sum_{j}\epsilon(pj)X_j^{0\sigma},\
\left[X_p^{0\sigma}, H'_1\right]=\sum_{j}\epsilon_1(pj)X_j^{\bar{\sigma}2},
\end{eqnarray}
where $\epsilon(pj)$ and $\epsilon_1(pj)$ are non-operator ($c$-number)
expressions. The choice of representation~(3.4) is prompted by the operator
structure of the commutators~(3.2) and (3.3), which reflects the energy
non-equivalence of the hopping processes generated by $H_1$ and $H'_1$.
Taking into account~(3.4) we rewrite Eq.~(3.1) in the form
\begin{eqnarray}
(E+\mu)\langle\langle X_p^{0\sigma}|X_{p'}^{\sigma 0}\rangle\rangle=&&
{\delta_{pp'}\over 2\pi}\langle X_p^{\sigma}+X_p^{0}\rangle+
\sum_{j}\epsilon(pj)\langle\langle X_j^{0\sigma}|X_{p'}^{\sigma 0}\rangle
\rangle
\nonumber\\
&&+\sum_{j}\epsilon_1(pj)\langle\langle X_j^{\bar{\sigma}2}|
X_{p'}^{\sigma 0}\rangle\rangle.
\end{eqnarray}
Anticommuting both sides of the first of formulae~(3.4) with
$X_{k}^{\sigma 0}$ and of the second with $X_k^{2\bar{\sigma}}$, we obtain
\begin{eqnarray}
\epsilon(pk)(X_{k}^{\sigma}+X_{k}^{0})=&&t(X_{p}^{\sigma}+X_{p}^{0})
(X_{k}^{\sigma}+X_{k}^{0})+tX_{k}^{\sigma\bar{\sigma}}
X_{p}^{\bar{\sigma}\sigma}
\nonumber\\
&&-\delta_{pk}t\sum_{j}X_{k}^{\bar{\sigma}0}X_{j}^{0\bar{\sigma}}
+\delta_{pk}\tilde{t}\sum_{j}X_{j}^{2\sigma}X_{k}^{\sigma 2}
\nonumber\\
&&-\tilde{t}X_{k}^{2 0}X_{p}^{0 2},
\end{eqnarray}
\begin{eqnarray}
\epsilon_1(pk)(X_{k}^{\bar{\sigma}}+X_{k}^{2})=&&-t'(X_{p}^{\sigma}+X_{p}^{0})
(X_{k}^{\bar{\sigma}}+X_{k}^{2})+t'X_{p}^{\bar{\sigma}\sigma}
X_{k}^{\sigma\bar{\sigma}}
\nonumber\\
&&-\delta_{pk}t'\sum_{j}X_{j}^{\bar{\sigma}0}X_{k}^{0\bar{\sigma}}
+\delta_{pk}t'\sum_{j}X_{k}^{2\sigma}X_{j}^{\sigma 2}
\nonumber\\
&&-t'X_{k}^{2 0}X_{p}^{0 2}.
\end{eqnarray}
Similarly, for the Green function $\langle\langle X_{p}^{\bar{\sigma}2}|
X_{p'}^{\sigma 0}\rangle\rangle$ we can write the equation
\begin{eqnarray}
(E+\mu-U)\langle\langle X_{p}^{\bar{\sigma}2}| X_{p'}^{\sigma 0}\rangle
\rangle=&&\sum_{j}\tilde{\epsilon}(pj)\langle\langle X_{j}^{\bar{\sigma}2}|
X_{p'}^{\sigma 0}\rangle\rangle\nonumber\\
&&+\sum_{j}\epsilon_2(pj)\langle\langle
X_{j}^{0\sigma}| X_{p'}^{\sigma 0}\rangle\rangle,
\end{eqnarray}
where $\tilde{\epsilon}(pj)$ and $\epsilon_2(pj)$ are determined by
expressions analogous to (3.6) and (3.7).
Thus we obtain the closed system of equations for the Green functions
$\langle\langle X_{p}^{0\sigma}| X_{p'}^{\sigma 0}\rangle\rangle$ and
$\langle\langle X_{p}^{\bar{\sigma}2}| X_{p'}^{\sigma 0}\rangle\rangle$.
If we neglect correlated hopping and average expressions~(3.6) and~(3.7),
we recover the approximations of Refs.~\cite{14,18,19}; the defects of
these approximations are well known (see, for example, Ref.~\cite{20}).
Here we use the approach proposed in papers~\cite{2,13}.
To determine $\epsilon(pj),\ \epsilon_1 (pj)$ we rewrite the
$X_i^{kl}$-operators in Eqs.~(3.6) and~(3.7) in the form~\cite{20a}
$X_i^{kl}=\alpha_{ik}^{+}\alpha_{il}$,
where $\alpha_{ik}^{+},\ \alpha_{il}$ are the operators of creation and
destruction for $|k\rangle$- and $|l\rangle$-states
on $i$-site respectively (the Schubin-Wonsowsky operators~\cite{21});
thus $X_i^0=\alpha^{+}_{i0}\alpha_{i0},\
X_i^2=\alpha^{+}_{i2}\alpha_{i2},\
X_i^{\sigma}=\alpha^{+}_{i\sigma}\alpha_{i\sigma}$.
Let us replace the $\alpha$-operators in Eqs.~(3.6) and (3.7) by
$c$-numbers (this is partially equivalent to the slave-boson
method~\cite{21a})
\begin{eqnarray}
\alpha_{i\sigma}^{+}=\alpha_{i\sigma}=\left({1-2d\over 2}\right)^{1/2},
\qquad
\alpha_{i 0}^{+}=\alpha_{i 0}=\alpha_{i 2}^{+}=\alpha_{i 2}=d^{1/2}
\end{eqnarray}
(we consider the paramagnetic case with electron concentration $n=1$ per
site); $d$ is the concentration of polar states (holes or doublons).
The proposed approximation is based on the following physical idea.
Consider a paramagnetic Mott-Hubbard insulator at $T\neq 0$. Within a wide
temperature interval ($k_BT\ll U$) the concentration of polar states is
small ($d\ll 1$). An analogous consideration is valid for a paramagnetic
Mott-Hubbard semimetal (the hole and doublon subbands overlap weakly,
$d\ll 1$). Hence, changes of state and polar excitations influence the
$|\sigma\rangle$-states only weakly. We may therefore treat the
$|\sigma\rangle$-states as a quasiclassical system and substitute the
operators $\alpha^{+}_{i\sigma},\ \alpha_{i\sigma}$ by $c$-numbers. In
addition, when determining $\epsilon(pj),\ \epsilon_1 (pj)$, we replace the
creation and destruction operators of the $|0\rangle$- and
$|2\rangle$-states by the respective quasiclassical expressions. In effect,
the proposed approximation is equivalent to a separation of the charge and
spin degrees of freedom. Note that the present approach is best justified
when $d\to 0$.
Thus in $\bf{k}$-representation we obtain
\begin{eqnarray}
\epsilon({\bf k})=(1-2d+2d^2)t_{\bf k}-2d^2\tilde{t}_{\bf k}, \quad
\epsilon_1({\bf k})=-2dt'_{\bf k},
\end{eqnarray}
where $t_{\bf k},\ \tilde{t}_{\bf k},\ t'_{\bf k}$ are the Fourier
transforms of the hopping integral $t,\ \tilde{t},\ t'$ respectively.
Similarly, we find that
\begin{eqnarray}
\tilde{\epsilon}({\bf k})=(1-2d+2d^2)\tilde{t}_{\bf k}-2d^2t_{\bf k}, \quad
\epsilon_2({\bf k})=-2dt'_{\bf k}.
\end{eqnarray}
The Fourier transform of the Green function
$\langle\langle X_{p}^{0\sigma}| X_{p'}^{\sigma 0}\rangle\rangle$
is found from the system of equations~(3.5) and~(3.8)
\begin{eqnarray}
\langle\langle X_{p}^{0\sigma}| X_{p'}^{\sigma 0}\rangle\rangle_{\bf k}=
{1\over 4\pi}\cdot {E+\mu-U-(1-2d+2d^2)\tilde{t}_{\bf k}+2d^2t_{\bf k}\over
(E-E_1({\bf k}))(E-E_2({\bf k}))},
\end{eqnarray}
with
\begin{eqnarray}
&&E_{1,2}({\bf k})=-\mu+{(1-2d)(t_{\bf k}+\tilde{t}_{\bf k})+U\over 2}\mp
{1\over 2}F_{\bf k},
\\
&&F_{\bf k}=\sqrt{\left[B(t_{\bf k}-\tilde{t}_{\bf k})-U\right]^2+
(4dt'_{\bf k})^2},\
B=1-2d+4d^2.
\end{eqnarray}
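As a hedged numerical sketch, the spectrum (3.13)-(3.14) can be evaluated on a one-dimensional chain with $t_{\bf k}=2t\cos k$; the dispersion, the value of $d$, and all other parameter values below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Illustrative parameters (units of |t|); the chemical potential mu is zero.
t, T2 = -1.0, 0.2
tt, tp = t + 2 * T2, t + T2        # \tilde{t} and t' as defined in the text
U, d, mu = 3.0, 0.05, 0.0
Bc = 1 - 2 * d + 4 * d**2          # the factor B of Eq. (3.14)

k = np.linspace(-np.pi, np.pi, 2001)
tk, ttk, tpk = 2 * t * np.cos(k), 2 * tt * np.cos(k), 2 * tp * np.cos(k)
Fk = np.sqrt((Bc * (tk - ttk) - U)**2 + (4 * d * tpk)**2)
E1 = -mu + 0.5 * ((1 - 2 * d) * (tk + ttk) + U) - 0.5 * Fk
E2 = -mu + 0.5 * ((1 - 2 * d) * (tk + ttk) + U) + 0.5 * Fk

# The two Hubbard bands are non-equivalent because t != \tilde{t}:
width_lower = E1.max() - E1.min()
width_upper = E2.max() - E2.min()
```

For these values the lower (hole) band comes out wider than the upper (doublon) band, illustrating the non-equivalence of the two Hubbard bands.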
An analogous procedure is realized also in the equations for the other Green
functions in Eq.~(2.6).
Finally, in {\bf k}-representation the single-particle Green function is
\begin{eqnarray}
&&G_{\bf k}(E)={1\over 2\pi}\left({A_{\bf k}\over E-
E_1({\bf k})}+{B_{\bf k}\over E-E_2(\bf k)}\right),\\
&&A_{\bf k}={1\over 2}-{2dt'_{\bf k}\over F_{\bf k}}, \quad
B_{\bf k}={1\over 2}+{2dt'_{\bf k}\over F_{\bf k}}.
\end{eqnarray}
The single-particle Green function~(3.15) reproduces the exact atomic and
band limits: if $U=0$ and $t_{\bf k}=\tilde{t}_{\bf k}=t'_{\bf k}=
t_0({\bf k})$ (i.e., correlated hopping is neglected), then
$G_{\bf k}(E)$ takes the band form ($d=1/4$ when $U=0$);
if $t_{\bf k}=\tilde{t}_{\bf k}=t'_{\bf k}\rightarrow 0$, we obtain the
exact atomic limit.
The peculiarities of the obtained quasiparticle energy spectrum~(3.13) of
the narrow-band system described by Hamiltonian~(2.5) are its dependence on
the concentration of polar states and the non-equivalence of the lower and
upper Hubbard bands. This non-equivalence is caused by the difference
between the hopping integrals $t$, $\tilde{t}$, $t'$.
The quasiparticle energy spectrum~(3.13) allows us to study the MIT in the
proposed model.
\section{Metal-insulator transition}
\setcounter{equation}{0}
With the help of energy spectrum of electrons~(3.13) we find the expression
for the energy gap width (difference of energies between bottom of the upper
and top of the lower Hubbard bands):
\begin{eqnarray}
&&\Delta E=-(1-2d)(w+\tilde{w})+{1\over 2}(Q_1+Q_2),
\\
&&Q_1=\sqrt{\left[ B(w-\tilde{w})-U\right]^2+(4dzt')^2},\nonumber\\
&&Q_2=\sqrt{\left[ B(w-\tilde{w})+U\right]^2+(4dzt')^2},\nonumber
\end{eqnarray}
where $w$ and $\tilde{w}$ are the half-widths of the lower (hole) and upper
(doublon) Hubbard bands, respectively: $w=z|t|,\ \tilde{w}=z|\tilde{t}|$
($z$ is the number of nearest neighbors of a site).
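A minimal numerical sketch of the gap formula (4.1) can make its behaviour explicit; the half-widths below are sample values, `ztp` stands for $zt'$, and the check is done at $d=0$, where the $t'$ term drops out.

```python
import math

# Energy gap (4.1): w, wt are the half-widths z|t|, z|t~|; ztp = z * t'.
# All parameter values passed in below are illustrative assumptions.
def gap(U, w, wt, d=0.0, ztp=0.0):
    B = 1 - 2 * d + 4 * d**2
    Q1 = math.sqrt((B * (w - wt) - U)**2 + (4 * d * ztp)**2)
    Q2 = math.sqrt((B * (w - wt) + U)**2 + (4 * d * ztp)**2)
    return -(1 - 2 * d) * (w + wt) + 0.5 * (Q1 + Q2)
```

At $d=0$ the gap closes exactly at $U=w+\tilde{w}$, in line with the criterion $U\leq w+\tilde{w}$ obtained below; setting $\tilde{w}=w$ and $zt'=w$ reproduces the Hubbard-model expression for the gap.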
The peculiarities of the expression~(4.1) for the energy gap are its
dependence on the concentration of polar states, on the widths of the hole
and doublon bands, and on the hopping integral $t'$ (and thus on external
pressure). At given $U,\ t,\ \tilde{t},\ t'$ (constant external pressure),
the concentration dependence of $\Delta E$ allows us to study the MIT under
the action of external influences: temperature change, photoeffect, and
magnetic field. In particular, the $\Delta E(T)$-dependence can lead to a
transition from a metallic state to an insulating state with increasing
temperature~\cite{21b} (in this connection the transition from the
paramagnetic metal state to the paramagnetic insulator state in the
(V$_{1-x}$Cr$_x$)$_2$O$_3$ compound~\cite{22,23}, in NiS$_2$~\cite{24} and
in the NiS$_{2-x}$Se$_x$ system~\cite{24,24a,24b} should be noted).
Under the action of light or a magnetic field the concentration of polar
states can change; as a result, the energy gap width changes as well and
the MIT can occur.
What distinguishes formulae~(3.13)--(3.16) and (4.1) from earlier results
(see, e.g., the reviews~\cite{23,25,26}) is their dependence on the
concentration of polar states. Let us derive an equation for its
calculation.
The concentration of polar states is given by the equation
\begin{eqnarray}
d=\langle X_i^2\rangle =&&{1\over N}{\sum_{\bf k}\int\limits_{-\infty}
^{+\infty}}J_{\bf k}(E)dE
\nonumber\\
&&={1\over 2N}\sum_{\bf k}\left(
{C_{\bf k}\over \exp{E_1({\bf k})\over \theta}+1}+
{D_{\bf k}\over \exp{E_2({\bf k})\over \theta}+1}\right),
\end{eqnarray}
where
\begin{eqnarray*}
&&C_{\bf k}={1\over 2}-{B(\tilde{t}_{\bf k}-t_{\bf k})\over 2F_{\bf k}}-
{U\over 2F_{\bf k}}, \\
&&D_{\bf k}={1\over 2}+{B(\tilde{t}_{\bf k}-t_{\bf k})\over 2F_{\bf k}}+
{U\over 2F_{\bf k}},
\end{eqnarray*}
$\theta=k_BT$, $k_B$ is the Boltzmann's constant, $N$ is the number of sites,
$J_{\bf k}(E)$ is the spectral intensity of the Green function
\begin{eqnarray}
\langle\langle X_p^{{\bar \sigma}2}|X_{p'}^{2{\bar \sigma}}\rangle
\rangle_{\bf k}={1\over 4\pi}\left({C_{\bf k}\over E-E_1({\bf k})}+
{D_{\bf k}\over E-E_2(\bf k)}\right).
\end{eqnarray}
At $T=0$ and for the rectangular density of states
\begin{eqnarray*}
{1\over N}\sum_{\bf k}\delta(E-t_{\bf k})={1\over 2w}\theta(w^2-E^2)
\end{eqnarray*}
($\theta(x)=1$ if $x>0,\ =0$ otherwise) from Eq.~(4.2) we obtain that
\begin{eqnarray}
-{B\over z}{\tilde{t}-t\over \lambda}\left[\varphi (\epsilon_0)-\varphi
(-\epsilon_0) \right]+{U\over z\sqrt{\lambda}}\left(1-{B^2
(\tilde{t}-t)^2\over \lambda}\right)\times\hfill
\nonumber\\
\times \ln\left|{\sqrt{\lambda}\varphi (\epsilon_0)-
\lambda \epsilon_0 -BU(\tilde{t}-t)
\over \sqrt{\lambda}\varphi (-\epsilon_0)+\lambda \epsilon_0 -
BU(\tilde{t}-t)}\right|=8d-2 \quad (U< w+\tilde{w})
\end{eqnarray}
with
\begin{eqnarray*}
&&\epsilon_0=2\sqrt{\mu U-\mu^2\over (1-2d)^2(t+\tilde{t})^2-\lambda},\quad
\mu={(1-2d+2d^2)w-2d^2\tilde{w}\over (1-2d)(w+\tilde{w})}U,
\nonumber\\
&&\varphi(\epsilon)=\left\{ \lambda \epsilon^2-2BU(\tilde{t}-t)\epsilon+
U^2\right\}^{1\over 2}, \quad
\lambda =B^2(\tilde{t}-t)^2+(4dt')^2.
\end{eqnarray*}
For narrow-band semimetal ($d\ll 1$) Eq.~(4.4) takes the following form:
\begin{eqnarray}
d={1\over 4}\left({1-{U\over w+\tilde{w}}}\right).
\end{eqnarray}
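The small-$d$ estimate (4.5) is simple enough to evaluate directly; this sketch uses illustrative half-widths and clips the result at zero on the insulating side $U\geq w+\tilde{w}$.

```python
# Concentration of polar states in the narrow-band semimetal, Eq. (4.5),
# clipped at zero for U >= w + wt (insulating side).  Sample values only.
def d_semimetal(U, w, wt):
    return max(0.25 * (1.0 - U / (w + wt)), 0.0)
```

For fixed $U/w$, $d$ grows with $\tilde{w}/w$ and vanishes at $U=w+\tilde{w}$.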
Fig.~1 shows the dependence of $d$ on $U/w$ obtained from
Eq.~(4.4). The parameters $\tau_1=T_1/|t_0|,\ \tau_2=T_2/|t_0|$ characterize
the strength of the correlated hopping. One can see that $d$ depends only
weakly on the correlated-hopping parameters $\tau_1,\ \tau_2$
(and thus on $\tilde{w}/w$) when $U/w$ is close to zero. With increasing
$U/w$, however, the concentration of polar states becomes strongly dependent
on $\tau_1,\ \tau_2$. This indicates that taking correlated hopping into
account is important for the metal-insulator transition problem.
Fig.~1 also shows that if $U\geq w+\tilde{w}$ then the concentration of
polar states vanishes, $d=0$. In the special case $t+\tilde{t}=t'=0$ this
consequence is in accordance with the results of Refs.~\cite{4,4a,8}.
At $T=0$ the energy gap width $\Delta E\leq 0$ (i.e., the MIT occurs)
when the condition
\begin{eqnarray}
U\leq w+\tilde{w}
\end{eqnarray}
is satisfied (in agreement with general physical ideas~\cite{23}). In the
special case $t'=0$, condition~(4.6) recovers the exact results of
Refs.~\cite{2b,4,5}.
Fig.~2, obtained from formula~(4.1) using Eq.~(4.4), shows that with
decreasing $\tilde{w}/w$ (at given $U/w$) the overlap of the energy subbands
decreases in the metallic state and the energy gap width increases in the
insulating state.
In the Hubbard model the energy gap width~(4.1) takes the following form:
\begin{eqnarray}
\Delta E=-2w(1-2d)+\sqrt{U^2+(4dw)^2},
\end{eqnarray}
and the concentration of polar states~(4.4) is
\begin{eqnarray}
d=\left({1\over 4}+{U\over 32dw}\ln(1-4d)\right)\theta(2w-U).
\end{eqnarray}
In the region of the metal-insulator transition $d=1/4-U/(8w)$; this
dependence is in qualitative accordance with the result of Brinkman and
Rice~\cite{26a} obtained with the Gutzwiller variational
method~\cite{26b}, with that of the general Gutzwiller-correlated wave
functions in infinite dimensions~\cite{27}, and with the
Kotliar-Ruckenstein slave bosons~\cite{21a}.
For $U/2w\to 0$
we obtain $d=1/4+U/(8w)\ln(U/2w)$ (if we treat the Coulomb repulsion as a
perturbation, then $d(U\to 0)=1/4-{\cal O}(U)$); for a comparison of the
obtained dependence~(4.8) of $d$ on $U/w$ in the Hubbard model with other
approximate theories see, e.g., Ref.~\cite{28}. $\Delta E\leq 0$ when the
condition $2w\geq U$ is satisfied.
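As a numerical cross-check, the self-consistency condition (4.8) can be solved by simple bisection; the half-width $w$ and the iteration settings below are illustrative.

```python
import math

# Solve d = 1/4 + U/(32 d w) ln(1 - 4d) for the Hubbard limit, Eq. (4.8).
# The theta(2w - U) factor means d = 0 on the insulating side U >= 2w.
def d_hubbard(U, w):
    if U >= 2.0 * w:
        return 0.0
    f = lambda d: d - 0.25 - U / (32.0 * d * w) * math.log(1.0 - 4.0 * d)
    lo, hi = 1e-12, 0.25 - 1e-12   # f(lo) < 0 < f(hi) for U < 2w
    for _ in range(200):           # bisection on the bracketed root
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

$d\to 1/4$ as $U\to 0$ and $d$ vanishes for $U\geq 2w$; near the transition the numerical root lies slightly below the linearized estimate $1/4-U/(8w)$.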
\section{Conclusions}
In the present paper the Mott-Hubbard transition has been studied in a
generalized Hubbard model with correlated hopping at half-filling, using a
mean-field approximation~\cite{2,13}.
We have obtained an expression for calculating the concentration of polar
states. With increasing parameter $\tilde{w}/w$ ($0\leq \tilde{w}/w\leq 1$),
the concentration of polar states increases at given $U/w$.
The quasiparticle energy spectrum has been calculated, and with its help we
have found the energy gap width. The peculiarities of the obtained
expressions are their dependence on the concentration of polar states and
the non-equivalence of the upper and lower Hubbard bands. Increasing the
parameter $\tilde{w}/w$ (at given $U/w$) decreases the energy gap width.
In consequence, in particular, the MIT occurs when the condition
$U/w=1+\tilde{w}/w$ is satisfied.
The cases $n\neq1$ and $T\neq0$, as well as the application of the obtained
results to the interpretation of experimental data, will be considered in
forthcoming papers.
The authors are grateful to Prof. I.~V.~Stasyuk for valuable discussions.
\section{INTRODUCTION}
\label{sec1}
Wilson's lattice formulation of QCD is the most reliable and powerful
non-perturbative approach to the physics of strongly interacting particles,
but its progress has been hampered by systematic errors, mainly
due to the finite value of the lattice spacing $a$.
Wilson's gluonic action differs from the continuum Yang-Mills action
by terms of order $O(a^2)$, while the error of Wilson's quark action is
bigger, being of order $O(a)$.
Due to quantum effects, there is an additional problem, called the
``tadpole'' problem.
If $a$ and the bare coupling constant $g$ were small enough, these errors
would be negligible. Unfortunately, even in recent 4-dimensional
lattice QCD simulations on the most powerful computers,
the lattice spacing $a$ is larger than $0.1\,{\rm fm}$ and
the lattice coupling $g_{lat}$ is bigger than $0.9$. For these lattice
parameters, violation of scaling is still obvious, and
the extrapolation of the results to the $a \to 0$ limit induces
unknown systematic uncertainties when extracting continuum physics.
the Symanzik improvement program \cite{Symanzik83}: adding local, nearest neighbor
or next-nearest-neighbor interaction terms to the Wilson lattice theory
so that the finite $a$ errors become higher order in $a$.
During recent years,
application of the Symanzik program has been a major topic.
There have been several proposals on this subject:
\noindent
(a) For the gauge sector, Lepage proposed a tadpole improved action
\cite{Lepage96}, reducing the errors from $O(a^2)$ to $O(a^4)$.
Luo, Guo, Kr\"oger, and Sch\"utte
constructed a tadpole improved Hamiltonian \cite{Luo98_1,Luo98_2} with
the same accuracy.
\noindent
(b) For the fermionic sector, Hamber and Wu
(one author of the present paper)
proposed the first improved action \cite{Hamber83},
by adding next-nearest-neighbor
interaction terms to the Wilson quark action
so that the $O(a)$ error is canceled. There have been some
numerical simulations \cite{Lombardo93,Borici97,Fiebig97}
of hadron spectroscopy using the Hamber-Wu action.
Sheikholeslami and Wohlert \cite{Sheik85} and L\"uscher and Weisz
\cite{Luscher85} worked on fermionic improvement, and recent progress on
non-perturbative improvement via implementation of PCAC has been achieved by
the ALPHA collaboration \cite{Luscher98}.
Luo, Chen, Xu, and Jiang (two authors of the present paper)
constructed an improved Hamiltonian \cite{Luo94},
which was tested in the Schwinger model ($\rm{QED_2}$).
There are other possibilities
for improving Wilson's quark theory \cite{Luo96_1}.
The purpose of this work is to demonstrate numerically that fermionic improvement works also in the Hamiltonian formulation for the case of QCD in two dimensions.
Like Quantum Electrodynamics in 2 dimensions ($\rm{QED_{2}}$),
Quantum Chromodynamics in 2 dimensions ($\rm{QCD_2}$)
has the properties of chiral symmetry breaking, an anomaly and confinement.
However, the latter has gluonic self-interactions, which makes it more
similar to QCD in 4 dimensions ($\rm{QCD_4}$)
than to the Schwinger model.
In addition to having confinement and chiral-symmetry breaking,
$\rm{QCD_2}$ has a much richer spectrum
because of the non-abelian gauge interactions
between ``gluons'' and ``quarks''.
Early in 1974, 't Hooft \cite{tHooft74} did
pioneering work on $\rm{QCD_2}$ using the $1/N_C$ expansion (where $\rm{QCD}$ corresponds to the gauge symmetry group $\rm{SU(N_C)}$).
He showed in the limit $N_{C} \to \infty$ that planar diagrams are
dominant and they can be summed up in a bootstrap equation. From that he obtained a meson-like mass spectrum.
This model has been extensively studied in the large $N_{C}$ limit and also in the chiral limit $m_{q} \to 0$ (where $m_{q}$ is the free quark mass). Zhitnitsky \cite{kn:Zhit85} has pointed out that there are two distinct phases: \\
(i) $N_{C} \to \infty$ first, and then $m_{q} \to 0$; \\
(ii) $m_{q} \to 0$ first, and then $N_{C} \to \infty$. \\
The first case corresponds to $g \ll m_{q}$, which is the weak coupling regime.
It describes the phase considered by 't Hooft,
where the following relation holds,
\begin{equation}
N_{C} \to \infty, ~~~ g^{2}N_{C} = \mbox{const.}, ~~~
m_{q} \gg g \sim {1 \over \sqrt{N_{C}}}.
\end{equation}
't Hooft found a spectrum of states (numbered by $n$) given by
\begin{equation}
M_{n} \sim \pi^{2} m_{0} n, ~~~ m_{0}^{2} = {g^{2}N_{C} \over \pi}.
\end{equation}
Zhitnitsky \cite{kn:Zhit85} has computed the gluon condensate and the quark condensate. The quark condensate is given by
\begin{equation}
\langle \bar{\psi} \psi \rangle = - N_{C} \sqrt{{g^{2}N_{C} \over 12 \pi}}.
\label{eq:Zhit}
\end{equation}
The second case corresponds to $g \gg m_{q}$, which is the strong coupling regime.
In the strong coupling regime the observed spectrum is different \cite{kn:Balu80,kn:Bhat82,kn:Stei80}.
Steinhardt \cite{kn:Stei80} has computed the masses of the following particles: soliton (baryon), anti-soliton (anti-baryon) and soliton-antisoliton (baryon-antibaryon) bound state. Bhattacharya \cite{kn:Bhat82} has shown that
in the chiral limit there are free fields of mass
\begin{equation}
M = g \sqrt{{N_{C}+1 \over 2\pi}}.
\label{eq:Bhat}
\end{equation}
Note that for $N_C=1$ this coincides with the boson mass in the Schwinger model.
Grandou et al. \cite{kn:Gran88} have computed the quark condensate in the chiral limit (for arbitrary $g$), finding
\begin{equation}
\langle \bar{\psi} \psi \rangle \sim - g N_{C}^{3/2}
\end{equation}
which confirms Zhitnitsky's result obtained in the weak coupling regime.
\bigskip
Of course, $\rm{QCD_2}$ is much simpler than $\rm{QCD_4}$.
It has been used to mimic properties of $\rm{QCD_4}$
such as the vacuum structure, hadron scattering and decays,
and the charmonium picture.
Unlike the massless Schwinger model, unfortunately, $\rm{QCD_2}$
is no longer exactly solvable.
The first lattice study of $\rm{QCD_{2}}$ was done by Hamer \cite{kn:Hame82}
using Kogut-Susskind fermions.
He computed for $\rm{SU(2)}$ the mass spectrum in the 't Hooft phase.
In 1991, Luo $et~al.$ \cite{Luo91_1}
performed another lattice field theory study of this model
($\rm{SU(2)}$ and $\rm{SU(3)}$) using Wilson fermions.
As is shown later, the results for lattice $\rm{QCD_2}$
with Wilson quarks were found
to be strongly dependent on the unphysical Wilson parameter $r$,
the coefficient of the $O(a)$ error term.
The purpose of this paper is to show that in the case of
$\rm{QCD}_2$, the improved theory \cite{Luo94}
can significantly reduce these errors.
The remaining part of the paper is organized as follows.
In Sect. \ref{sec2}, we review some features of the Hamiltonian
approach as well as the improved Hamiltonian
for quarks proposed in Ref. \cite{Luo94}.
In Sect. \ref{sec3}, the wave functions of
the vacuum and the vector meson are constructed,
and the relation between the continuum chiral condensate and
lattice quark condensate is developed.
The results for the quark condensate and the mass spectrum
are presented in Sect. \ref{sec4} and discussions
are presented in Sect. \ref{sec5}.
\section{IMPROVED HAMILTONIAN FOR QUARKS}
\label{sec2}
Although numerical simulation in the Lagrangian formulation has become
the mainstream and a lot of progress has been made over the last
two decades, there are areas where progress has been quite slow and new
techniques should be explored:
for example, computation of the $S$-matrix and cross sections,
wave functions of vacuum, hadrons and glueballs,
QCD at finite baryon density, or the computation of
QCD structure functions in the region of small $x_{B}$ and $Q^{2}$.
In our opinion the Hamiltonian approach is a viable alternative
\cite{Kroger92,Guo97} and
some very interesting results
\cite{Luo96_2,Luo97,Luo98_3,Briere89,Kroger97,Kroger98,Schutte97}
have recently been obtained.
Many workers in the Lagrangian formulation nowadays have followed
ideas similar to the Hamiltonian approach
(where the time is continuous, i.e., $a_t=0$)
by considering anisotropic lattices
with lattice spacings $a_t \ll a_s$.
The purpose is to improve the
signal to noise ratio in the spectrum computation \cite{Lepage96}.
In the last ten years, we have done a lot of work
\cite{Luo91_1,Luo89,Luo90_1,Luo90_2,Chen90,Luo91_2,Luo92}
on Hamiltonian lattice field theory, where the conventional
Hamiltonian in the Wilson fermion sector is used:
\begin{eqnarray*}
H_{f}= H_{m} + H_{k} + H_{r},
\end{eqnarray*}
\begin{eqnarray*}
H_{m} = m\sum_{x} \bar {\psi}(x) \psi (x),
\end{eqnarray*}
\begin{eqnarray*}
H_{k}={1 \over 2a} \sum_{x,k} \bar{\psi}(x) \gamma_{k} U(x,k) \psi (x+k),
\end{eqnarray*}
\begin{eqnarray}
H_{r} = {r \over 2a} \sum_{x,k}[\bar{\psi}(x) \psi (x)-
\bar{\psi} (x) U(x,k) \psi (x+k)],
\label{unimprovedH}
\end{eqnarray}
where $a$ is now the spacial lattice spacing $a_s$,
$U(x,k)$ is the gauge link variable at site
$x$ in the direction $k= \pm j$ ($j$ is the unit vector),
and $\gamma_{-j} = -\gamma_{j}$, $H_m$, $H_k$, $H_r$ are
respectively the mass term, kinetic term and Wilson term.
The Wilson term ($r \neq 0$), of order $O(ra)$,
is introduced to avoid the fermion species doubling, with the price of
explicit chiral-symmetry breaking even in the vanishing bare fermion mass limit.
As discussed in Sec. \ref{sec1},
the $O(a)$ error in $H_{f}$ indeed
leads to lattice artifacts if $a$ or $g_{lat}$ is not small enough.
Similar to the Hamber-Wu action \cite{Hamber83},
where some next-nearest-neighbor interaction terms
are added to the Wilson action to cancel the $O(ra)$ error,
we proposed a $O(a^2)$ improved Hamiltonian in Ref. \cite{Luo94}:
\begin{eqnarray*}
H_{f}^{improved}= H_{m} + H_{k}^{improved} + H_{r}^{improved},
\end{eqnarray*}
\begin{eqnarray*}
H_{k}^{improved}=
{b_{1} \over 2a} \sum_{x,k}\bar{\psi} (x) \gamma_{k} U(x,k) \psi (x+k)
\end{eqnarray*}
\begin{eqnarray*}
+{b_{2} \over 2a}\sum_{x,k}\bar{\psi} (x) \gamma_{k} U(x,2k) \psi (x+2k),
\end{eqnarray*}
\begin{eqnarray*}
H_{r}^{improved}={r \over 2a} \sum_{x,k}\bar{\psi}(x) \psi (x)
\end{eqnarray*}
\begin{eqnarray*}
-c_{1}{r \over 2a} \sum_{x,k}\bar{\psi} (x) U(x,k) \psi (x+k)
\end{eqnarray*}
\begin{eqnarray}
-c_{2}{r \over 2a} \sum_{x,k}\bar{\psi} (x) U(x,2k) \psi (x+2k).
\label{ImprovedH}
\end{eqnarray}
Here $U(x,2k)=U(x,k)U(x+k,k)$ and the coefficients
$b_{1}, b_{2}, c_{1}$ and $c_{2}$ are given by
\begin{eqnarray}
b_{1}={4 \over 3}, b_{2}=-{1 \over 6},
c_{1}={4 \over 3}, c_{2}=-{1 \over 3}.
\end{eqnarray}
These coefficients are the same for any d+1 dimensions
and gauge group.
The results shown in this paper correspond to this set of parameters.
However, the following set of parameters
\begin{eqnarray}
b_{1}={1}, b_{2}=0,
c_{1}={4 \over 3}, c_{2}=-{1 \over 3},
\end{eqnarray}
where only the Wilson term is improved, gives very similar results.
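As a quick sanity check (ours, not part of the original analysis), one can verify numerically that these coefficients remove the leading lattice errors. The sketch below assumes the momentum-space kinetic function $K(p)=[b_1\sin(pa)+b_2\sin(2pa)]/a$ and Wilson function $W(p)=r[1-c_1\cos(pa)-c_2\cos(2pa)]/a$ implied by the nearest- and next-nearest-neighbor hopping terms of Eq. (\ref{ImprovedH}).

```python
import numpy as np

# With b1 = 4/3, b2 = -1/6 the improved kinetic function reproduces the
# continuum dispersion p up to O(a^4 p^5), while the unimproved sin(pa)/a
# deviates already at O(a^2 p^3).  With c1 = 4/3, c2 = -1/3 the Wilson
# function is pushed from O(r a p^2) down to O(r a^3 p^4).
a, r, p = 1.0, 1.0, 0.1            # lattice units, small momentum
b1, b2 = 4.0/3.0, -1.0/6.0
c1, c2 = 4.0/3.0, -1.0/3.0

K_naive = np.sin(p*a)/a
K_impr  = (b1*np.sin(p*a) + b2*np.sin(2*p*a))/a
W_naive = r*(1.0 - np.cos(p*a))/a
W_impr  = r*(1.0 - c1*np.cos(p*a) - c2*np.cos(2*p*a))/a

print(abs(K_naive - p))   # ~ p^3/6
print(abs(K_impr  - p))   # ~ p^5/30, several orders smaller
print(W_naive)            # ~ p^2/2
print(W_impr)             # ~ p^4/6
```

The Taylor conditions $b_1+2b_2=1$, $b_1+8b_2=0$ and $1-c_1-c_2=0$, $c_1/2+2c_2=0$ are exactly what these coefficient values solve.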
With the absence of the $O(ra)$ error, we
expect that we can extract the continuum physics in a more reliable way.
Lattice $\rm{QCD_2}$ in the Hamiltonian approach
has some nice features which
simplify the computations considerably:
\noindent
(a) The magnetic interactions are absent and the tadpole factor is $U_0=1$
due to the fact that $U_p=U^{\dagger}_p=1$. Therefore, there is only
the color-electric energy term in the gluonic Hamiltonian:
\begin{eqnarray}
H_{g} = {g^{2} \over 2a} \sum_{x,j} E_{j}^{\alpha}(x) E_{j}^{\alpha}(x),
\end{eqnarray}
with $j=\vec{1}$ and $\alpha=1, ..., N_c^{2}-1$.
\noindent
(b) The quantum (tadpole) effects, if any, are highly suppressed
in the fermionic sector as $O(g_{lat}^2a) \to O(a^3)$
and in the gluonic sector as $O(g_{lat}^2a^2) \to O(a^4)$.
The reason is the super-renormalizability of 1+1 dimensional
theories, where
\begin{eqnarray}
g_{lat}=ga.
\end{eqnarray}
All this means that
in 1+1 dimensions, classical improvement of the fermionic
Hamiltonian is sufficient.
In Ref. \cite{Luo94}, the mass spectrum of the Schwinger model
was used to test the improved program.
In the following sections, we will provide evidence in $\rm{QCD_2}$
to support the efficiency of the improved Hamiltonian.
The results for the quark condensate
and mass spectrum further confirm
our expectation.
\section{VACUUM AND VECTOR PARTICLE STATES}
\label{sec3}
The vacuum wave function
is constructed in the same way as in \cite{Luo94}:
\begin{eqnarray}
\vert \Omega \rangle =exp(iS)\vert 0 \rangle,
\end{eqnarray}
where
\begin{eqnarray*}
S=\sum_{n=1}^{N^S_{trun}} \theta_{n}S_n
\end{eqnarray*}
\begin{eqnarray*}
S_1 = i \sum_{x,k}\psi^{\dagger} (x) \gamma_{k}
U(x,k) \psi (x+k)
\end{eqnarray*}
\begin{eqnarray*}
S_2 = i \sum_{x,k}\psi^{\dagger} (x) \gamma_{k} U(x,2k) \psi (x+2k)
\end{eqnarray*}
\begin{eqnarray*}
S_3= i \sum_{x,k}\psi^{\dagger} (x) \gamma_{k} U(x,3k) \psi (x+3k)
\end{eqnarray*}
\begin{eqnarray}
...
\label{Vacuum}
\end{eqnarray}
with $\theta_n$
determined by minimizing the vacuum
energy. $\vert 0 \rangle$ is the bare vacuum defined by
\begin{eqnarray}
\xi(x) \vert 0\rangle = \eta(x) \vert 0\rangle =
E^{\alpha}_{j}(x) \vert 0\rangle =0.
\end{eqnarray}
Here $\xi$ and $\eta^{\dagger}$ are respectively the up and down
components of the $\psi$ field.
Such a form of the fermionic vacuum (\ref{Vacuum})
has also been discussed extensively
in the literature
\cite{Luo91_1,Luo89,Luo90_1,Luo90_2,Chen90,Luo91_2,Luo92}.
One is interested in computing the wave function of the lowest lying energy state
which is the flavor-singlet vector meson $\vert V \rangle$,
similar to the case of the Schwinger model.
The wave function is created by
a superposition of some operators $V_{n}$ with the given quantum numbers
\cite{Luo94,Fang92}
\begin{eqnarray*}
V_{0}=i \sum_{x}\bar{\psi} (x) \gamma_{1} \psi (x),
\end{eqnarray*}
\begin{eqnarray*}
V_{1}=i \sum_{x,k}\bar{\psi} (x) \gamma_{1} U(x,k) \psi (x+k),
\end{eqnarray*}
\begin{eqnarray*}
V_{2}=i \sum_{x,k}\bar{\psi} (x) \gamma_{1} U(x,2k) \psi (x+2k),
\end{eqnarray*}
\begin{eqnarray*}
V_{3}=i \sum_{x,k}\bar{\psi} (x) \gamma_{1} U(x,3k) \psi (x+3k),
\end{eqnarray*}
\begin{eqnarray}
...
\label{Operator}
\end{eqnarray}
acting on the vacuum state $\vert \Omega \rangle$, i.e.,
\begin{eqnarray}
\vert V \rangle = \sum_{n=0}^{N^V_{trun}} A_{n}
[V_{n}-\langle \Omega \vert V_{n} \vert \Omega\rangle]
\vert\Omega \rangle.
\label{Vector}
\end{eqnarray}
The criterion for choosing the truncation orders
$N^S_{trun}$ and $N^V_{trun}$ is the convergence of the
results.
An estimate for the vector mass $M_V$
is the lowest eigenvalue $\min(E_V)$ of the following equations
\begin{eqnarray*}
\sum_{n_1=0}^{N^V_{trun}}(H^{V}_{n_{2}n_{1}}-E_{V}U^{V}_{n_{2}n_{1}})A_{n_1}=0,
\end{eqnarray*}
\begin{eqnarray}
\det \vert H^{V}-E_{V}U^{V} \vert =0,
\label{Eigen}
\end{eqnarray}
where the coefficients $A_{n_1}$
are determined by solving the equations, and
the matrix elements $H^{V}_{n_{2}n_{1}}$ and $U^{V}_{n_{2}n_{1}}$
are defined by
\begin{eqnarray*}
H^{V}_{n_{2}n_{1}}
=\langle V_{n_{2}}\vert [ H^{improved}_{f}+H_g ] \vert V_{n_{1}}
\rangle ^{L},
\end{eqnarray*}
\begin{eqnarray}
U^{V}_{n_{2}n_{1}}=\langle V_{n_{2}}\vert V_{n_{1}}\rangle^{L}.
\label{Matrix}
\end{eqnarray}
Here the superscript $L$ means that
only the matrix elements proportional to the
lattice size $L$ are retained (those proportional to higher powers of $L$ do not contribute).
Detailed discussions can be found in \cite{Fang92}. We can estimate the continuum
$M_V$ from
\begin{eqnarray}
{M_V\over g}={aM_V\over g_{lat}},
\label{Mscaling}
\end{eqnarray}
if the right hand side is independent of $g_{lat}$.
In lattice field theory with Wilson fermions,
chiral symmetry is explicitly broken even in the
bare vanishing mass limit. Therefore, the fermion condensate
$\langle \bar{\psi} \psi \rangle_{free}$
is non-vanishing and should be subtracted in a way described in
Refs. \cite{Luo91_1,Chen90,Luo91_2,Luo90_3}:
\begin{eqnarray}
\langle \bar{\psi} \psi \rangle_{sub}=
\langle \bar{\psi} \psi \rangle - \langle \bar{\psi} \psi \rangle_{free},
\end{eqnarray}
where for one flavor
\begin{eqnarray*}
\langle \bar{\psi} \psi \rangle
= {1 \over LN_c} \langle \Omega \vert \bar{\psi} \psi
\vert \Omega \rangle_{m=0}
\end{eqnarray*}
\begin{eqnarray*}
\langle \bar{\psi} \psi \rangle_{free}
= {1 \over LN_c} {\partial E_{\Omega_{free}} \over \partial m} \vert_{m=0}
\end{eqnarray*}
\begin{eqnarray*}
=- \int_{-\pi/a}^{\pi/a} d pa
\lbrace [ {r \over a} (1-c_{1}\cos pa- c_{2}\cos 2pa)]^{2}
\end{eqnarray*}
\begin{eqnarray*}
+[{\sin pa \over a} (b_{1}+b_{2}\cos pa)]^{2} \rbrace
\end{eqnarray*}
\begin{eqnarray*}
/\lbrace [ {r \over a} (1-c_{1}\cos pa- c_{2}\cos 2pa) ]^{2}
\end{eqnarray*}
\begin{eqnarray}
+[{\sin pa \over a} (b_{1}+b_{2}\cos pa)]^{2} \rbrace^{1/2}.
\end{eqnarray}
From $\langle \bar{\psi} \psi \rangle_{sub}$, we can get an estimate of the
continuum quark condensate $\langle \bar{\psi} \psi \rangle_{cont}$
\begin{eqnarray}
{\langle \bar{\psi} \psi \rangle_{cont} \over g}=
{\langle \bar{\psi} \psi \rangle_{sub} \over g_{lat}},
\label{scaling}
\end{eqnarray}
if the right hand side does not depend on the bare coupling $g_{lat}$.
It is well known that spontaneous chiral-symmetry breaking
originates from the axial anomaly and
there is no Goldstone pion in
quantum field theory in 1+1 dimensions.
Therefore, for Wilson fermions, one cannot fine-tune the
$(r,m)$ parameter space as in $\rm{QCD}_4$ to reach the chiral limit.
However, the chiral limit can be approximated as the $m \to 0$ limit
as long as the lattice spacing error is sufficiently small.
For Wilson fermions, as discussed in Ref. \cite{Azcoiti96},
this is not well justified for finite $g_{lat}$ and $a$
because of the $O(ra)$ error,
but for the improved theory, as shown in Refs. \cite{Borici97,Luo94},
this would be a reasonably good approximation
since the chiral-symmetry breaking term
is much smaller, i.e., $O(ra^2)$.
\section{RESULTS FOR QUARK CONDENSATE AND VECTOR MASS}
\label{sec4}
To increase the accuracy of the techniques described in Sect. \ref{sec3},
we include higher order contributions in Eq. (\ref{Vacuum})
than those in Ref. \cite{Luo94} so that better convergence of the results
can be obtained.
Figure 1 and Figure 3 show
$\langle \bar{\psi} \psi \rangle_{sub}/g_{lat}$ as a function of $1/g_{lat}^{2}$
in 2-dimensional lattice SU(2) and SU(3) gauge theories, respectively, with Wilson fermions.
Figure 5 and Figure 7 show
$aM_V/g_{lat}$ as a function of $1/g_{lat}^{2}$
in 2-dimensional lattice SU(2) and SU(3) gauge theories, respectively, with Wilson fermions.
As one can see, the results for $r=1$ deviate noticeably from
those for $r=0.1$,
which is attributed to the $O(ra)$ error of the Wilson term.
The corresponding results from the Hamiltonian with improvement
are presented
in Figure 2, Figure 4, Figure 6, and Figure 8.
One observes that the differences between the
results for $r=1$ and $r=0$ are significantly reduced.
Most impressively, the data for the quark condensate
coincide with each other.
A similar $r$ test has also been used in \cite{Alford97}
for checking the efficiency of the improvement program.
To get an idea about how $\rm{QCD}_2$ behaves for large $N_C$,
we have computed the quark condensate $\langle \bar{\psi} \psi \rangle$ in the chiral limit for $N_C=3,4,5,6$. This is shown in Figure 9.
We have compared our numerical results with the theoretical result by Zhitnitsky \cite{kn:Zhit85}, Eq.(\ref{eq:Zhit}). Although Zhitnitsky's result was obtained in the weak coupling phase, it qualitatively agrees with the strong coupling result of Ref.\cite{kn:Gran88}. Remarkably, our results agree very well with
Zhitnitsky's weak coupling result.
Secondly, we show in Figure 10 the lowest lying mass of the mass spectrum again for $N_C=3,4,5,6$ in the chiral limit.
Note that this particle corresponds to the vector particle in the Schwinger model. In $\rm{QCD_2}$ it corresponds to free particles (the mass of which remains finite when $m_{q}$ goes to zero) distinct from
``sine-Gordon" soliton particles (the mass of which goes to zero when $m_{q}$ goes to zero, see \cite{kn:Stei80}). We compare our numerical result with the analytical strong coupling result by Bhattacharya \cite{kn:Bhat82}, Eq.(\ref{eq:Bhat}).
Again we find agreement.
\section{CONCLUSIONS}
\label{sec5}
In this work we have shown that fermionic improvement works also in the Hamiltonian lattice formulation for the case of $QCD_{1+1}$.
We have computed the quark condensate and
the flavor singlet vector mass using
the $O(a^2)$ improved Hamiltonian for quarks \cite{Luo94}
proposed by Luo, Chen, Xu, and Jiang.
In comparison with the results corresponding to the Wilson fermions without improvement, we indeed observe significant reduction of the finite lattice spacing error $O(ra)$. By comparison with analytical results for the quark condensate and the vector mass we find good agreement.
In our opinion, this is the first lattice study of $QCD_{1+1}$ which
gives results in the strong coupling phase (in contrast to the 't Hooft phase).
In particular we present results for different gauge groups ($N_{C}=2,3,4,5,6$).
We believe that the application of the Symanzik improvement program
to QCD in 3+1 dimensions will be very promising.
\acknowledgments
We are grateful to Q. Chen, S. Guo, J. Li and J. Liu for useful discussions.
X.Q.L. is supported by the
National Natural Science Fund for Distinguished Young Scholars,
supplemented by the
National Natural Science Foundation of China,
fund for international cooperation and exchange,
the Ministry of Education,
and the Hong Kong Foundation of
the Zhongshan University Advanced Research Center.
H.K. is grateful for support by NSERC Canada.
\section{Introduction}
The existence of multiphonon states in the nuclear spectrum of excitation
has been predicted since the introduction of collective
models \cite{BM75}. Examples of low-lying nuclear vibrational states have
been known for many years in nuclear spectra and are still being actively
investigated with new generation of detectors; in particular, two-phonon
multiplets and some three-phonon states based on the low-lying
collective quadrupole and
octupole modes have been found \cite{ABC87,CZ96,GVB90,KPZ96,YGM96,K92,POD97}.
Recently, it has also been beautifully demonstrated that multi-phonon
excitations of these low-lying collective vibrations strongly influence
heavy-ion fusion reactions at energies
near and below the Coulomb barrier \cite{SACN95}, through the so-called
fusion barrier distribution analysis\cite{RSS91,LDH95}.
It was pointed out that anharmonicities of vibrational excitations
can significantly alter the shape of fusion barrier distribution
and that thus sub-barrier fusion reactions offer an alternative
method for extracting the static quadrupole moments of phonon
states in spherical nuclei\cite{HKT98,HTK97}.
In the past 15 years, evidence has been collected for two-phonon
giant resonances as well\cite{ABE98}.
This evidence stems from heavy-ion reactions at
intermediate energy\cite{CF95,POL86}, pion-induced double charge exchange
reactions\cite{MO94}, and relativistic heavy-ion reactions via
electromagnetic excitations, in particular the excitation of the
double giant dipole resonance (DGDR)\cite{E94}.
In the experiments of the last type,
a puzzling problem has been reported\cite{SBE93,RBK93,AKS93,BSW96}.
Although the experimental
data show that the centroid of the DGDR is about
twice the energy of the single phonon resonance, theoretical calculations
which assume the harmonic oscillator for giant resonances
considerably underestimate the cross sections for double phonon
states. In connection with this problem, anharmonic properties of giant
resonances are attracting much
interest\cite{BF97,VCC95,LAC97,BD97,NW95,VBR95,AV92}.
Recently Bertsch and Feldmeier applied the time-dependent variational
principle (TDVP) \cite{KK76} to large amplitude collective motions and
discussed anharmonicities of various giant resonances\cite{BF97}. One
of the advantages of their approach
is that solving the resultant equations
and estimating the degree of anharmonicity are quite simple.
They found that the relative frequency of the double phonon
state scales as $\Delta\omega/\omega\sim A^{-4/3}$,
$A$ being the mass number of the nucleus.
Earlier,
Ponomarev et al. \cite{PBBV96} noted that this quantity scales as
$A^{-2/3}$ in the nuclear field theory (NFT) \cite{BBDLSM76},
the same as the NFT result for the octupole mode \cite{BM75}.
Reference \cite{BM75} also remarks that a liquid-drop estimate
gives an $A^{-4/3}$ dependence, implying that quantal effects are
responsible for the difference.
Both NFT and TDVP are quantal theories, yet they give different results
in refs.
\cite{BBDLSM76} and \cite{BF97}, so the differences must be due
either to inadequate approximations or to differences in the underlying
Hamiltonians.
We therefore undertook to try both methods on a solvable
Hamiltonian. This will test the reliability of both methods, and if both
give correct results, the disagreement is very likely
attributable to the Hamiltonian assumptions.
The time-dependent variational approach was recently applied also to
dipole plasmon resonances of metal clusters \cite{H99}.
In Ref. \cite{H99}, it was shown that the anharmonicity of dipole
plasmon resonances is very small and scales as
$\Delta\omega/\omega\sim A^{-4/3}$, which is identical to
the result of the time-dependent variational approach for nuclear
giant resonances.
In marked contrast, Catara {\it et al.} claimed a large anharmonicity
of plasmon resonances using a boson expansion method\cite{CCG93},
which contradicts
both the result of Ref. \cite{H99} and recent experimental
observations \cite{SKIH98}.
This fact also motivated us to compare these two methods, together
with the nuclear field theory, on a solvable
model in order to clarify the origin of the discrepancy.
As an aside, we mention also that there is a size-dependent anharmonicity
in quantum
electrodynamics, considering the photon spectrum in a small cavity.
In QED the only
dimensional quantity is the electron mass $m_e$, and the photon-photon
interaction is proportional to $m_e^{-4}$. Thus on dimensional grounds
the relative shift of a two photon state in a cavity of side $L$ scales
as $\Delta\omega/\omega\sim 1/m_e^4L^4$.
Considering that sizes of a system scale as $R\sim A^{1/3}$,
the results of refs.
\cite{BF97,H99} are $\Delta\omega/\omega\sim R^{-4}$, identical to QED.
In this work we compare the nuclear field theory, the time-dependent
variation approach, and the boson expansion method
on the second Lipkin model, eq. (2.5) in ref.~\cite{LMG65}.
The model is finite-dimensional and can be solved exactly.
It has widely been used in the literature to test a number of
many-body theories \cite{H73,FC72,BC92,KWR95,VCACL99}.
The paper is organized as follows.
In Sec. II, we first solve the model in the random phase approximation (RPA)
and discuss the harmonic limit. In Sec. III, we derive the
collective Hamiltonian using the TDVP.
We requantize the collective Hamiltonian and discuss the deviation from the
harmonic limit. Numerical calculations are performed and compared with
the exact solutions.
In Sec. IV, we use the nuclear field theory as an alternative. There we
see that it gives the same result as the TDVP, to leading order
in the dependence on the number of particles.
In Sec. V, we compare those results with the boson expansion approach.
It will be shown that it leads to the identical result to
the TDVP and the NFT. Finally we summarise the paper in Sec. VI.
\section{HARMONIC LIMIT}
Lipkin, Meshkov and Glick\cite{LMG65} proposed two Hamiltonian models to
describe $N$ particles, each of which can be in two states,
making $2^N$ states in the basis.
Using Pauli matrix representation
for the operators in the two-state space, the second model has the
Hamiltonian
\begin{equation}
\label{HH}
H=\frac{1}{2}\epsilon\sigma_z - \frac{V}{2}\sigma_x\sigma_x.
\end{equation}
The first term is the single-particle Hamiltonian with an excitation energy
$\epsilon$, and the second term is a two-body interaction.
The quasi-spin operators $\sigma_z$ and $\sigma_x$ are given by
\begin{eqnarray}
\sigma_z&=&\sum_{i=1}^{N}(a^{\dagger}_{1i}a_{1i}
-a^{\dagger}_{0i}a_{0i}),\\
\sigma_x&=&\sum_{i=1}^{N}(a^{\dagger}_{1i}a_{0i}
+a^{\dagger}_{0i}a_{1i}),
\end{eqnarray}
respectively.
$a^{\dagger}_{1i}$ ($a_{1i}$) and $a^{\dagger}_{0i}$ ($a_{0i}$)
are the creation (annihilation) operators of the
$i$-th particle for the upper and the lower levels, respectively.
For small $V$, the Hartree ground state $|0>$ is the fully
spin-polarised state with matrix elements given by
\begin{equation}
<0|\sigma_i|0> = -N\delta_{i,z}.
\end{equation}
A suitable basis for the exact diagonalization of $H$ is the set of
eigenvectors
of the angular momentum operators $J^2$ and $J_{z}$ with $J=N/2$.
Then the
dimension of the matrix diagonalization is reduced from $2^N$ to
$N+1$, making the numerical problem very easy.
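The diagonalization is indeed straightforward; the following minimal sketch (ours, not the authors' code) builds $H=\epsilon J_z - 2V J_x^2$, which is the Hamiltonian above rewritten via $\sigma_z = 2J_z$ and $\sigma_x = 2J_x$, in the $J=N/2$ basis using standard angular momentum matrix elements.

```python
import numpy as np

# Exact diagonalization of the second Lipkin model in the J = N/2
# quasi-spin sector: an (N+1) x (N+1) matrix problem.
def lipkin2_spectrum(N, eps, V):
    J = N / 2.0
    m = np.arange(-J, J + 1.0)                  # J_z eigenvalues
    Jz = np.diag(m)
    # <m+1| J_+ |m> = sqrt(J(J+1) - m(m+1)), placed on the subdiagonal
    Jp = np.diag(np.sqrt(J*(J+1) - m[:-1]*(m[:-1]+1)), -1)
    Jx = (Jp + Jp.T) / 2.0
    H = eps*Jz - 2.0*V*(Jx @ Jx)
    return np.linalg.eigvalsh(H)                # sorted eigenvalues

print(lipkin2_spectrum(2, 1.0, 0.1))   # three levels for N = 2
```

For $N=2$ the $m=\pm1$ block can be diagonalized by hand, giving $E = -V \pm \sqrt{1+V^2}$ and $E=-2V$ (in units $\epsilon=1$), which the numerical spectrum reproduces.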
Before going to the anharmonicity, we note that the harmonic limit
is obtained by solving the RPA equations. This was carried out in ref.
\cite{MGL65} for the first Lipkin model.
The RPA frequency for the second Lipkin model, Eq. (\ref{HH}),
is obtained in the same manner.
Setting the RPA excitation operator $O^{\dagger}$ as
\begin{equation}
O^{\dagger}=
X\left(\sum_{i=1}^{N}a^{\dagger}_{1i}a_{0i}\right) -Y
\left(\sum_{i=1}^{N}a^{\dagger}_{0i}a_{1i}\right),
\label{RPAexc}
\end{equation}
the RPA equations read
\begin{equation}
\left(\begin{array}{cc}
A&B\\
B&A
\end{array}\right)
\left(\begin{array}{c}
X\\
Y
\end{array}\right)
=\omega
\left(\begin{array}{cc}
1&0\\
0&-1
\end{array}\right)
\left(\begin{array}{c}
X\\Y
\end{array}\right),
\end{equation}
where $A$ and $B$ are given by
\begin{eqnarray}
A&=&\frac{1}{4N}<0|[\sigma_-,[H,\sigma_+]]|0>
= \epsilon (1-\chi), \\
B&=&-\frac{1}{4N}<0|[\sigma_-,[H,\sigma_-]]|0>
= \epsilon \chi,
\end{eqnarray}
respectively. We have defined $\sigma_-$, $\sigma_+$, and $\chi$ as
\begin{eqnarray}
\sigma_-&=&2\sum_{i=1}^{N}a^{\dagger}_{0i}a_{1i}, \\
\sigma_+&=&2\sum_{i=1}^{N}a^{\dagger}_{1i}a_{0i}, \\
\chi&=&V(N-1)/\epsilon,
\end{eqnarray}
respectively.
From these equations, the RPA frequency and the amplitude of the forward
and the backward scatterings are found to be
\begin{eqnarray}
\label{RPA}
\omega&=&\sqrt{(A+B)(A-B)}=\epsilon\sqrt{1-2\chi}, \\
X&=&\frac{\omega+\epsilon}{2\sqrt{N\omega\epsilon}}, \\
Y&=&\frac{\omega-\epsilon}{2\sqrt{N\omega\epsilon}},
\end{eqnarray}
respectively.
Fig.~1 compares the exact solution for the excitation energy of the
first excited state with the RPA frequency given by eq.~(\ref{RPA}).
As a typical example, we set the strength of the interaction
$V/\epsilon$ to be 0.01.
The solid line shows the exact solutions for this particular choice
of the Hamiltonian parameters, while the dashed line shows the RPA
frequency. At $N=51,~ \chi=0.5$, the RPA frequency becomes zero and
the system undergoes a phase transition from spherical
to deformed.
Figure 2 is the same as Fig.~1, but for a fixed $\chi$.
We set $\chi$ to be 0.25, which corresponds to the isoscalar giant
quadrupole resonance.
We find significant deviation of the RPA frequency from the exact
solution for small values of $N$, suggesting large anharmonicities.
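This trend can be reproduced with a small numerical experiment (our illustration, not the paper's code): at fixed $\chi = V(N-1)/\epsilon$ the exact first excitation energy of $H=\epsilon J_z - 2VJ_x^2$ approaches the RPA frequency $\omega = \epsilon\sqrt{1-2\chi}$ as $N$ grows.

```python
import numpy as np

# Exact first gap of the second Lipkin model vs. the RPA frequency
# at fixed chi = V*(N-1)/eps.
def first_gap(N, eps, V):
    J = N / 2.0
    m = np.arange(-J, J + 1.0)
    Jp = np.diag(np.sqrt(J*(J+1) - m[:-1]*(m[:-1]+1)), -1)
    Jx = (Jp + Jp.T) / 2.0
    E = np.linalg.eigvalsh(eps*np.diag(m) - 2.0*V*(Jx @ Jx))
    return E[1] - E[0]

eps, chi = 1.0, 0.25
omega = eps*np.sqrt(1.0 - 2.0*chi)
for N in (10, 40):
    gap = first_gap(N, eps, chi*eps/(N - 1))
    print(N, gap, abs(gap - omega)/omega)   # relative deviation shrinks with N
```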
We discuss now the deviation from the harmonic limit.
\section{TIME-DEPENDENT VARIATIONAL APPROACH}
The time-dependent variational approach has been applied to the first
Lipkin model by Kan et al. in ref. \cite{KA80},
but has never been applied to our knowledge to the second model.
In keeping with the procedure of ref. \cite{BF97}, we postulate
a time-dependent wave function of the form
\begin{equation}
\label{ab}
|\alpha\beta\rangle = \exp(i\alpha(t) \sigma_x)\exp(i\beta(t)
\sigma_y)|0\rangle.
\end{equation}
The motivation for this ansatz appears in ref.~\cite{BF97}. The operator
in the first term is the one that we wish to evaluate in the
transition matrix elements. The operator in the second term
is obtained by the commutator with the Hamiltonian,
\begin{equation}
[H,\sigma_x] = i \epsilon \sigma_y.
\end{equation}
The Lagrangian is given by
\begin{equation}
\label{L}
{\cal L} = -{\dot \alpha} \langle \beta|\sigma_x|\beta\rangle - \langle
\alpha\beta|H|\alpha\beta\rangle.
\end{equation}
We reduce this with the help of the identity
\begin{equation}
e^{-i\sigma_i \theta} \sigma_j e^{i\sigma_i \theta}= \cos 2\theta\, \sigma_j
+ \sin 2\theta\, \sigma \cdot (\hat i \times \hat j),
\end{equation}
where $i\ne j$ are Cartesian indices of the Pauli matrices. For example,
the bracket in the first term of eq.~(\ref{L}) is reduced as
\begin{equation}
\langle \beta|\sigma_x|\beta\rangle = \langle 0|
e^{-i\sigma_y \beta} \sigma_x e^{i\sigma_y \beta}|0\rangle
= \cos 2\beta \langle 0|\sigma_x|0\rangle -
\sin2\beta \langle 0|\sigma_z|0\rangle.
\end{equation}
The first term in the Lagrangian is
\begin{equation}
-{\dot \alpha} \langle \beta|\sigma_x|\beta\rangle
= -N\dot\alpha\sin 2\beta.
\end{equation}
The second term is
\begin{equation}
- \langle\alpha\beta|H|\alpha\beta\rangle=
\epsilon {N\over 2}\cos 2\alpha\,\cos
2\beta\, +V{N\over 2}(\cos^2 2\beta +N\sin^2 2\beta\, ).
\end{equation}
The Lagrangian may then be expressed as
\begin{equation}
\label{LL}
{\cal L} = -N{\dot \alpha} \sin2\beta +\epsilon{N\over 2} \cos 2\alpha\,\cos
2\beta\, +V{N\over 2} (\cos^2 2\beta +N\sin^2 2\beta\, ).
\end{equation}
The first Lagrangian equation is $d/dt\,\partial{\cal L}/\partial\dot\alpha
-\partial {\cal L}/\partial\alpha=0$. It reduces to
\begin{equation}
\dot\beta = {\epsilon\over 2} \sin 2 \alpha.
\end{equation}
Similarly from the second Lagrange equation,
$d/dt\,\partial{\cal L}/\partial\dot
\beta-\partial {\cal L}/\partial\beta=0$ we obtain
\begin{equation}
\dot\alpha\cos2\beta + {\epsilon\over 2} \cos 2\alpha\, \sin 2\beta
-(N-1)V \cos2\beta\,\sin 2\beta=0.
\end{equation}
Next let us linearize and see what the RPA frequencies would be. The
linearized equations are
\begin{equation}
\dot\beta = \epsilon \alpha
\end{equation}
$$
\dot\alpha + (\epsilon - 2V (N-1))\beta=0.
$$
The equation for
the frequency reads
\begin{equation}
\omega^2=\epsilon^2 - 2 (N-1) V \epsilon =\epsilon^2(1-2\chi)
\end{equation}
in agreement with the result of eq.~(\ref{RPA}).
A Hamiltonian corresponding to our Lagrangian can be
seen by inspection, comparing eq.~(\ref{LL}) to the form
\begin{equation}
{\cal L} = \dot q p - {\cal H}(p,q).
\end{equation}
Equation~(\ref{LL}) is already of this form with e.g.,
$p= - {N\over 2}\sin 2 \beta$, $q= 2\alpha$. The Hamiltonian is
then given by
\begin{equation}
{\cal H}(p,q) = -\epsilon {N\over 2} \cos q \sqrt{1-(2p/N)^2} - V{N\over 2}
\Bigl((1-(2p/N)^2) + N (2p/N)^2\Bigr).
\label{HTDV}
\end{equation}
We now expand $\cal H$ in powers of $q$ and $p$ up to fourth order.
Dropping the constant term, the expansion has the form
\begin{equation}
\label{H}
{\cal H} = {p^2\over 2 m} + {k\over 2} q^2 + a q^4 + b p^4 + c q^2 p^2
\end{equation}
with coefficients
\begin{eqnarray}
{1\over m} &=& {2\over N}\Biggl(\epsilon-2 V (N -1)\Biggr), \\
k &=& \epsilon {N\over 2}, \\
a &=& -{\epsilon N \over 48}, \\
b &=& {\epsilon\over N^3}, \\
c &=& -{\epsilon\over 2N}.
\label{c}
\end{eqnarray}
Note that we recover the linear frequency, eq.~(\ref{RPA}), immediately from
$\omega^2 = k/m$.
In ref. \cite{BF97}, the anharmonicity was determined by requantizing
the Hamiltonian with the Bohr-Sommerfeld condition,
\begin{equation}
\int_{q0}^{q1} p dq = n \pi,
\end{equation}
where $p$ and $q$ satisfy ${\cal H}(p,q)=E$ and $q_0$ and $q_1$ are
the endpoints of the motion at energy $E$.
However, here we find it more convenient to use the
equivalent formula
\begin{equation}
\label{BS}
\int_{{\cal H} < E} dp dq = 2n\pi.
\end{equation}
In the same sense as the expansion of the Hamiltonian ${\cal H}(p,q)$ as
done in eq. (\ref{H}), we apply eq.~(\ref{BS}) iteratively.
We first consider only the harmonic part of the
Hamiltonian and transform the integration region to a circle.
We also use polar coordinates and write $p'=p/\sqrt{m}=r \sin\theta,
q'=\sqrt{k}\,q =r\cos\theta$. The radius of the circle is then
$r_0=\sqrt{2E}$ and the harmonic approximation gives
\begin{equation}
\int_{{\cal H} < E} dp dq = \sqrt{m\over k} \int_{{\cal H} < E} dp' dq'
= {1\over \omega} \int_0^{2\pi} d\theta \int_0^{r_0} r dr
= 2\pi {E\over\omega}.
\end{equation}
The nonlinearity can now be treated as a perturbation. To lowest
order in the quartic terms, the radius to the boundary surface is
given by
\begin{equation}
r_1^2\approx r_0^2 - 2 r_0^4 \Biggl({a\over k^2} \cos^4\theta+
m^2 b \sin^4\theta + {m c\over k}\sin^2\theta\cos^2\theta\Biggr).
\end{equation}
This integral is easily evaluated with the result
\begin{equation}
{1\over 2 \pi}\int_{{\cal H} < E} dp dq \approx {E\over\omega} -
{\omega\over 2}\left({E\over\omega}\right)^2
\left(3{a\over k^2}+3{b m^2}+{c m\over k}\right).
\end{equation}
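This expansion is easy to cross-check numerically: the boundary condition ${\cal H}=E$ is a quadratic in $r^2$ and can be solved exactly for each angle. The sketch below uses illustrative parameter values (any weak coupling works; they are not fixed by the text):

```python
import numpy as np

# Illustrative parameter values (assumed; any weak coupling works):
N, eps, V = 20, 1.0, 0.01
m = 1.0 / ((2.0 / N) * (eps - 2.0 * V * (N - 1)))
k = eps * N / 2.0
a = -eps * N / 48.0
b = eps / N**3
c = -eps / (2.0 * N)
omega = np.sqrt(k / m)

E = 0.1  # energy small enough for the quartic terms to be perturbative

# Boundary H = E in the scaled polar coordinates: r^2/2 + A(theta) r^4 = E.
# The physical root of this quadratic in r^2, in a numerically stable form:
theta = (np.arange(200000) + 0.5) * 2.0 * np.pi / 200000
A = (a / k**2) * np.cos(theta)**4 + b * m**2 * np.sin(theta)**4 \
    + (c * m / k) * np.sin(theta)**2 * np.cos(theta)**2
r2 = 4.0 * E / (1.0 + np.sqrt(1.0 + 16.0 * A * E))

# (1/2pi) x phase-space area: exact (numerical) vs. the perturbative formula
area_numeric = np.mean(r2 / 2.0) / omega
area_pert = E / omega - (omega / 2.0) * (E / omega)**2 \
    * (3.0 * a / k**2 + 3.0 * b * m**2 + c * m / k)
```

For small $E$ the numerically evaluated area agrees with the perturbative expression far more closely than the size of the anharmonic correction itself.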
Inserting the parameters from eqs.~(\ref{H})--(\ref{c}), the anharmonic
term can be expressed as
\begin{equation}
{\omega\over 8\epsilon N} \left({E\over\omega}\right)^2
\left(-1 + 3(\epsilon/\omega)^4- 2(\epsilon/\omega)^2\right).
\end{equation}
Note that if there is no interaction, $\omega=\epsilon$ and
the anharmonicity vanishes. This is rather remarkable; the
Hamiltonian in this case is the first term in eq.~(\ref{HTDV}), which
looks nonlinear. But the solutions of the equations of motion
are independent of the excitation energy. The spectrum is not that of a
harmonic oscillator, however, because the energy is bounded. These two
properties correspond exactly to the quantum spectrum of the
operator $\epsilon J_z$.
We next quantize the above action to get
\begin{equation}
E_n = n\omega + n^2{\omega^2\over 8\epsilon N}
\Bigl(-1 + 3(\epsilon/ \omega)^4- 2(\epsilon/\omega)^2\Bigr).
\end{equation}
Taking the second difference, this yields an anharmonicity of
\begin{equation}
\label{ANH}
\Delta^{(2)} E ={\omega^2\over 4\epsilon N} \Bigl(
-1 + 3(\epsilon/\omega)^4- 2(\epsilon/\omega)^2\Bigr)
=\frac{2\chi\epsilon^3}{N\omega^2}\left(1-\frac{\chi}{2}\right).
\end{equation}
The exact value of the anharmonicity $\Delta^{(2)}E$ is compared with the
value obtained from eq.~(\ref{ANH}) in Fig.~3. We can see that the
time-dependent variational principle works very well.
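A comparison of this kind can be sketched by exact diagonalization in the quasi-spin basis. The check below assumes the Lipkin Hamiltonian takes the form $H = \epsilon J_z - (V/2)(J_+ + J_-)^2$ in the multiplet $j=N/2$; this is an assumption on our part (the excerpt does not display eq. (HH)), chosen so that its boson image reproduces the expansion used in the boson-expansion section below:

```python
import numpy as np

# Assumed quasi-spin form of the Lipkin Hamiltonian (an assumption here,
# chosen to be consistent with the boson image used later in the text):
# H = eps*J_z - (V/2)(J_+ + J_-)^2 in the multiplet j = N/2.
N, eps, chi = 500, 1.0, 0.3
V = chi * eps / N
j = N / 2.0

m = np.arange(-j, j + 1)                               # J_z eigenvalues
cp = np.sqrt(j * (j + 1) - m[:-1] * (m[:-1] + 1))      # <m+1|J_+|m>
Jpm = np.diag(cp, 1) + np.diag(cp, -1)                 # J_+ + J_-
H = np.diag(eps * m) - 0.5 * V * (Jpm @ Jpm)

E = np.linalg.eigvalsh(H)                              # ascending eigenvalues
anharm_exact = E[2] - 2 * E[1] + E[0]

omega = eps * np.sqrt(1 - 2 * chi)
anharm_formula = 2 * chi * eps**3 / (N * omega**2) * (1 - chi / 2)
```

For large $N$ the second difference of the lowest three exact levels approaches the $1/N$ formula, with corrections of relative order $1/N$.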
\section{Nuclear Field Theory (NFT) Approach}
The NFT is a formulation of many-body perturbation theory with
vibrational modes summed to all orders in the RPA.
Its building blocks are the RPA phonons
and the single-particle degrees of freedom,
which are described in the Hartree-Fock approximation.
The coupling between them is treated diagrammatically in
perturbation theory.
For the Hamiltonian (\ref{HH}), the effective NFT Hamiltonian
is given, to the lowest order, by
\begin{equation}
H_{NFT}=\frac{1}{2}\epsilon\sigma_z + \omega O^{\dagger}O
+H_{pv}.
\label{HNFT}
\end{equation}
The first term in $H_{NFT}$ describes the single-particle spectrum. In
writing down this term, we have used the fact
that, for small values of $V$, the excitation energy
in the HF approximation is given by $\epsilon$ and that the creation and
annihilation operators for the HF levels are the same as those for the
unperturbed levels.
The second term describes the RPA phonons, with
$\omega$ and $O^{\dagger}$ given
by eqs. (\ref{RPA}) and (\ref{RPAexc}), respectively.
The particle-vibration interaction $H_{pv}$ in eq. (\ref{HNFT}) is
given as
\begin{equation}
H_{pv}=-\Lambda (O^{\dagger}+O)\sigma_x,
\end{equation}
where the coupling constant $\Lambda$ is given by
\begin{equation}
\Lambda=NV\sqrt{\frac{\epsilon}{N\omega}}=\epsilon\chi'
\sqrt{\frac{\epsilon}{N\omega}},
\end{equation}
$\chi'$ being $NV/\epsilon$.
This Hamiltonian is constructed by replacing $\sigma_x$ in the
two-body interaction in the original Hamiltonian (\ref{HH})
as in Ref. \cite{BBDLS76}.
In general, there is also a residual interaction among
particles and holes, but it
does not contribute for the Hamiltonian (\ref{HH}) to the lowest order.
Each graph in the NFT contributes to a given order in $1/N$,
but to all orders in $\chi'$.
Since the microscopic origin of the RPA phonon is a coherent sum of
particle-hole excitations, bubble diagrams have to be excluded
when one calculates physical quantities in the NFT \cite{BBDLSM76}.
To zeroth order in $1/N$,
the phonon energy coincides with that in the RPA given by eq. (\ref{RPA}).
The anharmonicity begins with the leading $1/N$ diagrams,
which are shown in Fig. 4.
These diagrams are called \lq\lq butterfly\rq\rq\ graphs (see also
refs. \cite{BM75,BBDLSM76,HA74}).
For each diagram shown in Fig.~4, there are five other diagrams which are
obtained by changing the direction of the phonon lines.
As noted above, for the Hamiltonian (\ref{HH}) there is no diagram of order
$1/N$ involving the residual interaction among fermions.
The contribution from each diagram is most easily evaluated by
using Rayleigh-Schr\"odinger energy denominators, which are more
suitable for the lowest-order expansion \cite{BBB78}.
The four graphs in Fig. 4 have identical contributions, each given by
\begin{equation}
Graph(a)={-N \Lambda^4\over{(2\omega -\omega
-\epsilon )^2(2\omega-2\epsilon)}}.
\end{equation}
In this equation, the minus sign appears
because of the crossing of two fermion lines \cite{BBBL77}.
By summing up the contributions from all diagrams, we obtain
\begin{equation}
\Delta^{(2)}E=
-4N\Lambda^4\frac{\epsilon(\omega^2+3\epsilon^2)}{(\omega^2-\epsilon^2)^3}
=\frac{2\chi'\epsilon^3}{N\omega^2}\left(1-\frac{\chi'}{2}\right).
\label{ANHNFT}
\end{equation}
To compare with eq.~(\ref{ANH}), note that
$\chi'=\chi$ to the leading order of $1/N$.
With this substitution, the two results are identical.
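The equality of the two closed forms in eq.~(\ref{ANHNFT}) is straightforward to verify symbolically; a minimal SymPy check:

```python
import sympy as sp

eps, chi, N = sp.symbols('epsilon chi N', positive=True)
omega = eps * sp.sqrt(1 - 2 * chi)              # RPA frequency
Lam = eps * chi * sp.sqrt(eps / (N * omega))    # particle-vibration coupling

# Butterfly-graph sum versus the closed form of eq. (ANHNFT)
nft = -4 * N * Lam**4 * eps * (omega**2 + 3 * eps**2) / (omega**2 - eps**2)**3
closed = 2 * chi * eps**3 / (N * omega**2) * (1 - chi / 2)
difference = sp.simplify(nft - closed)          # vanishes identically
```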
\section{Boson Expansion Approach}
In the boson expansion method, each fermionic operator is
replaced by a corresponding operator written in terms
of boson operators. There are several prescriptions for carrying
out the mapping from the fermionic to the bosonic space \cite{RS81}.
Here we follow Refs. \cite{BC92,VCACL99,PKD68} which discussed
the anharmonicities of the Lipkin model using the Holstein-Primakoff
mapping.
In this mapping, fermionic operators are mapped to bosonic
operators so that the commutation relations among operators
are preserved.
The quasi-spin operators in the present two-level problem are then
mapped as \cite{BC92,VCACL99,PKD68,RS81}
\begin{eqnarray}
\sigma_+ &\to & 2\sqrt{N}B^{\dagger}\sqrt{1-\frac{B^{\dagger}B}{N}} \\
\sigma_- &\to & 2\sqrt{N}\sqrt{1-\frac{B^{\dagger}B}{N}} B\\
\sigma_z &\to & -N + 2 B^{\dagger}B,
\end{eqnarray}
where the operators $B$ and $B^{\dagger}$ satisfy the boson commutation
relation, i.e., $[B, B^{\dagger}]=1$.
The Hamiltonian in the boson space which corresponds to
the Lipkin Hamiltonian (\ref{HH}) is therefore obtained as
\begin{eqnarray}
H_B &\sim& \epsilon\left(1-\frac{NV}{\epsilon}\right)B^{\dagger}B
-\frac{NV}{2}\left(B^{\dagger}B^{\dagger}+BB\right) \nonumber \\
&&
+VB^{\dagger}B+\frac{V}{4}\left(B^{\dagger}B^{\dagger}+BB\right)
+VB^{\dagger}B^{\dagger}BB
+\frac{V}{2}\left(B^{\dagger}B^{\dagger}B^{\dagger}B+B^{\dagger}BBB\right),
\label{BH}
\end{eqnarray}
to the first order in $1/N$.
A truncation of the expansion at the leading order in $1/N$
corresponds to the RPA which we discussed in Sec. II.
To this order, the boson Hamiltonian (\ref{BH}) is given by
\begin{equation}
H^{(2)}_B=
\epsilon\left(1-\frac{NV}{\epsilon}\right)B^{\dagger}B
-\frac{NV}{2}\left(B^{\dagger}B^{\dagger}+BB\right).
\label{BH2}
\end{equation}
As is well known,
this Hamiltonian can be diagonalised by introducing a transformation
\begin{eqnarray}
B^{\dagger}&=& X_0O^{\dagger} + Y_0O \label{RPAOD} \\
B&=& X_0O + Y_0O^{\dagger}, \label{RPAO}
\end{eqnarray}
and imposing
\begin{equation}
[H^{(2)}_B, O^{\dagger}]=\omega O^{\dagger},
\end{equation}
with a condition $X_0^2-Y_0^2=1$.
The frequency $\omega$ then reads
\begin{equation}
\omega=\epsilon\sqrt{1-2\chi'},
\end{equation}
$\chi'$ being $NV/\epsilon$, which was introduced in the
previous section, together with
\begin{eqnarray}
X_0^2&=&\frac{\omega+\epsilon(1-\chi')}{2\omega}, \\
Y_0^2&=&\frac{-\omega+\epsilon(1-\chi')}{2\omega}, \\
X_0Y_0&=&\frac{\epsilon \chi'}{2\omega}.
\end{eqnarray}
The frequency $\omega$ coincides with that in the RPA given by
eq. (\ref{RPA}) to the leading order of $1/N$.
As in the nuclear field theory approach, the anharmonicity
begins with the next order of the $1/N$ expansion.
In terms of RPA phonon creation and annihilation operators defined
in eqs. (\ref{RPAOD}) and (\ref{RPAO}),
the boson Hamiltonian (\ref{BH}) can be rewritten as
\begin{eqnarray}
H_B&=&\omega O^{\dagger}O+H_{11}O^{\dagger}O
+H_{20}\left(O^{\dagger 2}+O^2\right) \nonumber \\
&&+H_{40}\left(O^{\dagger 4}+O^4\right)
+H_{31}\left(O^{\dagger 3}O+O^{\dagger}O^3\right)
+H_{22}O^{\dagger 2}O^2,
\end{eqnarray}
where the first term is the leading order of the $1/N$ expansion
given by eq. (\ref{BH2}) and the rest are the higher-order corrections.
The coefficient $H_{22}$, for example, is given by
\begin{equation}
H_{22}=V(X_0^4+3X_0^3Y_0+4X_0^2Y_0^2+3X_0Y_0^3+Y_0^4)
=\frac{V\epsilon^2}{\omega^2}\left(1-\frac{\chi'}{2}\right).
\end{equation}
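The algebra behind this closed form can be checked symbolically. Rewriting the quartic polynomial in terms of the squared amplitudes $X_0^2$, $Y_0^2$ and the cross term $X_0Y_0$ avoids nested square roots:

```python
import sympy as sp

eps, chi = sp.symbols('epsilon chi', positive=True)
omega = eps * sp.sqrt(1 - 2 * chi)

# Squared Bogoliubov amplitudes and their cross term, as given in the text
X2 = (omega + eps * (1 - chi)) / (2 * omega)    # X_0^2
Y2 = (-omega + eps * (1 - chi)) / (2 * omega)   # Y_0^2
XY = eps * chi / (2 * omega)                    # X_0 Y_0

consistency = sp.simplify(XY**2 - X2 * Y2)      # should vanish

# H22/V = X0^4 + 3 X0^3 Y0 + 4 X0^2 Y0^2 + 3 X0 Y0^3 + Y0^4,
# rewritten in terms of the squares
H22_over_V = X2**2 + Y2**2 + 4 * XY**2 + 3 * XY * (X2 + Y2)
difference = sp.simplify(H22_over_V - eps**2 / omega**2 * (1 - chi / 2))
```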
In order to estimate the degree of anharmonicity, we use
perturbation theory. First-order perturbation theory gives
the energies of the one- and two-phonon states as
\begin{eqnarray}
E_1&=&\omega+H_{11}, \\
E_2&=&2(\omega+H_{11}) + 2H_{22},
\end{eqnarray}
respectively. Taking the second difference, the anharmonicity
reads
\begin{equation}
\Delta^{(2)}E=2H_{22}=
\frac{2\chi'\epsilon^3}{N\omega^2}\left(1-\frac{\chi'}{2}\right),
\end{equation}
which is identical to the result of the variational approach given by eq.
(\ref{ANH}) as well as that of the nuclear field theory, eq. (\ref{ANHNFT}).
\section{Conclusion}
We have shown that the nuclear field theory, the time-dependent
variational principle, and the
boson expansion method give identical leading-order anharmonicities for the
Lipkin model, and that the formulas agree well with the exact numerical
solution.
The anharmonicity is inversely proportional to the number of particles in the
system, when the other parameters are fixed to keep the harmonic frequency
the same.
This clarifies the origin of the conflicting results for the $A$-dependence
of the anharmonicity obtained in refs. \cite{BF97} and \cite{BBDLSM76}.
In ref. \cite{BF97} the time-dependent method was applied to a Skyrme-like
Hamiltonian involving all $A$ nucleons, and the result was $\Delta^{(2)}E
\propto f(\omega)/A$. In ref. \cite{BBDLSM76}, the Hamiltonian
was restricted to a space of a single major shell for particle orbitals
and similarly for the hole orbitals.
Since the number of particles in the valence shell increases as
$A^{2/3}$, the result was $\Delta^{(2)}E\propto f(\omega)/A^{2/3}$.
Finally, it should perhaps be emphasized that both methods predict that the
anharmonicity is very small for giant resonances: both $A^{2/3}$ and $A$
are large numbers. This need not be the case for low-lying collective
vibrations. The NFT can produce large effects when there are small energy
denominators. Low-lying excitations are difficult to describe with a simple
ansatz like eq.~(\ref{ab}), so the time-dependent variational principle is
not easily applied. Clearly this is an area that should be explored further.
As for the discrepancy between the time-dependent variational approach
and the boson expansion method concerning anharmonicities of
plasmon resonances of metal clusters, our study showed that
the origin of the discrepancy should not
be ascribed to the method used to solve the problem.
The origin of the discrepancy, therefore, is not traceable at the
moment, and further studies
may be necessary in order to resolve it.
\section*{Acknowledgments}
We thank A. Bulgac for showing us the Hamiltonian construction and
C. Volpe for discussions.
K.H. acknowledges support from the Japan Society for the Promotion of
Science for Young Scientists.
G.F.B. acknowledges support from the U.S.
Dept. of Energy under Grant DE-FG-06ER46561.
Both G.F.B. and P.F.B. thank
the IPN of Orsay for warm hospitality where this work was completed.
\section{INTRODUCTION}
The rheological properties of dilute polymer solutions are commonly used in
industry for characterising the dissolved polymer in terms of its molecular
weight, its mean molecular size, its chain architecture, the relaxation
time spectrum, the translational diffusion coefficient and so on.
There is therefore considerable effort world wide on developing
molecular theories that
relate the microscopic structure of the polymer and its interactions
with the solvent to the observed macroscopic behavior. In this chapter,
recent theoretical progress that has been made in the
development of a coherent
conceptual framework for modelling the rheological properties
of dilute polymer solutions is reviewed.
A polymer solute molecule dissolved in a Newtonian solvent is typically
represented in molecular theories by a coarse-grained mechanical model,
while the relatively rapidly varying motions of solvent molecules
surrounding the
polymer molecule are replaced by a random force field acting on the
mechanical model. The replacement of the complex polymer molecule with a
coarse-grained mechanical model is justified by the belief that such
models capture those large scale properties of the polymer molecule, such as
its stretching and orientation by the solvent flow field, that are
considered to be responsible for the solution's macroscopic behavior.
An example of a coarse-grained model frequently used to represent a
flexible polymer molecule is the {\it bead-spring chain}, which is a
linear chain of identical beads connected by elastic springs.
Progress in the development of molecular theories for dilute polymer
solutions has
essentially involved the successive introduction, at the molecular level,
of various physical phenomena that are considered to be responsible for the
macroscopic properties of the polymer solution. For instance, the simplest theory
based on a bead-spring model assumes that the solvent influences the motion of
the beads by exerting a drag force and a Brownian force.
Since this theory fails to predict a large number of the observed
features of polymer solutions, more advanced theories have been developed
which incorporate additional microscopic phenomena. Thus, theories have
been developed which (i) include the phenomenon of `hydrodynamic interaction'
between the beads, (ii) try to account for the finite extensibility of the polymer
molecule, (iii) attempt to ensure that two parts of the polymer chain do not
occupy the same place at the same time, (iv) consider the internal
friction experienced when two parts
of a polymer chain close to each other in space move apart,
and so on. The aim of this chapter is to present the unified framework
within which these microscopic phenomena may be treated, and to focus in
particular on recent advances in the treatment of the effect of
hydrodynamic interaction. To a large extent, the notation that is used
here is the same as
that in the treatise {\it Dynamics of Polymeric Liquids} by Bird and
co-authors~\cite{birdb}.
\section{TRANSPORT PROPERTIES OF DILUTE SOLUTIONS}
\label{tpds}
\subsection{Dilute solutions}
A solution is considered dilute if the polymer chains are isolated from
each other and have negligible interactions with each other. In this regime of
concentration the polymer solution's properties are determined by the nature of
the interaction between the segments of a single polymer chain with each other,
and by the nature of the interaction between the segments and the
surrounding solvent molecules. As the concentration of polymers is increased,
a new threshold is reached where the polymer molecules begin to interpenetrate
and interact with each other. This threshold is reached at a surprisingly
low concentration, and heralds the inception of the semi-dilute regime,
where the polymer solution's properties have been found to be
significantly different. Beyond the semi-dilute regime lie concentrated
solutions and melts. In this chapter we are concerned exclusively with
the behavior of dilute solutions.
A discussion of the threshold concentration at which the semi-dilute
regime is initiated is helpful in introducing several concepts that are
used frequently in the description of polymer solutions.
A polymer molecule surrounded by solvent molecules undergoes thermal
motion. A measure of the average size of the polymer molecule is the
root mean square distance between the two ends of the polymer chain,
typically denoted by $R$. This size
is routinely measured with the help of scattering experiments, and is
found to increase with the molecular weight of the polymer chain with a
scaling law, $R \sim M^\nu$, where $M$ is the molecular weight, and
$\nu$ is the scaling exponent which depends on the nature of the
polymer--solvent interaction. In {\it good \/} solvents, solute--solvent
interactions are favoured relative to solute--solute interactions. As a
consequence the polymer chain {\it swells \/} and its size is found to scale with
an exponent $\nu = 3/5$. On the other hand, in {\it poor \/} solvents,
the situation is one in which solute--solute interactions are preferred.
There exists a particular temperature, called the {\it theta \/}
temperature, at which the scaling exponent $\nu$ changes dramatically
from 3/5 to 1/2. At this temperature, the urge to expand caused by
two parts of the chain being unable to occupy the same location
(leading to the presence of an {\it excluded volume}), is just balanced by
the repulsion of the solute molecules by the solvent molecules.
Polymer chains in a solution can be imagined to begin to interact with
each other when the solution volume is filled with closely packed
spheres representing the average size of the molecule. This occurs when
$n_p \, R^3 \approx 1$, where $n_p$ is the number of chains per unit volume.
Since $n_p =\rho_p \, N_{\rm A} / M $, where $\rho_p$ is the polymer mass density
and $N_{\rm A}$ is Avogadro's number, it follows that the polymer density at
overlap, $\rho_p^*$, scales with molecular weight as,
$\rho_p^* \sim M^{1-3\nu}.$ Polymer molecules typically have
molecular weights
between $10^4$ and $10^6$ g/mol. As a result, it is clear that
the polymer solution can be considered dilute only at very low
polymer densities. Since experimental measurements are difficult at such
low concentrations, the usual practice is to extrapolate results of
experiments carried out at decreasing concentrations to the limit of
zero concentration. For instance, in the case of dilute polymer
solutions it is conventional to report
the {\it intrinsic} viscosity, which is defined by,
\begin{equation}
\lbrack \eta \rbrack = \lim_{\rho_p \to 0} \,
{\eta_p \over \rho_p \, \eta_s}
\label{invis}
\end{equation}
where $\eta_p$ is the polymer contribution to the solution viscosity,
and $\eta_s$ is the solvent viscosity.
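The overlap scaling $\rho_p^* \sim M^{1-3\nu}$ discussed above is easy to illustrate numerically. The prefactor in $R(M)$ below is a purely hypothetical value; it drops out of the scaling ratio:

```python
import numpy as np

NA = 6.022e23     # Avogadro's number, 1/mol
nu = 3.0 / 5.0    # good-solvent scaling exponent

def overlap_density(M, R_prefactor=1.0e-11):
    """rho_p* = M / (N_A R^3) with R = R_prefactor * M**nu in metres;
    the prefactor is a purely hypothetical illustrative value."""
    R = R_prefactor * M**nu
    return M / (NA * R**3)

# rho_p* ~ M**(1 - 3 nu): a 100-fold increase in M changes rho_p*
# by the factor 100**(1 - 3*nu), independent of the prefactor
ratio = overlap_density(1.0e6) / overlap_density(1.0e4)
```

Since $1-3\nu<0$ in a good solvent, the overlap density falls sharply with molecular weight, which is why high molecular weight solutions are dilute only at very low concentrations.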
\subsection{Homogeneous flows}
Complex flow situations typically encountered in polymer
processing frequently involve a combination of shearing and
extensional deformations. The response of the polymer solution to these
two modes of deformation is very different. Consequently, predicting the
rheological properties of the solution under both shear and extensional
deformation is considered to be very important in order to properly
characterise the solution's behavior.
Rather than considering flows where both these modes of deformation are
simultaneously present, it is common in polymer kinetic theory to
analyse simpler flow situations called {\it homogeneous} flows,
where they may be treated separately.
A flow is called homogeneous, if the rate of strain tensor,
${\dot {{\mbox {\boldmath $\gamma$}}}} = (\nabla {{\mbox {\boldmath $v$}}}) (t) + (\nabla {{\mbox {\boldmath $v$}}})^\dagger (t)$,
where $ {{\mbox {\boldmath $v$}}}$ is the solution velocity field, is
independent of position. In other words, the solution velocity field
$ {{\mbox {\boldmath $v$}}}$ in homogeneous flows,
can always be represented as $ {{\mbox {\boldmath $v$}}} = {{\mbox {\boldmath $v$}}}_0 + {\mbox {\boldmath $\kappa$}} (t) \cdot { {{\mbox {\boldmath $r$}}}} $,
where $ {{\mbox {\boldmath $v$}}}_0$ is a constant vector, $ {\mbox {\boldmath $\kappa$}}(t) = \nabla {{\mbox {\boldmath $v$}}} (t) $
is a traceless tensor for incompressible fluids, and ${ {{\mbox {\boldmath $r$}}}}$ is the
position vector with respect to a laboratory fixed frame of reference.
While there is no spatial variation in the rate of strain tensor
in homogeneous flows,
there is no restriction with regard to its variation in time. Therefore, the
response of dilute solutions to {\it transient} shear and extensional flows is
also used to probe their character, as an alternative means
of characterisation independent of the steady-state material functions.
Two homogeneous flows, steady simple shear flow and
small amplitude oscillatory shear flow, that are frequently used
to validate the predictions of molecular theories which incorporate
hydrodynamic interaction, are described briefly below. A comprehensive
discussion of material functions in various flow situations can be
found in the book by Bird~{\it et~al.} \cite{birda}.
\subsection{Simple shear flows}
The rheological properties of a
dilute polymer solution can be obtained once
the stress tensor, $ {{\mbox {\boldmath $\tau$}}},$ is known. The stress tensor is considered to
be given by the sum of two contributions, $ {{\mbox {\boldmath $\tau$}}}= {{\mbox {\boldmath $\tau$}}}^s + {{\mbox {\boldmath $\tau$}}}^p$,
where $ {{\mbox {\boldmath $\tau$}}}^s$ is the contribution from the solvent, and
$ {{\mbox {\boldmath $\tau$}}}^p$ is the polymer contribution. Since the solvent is assumed to
be Newtonian, the solvent stress (using a compressive definition
for the stress tensor \cite{birda}) is given by,
$ {{\mbox {\boldmath $\tau$}}}^s= - \, \eta_s \, {\dot {{\mbox {\boldmath $\gamma$}}}}$. The nature of the
polymer contribution $ {{\mbox {\boldmath $\tau$}}}^p$ in simple shear flows is
discussed below.
Simple shear flows are described by a velocity field,
\begin{equation}
v_x={\dot \gamma_{yx}} \, y, \qquad v_y= 0, \qquad v_z=0
\label{sfvel}
\end{equation}
where the velocity gradient ${\dot \gamma_{yx}}$ can be a function of time.
From considerations of symmetry, one can show that the most general form
that the polymer contribution to the stress tensor can have in simple
shear flows is~\cite{birda},
\begin{equation}
{{\mbox {\boldmath $\tau$}}}^p= \pmatrix{
\tau^p_{xx} & \tau^p_{xy} & 0 \cr
\tau^p_{xy} & \tau^p_{yy}& 0 \cr
0 & 0 & \tau^p_{zz} \cr }
\label{sftau}
\end{equation}
where the matrix of components in a Cartesian coordinate system is
displayed.
The form of the stress tensor implies that only three
independent combinations can be measured
for an incompressible fluid. All simple shear flows are consequently
characterised by three material functions.
\subsubsection{Steady simple shear flows}
Steady simple shear flows are described by a constant shear rate,
${\dot \gamma} = {\vert \dot \gamma_{yx} \vert}$.
The tensor $ {\mbox {\boldmath $\kappa$}}$ is consequently given by the following matrix
representation in the laboratory-fixed coordinate system,
\begin{equation}
{\mbox {\boldmath $\kappa$}}={\dot \gamma} \, \pmatrix{
0 & 1 & 0 \cr
0 & 0 & 0 \cr
0 & 0 & 0 \cr }
\label{ssf1}
\end{equation}
The three independent
material functions used to characterize such flows are the viscosity,
$\eta_p$, and the first and second normal stress difference coefficients,
$\Psi_1 \,\,{\rm and}\,\, \Psi_2$, respectively.
These functions are defined by the following relations,
\begin{equation}
\tau_{xy}^p = - {\dot \gamma}\, \eta_p \, ; \quad
\tau_{xx}^p- \tau_{yy}^p = - {\dot \gamma^2}\, \Psi_1 \, ; \quad
\tau_{yy}^p- \tau_{zz}^p = - {\dot \gamma^2}\, \Psi_2
\label{ssf2}
\end{equation}
where $\tau_{xy}^p, \tau_{xx}^p, \tau_{yy}^p$ are the components of the
polymer contribution to the stress tensor $ {{\mbox {\boldmath $\tau$}}}^p$.
At low shear rates, the viscosity and the first normal
stress coefficient are observed to have constant values, $\eta_{p,0}$
and $\Psi_{1,0},$ termed the zero shear rate
viscosity and the zero shear rate first normal stress coefficient,
respectively. At these shear rates the fluid is consequently
Newtonian in its behavior.
At higher shear rates, most dilute polymer solutions show
{\it shear thinning \/} behavior. The viscosity and the first normal
stress coefficient decrease with increasing shear rate, and exhibit a
pronounced {\it power law \/} region. At very high shear rates, the
viscosity has been observed to level off and approach a constant value,
$\eta_{p,\infty}$, called the infinite shear rate viscosity. A high shear
rate limiting value has not been observed for the first normal stress
coefficient. The second normal stress coefficient is much smaller in
magnitude than the first normal stress coefficient, however its sign
has not been conclusively established experimentally. Note that the
normal stress
differences are zero for a Newtonian fluid. The existence of non-zero
normal stress differences is an indication that the fluid is viscoelastic.
Experiments with very high molecular weight systems seem to suggest that
polymer solutions can also {\it shear thicken}. It has been observed
that the viscosity passes through a minimum with increasing shear rate,
and then increases until a plateau region before shear
thinning again \cite{larson}.
It is appropriate here to note that shear flow material functions are usually
displayed in terms of the reduced variables, $\eta_p / \eta_{p,0}, \,
\Psi_1 / \Psi_{1,0}$ and $\Psi_2 / \Psi_1$, versus a non-dimensional
shear rate $\beta$, which is defined by $\beta=\lambda_p {\dot \gamma},$
where, $\lambda_p = \lbrack \eta \rbrack_0 \, M \, \eta_s / N_{\rm A} \,
k_{\rm B}\, T,$ is a characteristic relaxation time. The subscript 0 on
the square bracket indicates that this quantity is
evaluated in the limit of vanishing shear rate, $k_{\rm B}$ is
Boltzmann's constant and $T$ is the absolute temperature.
For dilute solutions one can show that, $\lbrack \eta \rbrack /
\lbrack \eta \rbrack_0 = \eta_p / \eta_{p,0} $ and $\beta = \eta_{p,0}
\, {\dot \gamma} / n_p \, k_{\rm B}\, T$.
\subsubsection{Small amplitude oscillatory shear flow}
A transient experiment that is used very often to characterise polymer
solutions is {\it small amplitude oscillatory shear flow}. The upper
plate in a simple shear experiment is made to undergo
sinusoidal oscillations in the plane of flow with frequency $\omega$.
For oscillatory flow between narrow slits,
the shear rate at any position in the fluid is given by \cite{birda},
${ \dot \gamma_{yx} (t)} =
{\dot \gamma_0} \, \cos \omega t$, where ${\dot \gamma_0}$ is the
amplitude. The tensor $ {\mbox {\boldmath $\kappa$}}(t)$ is consequently given by,
\begin{equation}
{\mbox {\boldmath $\kappa$}}(t)={\dot \gamma}_0 \, \cos \, \omega t \pmatrix{
0 & 1 & 0 \cr
0 & 0 & 0 \cr
0 & 0 & 0 \cr } \label{usf3}
\end{equation}
Since the polymer contribution to the shear stress in oscillatory
shear flow, $\tau_{yx}^p$, undergoes a phase shift with respect
to the shear strain and the strain rate, it is customary to
represent its dependence on time through the relation \cite{birda},
\begin{equation}
\tau_{yx}^p=- \eta^\prime(\omega)\, {\dot \gamma}_0 \, \cos \, \omega t
- \eta^{\prime\prime}(\omega)\,
{\dot \gamma}_0 \, \sin \, \omega t \label{usf4}
\end{equation}
where $\eta^\prime$ and $ \eta^{\prime\prime}$ are the
material functions characterising oscillatory shear flow. It is common
to represent them in a combined form as the complex viscosity,
$\eta^* =\eta^\prime - i \,\eta^{\prime\prime}$.
Two material functions which are entirely equivalent to
$\eta^\prime $ and $\eta^{\prime\prime}$ and which are often used
to display experimental data, are the storage modulus
$G^\prime = \omega \eta^{\prime\prime}/(n k_B T)$ and the loss modulus
$G^{\prime\prime} = \omega \eta^{\prime}/(n k_B T)$.
Note that the term involving
$G^\prime$ in equation~(\ref{usf4}) is in phase with the strain while
that involving $G^{\prime\prime}$ is in phase with the strain rate. For
an elastic material, $G^{\prime\prime}=0$, while for a Newtonian fluid,
$G^\prime=0 $. Thus, $G^{\prime}$ and $G^{\prime\prime}$ are
measures of the extent of the fluid's viscoelasticity.
In flow situations which have a small displacement gradient, termed the
{\it linear viscoelastic} flow regime, the stress tensor in polymeric
fluids is described by the linear constitutive relation,
\begin{equation}
{{\mbox {\boldmath $\tau$}}}^p= - \, \int_{- \infty}^t d\!s \, G(t-s)\,
{\dot {{\mbox {\boldmath $\gamma$}}}}(t,s)
\label{usf5}
\end{equation}
where $G(t)$ is the relaxation modulus.
When the amplitude $\dot \gamma_0$ is very small,
oscillatory shear flow is a linear viscoelastic flow and consequently
can also be described in terms of a relaxation modulus $G(t)$. Indeed,
expressions for the real and imaginary parts of the complex viscosity
can be found from the expression,
\begin{equation}
\eta^*= \int_0^\infty G(s)\, e^{-i \omega s} \, d\!s
\label{usf6}
\end{equation}
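As an illustration of eq.~(\ref{usf6}), consider a single-mode Maxwell relaxation modulus $G(t)=G_0\,e^{-t/\lambda}$ (an assumed form, not implied by the text); the numerically evaluated integral reproduces the analytic complex viscosity:

```python
import numpy as np

# Assumed single-mode Maxwell modulus (illustrative, not from the text):
G0, lam, w = 2.0, 0.5, 3.0      # modulus scale, relaxation time, frequency

# Midpoint-rule evaluation of eta* = int_0^inf G(s) exp(-i w s) ds,
# truncated where exp(-s/lam) is negligible
ds = 50.0 * lam / 500000
s = (np.arange(500000) + 0.5) * ds
eta_star = np.sum(G0 * np.exp(-s / lam) * np.exp(-1j * w * s)) * ds

# Analytic result for this modulus: eta* = G0 lam / (1 + i w lam), so
# eta' = G0 lam / (1 + (w lam)^2) and eta'' = G0 lam (w lam) / (1 + (w lam)^2)
eta_p = G0 * lam / (1.0 + (w * lam)**2)
eta_pp = G0 * lam * (w * lam) / (1.0 + (w * lam)**2)
```

With $\eta^*=\eta'-i\eta''$, the real part of the integral gives $\eta'$ and minus its imaginary part gives $\eta''$; the limits of eq.~(\ref{usf8}) then recover $\eta_{p,0}=G_0\lambda$ for this modulus.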
Experimental plots of $\log G^{\prime}$ and $\log G^{\prime\prime}$
versus nondimensional frequency show three distinct power law regimes.
The regime of interest is the intermediate regime \cite {larson}, where
for dilute solutions of high molecular weight polymers in good or theta
solvents, both $G^{\prime}$ and $G^{\prime\prime}$
have been observed to scale with frequency as $\omega^{2/3}$.
It is appropriate to note here that the zero shear rate viscosity
$\eta_{p,0}$ and the zero shear rate first normal stress
difference $\Psi_{1,0}$, which are linear viscoelastic properties,
can be obtained from the complex viscosity in the limit of vanishing
frequency,
\begin{equation}
\eta_{p,0} = \lim_{\omega\to 0} \, \eta^{\prime} (\omega) \, ; \quad \quad
\quad \Psi_{1,0} = \lim_{\omega\to 0} {2 \, \eta^{\prime\prime}
(\omega) \over \omega}
\label{usf8}
\end{equation}
\subsection{Scaling with molecular weight}
We have already discussed the scaling of the root mean square end-to-end
distance of a polymer molecule with its molecular weight.
In this section we discuss the scaling of the
zero shear rate intrinsic viscosity
$\lbrack \eta \rbrack_0$, and the translational diffusion coefficient
$D$, with the molecular weight, $M$. As we shall see later, these
have proven to be vitally important as experimental benchmarks in
attempts to improve predictions of molecular theories.
It has been found that the relationship between $\lbrack \eta \rbrack_0$
and $M$ can be expressed by the formula,
\begin{equation}
\lbrack \eta \rbrack_0 = K \, M^a
\label{sc1}
\end{equation}
where $a$ is called the {\it Mark--Houwink \/} exponent, and the
prefactor $K$ depends on the polymer--solvent system. The value of the
parameter $a$ lies between 0.5 and 0.8, with the lower limit
corresponding to theta conditions, and the upper limit to a good solvent
with a very high molecular weight polymer solute. Measured intrinsic
viscosities are routinely used to determine the molecular weight of samples once
the constants $K$ and $a$ are known for a particular polymer--solvent
pair.
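In practice this inversion of eq.~(\ref{sc1}) is a one-liner; the constants below are purely hypothetical placeholders, since $K$ and $a$ must be tabulated for each polymer--solvent pair:

```python
# Hypothetical Mark-Houwink constants (illustrative placeholders only;
# real K and a are tabulated for each polymer-solvent pair):
K, a = 1.0e-4, 0.7

def molecular_weight_from_viscosity(eta0):
    # Invert [eta]_0 = K * M**a for the molecular weight M
    return (eta0 / K) ** (1.0 / a)

M = 2.0e5
eta0 = K * M**a
M_recovered = molecular_weight_from_viscosity(eta0)
```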
The translational diffusion coefficient $D$ for a flexible polymer in a
dilute solution can be measured by dynamic light scattering methods, and
is found to scale with molecular weight as \cite{birdb},
\begin{equation}
D \sim M^{-\mu}
\label{sc2}
\end{equation}
where the exponent $\mu$ lies in the range 0.49 to 0.6. Most theta
solutions have values of $\mu$ close to the lower limit. On the other
hand, there is a wide variation in the values of $\mu$ reported for good solvents.
It appears that the upper limit is attained only for very large molecular weight
polymers and the intermediate values, corresponding to a {\it cross over
\/} region, are more typical of real polymers with moderate molecular
weights.
\subsection{Universal behavior}
It is appropriate at this point to discuss the most important
aspect of the behavior of polymer solutions
(as far as the theoretical modelling of these solutions is concerned)
that is revealed by the various experimental observations. When the
experimental data for high molecular weight systems is plotted
in terms of appropriately normalized coordinates,
the most noticeable feature is the exhibition of {\it universal \/}
behavior. By this it is meant that curves for different values of a
parameter, such as the molecular weight, the temperature, or even for
different types of monomers can be superposed onto a single curve.
For example, when the reduced intrinsic viscosity,
$\lbrack \eta \rbrack / \lbrack \eta \rbrack_0$
is plotted as a function of the reduced shear rate $\beta$,
the curves for polystyrene in different types of good solvents
at various temperatures collapse onto a single curve~\cite{birda}.
There is, however, an important point that must be noted. While polymers
dissolved in both theta solvents and good solvents show universal
behavior, the universal behavior is different in the two cases. An example of
this is the observed scaling behavior of various quantities with
molecular weight. The scaling is universal within the context of a particular
type of solvent. The term {\it universality class} is used to describe
the set of systems that exhibit common universal behavior \cite{strobl}.
Thus theta and good solvents belong to different universality classes.
The existence of universality classes is very significant for the
theoretical description of polymer solutions.
One might expect that a proper description of a polymer solution's
properties must incorporate the chemical structure of the
polymer into the model, since this determines its microscopic
behavior. Thus a detailed
consideration of bonds, sidegroups, {\it etc.} may be envisaged. However, the
universal behavior that is revealed by experiments suggests that
macroscopic properties of the polymer solution are determined by a few large
scale properties of the polymer molecule. Structural details may be
ignored since, at length scales of the order of nanometers, different
polymer molecules become equivalent to each other and behave in the
same manner. As a result, polymer solutions that differ from each
other with regard to the chemical structure or molecular weight
of the dissolved polymer molecules, the temperature, and
so on, still behave similarly as long as a few parameters that describe
large scale molecular features are the same.
This universal behavior justifies the introduction of crude mechanical
models, such as the bead-spring chain, to represent real polymer molecules.
On the other hand, it is interesting to note that in many
cases, the predictions of these models are not universal. It turns
out that apart from a basic length and time scale,
there occur other parameters that need to be prescribed, for example, the
number of beads $N$ in the chain, the strength of hydrodynamic
interaction $h^*$, the finite spring extensibility parameter $b$, and so on.
Consequently, any molecular theory that is
developed must ultimately verify that universal predictions
of transport properties are indeed obtained. The universal
predictions of kinetic theory models with hydrodynamic
interaction are discussed later in this chapter.
\section{BEAD-SPRING CHAIN MODELS}
\label{bscm}
The development of a kinetic theory for dilute solutions has been approached
in two different ways. One of them is an intuitive approach in the
configuration space of a single molecule, with a particular mechanical
model chosen to represent the macromolecule, such as a freely rotating
bead-rod chain or a freely jointed bead-spring chain \cite{kirkwood,rouse,zimm}.
The other approach is to develop a formal theory in
the phase space of the entire solution, with the polymer molecule
represented by a general mechanical model that may have internal constraints,
such as constant bond lengths and angles \cite{kramers,cbh,birdb}.
The results of the former
method are completely contained within the latter method, and several ad hoc
assumptions made in the intuitive treatment are clarified and placed in
proper context by the development of the rigorous phase space theory.
Kinetic theories developed for {\it flexible} macromolecules in dilute
solutions have generally pursued the intuitive approach, with the
bead-spring model proving to be the most popular. This is because the lack of
internal constraints in the model makes the formulation of the theory simpler.
Recently, Curtiss and Bird \cite{cb}, acknowledging the
`notational and mathematical' complexity of the rigorous phase space
theory for general mechanical models, have summarised the results of
phase space theory for the special case of bead-spring models with
arbitrary connectivity, {\it i.e.} for linear chains, rings, stars, combs and
branched chains.
In this section, since we are primarily concerned with reviewing recent
developments in theories for flexible macromolecules, we describe the
development of kinetic theories in the configuration space of a single molecule.
However, readers who wish to understand the origin of the ad hoc
expressions used for the Brownian forces and the hydrodynamic force,
and the formal development of expressions for the momentum and mass
flux, are urged to read the article by Curtiss and Bird \cite{cb}.
The general diffusion equation that governs the time evolution of
the distribution of configurations of a bead-spring chain subject to
various nonlinear effects, and the microscopic origin of the polymer
contribution to the stress tensor are discussed in this section.
The simplest bead-spring chain model, the {\it Rouse} model is also
discussed. We begin, however, by describing the equilibrium
statistical mechanical arguments that justify the
representation of a polymer molecule with a bead-spring chain model,
and we discuss the equilibrium configurations of such a model.
\subsection{Equilibrium configurations}
When a flexible polymer chain in a {\it quiescent} dilute
solution is considered at a lowered resolution,
{\it i.e.} at a coarse-grained level, it would appear like a
strand of highly coiled spaghetti, and
the extent of its coiling would depend on its degree of flexibility. A
quantity used to characterise a chain's flexibility is the
{\it orientational correlation function}, whose value $K_{\rm or}\, (\Delta
\ell)$, is a measure of the correlation in the direction of the chain at
two different points on the chain which are
separated by a distance $\Delta \ell$ along the
length of the chain. At sufficiently large distances
$\Delta \ell$, it is expected that the correlations vanish. However, it is
possible to define a {\it persistence length \/} $\ell_{\rm ps}$, such
that for $\Delta \ell > \ell_{\rm ps}$, orientational
correlations are negligible \cite{strobl}.
The existence of a persistence length suggests that as far as the
global properties of a flexible polymer chain are concerned,
such as the distribution function for the end-to-end
distance of the chain, the continuous chain could be replaced by
a freely jointed chain made up of
rigid links connected together at joints that are
completely flexible, whose linear segments are each longer than
the persistence length $\ell_{\rm ps}$, and whose contour length is the
same as that of the continuous chain.
The freely jointed chain undergoing thermal motion is clearly
analogous to a random-walk in space, with each random step in the walk
representing a link in the chain assuming a random orientation. Thus all the
statistical properties of a random-walk are, by analogy,
also the statistical properties of the freely jointed chain. The
equivalence of a polymer chain with a random-walk lies at the heart
of a number of fundamental results in polymer physics.
\subsubsection{Distribution functions and averages}
In polymer kinetic theory, the freely jointed chain is assumed to have
beads at the junction points between the links,
and is referred to as the freely jointed bead-rod chain \cite{birdb}.
The introduction of the beads is to account for the mass of the polymer
molecule and the viscous drag experienced by the polymer molecule.
While in reality the mass and drag are distributed continuously
along the length of the chain, the model assumes
that the total mass and drag may be distributed over a finite number of
discrete beads.
For a general chain model consisting of $N$ beads, which have
position vectors ${ {{\mbox {\boldmath $r$}}}}_{\nu}, \, \nu = 1,2, \ldots, N,$ in a
laboratory fixed coordinate system, the Hamiltonian is given by,
\begin{equation}
{\cal H} = {\cal K} +
\phi \, ({ {{\mbox {\boldmath $r$}}}}_1,{ {{\mbox {\boldmath $r$}}}}_2, \ldots, { {{\mbox {\boldmath $r$}}}}_N)
\label{ham}
\end{equation}
where $\cal K$ is the kinetic energy of the system and $\phi$ is the
potential energy. $\phi$ depends on the location of all the
particles.
The center of mass ${ {{\mbox {\boldmath $r$}}}}_c$ of the chain, and its velocity
${\dot { {{\mbox {\boldmath $r$}}}}}_c$ are given by
\begin{equation}
{ {{\mbox {\boldmath $r$}}}}_c = {1 \over N }\, \sum_{\nu=1}^N \, { {{\mbox {\boldmath $r$}}}}_{\nu} \quad ; \quad
{\dot { {{\mbox {\boldmath $r$}}}}}_c = {1 \over N }\, \sum_{\nu=1}^N \, {\dot { {{\mbox {\boldmath $r$}}}}}_{\nu}
\end{equation}
where ${\dot { {{\mbox {\boldmath $r$}}}}}_{\nu} = d { {{\mbox {\boldmath $r$}}}}_{\nu}/dt$.
The location of a bead with respect to the center of mass is specified
by the vector ${ {{\mbox {\boldmath $R$}}}}_{\nu} = { {{\mbox {\boldmath $r$}}}}_{\nu} - { {{\mbox {\boldmath $r$}}}}_c$.
If $Q_1, \, Q_2, \, \ldots \, Q_d$ denote the generalised internal
coordinates required to specify the configuration of the
chain, then the kinetic energy of the chain in terms
of the velocity of the center of mass and the generalised velocities
${\dot Q}_s = d Q_s /dt$, is given by~\cite{birdb},
\begin{equation}
{\cal K} = {m \,N \over 2 } \, {\dot { {{\mbox {\boldmath $r$}}}}}_c^2 +
{1 \over 2} \, \sum_s \sum_t \,
g_{st}\, {\dot Q}_s \,{\dot Q}_t
\end{equation}
where the indices $s$ and $t$ vary from 1 to $d$,
$m$ is the mass of a bead, and
$g_{st}$ is the {\it metric matrix}, defined by,
$g_{st} = m \, \sum_{\nu}\, ({\partial { {{\mbox {\boldmath $R$}}}}_{\nu} / \partial Q_s})
\cdot ({\partial { {{\mbox {\boldmath $R$}}}}_{\nu} / \partial Q_t} ) $.
In terms of the momentum of the center of mass,
${ {{\mbox {\boldmath $p$}}}}_c = m \, N \, {\dot {{\mbox {\boldmath $r$}}}}_c$, and the generalised momenta
$P_s$, defined by,
$P_s = ({\partial {\cal K} / \partial {\dot Q}_s}), $
the kinetic energy has the form~\cite{birdb},
\begin{equation}
{\cal K} = {1 \over 2 m \,N} \, { {{\mbox {\boldmath $p$}}}}_c^2 + {1 \over 2} \sum_s \sum_t
G_{st}\, P_s \,P_t
\label{kin}
\end{equation}
where, $G_{st}$ are the components of the matrix inverse to the metric matrix,
$\sum_t \, G_{st} \, g_{tu} = \delta_{su},$ and $\delta_{su}$ is the
Kronecker delta.
The probability, ${\cal P}_{\rm eq} \, d{ {{\mbox {\boldmath $r$}}}}_c \, dQ \, d{ {{\mbox {\boldmath $p$}}}}_c \,d P$,
that an $N$-bead chain model has a configuration
in the range $ \, d{ {{\mbox {\boldmath $r$}}}}_c \, dQ \,$ about $ \, { {{\mbox {\boldmath $r$}}}}_c, \, Q \,$ and
momenta in the range
$\, d{ {{\mbox {\boldmath $p$}}}}_c \, dP \,$ about $ \, { {{\mbox {\boldmath $p$}}}}_c, \, P \,$ is given by,
\begin{equation}
{\cal P}_{\rm eq}\, \bigl( \, { {{\mbox {\boldmath $r$}}}}_c, \, Q, \, { {{\mbox {\boldmath $p$}}}}_c, \, P \, \bigr)
= {\cal Z}^{-1} \, e^{- {\cal H} / k_{\rm B} T}
\end{equation}
where $\cal Z$ is the {\it partition function}, defined by,
\begin{equation}
{\cal Z} = \int\!\!\int\!\!\int\!\!\int \, e^{- {\cal H} / k_{\rm B}T} \,
d{ {{\mbox {\boldmath $r$}}}}_c \, dQ \, d{ {{\mbox {\boldmath $p$}}}}_c \, dP
\end{equation}
The abbreviations, $Q$ and $dQ$ have been used to denote $\, Q_1, \,
Q_2, \, \ldots, \, Q_d \, $ and $\, dQ_1 \, dQ_2 \, \ldots \,
dQ_d \,$, respectively, and a similar notation has been used for the momenta.
The {\it configurational distribution function} for a general $N$-bead chain,
$\psi_{\rm eq} \, (\, Q \, ) \, dQ$,
which gives the probability that the internal configuration
is in the range $\, dQ\, $ about $\, Q \, $,
is obtained by integrating
${\cal P}_{\rm eq}$ over all the momenta and over the coordinates of
the center of mass,
\begin{equation}
\psi_{\rm eq} \, ( \, Q \, )
= {\cal Z}^{-1} \, {\int\!\!\int\!\!\int \, e^{- {\cal H} / k_{\rm B} T}
d{ {{\mbox {\boldmath $r$}}}}_c \, d{ {{\mbox {\boldmath $p$}}}}_c \, dP }
\end{equation}
For an $N$-bead chain whose potential energy does not depend on the
location of the center of mass, the following result is obtained
by carrying out the
integrations over ${ {{\mbox {\boldmath $p$}}}}_c$ and $P$ \cite{birdb},
\begin{equation}
\psi_{\rm eq} \, ( \, Q \, )
= {\sqrt{g(Q)} \, e^{- {\phi(Q)} / k_{\rm B} T} \over
\int {\sqrt{g(Q)} \, e^{- {\phi(Q)} / k_{\rm B} T} \, dQ }}
\label{confdis}
\end{equation}
where, $g(Q)=\det (g_{st}) = 1/\det (G_{st})$.
An expression that is slightly different from the random-walk distribution
is obtained on evaluating the right hand side of equation~(\ref{confdis})
for a freely jointed bead-rod chain.
Note that the random-walk distribution is obtained by assuming that
each link in the chain is oriented independently of all the other links,
and that all orientations of the link are equally likely.
On the other hand, equation~(\ref{confdis}) suggests that
the probability for the links in a freely jointed chain
being perpendicular to each other, for a given solid angle, is slightly
larger than the probability of being in the same direction.
In spite of this result, the configurational
distribution function for a freely jointed bead-rod chain is almost
always assumed to be given by the random-walk distribution \cite{birdb}.
Hereafter in this chapter, we shall refer to a freely
jointed bead-rod chain whose configurational distribution function is
assumed to be given by the random-walk distribution as an {\it ideal} chain.
For future reference, note that the ran\-dom-walk dis\-tribution is given by,
\begin{equation}
\psi_{\rm eq} \, ( \,\theta_1, \ldots, \theta_{N-1},
\phi_1, \ldots, \phi_{N-1} \, ) = \Biggl( {1 \over 4 \pi} \Biggr)^{N-1}
\, \prod_{i=1}^{N-1} \, \sin \, \theta_i
\label{ranwalk}
\end{equation}
where $\theta_i$ and $\phi_i$ are the polar angles for the $i {\rm th}$
link in the chain \cite {birdb}.
Since the polymer chain explores many states in the duration of an observation,
quantities observed on macroscopic length and time scales
are {\it averages} of functions of the configurations and
momenta of the polymer chain. A few definitions of averages that are
used frequently in the remainder of this chapter are now introduced.
The average value of a function
$X \, \bigl( \, { {{\mbox {\boldmath $r$}}}}_c, \, Q, \, { {{\mbox {\boldmath $p$}}}}_c, \, P \, \bigr)$, defined in
the phase space of a polymer molecule is given by,
\begin{equation}
{\bigl\langle} X {\bigr\rangle}_{\rm eq} = \int\!\!\int\!\!\int\!\!\int \,
X \, {\cal P}_{\rm eq}\, d{ {{\mbox {\boldmath $r$}}}}_c \, dQ \, d{ {{\mbox {\boldmath $p$}}}}_c \, dP
\end{equation}
We often encounter quantities $X$ that depend only on the internal
configurations of the polymer chain and not on the center of mass
coordinates or momenta. In addition, if the potential energy of the
chain does not depend on the location of the center of mass, then it is
straightforward to see that the equilibrium average of $X$ is given by,
\begin{equation}
{\bigl\langle} X {\bigr\rangle}_{\rm eq} = \int \, X \, \psi_{\rm eq} \, dQ
\label{conave}
\end{equation}
\subsubsection{The end-to-end vector}
The end-to-end vector ${ {{\mbox {\boldmath $r$}}}}$ of a general bead-rod chain can be
found by summing the vectors that represent each link in the chain,
\begin{equation}
{ {{\mbox {\boldmath $r$}}}} = \sum_{i=1}^{N-1} \, a \, { {{\mbox {\boldmath $u$}}}}_i
\end{equation}
where $a$ is the length of a rod, and ${ {{\mbox {\boldmath $u$}}}}_i$ is a unit vector in
the direction of the $i {\rm th}$ link of the chain. Note that the components of
the unit vectors ${ {{\mbox {\boldmath $u$}}}}_i, \, i=1,2,\ldots,N-1 \,$, can be
expressed in terms of the generalised coordinates $Q$ \cite{birdb}.
The probability $P_{\rm eq} ({ {{\mbox {\boldmath $r$}}}}) \, d { {{\mbox {\boldmath $r$}}}}$, that the end-to-end vector
of a general bead-rod chain is in the range $d { {{\mbox {\boldmath $r$}}}}$
about ${ {{\mbox {\boldmath $r$}}}}$ can be found
by suitably contracting the configurational distribution function
$\psi_{\rm eq} \, ( \, Q \, )$ \cite{birdb},
\begin{equation}
P_{\rm eq} ({ {{\mbox {\boldmath $r$}}}})
= \int \delta \Biggl(\, { {{\mbox {\boldmath $r$}}}} - \sum_i \, a \, { {{\mbox {\boldmath $u$}}}}_i \, \Biggr) \,
\psi_{\rm eq} \, ( \, Q \, ) \, dQ
\label{endvec}
\end{equation}
where $\delta ( . )$ represents a Dirac delta function.
With $\psi_{\rm eq} \, ( \, Q \, )$ given by the ran\-dom-walk
distribution~(\ref{ranwalk}), it can be shown that for large values
of $N$ and $r = \vert { {{\mbox {\boldmath $r$}}}} \vert < 0.5 N a $, the probability
distribution for the end-to-end vector is a Gaussian distribution,
\begin{equation}
P_{\rm eq} ({ {{\mbox {\boldmath $r$}}}})
= \Biggl( \, {3 \over 2 \pi (N-1) a^2 } \, \Biggr)^{3/2} \,
\exp \, \Biggl({-3 \, r^2 \over 2 \, (N-1) \, a^2 } \Biggr)
\label{gendvec}
\end{equation}
The distribution function for the end-to-end vector of an {\it ideal} chain
with a large number of beads $N$ is therefore given by the Gaussian
distribution~(\ref{gendvec}).
The mean square end-to-end distance, $ {\bigl\langle} \, r^2 \, {\bigr\rangle}_{\rm eq},$
for an ideal chain can then be shown to be,
$ {\bigl\langle} \, r^2 \, {\bigr\rangle}_{\rm eq} = ( N-1 ) \, a^2.$
This is the well known result that the root mean square of the
end-to-end distance of a random-walk increases as the square root of the
number of steps. In the context of the polymer chain, since the
number of beads in the chain is directly proportional to the molecular
weight, this result implies that $R \sim M^{0.5}$. We have seen earlier
that this is exactly the scaling observed in theta solvents. Thus one
can conclude that a polymer chain in a theta solvent behaves like an
ideal chain.
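The random-walk result quoted above is easily checked by a small Monte Carlo experiment. The sketch below assumes nothing beyond the random-walk distribution itself (independently and uniformly oriented links) and reproduces $ {\bigl\langle} \, r^2 \, {\bigr\rangle}_{\rm eq} = (N-1)\,a^2 $ to within sampling error.

```python
import numpy as np

rng = np.random.default_rng(0)

def freely_jointed_r2(N, a, samples):
    """Monte Carlo estimate of <r^2> for a freely jointed chain of N beads,
    i.e. N-1 rods of length a with independent, uniformly random orientations."""
    # Uniform random unit vectors: normalise Gaussian triples.
    u = rng.normal(size=(samples, N - 1, 3))
    u /= np.linalg.norm(u, axis=2, keepdims=True)
    r = a * u.sum(axis=1)                 # end-to-end vector of each sample chain
    return np.mean(np.sum(r * r, axis=1))

N, a = 100, 1.0
est = freely_jointed_r2(N, a, 20000)
# Random-walk theory: <r^2>_eq = (N - 1) * a**2
```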
\subsubsection{The bead-spring chain}
Consider an isothermal system consisting of a bead-rod chain with a
constant end-to-end vector ${ {{\mbox {\boldmath $r$}}}}$, sus\-pended in a bath of so\-lvent
molecules at temperature $T$.
The partition function of such a constrained system can be
found by contracting
the partition function in the con\-straint-free case,
\begin{equation}
{\cal Z} \, ({ {{\mbox {\boldmath $r$}}}}) = \int\!\!\int\!\!\int\!\!\int \,
\delta \Biggl(\, { {{\mbox {\boldmath $r$}}}} - \sum_i \, a \, { {{\mbox {\boldmath $u$}}}}_i \, \Biggr) \,
e^{- {\cal H} / k_{\rm B}T} \,
d{ {{\mbox {\boldmath $r$}}}}_c \, dQ \, d{ {{\mbox {\boldmath $p$}}}}_c \, dP
\label{parfun}
\end{equation}
For an $N$-bead chain whose potential energy does not depend on the
location of the center of mass, the integrations over ${ {{\mbox {\boldmath $r$}}}}_c$,
${ {{\mbox {\boldmath $p$}}}}_c$ and $P$ can be carried out to give,
\begin{equation}
{\cal Z} \, ({ {{\mbox {\boldmath $r$}}}}) = C \, \int \,
\delta \Biggl(\, { {{\mbox {\boldmath $r$}}}} - \sum_i \, a \, { {{\mbox {\boldmath $u$}}}}_i \, \Biggr) \,
\psi_{\rm eq} \, ( \, Q \, ) \, dQ
\end{equation}
Comparing this equation with the equation for the end-to-end
vector (\ref{endvec}), one can conclude that,
\begin{equation}
{\cal Z} \, ({ {{\mbox {\boldmath $r$}}}}) = C \, P_{\rm eq} \, ({ {{\mbox {\boldmath $r$}}}})
\label{constraintpf}
\end{equation}
In other words, the partition function of a general bead-rod chain
(except for a multiplicative factor independent of ${ {{\mbox {\boldmath $r$}}}}$)
is given by $P_{\rm eq} \, ({ {{\mbox {\boldmath $r$}}}})$. This result
is essential to establish the motivation for the introduction of the
bead-spring chain model.
At constant temperature, the change in free energy accompanying a change
in the end-to-end vector ${ {{\mbox {\boldmath $r$}}}}$ of a bead-rod chain, by an infinitesimal amount
$d{ {{\mbox {\boldmath $r$}}}}$, is equal to the work done in the process,
{\it i.e.,} $dA = { {{\mbox {\boldmath $F$}}}} \cdot d{ {{\mbox {\boldmath $r$}}}}$, where ${ {{\mbox {\boldmath $F$}}}}$ is the force
required for the extension.
The Helmholtz free energy of a general bead-rod chain with fixed end-to-end
vector ${ {{\mbox {\boldmath $r$}}}}$ can be found from equation~(\ref{constraintpf}),
\begin{equation}
A( { {{\mbox {\boldmath $r$}}}})= - \, k_{\rm B} T \, \ln {\cal Z} ({ {{\mbox {\boldmath $r$}}}})
= A_0 - k_{\rm B} T \, \ln P_{\rm eq} \, ({ {{\mbox {\boldmath $r$}}}})
\label{helm}
\end{equation}
where $A_0$ is a constant independent of ${ {{\mbox {\boldmath $r$}}}}$.
For an ideal chain, it follows from
equations (\ref{gendvec}) and (\ref{helm}), that a change in the
end-to-end vector
by $d{ {{\mbox {\boldmath $r$}}}}$, leads to a change in the free energy $dA$, given by,
\begin{equation}
dA( { {{\mbox {\boldmath $r$}}}}) = {3 k_{\rm B} T \over (N-1) a^2 } \; { {{\mbox {\boldmath $r$}}}} \cdot d{ {{\mbox {\boldmath $r$}}}}
\label{spring}
\end{equation}
Equation (\ref{spring}) implies that there is a {\it tension}
${ {{\mbox {\boldmath $F$}}}}$ in the ideal chain,
$ { {{\mbox {\boldmath $F$}}}} = ( {3 k_{\rm B} T / (N-1) a^2 } ) \, { {{\mbox {\boldmath $r$}}}}, $
which resists any attempt at chain extension. Furthermore, this tension is
proportional to the end-to-end vector ${ {{\mbox {\boldmath $r$}}}}$. This implies that the
ideal chain acts like a {\it Hookean} spring, with a spring constant $H$
given by,
\begin{equation}
H = {3 k_{\rm B} T \over (N-1) a^2 }
\end{equation}
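As a numerical aside (not part of the derivation above), the entropic tension can be verified directly: differentiating $A = A_0 - k_{\rm B} T \, \ln P_{\rm eq}$ with the Gaussian distribution~(\ref{gendvec}) by a central difference recovers $F = H r$, since $\ln P_{\rm eq}$ is quadratic in $r$. The sketch works in reduced units with $k_{\rm B}T = a = 1$, an arbitrary illustrative choice.

```python
import math

# Reduced units: kB*T = 1 and a = 1 (illustrative choice).
kT, a, N = 1.0, 1.0, 100
H = 3.0 * kT / ((N - 1) * a**2)      # Hookean spring constant

def ln_Peq(r):
    """log of the Gaussian end-to-end distribution (\ref{gendvec}), r = |r|."""
    return 1.5 * math.log(3.0 / (2.0 * math.pi * (N - 1) * a**2)) \
           - 3.0 * r**2 / (2.0 * (N - 1) * a**2)

def tension(r, h=1e-6):
    """F = dA/dr = -kT * d(ln Peq)/dr, evaluated by central difference."""
    return -kT * (ln_Peq(r + h) - ln_Peq(r - h)) / (2.0 * h)

# tension(r) should agree with the Hookean result H * r for any r.
```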
The equivalence of the behavior of an ideal chain to that of a Hookean spring
is responsible for the introduction of
the bead-spring chain model. Since long enough {\it
sub-chains} within the ideal chain also have normally distributed
end-to-end vectors, the entire ideal chain may be replaced by beads
connected to each other by springs. Note that each bead in a bead-spring chain
represents the mass of a sub-chain of the ideal chain, while the spring
imitates the behavior of the end-to-end vector of the sub-chain.
The change in the Helmholtz free energy of an ideal chain due to a
change in the end-to-end vector is purely due to entropic
considerations. The internal energy, which has only the kinetic
energy contribution, does not depend on the end-to-end vector.
Increasing the end-to-end vector of the chain decreases the
number of allowed configurations, and this change is resisted by the
chain. The entropic origin of the resistance is responsible for the
use of the phrase {\it entropic spring} to describe the springs of the
bead-spring chain model.
The potential energy $S$, of a bead-spring chain due to the presence of
Hookean springs is the sum of the
potential energies of all the springs in the chain. For a bead-spring
chain with $N$ beads, this is given by,
\begin{equation}
S = {1 \over 2} \, H \, \sum_{i=1}^{N-1} \, { {{\mbox {\boldmath $Q$}}}}_i \cdot { {{\mbox {\boldmath $Q$}}}}_i
\end{equation}
where ${ {{\mbox {\boldmath $Q$}}}}_i= { {{\mbox {\boldmath $r$}}}}_{i+1} - { {{\mbox {\boldmath $r$}}}}_{i}$
is the {\it bead connector vector} between the beads $i$ and $i+1$.
The configurational distribution function for a Hookean bead-spring chain may be
found from equation~(\ref{confdis}) by substituting $\phi(Q) = S$,
with the Cartesian components of the connector
vectors chosen as the generalised coordinates $Q_s$. The number of
generalised coordinates is consequently $d=3N-3$, reflecting the lack of any
constraints in the model. Since $g(Q)$ is a constant independent of $Q$ for
the bead-spring chain model \cite{birdb}, one can show that,
\begin{equation}
\psi_{\rm eq} \, (\,{ {{\mbox {\boldmath $Q$}}}}_1, \ldots, \, { {{\mbox {\boldmath $Q$}}}}_{N-1} \,)
= \prod_j \, \Biggl( \, {H \over 2 \pi k_{\rm B} T } \, \Biggr)^{3/2} \,
\exp \, \Biggl( {- H \over 2 \,k_{\rm B} T } \; { {{\mbox {\boldmath $Q$}}}}_j \cdot { {{\mbox {\boldmath $Q$}}}}_j \Biggr)
\label{equidis}
\end{equation}
It is clear from equation~(\ref{equidis}) that the equilibrium
distribution function for each connector vector
in the bead-spring chain is a Gaussian distribution, and these
distributions are independent of each other. From the property of
Gaussian distributions, it follows that the vector connecting any
two beads in a bead-spring chain at equilibrium also obeys a Gaussian
distribution.
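This closure property is easy to verify by sampling. The sketch below works in reduced units with $k_{\rm B}T/H = 1$ (an arbitrary illustrative choice), draws independent Gaussian connector vectors as in equation~(\ref{equidis}), and confirms that the vector joining beads $\mu$ and $\nu$ has mean-square length $(\nu - \mu)\,(3 k_{\rm B}T/H)$.

```python
import numpy as np

rng = np.random.default_rng(1)
kT_over_H = 1.0        # reduced units: each connector component has variance kT/H
N = 20                 # number of beads
samples = 50000

# Independent Gaussian connector vectors Q_1 ... Q_{N-1}, as in eq. (equidis).
Q = rng.normal(scale=np.sqrt(kT_over_H), size=(samples, N - 1, 3))

mu, nu = 3, 11         # any two beads (1-indexed, as in the text)
# r_nu - r_mu is the sum of the connectors Q_mu, ..., Q_{nu-1}.
r_munu = Q[:, mu - 1:nu - 1, :].sum(axis=1)
msd = np.mean(np.sum(r_munu**2, axis=1))
# Theory: <|r_nu - r_mu|^2> = (nu - mu) * 3 * kT/H
```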
The Hookean bead-spring chain model has the unrealistic feature that the
magnitude of the end-to-end vector has no upper bound and can in fact
extend to infinity. On the other hand, the real polymer molecule
has a finite fully extended length. This deficiency of the bead-spring
chain model is not serious at equilibrium, but becomes important in
strong flows where the polymer molecule is highly extended. Improved models
seek to correct this deficiency by modifying the force law between the beads of
the chain such that the chain stiffens as its extension increases. An
example of such a nonlinear spring force law that is very commonly
used in polymer literature is the {\it finitely extensible
nonlinear elastic} (FENE) force law~\cite{birdb}.
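The FENE spring force has the well-known form $F = H Q / \bigl( 1 - Q^2/Q_0^2 \bigr)$, where $Q_0$ is the maximum spring length~\cite{birdb}. The short comparison below (reduced units, illustrative parameter values) shows the stiffening behavior described above: the FENE and Hookean laws agree at small extensions, while the FENE force diverges as $Q \to Q_0$.

```python
H = 1.0       # spring constant (reduced units, illustrative)
Q0 = 5.0      # maximum spring extension (illustrative)

def hookean(Q):
    """Linear (Hookean) spring force magnitude."""
    return H * Q

def fene(Q):
    """FENE spring force magnitude, F = H*Q / (1 - (Q/Q0)**2)."""
    assert 0.0 <= Q < Q0, "FENE spring cannot extend beyond Q0"
    return H * Q / (1.0 - (Q / Q0) ** 2)

# Near equilibrium (Q << Q0) the two laws are nearly indistinguishable;
# near full extension the FENE spring is far stiffer than the Hookean one.
```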
\subsubsection{Excluded volume}
The universal behavior of polymers dissolved in theta solvents can be
explained by recognising that all high molecular weight polymers
dissolved in theta solvents behave like ideal chains. However, a polymer chain
cannot be identical to an ideal chain since unlike the ideal chain,
two parts of a polymer chain cannot occupy the same location at the
same time. In the very special case of a theta solvent, the excluded
volume force is just balanced by the repulsion of the solvent molecules.
In the more commonly occurring case of good solvents, the excluded volume
interaction acts between any two parts of the chain
that are close to each other in space, irrespective of their distance
from each other along the chain length, and leads to a swelling of the chain.
This is a {\it long range interaction}, and as a result,
it seriously alters the macroscopic
properties of the chain. Indeed there is a qualitative difference, and
this difference cannot be treated as a small perturbation from the
behavior of an ideal chain~\cite{strobl}. Curiously enough,
however, all swollen chains behave similarly to each other, and
modelling this universal behavior was historically one of the challenges of
polymer physics~\cite{strobl,yamakawa,degennes,doied,desclo}.
Here, we very briefly mention the manner in which the problem is
formulated in the case of bead-spring chains.
The presence of excluded volume causes the polymer chain to swell.
However, the swelling ceases when the
entropic retractive force balances the excluded
volume force. The retractive force arises because
chain expansion decreases the number of conformational states available
to the polymer chain. This picture of the microscopic
phenomenon is captured by writing the potential energy of the
bead-spring chain as a sum of the spring potential energy and
the potential energy due to excluded volume interactions.
The excluded volume potential energy is found by summing the
interaction energy over all pairs of beads $\mu$ and $\nu$,
$E = (1 / 2) \sum_{\mu,\nu = 1 \atop \mu \ne \nu}^N \, E \left( { {{\mbox {\boldmath $r$}}}}_{\nu}
- { {{\mbox {\boldmath $r$}}}}_{\mu} \right), $
where $E \left( { {{\mbox {\boldmath $r$}}}}_{\nu} - { {{\mbox {\boldmath $r$}}}}_{\mu} \right)$ is a short-range
function usually taken as,
$ E \left( { {{\mbox {\boldmath $r$}}}}_{\nu} - { {{\mbox {\boldmath $r$}}}}_{\mu} \right) = v \, k_{\rm B} T \,
\delta \left( { {{\mbox {\boldmath $r$}}}}_{\nu} - { {{\mbox {\boldmath $r$}}}}_{\mu} \right);$
$v$ being the excluded volume parameter with dimensions of volume.
The total potential energy of a Hookean bead-spring chain with
$\delta$-function excluded volume interactions is consequently,
\begin{equation}
\phi = {1 \over 2} \, H \, \sum_{i=1}^{N-1} \, { {{\mbox {\boldmath $Q$}}}}_i \cdot { {{\mbox {\boldmath $Q$}}}}_i
+ {1 \over 2} \, v \, k_{\rm B} T \, \sum_{\mu,\nu = 1 \atop \mu \ne \nu}^N \,
\delta \left( { {{\mbox {\boldmath $r$}}}}_{\nu} - { {{\mbox {\boldmath $r$}}}}_{\mu} \right)
\label{phitot}
\end{equation}
The equilibrium configurational distribution function of a polymer chain
in the presence of Hookean springs and excluded volume can be
found by substituting equation~(\ref{phitot}) into
equation~(\ref{confdis}), and all average properties of the chain can be
found by using equation~(\ref{conave}). Solutions to these equations in
the limit of long chains have been found by using a number of
approximate schemes since an exact treatment is impossible. The most
accurate scheme involves the use of field theoretic and renormalisation group
methods~\cite{desclo}. The universal scaling of a
number of equilibrium properties of
dilute polymer solutions with good solvents is correctly predicted
by this theory. For instance, the end-to-end distance is predicted to scale
with molecular weight as, $R \sim M^{0.588}$.
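The way such scaling exponents are read off from data can be sketched as follows: on synthetic end-to-end distances generated with a hypothetical prefactor and the good-solvent exponent, a linear fit in log--log coordinates recovers the exponent.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic data obeying R ~ M**0.588 (hypothetical prefactor 0.1),
# with a little multiplicative noise to mimic measurement scatter.
M = np.logspace(4, 7, 12)
R = 0.1 * M**0.588 * np.exp(rng.normal(scale=0.01, size=M.size))

# The scaling exponent is the slope of log R versus log M.
nu_fit, log_prefactor = np.polyfit(np.log(M), np.log(R), 1)
```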
The spring potential in equation~(\ref{phitot}) has been
derived by considering the Helmholtz free energy of an ideal chain,
{\it i.e.} under theta conditions. It seems reasonable to expect
that a more accurate derivation
of the retractive force in the chain due to entropic considerations
would require the treatment of a polymer chain
in a good solvent. This would lead to a
non-Hookean force law between the beads~\cite{degennes,ottbook}.
Such non-Hookean force laws have so far not been treated in
non-equilibrium theories for dilute polymer solutions with good
solvents.
\subsection{Non-equilibrium configurations}
Unlike in the case of equilibrium solutions,
it is not possible to derive the phase space distribution function
for non-equilibrium solutions from very general arguments.
As we shall see here, it is only possible to derive a partial
differential equation that governs the evolution of the configurational
distribution function, by considering the conservation of
probability in phase space and the equation of motion for
the particular model chosen. The arguments relevant to a bead-spring chain
are developed below.
\subsubsection{Distribution functions and averages}
The phase space of a bead-spring chain with $N$ beads can be chosen to
be given by the $6N$ components of the bead position coordinates
and the bead velocities, such that,
\[ {\cal P} \, \bigl( \, { {{\mbox {\boldmath $r$}}}}_1, \ldots, { {{\mbox {\boldmath $r$}}}}_N, \,
{ {\dot {{\mbox {\boldmath $r$}}}}}_1, \ldots, { {\dot {{\mbox {\boldmath $r$}}}}}_N, \, t \, \bigr) \,
d{ {{\mbox {\boldmath $r$}}}}_1 \ldots d{ {{\mbox {\boldmath $r$}}}}_N \,
d { {\dot {{\mbox {\boldmath $r$}}}}}_1 \ldots d{ {\dot {{\mbox {\boldmath $r$}}}}}_N \]
is the probability that the bead-spring chain has an instantaneous
configuration in the range $d{ {{\mbox {\boldmath $r$}}}}_1, \ldots , d{ {{\mbox {\boldmath $r$}}}}_N$ about
$ { {{\mbox {\boldmath $r$}}}}_1, \ldots , { {{\mbox {\boldmath $r$}}}}_N$, and the beads in the chain
have velocities in the range
$d { {\dot {{\mbox {\boldmath $r$}}}}}_1, \ldots , d{ {\dot {{\mbox {\boldmath $r$}}}}}_N$ about
$ { {\dot {{\mbox {\boldmath $r$}}}}}_1, \ldots , { {\dot {{\mbox {\boldmath $r$}}}}}_N$.
The configurational distribution function $\Psi$,
can be found by integrating ${\cal P}$ over all the bead velocities,
\begin{equation}
\Psi \, ( \, { {{\mbox {\boldmath $r$}}}}_1, \ldots, \, { {{\mbox {\boldmath $r$}}}}_{N}, \, t \,)
= \int\! \ldots \!\int \, {\cal P} \,
d { {\dot {{\mbox {\boldmath $r$}}}}}_1 \ldots d{ {\dot {{\mbox {\boldmath $r$}}}}}_N
\end{equation}
The distribution of internal configurations $\psi $, is given by,
\begin{equation}
\psi \, ( \, { {{\mbox {\boldmath $Q$}}}}_1, \ldots, \, { {{\mbox {\boldmath $Q$}}}}_{N-1}, \, t \,)
= \int \, \Psi^\prime \, ( \, { {{\mbox {\boldmath $r$}}}}_c, \, { {{\mbox {\boldmath $Q$}}}}_1,
\ldots, \, { {{\mbox {\boldmath $Q$}}}}_{N-1}, \, t \,) \, d { {{\mbox {\boldmath $r$}}}}_c
\end{equation}
where, $ \Psi^\prime = \Psi,$ as a result of the
Jacobian relation for the configurational vectors~\cite{birdb},
\[
\left\vert {\, {\partial( \, { {{\mbox {\boldmath $r$}}}}_1, \ldots, \, { {{\mbox {\boldmath $r$}}}}_{N} \, ) \over
\partial ( \, { {{\mbox {\boldmath $r$}}}}_c, { {{\mbox {\boldmath $Q$}}}}_1, \ldots, \, { {{\mbox {\boldmath $Q$}}}}_{N-1} \,)} }\,
\right\vert = 1
\]
Note that the normalisation condition
$\int \, \psi \, d{ {{\mbox {\boldmath $Q$}}}}_1 \, d{ {{\mbox {\boldmath $Q$}}}}_2 \, \ldots \, d{ {{\mbox {\boldmath $Q$}}}}_{N-1}
= 1$ is satisfied by $\psi$.
When the configurations of the bead-spring chain do not depend on the location
of the center of mass, as in the case of homogeneous flows with no concentration
gradients, $ (1 / V) \, \psi = \Psi, $
where $V$ is the volume of the solution.
The velocity-space distribution function $\Xi$ is defined by,
\begin{equation}
{\Xi } \, \bigl( \, { {{\mbox {\boldmath $r$}}}}_1, \ldots, { {{\mbox {\boldmath $r$}}}}_N, \,
{ {\dot {{\mbox {\boldmath $r$}}}}}_1, \ldots, { {\dot {{\mbox {\boldmath $r$}}}}}_N, \, t \, \bigr) \,
= {{\cal P} \over \Psi}
\end{equation}
Note that $\Xi$ satisfies the normalisation condition
$\int \! \ldots \! \int \, \Xi \,
d { {\dot {{\mbox {\boldmath $r$}}}}}_1 \ldots d{ {\dot {{\mbox {\boldmath $r$}}}}}_N = 1$.
Under certain circumstances that are discussed later, it is common to
assume that the velocity-space distribution function is
Maxwellian about the mass-average solution velocity,
\begin{equation}
\Xi = {\cal N}_M \,
\exp \, \biggl[ \, - \, {1 \over 2 \, k_{\rm B} T } \,
\Bigl[ \, m ( { {\dot {{\mbox {\boldmath $r$}}}}}_1 -
{ {{\mbox {\boldmath $v$}}}})^2 + \ldots + m ( { {\dot {{\mbox {\boldmath $r$}}}}}_N -
{ {{\mbox {\boldmath $v$}}}})^2 \, \Bigr] \, \biggr]
\end{equation}
where ${\cal N}_M$ is the normalisation constant for the Maxwellian
distribution. Making this assumption implies that one expects
the time scales involved in equilibration processes in
momentum space to be much smaller than
the time scales governing relaxation processes in configuration space.
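As a simple numerical check of this assumption (a Python sketch in dimensionless units with $m = k_{\rm B} T = 1$; the function name is purely illustrative), bead velocities can be sampled from the Maxwellian distribution and equipartition verified: the mean kinetic energy per bead, measured relative to the solution velocity ${\bf v}$, is $3 k_{\rm B} T / 2$.

```python
import numpy as np

def sample_maxwellian(n_beads, n_samples, m=1.0, kT=1.0, seed=0):
    """Sample bead velocities (relative to the solution velocity v)
    from the Maxwellian distribution: each Cartesian component is an
    independent Gaussian with variance kT/m."""
    rng = np.random.default_rng(seed)
    return rng.normal(0.0, np.sqrt(kT / m), size=(n_samples, n_beads, 3))

# Equipartition: mean kinetic energy per bead about v is (3/2) kT.
v_rel = sample_maxwellian(n_beads=5, n_samples=200_000)
ke_per_bead = 0.5 * np.mean(np.sum(v_rel**2, axis=-1))   # m = 1
```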
Averages of quantities which are functions of the bead positions and
bead velocities are defined analogously to those in the
previous section, namely,
\begin{equation}
{\bigl\langle} X {\bigr\rangle} = \int \! \ldots \! \int \,
X \, {\cal P} \, d{ {{\mbox {\boldmath $r$}}}}_1 \ldots d{ {{\mbox {\boldmath $r$}}}}_N \,
d { {\dot {{\mbox {\boldmath $r$}}}}}_1 \ldots d{ {\dot {{\mbox {\boldmath $r$}}}}}_N
\end{equation}
is the phase space average of $X$, while the velocity-space average is,
\begin{equation}
\lmav X \, \rmav
= \int\!\!\int \, X \, \Xi \, d { {\dot {{\mbox {\boldmath $r$}}}}}_1 \ldots d{ {\dot {{\mbox {\boldmath $r$}}}}}_N
\end{equation}
For quantities $X$ that depend only on the internal
configurations of the polymer chain and not on the center of mass
coordinates or bead velocities,
\begin{equation}
{\bigl\langle} X {\bigr\rangle} = \int \, X \, \psi \, d{ {{\mbox {\boldmath $Q$}}}}_1 d{ {{\mbox {\boldmath $Q$}}}}_2
\ldots d{ {{\mbox {\boldmath $Q$}}}}_{N-1}
\end{equation}
\subsubsection{The equation of motion}
The equation of motion for a bead in a bead-spring chain is
derived by considering the forces acting on it. The
total force ${ {{\mbox {\boldmath $F$}}}}_{\mu}$, on bead $\mu$ is,
${ {{\mbox {\boldmath $F$}}}}_{\mu} = \sum_i \, { {{\mbox {\boldmath $F$}}}}_{\mu}^{(i)}$,
where the ${ {{\mbox {\boldmath $F$}}}}_{\mu}^{(i)}, \; i =1,2, \ldots$, are the various intramolecular
and solvent forces acting on the bead.
The fundamental difference among the
various molecular theories developed so far for the description
of dilute polymer solutions lies in the
kinds of forces ${ {{\mbox {\boldmath $F$}}}}_{\mu}^{(i)}$ that are assumed to be acting on
the beads of the chain. In almost all these theories, the acceleration of
the beads due to the force ${ {{\mbox {\boldmath $F$}}}}_{\mu}$ is neglected.
A bead-spring chain model incorporating bead inertia has shown that
the neglect of bead inertia is justified in most practical
situations~\cite{schiebott}. The equation of motion is consequently obtained by
setting ${ {{\mbox {\boldmath $F$}}}}_{\mu} = { {{\mbox {\boldmath $0$}}}}$.
Here, we consider the following force balance on each bead $\mu$,
\begin{equation}
{ {{\mbox {\boldmath $F$}}}}_{\mu}^{(h)} + { {{\mbox {\boldmath $F$}}}}_{\mu}^{(b)} + { {{\mbox {\boldmath $F$}}}}_{\mu}^{(\phi)} +
{ {{\mbox {\boldmath $F$}}}}_{\mu}^{(iv)} = { {{\mbox {\boldmath $0$}}}} \quad (\mu = 1, 2, \ldots, N)
\label{eqmo}
\end{equation}
where, ${ {{\mbox {\boldmath $F$}}}}_{\mu}^{(h)}$ is the {\it hydrodynamic drag } force,
${ {{\mbox {\boldmath $F$}}}}_{\mu}^{(b)}$ is the {\it Brownian } force,
${ {{\mbox {\boldmath $F$}}}}_{\mu}^{(\phi)}$ is the {\it intramolecular } force due to
the potential energy of the chain, and
${ {{\mbox {\boldmath $F$}}}}_{\mu}^{(iv)}$ is the force due to the presence of
{\it internal viscosity}. These are the various forces
that have been considered so far in the literature, which
are believed to play a crucial role in determining the polymer solution's
transport properties.
The nature of each of these forces is discussed in greater detail below.
Note that, as is common in most theories, external forces acting on the
bead have been neglected. However, their inclusion is reasonably
straightforward~\cite{birdb}.
The hydrodynamic drag force ${ {{\mbox {\boldmath $F$}}}}_{\mu}^{(h)}$,
is the force of resistance offered by the
solvent to the motion of the bead $\mu$. It is assumed to be proportional to
the difference between the velocity-averaged bead velocity
$\lmav { {\dot {{\mbox {\boldmath $r$}}}}}_{\mu} \rmav$ and the local velocity of the solution,
\begin{equation}
{ {{\mbox {\boldmath $F$}}}}_{\mu}^{(h)} = - \zeta \, [ \, \lmav { {\dot {{\mbox {\boldmath $r$}}}}}_{\mu} \rmav
- ({ {{\mbox {\boldmath $v$}}}}_{\mu} + { {{\mbox {\boldmath $v$}}}}^\prime_{\mu}) \, ]
\end{equation}
where $\zeta$ is the bead friction coefficient.
Note that for spherical beads with radius $a$, in a solvent with viscosity
$\eta_s$, the bead friction coefficient $\zeta$ is given by the Stokes
expression: $\zeta=6 \pi \eta_s a$. The velocity-average of the
bead velocity is not carried out with the Maxwellian distribution, since
that would merely yield the mass-average solution velocity. However, it turns out that
an explicit evaluation of the velocity-average is unnecessary for the
development of the theory. Note that
the velocity of the solution at bead $\mu$ has two components,
the imposed flow field ${ {{\mbox {\boldmath $v$}}}}_{\mu} = {{\mbox {\boldmath $v$}}}_0 + {\mbox {\boldmath $\kappa$}} (t) \cdot
{ {{\mbox {\boldmath $r$}}}}_{\mu}$, and the perturbation of the flow field
${ {{\mbox {\boldmath $v$}}}}^\prime_{\mu}$ due to the motion of the other beads of the
chain. This perturbation is called {\it hydrodynamic interaction}, and its
incorporation in molecular theories has proved to be of utmost
importance in the prediction of transport properties.
The presence of hydrodynamic interaction couples the motion of one bead
in the chain to all the other beads, regardless of the distance between
the beads along the length of the chain. In this sense, hydrodynamic
interaction is a long-range phenomenon.
The perturbation to the flow field ${ {{\mbox {\boldmath $v$}}}}^\prime ({ {{\mbox {\boldmath $r$}}}})$ at a
point ${ {{\mbox {\boldmath $r$}}}}$ due to the presence of a point force ${ {{\mbox {\boldmath $F$}}}}
({ {{\mbox {\boldmath $r$}}}}^\prime)$ at the point ${ {{\mbox {\boldmath $r$}}}}^\prime$, can be found by solving the
linearised Navier-Stokes equation~\cite{birda,doied},
\begin{equation}
{ {{\mbox {\boldmath $v$}}}}^\prime ({ {{\mbox {\boldmath $r$}}}}) = {{\mbox {\boldmath $\Omega$}}} ({ {{\mbox {\boldmath $r$}}}} - { {{\mbox {\boldmath $r$}}}}^\prime) \cdot
{ {{\mbox {\boldmath $F$}}}} ({ {{\mbox {\boldmath $r$}}}}^\prime)
\end{equation}
where $ {{\mbox {\boldmath $\Omega$}}} ({ {{\mbox {\boldmath $r$}}}})$, called the {\it Oseen-Burgers tensor}, is the
Green's function of the linearised Navier-Stokes equation,
\begin{equation}
{{\mbox {\boldmath $\Omega$}}} ({ {{\mbox {\boldmath $r$}}}}) = {1 \over 8 \pi \eta_s r} \,
\biggl( {{\mbox {\boldmath $1$}}} + { {{\mbox {\boldmath $r$}}} {{\mbox {\boldmath $r$}}} \over r^2} \biggr)
\end{equation}
The effect of hydrodynamic interaction is taken into account in polymer
kinetic theory by treating the beads in the bead-spring chain as point
particles. As a result, in response to the hydrodynamic drag force
acting on each bead, each bead exerts an equal and opposite force on
the solvent at the point that defines its location. The disturbance to
the velocity at the bead $\nu$ is the sum of the disturbances caused by
all the other beads in the chain,
${ {{\mbox {\boldmath $v$}}}}^\prime_{\nu} = - \sum_{\mu} \,
{{\mbox {\boldmath $\Omega$}}} _{\nu \mu} ({ {{\mbox {\boldmath $r$}}}_{\nu}- {{\mbox {\boldmath $r$}}}_{\mu}}) \cdot { {{\mbox {\boldmath $F$}}}}_{\mu}^{(h)}, $
where, $ {{\mbox {\boldmath $\Omega$}}} _{\mu \nu} = {{\mbox {\boldmath $\Omega$}}} _{\nu \mu} $ is given by,
\begin{equation}
{{\mbox {\boldmath $\Omega$}}} _{\mu \nu}= \cases{\displaystyle{{1 \over
8 \pi \eta_s r_{\mu \nu}}}
\biggl( \displaystyle{ {{\mbox {\boldmath $1$}}} + { {{\mbox {\boldmath $r$}}}_{\mu \nu} {{\mbox {\boldmath $r$}}}_{\mu \nu} \over
r_{\mu \nu}^2}}
\biggr), \quad {{\mbox {\boldmath $r$}}}_{\mu \nu} =
{{\mbox {\boldmath $r$}}}_{\mu}- {{\mbox {\boldmath $r$}}}_{\nu}, & for $\mu \neq \nu$ \cr
\noalign{\vskip3pt}
{ {{\mbox {\boldmath $0$}}}} & for $\mu = \nu$ \cr}
\end{equation}
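The Oseen-Burgers tensor is straightforward to evaluate numerically. The following Python sketch (function name illustrative; $\eta_s = 1$ by default) implements the expression above, adopting the convention that the tensor vanishes for coincident beads, as in the definition of ${{\mbox {\boldmath $\Omega$}}}_{\mu \nu}$ above.

```python
import numpy as np

def oseen_tensor(r_mu, r_nu, eta_s=1.0):
    """Oseen-Burgers tensor Omega_{mu nu} between point beads at
    r_mu and r_nu; returns the zero tensor for coincident beads,
    matching the convention Omega_{mu mu} = 0."""
    r = np.asarray(r_mu, dtype=float) - np.asarray(r_nu, dtype=float)
    d = np.linalg.norm(r)
    if d == 0.0:
        return np.zeros((3, 3))
    return (np.eye(3) + np.outer(r, r) / d**2) / (8.0 * np.pi * eta_s * d)
```

Note that the tensor is symmetric both as a $3 \times 3$ matrix and under exchange of the two beads, $ {{\mbox {\boldmath $\Omega$}}}_{\mu \nu} = {{\mbox {\boldmath $\Omega$}}}_{\nu \mu}$, and that its trace is $1/(2 \pi \eta_s r_{\mu \nu})$.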
The Brownian force ${ {{\mbox {\boldmath $F$}}}}_{\mu}^{(b)}$, on a bead $\mu$ is the
result of the irregular collisions between the solvent molecules and the
bead. Instead of representing the Brownian force by a randomly varying
force, it is common in polymer kinetic theory to use an averaged
Brownian force,
\begin{equation}
{ {{\mbox {\boldmath $F$}}}}_{\mu}^{(b)} = - k_{\rm B} T \, \biggl( \, {\partial \, \ln \Psi \over
\partial { {{\mbox {\boldmath $r$}}}}_{\mu}} \, \biggr)
\label{browf}
\end{equation}
As mentioned earlier, the origin of this expression can be understood
within the framework of the complete phase space theory~\cite{birdb,cb}.
Note that the Maxwellian distribution has been used to
derive equation~(\ref{browf}).
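Equation~(\ref{browf}) can be illustrated with a small finite-difference sketch (Python; purely illustrative, in units with $k_{\rm B} T = H = 1$). For the equilibrium Gaussian distribution of a single Hookean connector, $\psi \sim \exp(-H Q^2 / 2 k_{\rm B} T)$, the averaged Brownian force $-k_{\rm B} T \, \partial \ln \psi / \partial Q$ equals $H Q$, exactly balancing the Hookean spring force $-H Q$ at equilibrium.

```python
import numpy as np

def ln_psi(Q):
    """log of the equilibrium Gaussian distribution for one Hookean
    connector, up to an additive normalisation constant (kT = H = 1)."""
    return -0.5 * np.dot(Q, Q)

def brownian_force(Q, h=1e-5):
    """F^(b) = -kT * d(ln psi)/dQ, by central finite differences."""
    F = np.zeros(3)
    for i in range(3):
        e = np.zeros(3)
        e[i] = h
        F[i] = -(ln_psi(Q + e) - ln_psi(Q - e)) / (2.0 * h)
    return F
```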
The total potential energy $\phi$ of the bead-spring chain is the sum of the
potential energy $S$ of the elastic springs, and the potential energy
$E$ due to the presence of excluded volume interactions between the beads.
The force ${ {{\mbox {\boldmath $F$}}}}_{\mu}^{(\phi)}$ on a bead $\mu$ due to the intramolecular
potential energy $\phi$ is given by,
\begin{equation}
{ {{\mbox {\boldmath $F$}}}}_{\mu}^{(\phi)} = - {\partial \phi \over
\partial { {{\mbox {\boldmath $r$}}}}_{\mu}}
\label{potf}
\end{equation}
In addition to the various forces discussed above,
the {\it internal viscosity} force ${ {{\mbox {\boldmath $F$}}}}_{\mu}^{(iv)}$, has received
considerable attention in the literature~\cite{birdott,schiebiv,wedgeiv},
though it has not gained widespread acceptance. Various physical
mechanisms have been proposed as the origin of the internal viscosity
force: for instance, the hindrance to internal rotations due to the
presence of energy barriers, or the friction between two monomers on a
chain that are close together in space and have a non-zero relative
velocity.
The simplest models for the internal viscosity force assume that it acts
only between neighbouring beads in a bead-spring chain, and depends on
the average relative velocities of these beads. Thus, for a bead $\mu$ that is
not at the chain ends,
\begin{equation}
{ {{\mbox {\boldmath $F$}}}}_{\mu}^{(iv)} = \varphi \, \biggl( {
({ {{\mbox {\boldmath $r$}}}}_{\mu+1} - { {{\mbox {\boldmath $r$}}}}_{\mu})\,({ {{\mbox {\boldmath $r$}}}}_{\mu+1} - { {{\mbox {\boldmath $r$}}}}_{\mu})
\over
\big\vert \, { {{\mbox {\boldmath $r$}}}}_{\mu+1} - { {{\mbox {\boldmath $r$}}}}_{\mu} \, \big\vert^2 } \biggr)
\cdot \lmav \, { {\dot {{\mbox {\boldmath $r$}}}}}_{\mu+1} - { {\dot {{\mbox {\boldmath $r$}}}}}_{\mu} \, \rmav
- \varphi \, \biggl( {
({ {{\mbox {\boldmath $r$}}}}_{\mu} - { {{\mbox {\boldmath $r$}}}}_{\mu-1})\,({ {{\mbox {\boldmath $r$}}}}_{\mu} - { {{\mbox {\boldmath $r$}}}}_{\mu-1})
\over
\big\vert \, { {{\mbox {\boldmath $r$}}}}_{\mu} - { {{\mbox {\boldmath $r$}}}}_{\mu-1} \, \big\vert^2 } \biggr)
\cdot \lmav \, { {\dot {{\mbox {\boldmath $r$}}}}}_{\mu} - { {\dot {{\mbox {\boldmath $r$}}}}}_{\mu-1} \, \rmav
\end{equation}
where $\varphi$ is the internal viscosity coefficient. A scaling theory
for a more general model that accounts for internal friction between
arbitrary pairs of monomers has also been developed~\cite{rabott}.
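A defining feature of this force is purely geometric: only the component of the relative velocity along the connector is resisted. The following Python sketch (illustrative names, 0-indexed beads, with the velocity averages supplied as plain arrays) evaluates the force on an interior bead and makes this projection explicit.

```python
import numpy as np

def iv_force_on_bead(r, rdot_avg, mu, phi_iv=1.0):
    """Internal viscosity force on an interior bead mu (0-indexed,
    0 < mu < N-1) of a bead chain.

    r        : (N, 3) bead positions
    rdot_avg : (N, 3) velocity-averaged bead velocities
    phi_iv   : internal viscosity coefficient (varphi in the text)

    The relative velocity of each neighbouring bead is projected onto
    the corresponding connector before being multiplied by phi_iv.
    """
    def projector(q):
        return np.outer(q, q) / np.dot(q, q)

    q_up = r[mu + 1] - r[mu]          # connector to bead mu+1
    q_dn = r[mu] - r[mu - 1]          # connector from bead mu-1
    return (phi_iv * projector(q_up) @ (rdot_avg[mu + 1] - rdot_avg[mu])
            - phi_iv * projector(q_dn) @ (rdot_avg[mu] - rdot_avg[mu - 1]))
```

In particular, a neighbouring bead moving transverse to the connector produces no internal viscosity force at all.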
The equation of motion for bead $\nu$ can consequently be written as,
\begin{equation}
- \zeta \, \Bigl[ \, \lmav { {\dot {{\mbox {\boldmath $r$}}}}}_{\nu} \rmav - {{\mbox {\boldmath $v$}}}_0
- {\mbox {\boldmath $\kappa$}} \cdot { {{\mbox {\boldmath $r$}}}}_{\nu} +
\sum_{\mu} \, {{\mbox {\boldmath $\Omega$}}} _{\nu \mu} \cdot { {{\mbox {\boldmath $F$}}}}_{\mu}^{(h)} \, \Bigr]
- k_{\rm B} T \, {\partial \, \ln \Psi \over
\partial { {{\mbox {\boldmath $r$}}}}_{\nu}} + { {{\mbox {\boldmath $F$}}}}_{\nu}^{(\phi)}
+ { {{\mbox {\boldmath $F$}}}}_{\nu}^{(iv)} = { {{\mbox {\boldmath $0$}}}}
\label{motion}
\end{equation}
Since ${ {{\mbox {\boldmath $F$}}}}_{\mu}^{(h)} = k_{\rm B} T \, ( {\partial \, \ln \Psi /
\partial { {{\mbox {\boldmath $r$}}}}_{\mu}} ) - { {{\mbox {\boldmath $F$}}}}_{\mu}^{(\phi)}
- { {{\mbox {\boldmath $F$}}}}_{\mu}^{(iv)}$, equation~(\ref{motion}) can be rearranged to give,
\begin{equation}
\lmav { {\dot {{\mbox {\boldmath $r$}}}}}_{\nu} \, \rmav = {{\mbox {\boldmath $v$}}}_0 + {\mbox {\boldmath $\kappa$}} \cdot { {{\mbox {\boldmath $r$}}}}_{\nu}
+ {1 \over \zeta} \, \sum_{\mu} \, {{\mbox {\boldmath $\gamma$}}}_{\mu \nu} \cdot
\biggl( \, - k_{\rm B} T \, {\partial \, \ln \Psi \over
\partial { {{\mbox {\boldmath $r$}}}}_{\mu}} + { {{\mbox {\boldmath $F$}}}}_{\mu}^{(\phi)}
+ { {{\mbox {\boldmath $F$}}}}_{\mu}^{(iv)} \, \biggr)
\label{eqrnudot}
\end{equation}
where $ {{\mbox {\boldmath $\gamma$}}}_{\mu \nu}$ is the dimensionless
{\it diffusion tensor}~\cite{birdb},
\begin{equation}
{{\mbox {\boldmath $\gamma$}}}_{\mu \nu} = \delta_{\mu \nu } \, {{\mbox {\boldmath $1$}}} + \zeta \, {{\mbox {\boldmath $\Omega$}}} _{\nu \mu}
\end{equation}
By manipulating equation~(\ref{eqrnudot}),
it is possible to rewrite the equation of motion in terms of the
velocities of the center of mass ${ {{\mbox {\boldmath $r$}}}}_c$ and the
bead-connector vectors ${ {{\mbox {\boldmath $Q$}}}}_k$,
\begin{equation}
\lmav { {\dot {{\mbox {\boldmath $r$}}}}}_{c} \, \rmav = {{\mbox {\boldmath $v$}}}_0 + {\mbox {\boldmath $\kappa$}} \cdot { {{\mbox {\boldmath $r$}}}}_{c}
- {1 \over N \zeta} \, \sum_{\nu,\mu,k} \, \overline B_{k \mu}
\, {{\mbox {\boldmath $\gamma$}}}_{\mu \nu} \cdot
\biggl( \, k_{\rm B} T \, {\partial \, \ln \Psi \over \partial { {{\mbox {\boldmath $Q$}}}}_{k}}
+ {\partial \phi \over \partial { {{\mbox {\boldmath $Q$}}}}_k} + { {{\mbox {\boldmath $f$}}}}_{k}^{(iv)} \, \biggr)
\label{eqrcdot}
\end{equation}
\begin{equation}
\lmav {\dot {{\mbox {\boldmath $Q$}}}}_{j} \, \rmav = {\mbox {\boldmath $\kappa$}} \cdot { {{\mbox {\boldmath $Q$}}}}_{j}
- {1 \over \zeta} \, \sum_k \, {\widetilde {{\mbox {\boldmath $A$}}} }_{jk} \cdot
\biggl( \, k_{\rm B} T \, {\partial \, \ln \Psi \over \partial { {{\mbox {\boldmath $Q$}}}}_{k}}
+ {\partial \phi \over \partial { {{\mbox {\boldmath $Q$}}}}_k} + { {{\mbox {\boldmath $f$}}}}_{k}^{(iv)} \, \biggr)
\label{eqqdot}
\end{equation}
where, ${\overline B}_{k \nu}$ is defined by,
${\overline B_{k \nu}} = \delta_{k+1, \nu} - \delta_{k \nu}, $
the internal viscosity force, ${ {{\mbox {\boldmath $f$}}}}_{k}^{(iv)}$,
in the direction of the connector vector ${ {{\mbox {\boldmath $Q$}}}}_k$ is,
\begin{equation}
{ {{\mbox {\boldmath $f$}}}}_{k}^{(iv)} = \varphi \; { { {{\mbox {\boldmath $Q$}}}}_k { {{\mbox {\boldmath $Q$}}}}_k \over
\big\vert \, { {{\mbox {\boldmath $Q$}}}}_{k} \, \big\vert^2 }
\cdot \lmav \, {\dot {{\mbox {\boldmath $Q$}}}}_{k} \, \rmav
\label{ivf}
\end{equation}
and the tensor $ {\widetilde {{\mbox {\boldmath $A$}}} }_{jk}$ which accounts for the presence of
hydrodynamic interaction is defined by,
\begin{equation}
{\widetilde {{\mbox {\boldmath $A$}}} }_{jk} = \sum_{\nu, \, \mu} \, \overline B_{j \nu}
\, {{\mbox {\boldmath $\gamma$}}}_{\mu \nu} \overline B_{k \mu}
= A_{jk} {{\mbox {\boldmath $1$}}} + \zeta \bigl( {{\mbox {\boldmath $\Omega$}}} _{j,k} + {{\mbox {\boldmath $\Omega$}}} _{j+1,k+1}
- {{\mbox {\boldmath $\Omega$}}} _{j,k+1} - {{\mbox {\boldmath $\Omega$}}} _{j+1,k} \bigr)
\end{equation}
Here, $A_{jk}$ is the Rouse matrix,
\begin{equation}
A_{jk}=\cases{ 2&for $\vert {j-k} \vert = 0 $,\cr
\noalign{\vskip3pt}
-1& for $\vert {j-k} \vert =1 $,\cr
\noalign{\vskip3pt}
0 & otherwise \cr}
\end{equation}
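These definitions can be checked numerically. The Python sketch below (function names illustrative) constructs $\overline B_{k \nu}$ and the Rouse matrix, and verifies that with hydrodynamic interaction neglected ($\zeta \, {{\mbox {\boldmath $\Omega$}}}_{\nu \mu} = 0$, so that $ {{\mbox {\boldmath $\gamma$}}}_{\mu \nu} = \delta_{\mu \nu} \, {\bf 1}$) the tensor ${\widetilde {\bf A}}_{jk}$ reduces to $A_{jk} \, {\bf 1}$, since $\sum_{\nu} \overline B_{j \nu} \overline B_{k \nu} = A_{jk}$; the eigenvalues of the Rouse matrix are the well-known Rouse spectrum $a_j = 4 \sin^2 (j \pi / 2N)$.

```python
import numpy as np

def bbar_matrix(N):
    """(N-1) x N matrix Bbar_{k nu} = delta_{k+1,nu} - delta_{k,nu}
    (0-indexed rows k, columns nu)."""
    B = np.zeros((N - 1, N))
    for k in range(N - 1):
        B[k, k] = -1.0
        B[k, k + 1] = 1.0
    return B

def rouse_matrix(N):
    """(N-1) x (N-1) Rouse matrix: 2 on the diagonal, -1 on the
    first off-diagonals, 0 elsewhere."""
    n = N - 1
    return 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
```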
In order to obtain the {\it diffusion} equation for a dilute solution of
bead-spring chains, the equation of motion derived here must be combined
with an equation of continuity.
\subsubsection{The diffusion equation}
The equation of continuity or `probability conservation', which states
that a bead-spring chain that disappears from one configuration must
appear in another, has the form~\cite{birdb},
\begin{equation}
{\partial \, \Psi \over \partial t} = -\sum_{\nu} \,
{\partial \over \partial { {{\mbox {\boldmath $r$}}}}_{\nu}}
\cdot \lmav { {\dot {{\mbox {\boldmath $r$}}}}}_{\nu} \rmav \, \Psi
\end{equation}
The independence of $\Psi$ from the location of the center of mass for
homogeneous flows,
and the result ${\rm tr} \, {\mbox {\boldmath $\kappa$}} = 0$, for an incompressible fluid,
can be shown to imply that the equation of continuity can be written in
terms of internal coordinates alone as~\cite{birdb},
\begin{equation}
{\partial \, \psi \over \partial t} = -\sum_j \,
{\partial \over \partial { {{\mbox {\boldmath $Q$}}}}_j}
\cdot \lmav {\dot {{\mbox {\boldmath $Q$}}}}_j \rmav \, \psi
\label{cont}
\end{equation}
The general diffusion equation which governs the time evolution
of the instantaneous configurational distribution function $\psi$,
in the presence of hydrodynamic interaction,
arbitrary spring and excluded volume forces, and an internal viscosity
force given by equation~(\ref{ivf}), is obtained by
substituting the equation of motion for $\lmav {\dot {{\mbox {\boldmath $Q$}}}}_j
\rmav$ from equation~(\ref{eqqdot}) into equation~(\ref{cont}). It has the
form,
\begin{equation}
{\partial \, \psi \over \partial t} = - \sum_j \,
{\partial \over \partial { {{\mbox {\boldmath $Q$}}}}_j} \cdot \biggl(
{\mbox {\boldmath $\kappa$}} \cdot { {{\mbox {\boldmath $Q$}}}}_{j}
- {1 \over \zeta} \, \sum_k \, {\widetilde {{\mbox {\boldmath $A$}}} }_{jk} \cdot
\, \Bigl[ \, {\partial \phi \over \partial { {{\mbox {\boldmath $Q$}}}}_k}
+ { {{\mbox {\boldmath $f$}}}}_{k}^{(iv)} \, \Bigr] \, \biggr) \, \psi
+ {k_{\rm B} T \over \zeta} \, \sum_{j, \, k} \,
{\partial \over \partial { {{\mbox {\boldmath $Q$}}}}_j} \cdot {\widetilde {{\mbox {\boldmath $A$}}} }_{jk}
\cdot {\partial \psi \over \partial { {{\mbox {\boldmath $Q$}}}}_k}
\label{diff}
\end{equation}
Equations such as~(\ref{diff}) are also referred to as
{\em Fokker-Planck} or {\em Smoluchowski} equations in the literature.
The diffusion equation~(\ref{diff}) is the most fundamental equation of the
kinetic theory of dilute polymer solutions since a knowledge of $\psi$,
for a flow field specified by $ {\mbox {\boldmath $\kappa$}}$, would make it possible to evaluate
averages of various configuration dependent quantities and thereby permit
comparison of theoretical predictions with experimental observations.
The diffusion equation can be used to derive the time evolution equation
of the average of any arbitrary configuration dependent quantity,
$X (\,{ {{\mbox {\boldmath $Q$}}}}_1, \, \ldots, \, { {{\mbox {\boldmath $Q$}}}}_{N-1} \,), $
by multiplying the left and right hand sides
of equation~(\ref{diff}) by $X$ and integrating both sides over all
possible configurations,
\begin{equation}
{d \, {\bigl\langle} X {\bigr\rangle} \over dt} =
\sum_j \, {\bigl\langle} \, {\mbox {\boldmath $\kappa$}} \cdot { {{\mbox {\boldmath $Q$}}}}_{j}
\cdot {\partial \, X \over \partial { {{\mbox {\boldmath $Q$}}}}_{j}} \, {\bigr\rangle}
- { k_{\rm B} T \over \zeta} \, \sum_{j, \, k} \, {\bigl\langle} \, {\widetilde {{\mbox {\boldmath $A$}}} }_{jk} \cdot
{\partial \, \ln \psi \over \partial { {{\mbox {\boldmath $Q$}}}}_{k}}
\cdot {\partial \, X \over \partial { {{\mbox {\boldmath $Q$}}}}_{j}} \, {\bigr\rangle}
- { 1 \over \zeta} \, \sum_{j, \, k} \, {\bigl\langle} \, {\widetilde {{\mbox {\boldmath $A$}}} }_{jk} \cdot
\Bigl[ \, {\partial \phi \over \partial { {{\mbox {\boldmath $Q$}}}}_k} +
{ {{\mbox {\boldmath $f$}}}}_{k}^{(iv)} \, \Bigr]
\cdot {\partial \, X \over \partial { {{\mbox {\boldmath $Q$}}}}_{j}} \, {\bigr\rangle}
\label{avgx}
\end{equation}
Except when nearly all the important microscopic phenomena are
neglected, the diffusion equation~(\ref{diff}) is, unfortunately,
analytically insoluble in general. There have been very few attempts to
directly solve
diffusion equations with the help of a numerical solution
procedure~\cite{fan1,fan2}.
In this context it is worth bearing in mind that what are usually required are
averages of configuration dependent quantities. However,
in general even averages cannot be obtained exactly by solving
equation~(\ref{avgx}). As a result, it is common in most molecular
theories to obtain the averages by means of various approximations.
In order to examine the validity of
these approximations it is vitally important to compare the approximate
predictions of transport properties with the exact predictions of the models.
One of the ways by which exact numerical results may be
obtained is by adopting a numerical procedure based on the mathematical
equivalence of diffusion equations in polymer configuration space and
stochastic differential equations for the polymer configuration~\cite{ottbook}.
Instead of numerically solving the analytically intractable diffusion equation for
the distribution function, stochastic trajectories can be generated
by {\it Brownian dynamics simulations} based on a numerical integration of the
appropriate stochastic differential equation. Averages calculated
from stochastic trajectories
(obtained as a solution of the stochastic differential equations), are
identical to the averages calculated from distribution functions
(obtained as a solution of the diffusion equations).
It has now become fairly common for any new approximate molecular theory of a
microscopic phenomenon to establish the accuracy of the results with the
help of Brownian dynamics simulations. In this chapter, while results of
such simulations are cited, details of the development of
the appropriate stochastic differential equations are not discussed. A
comprehensive introduction to the development of stochastic differential
equations which are equivalent to given diffusion equations
for the probability density in configuration space, can be found in
the treatise by {\"O}ttinger~\cite{ottbook}.
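As an illustration of this equivalence (a hedged sketch, not drawn from any of the studies cited above), the following Python fragment integrates, with an explicit Euler-Maruyama scheme, the stochastic differential equation corresponding to the free-draining Hookean (Rouse) diffusion equation at equilibrium, in units with $H = \zeta = k_{\rm B} T = 1$; the trajectory average of $Q_j^2$ approaches the exact equilibrium value $3 k_{\rm B} T / H = 3$.

```python
import numpy as np

def simulate_rouse(N=3, n_steps=6000, dt=0.01, n_chains=400, seed=1):
    """Euler-Maruyama Brownian dynamics of free-draining Hookean chains
    at equilibrium (no flow), in units with H = zeta = k_B T = 1.

    Each bead obeys dr = F_spring dt + sqrt(2 dt) dW.  Returns the
    ensemble/time average of Q_j^2 over the last half of the run."""
    rng = np.random.default_rng(seed)
    r = rng.normal(size=(n_chains, N, 3))   # arbitrary initial configs
    acc, count = 0.0, 0
    for step in range(n_steps):
        # Hookean spring force: discrete Laplacian along the chain
        f = np.zeros_like(r)
        f[:, :-1] += r[:, 1:] - r[:, :-1]   # pull from the next bead
        f[:, 1:] += r[:, :-1] - r[:, 1:]    # pull from the previous bead
        r = r + f * dt + np.sqrt(2.0 * dt) * rng.normal(size=r.shape)
        if step >= n_steps // 2:
            q = r[:, 1:] - r[:, :-1]        # connector vectors
            acc += np.mean(np.sum(q**2, axis=-1))
            count += 1
    return acc / count

q2_avg = simulate_rouse()                   # exact equilibrium value: 3
```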
\subsubsection{The stress tensor}
The expression for the stress tensor in a dilute polymer solution was
originally obtained by the use of simple physical arguments
which considered the various mechanisms that contributed to
the flux of momentum across an
oriented surface in the fluid~\cite{birdb}. The major mechanisms
considered were the transport of momentum by beads crossing the surface,
and the tension in the springs that straddle the surface.
These physical arguments help to provide an intuitive understanding of
the origin of the different terms in the stress tensor expression. On the
other hand, such arguments are difficult to pursue in the presence of
complicated microscopic phenomena, and there is uncertainty about the
completeness of the final expression. An alternative approach to the
derivation of the expression for the stress tensor has been to use
more fundamental arguments that consider the complete phase space
of the polymeric fluid~\cite{birdb,cb}.
A very general expression for the polymer contribution to the stress tensor,
derived by adopting the complete phase space approach,
for models without constraints such as the bead-spring chain model,
in the presence of hydrodynamic interaction and an arbitrary intramolecular
potential force, is the {\it modified Kramers} expression~\cite{birdb},
\begin{equation}
{{\mbox {\boldmath $\tau$}}}^p = n_p \, \sum_{\nu} \, {\bigl\langle} \, ({ {{\mbox {\boldmath $r$}}}}_{\nu} - { {{\mbox {\boldmath $r$}}}}_{c}) \,
{ {{\mbox {\boldmath $F$}}}}_{\nu}^{(\phi)}\, {\bigr\rangle} + (N-1) \, n_p k_{\rm B} T \, {{\mbox {\boldmath $1$}}}
\label{modkram}
\end{equation}
When rewritten in terms of the internal coordinates of a
bead-spring chain, equation~(\ref{modkram}) assumes a form called
the {\it Kramers} expression,
\begin{equation}
{{\mbox {\boldmath $\tau$}}}^p = - n_p \, \sum_j \, {\bigl\langle} \, { {{\mbox {\boldmath $Q$}}}}_j \,
{\partial \phi \over \partial { {{\mbox {\boldmath $Q$}}}}_j} \, {\bigr\rangle}
+ (N-1) \, n_p k_{\rm B} T \, {{\mbox {\boldmath $1$}}}
\label{kram}
\end{equation}
It is important to note that the presence of internal viscosity
has not been taken into account in the phase space theories
used to derive the modified Kramers expression~(\ref{modkram}).
When examined from the standpoint of thermodynamic
considerations, the proper form of the stress tensor in the
presence of internal viscosity appears to be the Giesekus expression
rather than the Kramers expression~\cite{schiebottiv}. Since
predictions of models with internal viscosity are not
considered in this chapter, the Giesekus expression is not
discussed here.
In order to evaluate the stress tensor, for various
choices of the potential energy $\phi$, it turns out that
it is usually necessary to
evaluate the second moments of the bead connector vectors,
$ {\bigl\langle} \, { {{\mbox {\boldmath $Q$}}}}_j { {{\mbox {\boldmath $Q$}}}}_k \, {\bigr\rangle}$. An equation that governs the
time evolution of the second moments can be obtained with the help
of equation~(\ref{avgx}). It has the form,
\begin{eqnarray}
{d \over dt} \, {\bigl\langle} \, { {{\mbox {\boldmath $Q$}}}}_j { {{\mbox {\boldmath $Q$}}}}_k \, {\bigr\rangle} &=&
{\mbox {\boldmath $\kappa$}} \cdot {\bigl\langle} \, { {{\mbox {\boldmath $Q$}}}}_j { {{\mbox {\boldmath $Q$}}}}_k \, {\bigr\rangle} +
{\bigl\langle} \, { {{\mbox {\boldmath $Q$}}}}_j { {{\mbox {\boldmath $Q$}}}}_k \, {\bigr\rangle} \cdot {\mbox {\boldmath $\kappa$}}^\dagger +
{ 2 k_{\rm B} T \over \zeta} \, {\bigl\langle} \, {\widetilde {{\mbox {\boldmath $A$}}} }_{jk} \, {\bigr\rangle}
\nonumber \\
&-& { 1 \over \zeta} \, \sum_m \, \biggl\{
{\bigl\langle} \, { {{\mbox {\boldmath $Q$}}}}_j \,
\Bigl[ \, {\partial \phi \over \partial { {{\mbox {\boldmath $Q$}}}}_m}
+ { {{\mbox {\boldmath $f$}}}}_{m}^{(iv)} \, \Bigr] \cdot {\widetilde {{\mbox {\boldmath $A$}}} }_{mk} \, {\bigr\rangle}
+ {\bigl\langle} \, {\widetilde {{\mbox {\boldmath $A$}}} }_{jm} \cdot
\Bigl[ \, {\partial \phi \over \partial { {{\mbox {\boldmath $Q$}}}}_m} +
{ {{\mbox {\boldmath $f$}}}}_{m}^{(iv)} \, \Bigr] \cdot { {{\mbox {\boldmath $Q$}}}}_k \, {\bigr\rangle} \; \biggr\} \hfill
\label{secmom}
\end{eqnarray}
The second moment equation~(\ref{secmom}), which is an ordinary
differential equation, is in general not a closed
equation for $ {\bigl\langle} \, { {{\mbox {\boldmath $Q$}}}}_j { {{\mbox {\boldmath $Q$}}}}_k \, {\bigr\rangle}$,
since it involves higher-order moments on
the right hand side.
Within the context of the molecular theory developed thus far, it is
clear that the prediction of the rheological properties of dilute
polymer solutions with a bead-spring chain model usually
requires the solution
of the second moment equation~(\ref{secmom}). To date however,
there are no solutions to the general second moment equation~(\ref{secmom})
which simultaneously incorporates the microscopic phenomena of
hydrodynamic interaction, excluded volume, non-linear spring forces and
internal viscosity. Attempts have so far been restricted to treating a
smaller set of combinations of these phenomena. The simplest
molecular theory, based on a bead-spring chain
model, for the prediction of the
transport properties of dilute polymer solutions is the Rouse model. The
Rouse model neglects all the microscopic phenomena listed above, and
consequently fails to predict many of the observed features of dilute
solution behavior. In a certain sense, however, it provides the framework
and motivation for all further improvements in the molecular theory.
The Rouse model and its predictions are introduced below,
while improvements in the treatment of hydrodynamic interactions
alone are discussed subsequently.
\subsection{The Rouse model}
The Rouse model assumes that the springs of the bead-spring chain are
governed by a Hookean spring force law. The only solvent-polymer
interactions treated are that of hydrodynamic drag and Brownian bombardment.
The diffusion equation~(\ref{diff}) with the effects of
hydrodynamic interaction, excluded volume and internal viscosity
neglected, and with a Hookean spring force law, has the form,
\begin{equation}
{\partial \, \psi \over \partial t} = - \sum_j \,
{\partial \over \partial { {{\mbox {\boldmath $Q$}}}}_j} \cdot \biggl(
{\mbox {\boldmath $\kappa$}} \cdot { {{\mbox {\boldmath $Q$}}}}_{j}
- {H \over \zeta} \, \sum_k \, A_{jk} \, { {{\mbox {\boldmath $Q$}}}}_k
\, \biggr) \, \psi
+{k_{\rm B} T \over \zeta} \, \sum_{j, \, k} \, A_{jk} \;
{\partial \over \partial { {{\mbox {\boldmath $Q$}}}}_j} \cdot
{\partial \psi \over \partial { {{\mbox {\boldmath $Q$}}}}_k}
\label{rousdiff}
\end{equation}
The diffusion equation~(\ref{rousdiff}) has an analytical solution
since it is linear in the bead-connector vectors. It is satisfied by a
Gaussian distribution,
\begin{equation}
\psi \, ({ {{\mbox {\boldmath $Q$}}}}_1, \ldots, { {{\mbox {\boldmath $Q$}}}}_{N-1}) \, = \, {\cal N} (t) \,
\exp \big[ - {1 \over 2} \sum_{j, \, k} \, { {{\mbox {\boldmath $Q$}}}}_j \cdot
({ {{\mbox {\boldmath $\sigma \! $}}}}^{- 1})_{jk}
\cdot { {{\mbox {\boldmath $Q$}}}}_k \big]
\label{gauss}
\end{equation}
where ${\cal N}(t)$ is the normalisation factor, and the tensor $ {\bs_{jk}}$
which uniquely characterises the Gaussian distribution is identical to the
second moment,
\begin{equation}
{\bs_{jk}} \equiv {\bigl\langle} { {{\mbox {\boldmath $Q$}}}}_j{ {{\mbox {\boldmath $Q$}}}}_k {\bigr\rangle}
\end{equation}
Note that the tensors $ {\bs_{jk}}$ are not symmetric, but satisfy the
relation $ {\bs_{jk}}= {{\mbox {\boldmath $\sigma \! $}}}_{kj}^T $.
(Further information on linear diffusion equations and Gaussian
distributions can be obtained in the extended discussion in Appendix A
of~\cite{ottca2}).
Since the intramolecular potential in the Rouse model is only due to the
presence of Hookean springs, it is straightforward to see that
the Kramers expression for the stress tensor $ {{\mbox {\boldmath $\tau$}}}^p$, is given by,
\begin{equation}
{{\mbox {\boldmath $\tau$}}}^p=- n_p H \, {\sum_j \, } \, {{\mbox {\boldmath $\sigma \! $}}}_{jj} + (N-1) \, n_p k_{\rm B} T \, {{\mbox {\boldmath $1$}}}
\label{rkram}
\end{equation}
The tensors $ {{\mbox {\boldmath $\sigma \! $}}}_{jj}$ are obtained by solving the
second moment equation~(\ref{secmom}), which becomes a closed equation
for the second moments when the Rouse assumptions are made.
It has the form,
\begin{equation}
{d \over dt} \, {\bs_{jk}} - {\mbox {\boldmath $\kappa$}} \cdot {\bs_{jk}}
- {\bs_{jk}} \cdot {\mbox {\boldmath $\kappa$}}^\dagger =
{ 2 k_{\rm B} T \over \zeta} \, A_{jk} \, {{\mbox {\boldmath $1$}}}
- { H \over \zeta} \, \sum_m \, \bigl[ \; {{\mbox {\boldmath $\sigma \! $}}}_{jm} \, A_{mk}
+ A_{jm} \, {{\mbox {\boldmath $\sigma \! $}}}_{mk} \; \bigr]
\label{rsm}
\end{equation}
Note that the solution of
equation~(\ref{rsm}) also leads to the complete specification of
the Gaussian configurational distribution function $\psi$.
A {\it Hookean dumbbell } model, which is the simplest example of a bead-spring
chain, is obtained by setting $N=2$. It is often used for preliminary
calculations since its simplicity makes it possible to obtain analytical
solutions where numerical solutions are unavoidable for longer chains.
For such a model, substituting for $ {{\mbox {\boldmath $\sigma \! $}}}_{11}$ in terms of $ {{\mbox {\boldmath $\tau$}}}^p$ from
equation~(\ref{rkram}) into equation~(\ref{rsm}) leads to the following
equation for the polymer contribution to the stress tensor,
\begin{equation}
{{\mbox {\boldmath $\tau$}}}^p + \lambda_H \, {{\mbox {\boldmath $\tau$}}}^p_{(1)}
= - n_p k_{\rm B} T \lambda_H \, {\dot {{\mbox {\boldmath $\gamma$}}}}
\label{hooktau}
\end{equation}
where $ {{\mbox {\boldmath $\tau$}}}^p_{(1)} = {d \, {{\mbox {\boldmath $\tau$}}}^p / dt} -
{\mbox {\boldmath $\kappa$}} \cdot {{\mbox {\boldmath $\tau$}}}^p - {{\mbox {\boldmath $\tau$}}}^p \cdot {\mbox {\boldmath $\kappa$}}^\dagger, $ is the
convected time derivative~\cite{birdb} of $ {{\mbox {\boldmath $\tau$}}}^p$, and
$\lambda_H= (\zeta /4 H)$ is a time constant. Equation~(\ref{hooktau})
indicates that a Hookean dumbbell model with the Rouse assumptions
leads to a convected Jeffreys model or Oldroyd-B model as
the constitutive equation for a dilute polymer solution. This is perhaps
the simplest coarse-grained microscopic model capable of reproducing
some of the macroscopic rheological properties of dilute polymer
solutions.
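As an illustration, equation~(\ref{hooktau}) can be integrated numerically in
start-up of steady shear flow. The Python sketch below (with the illustrative
choice of units $n_p k_{\rm B} T = \lambda_H = 1$, and with the Kramers sign
convention for the stress tensor used throughout this chapter) recovers the
familiar Oldroyd-B steady-state results $\eta_p = n_p k_{\rm B} T \lambda_H$
and $\Psi_1 = 2 n_p k_{\rm B} T \lambda_H^2$:

```python
import numpy as np

# illustrative units: n_p*kB*T = 1, lambda_H = 1; shear rate 0.5
nkT, lam, gdot = 1.0, 1.0, 0.5
kappa = np.zeros((3, 3))
kappa[0, 1] = gdot                    # steady simple shear, kappa_xy = gdot
gamma_dot = kappa + kappa.T           # rate-of-strain tensor

tau = np.zeros((3, 3))                # start-up from the rest state
dt = 1.0e-3
for _ in range(20000):                # integrate equation (hooktau) to steady state
    dtau = kappa @ tau + tau @ kappa.T - (tau + nkT*lam*gamma_dot)/lam
    tau = tau + dt*dtau

eta_p = -tau[0, 1]/gdot                       # -> n_p kB T lambda_H
Psi1 = -(tau[0, 0] - tau[1, 1])/gdot**2       # -> 2 n_p kB T lambda_H^2
```

Both viscometric functions are independent of shear rate, as expected for the
Oldroyd-B model.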
In the case of a bead-spring chain with $N > 2$,
it is possible to obtain a similar
insight into the nature of the stress tensor by introducing {\it normal
coordinates}. These coordinates help to decouple the connector vectors
${ {{\mbox {\boldmath $Q$}}}}_1, \ldots,{ {{\mbox {\boldmath $Q$}}}}_{N-1}$, which are coupled to each other
because of the Rouse matrix.
The connector vectors are mapped to a new set of normal
coordinates, ${ {{\mbox {\boldmath $Q$}}}}_1^\prime,\ldots,{ {{\mbox {\boldmath $Q$}}}}_{N-1}^\prime$
with the transformation,
\begin{equation}
{ {{\mbox {\boldmath $Q$}}}}_j= {\sum_k \,} \, {\Pi_{jk}} { {{\mbox {\boldmath $Q$}}}}_k^\prime
\label{normap}
\end{equation}
where, $ {\Pi_{jk}}$ are the elements of an orthogonal matrix
with the property
\begin{equation}
(\Pi^{-1})_{jk}=\Pi_{kj}, \, \, {\rm such \; that,} \, \,
{\sum_m \,} \, {\Pi_{mj}} {\Pi_{mk}}= \delta_{jk}
\end{equation}
The orthogonal matrix $ {\Pi_{jk}}$, which will henceforth be referred to as
the Rouse orthogonal matrix, diagonalises the Rouse matrix $A_{jk}$,
\begin{equation}
\sum_{j, \, k} \, \Pi_{ji} A_{jk} \Pi_{kl} = a_l \delta_{il}
\label{roueig}
\end{equation}
where, the Rouse eigenvalues $a_l$ are given by $a_l = 4 \sin^2 (l \pi /2 N) $.
The elements of the Rouse orthogonal matrix are given by the expression,
\begin{equation}
{\Pi_{jk}}=\sqrt{2 \over N} \, \sin
\Biggl({jk \pi \over N } \Biggr)
\end{equation}
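These relations are easily verified numerically. In the sketch below
(Python/NumPy; the standard tridiagonal form
$A_{jk} = 2\delta_{jk} - \delta_{j,k+1} - \delta_{j,k-1}$ of the Rouse matrix
is assumed), $\Pi$ is confirmed to be orthogonal and to diagonalise $A$ with
the eigenvalues quoted above:

```python
import numpy as np

N = 10                                  # beads; the chain has N-1 connector vectors
M = N - 1

# tridiagonal Rouse matrix (assumed standard form): 2 on the diagonal, -1 off it
A = 2.0*np.eye(M) - np.eye(M, k=1) - np.eye(M, k=-1)

# Rouse orthogonal matrix, Pi_jk = sqrt(2/N) sin(j k pi / N), j, k = 1, ..., N-1
j = np.arange(1, N)
Pi = np.sqrt(2.0/N)*np.sin(np.outer(j, j)*np.pi/N)

# Pi^T A Pi should be diagonal, with Rouse eigenvalues a_l = 4 sin^2(l pi / 2N)
a = 4.0*np.sin(j*np.pi/(2.0*N))**2
D = Pi.T @ A @ Pi
```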
The diffusion equation in terms of these normal coordinates admits a
solution for the configurational distribution function of the form~\cite{birdb}
\begin{equation}
\psi \, ({ {{\mbox {\boldmath $Q$}}}}_1^\prime,\ldots,{ {{\mbox {\boldmath $Q$}}}}_{N-1}^\prime)= \prod_{k=1}^{N-1}
\, \psi_k \, ({ {{\mbox {\boldmath $Q$}}}}_k^\prime)
\end{equation}
As a consequence, the diffusion equation becomes uncoupled and can be
simplified to $(N-1)$ diffusion equations, one for each of
the $\psi_k \, ({ {{\mbox {\boldmath $Q$}}}}_k^\prime)$. Since the
${ {{\mbox {\boldmath $Q$}}}}_k^\prime$ are independent of each other, all the covariances
$ {\bigl\langle} { {{\mbox {\boldmath $Q$}}}}_j^\prime { {{\mbox {\boldmath $Q$}}}}_k^\prime {\bigr\rangle}$ with $j \ne k$ are zero,
and only the $(N-1)$ variances $ {{\mbox {\boldmath $\sigma \! $}}}^\prime_{j} \equiv
{\bigl\langle} { {{\mbox {\boldmath $Q$}}}}_j^\prime { {{\mbox {\boldmath $Q$}}}}_j^\prime {\bigr\rangle}$ are non-zero. Evolution equations
for the variances $ {{\mbox {\boldmath $\sigma \! $}}}^\prime_{j} $ can then be derived from these
uncoupled diffusion equations with the help of a procedure
similar to that used for the derivation
of equation~(\ref{secmom}).
The stress tensor is given in terms of $ {{\mbox {\boldmath $\sigma \! $}}}^\prime_{j} $ by the expression,
\begin{equation}
{{\mbox {\boldmath $\tau$}}}^p= {\sum_j \, } \, {{\mbox {\boldmath $\tau$}}}_j^p
\label{tausum}
\end{equation}
where,
\begin{equation}
{{\mbox {\boldmath $\tau$}}}_j^p=- n_p H \, {{\mbox {\boldmath $\sigma \! $}}}_j^\prime + n_p k_{\rm B} T \, {{\mbox {\boldmath $1$}}}
\label{tauj}
\end{equation}
On substituting for $ {{\mbox {\boldmath $\sigma \! $}}}_j^\prime$ in terms of $ {{\mbox {\boldmath $\tau$}}}_j^p$
from equation~(\ref{tauj}) into the evolution equation for $ {{\mbox {\boldmath $\sigma \! $}}}_j^\prime$,
one obtains,
\begin{equation}
{{\mbox {\boldmath $\tau$}}}_j^p + \lambda_j \, {{\mbox {\boldmath $\tau$}}}^p_{j \, (1)}
= - n_p k_{\rm B} T \lambda_j \, {\dot {{\mbox {\boldmath $\gamma$}}}}
\label{rautau}
\end{equation}
where, the relaxation times $\lambda_j$ are given by
$\lambda_j = (\zeta /2 H \, a_j)$. Consequently,
each of the $ {{\mbox {\boldmath $\tau$}}}_j^p$ satisfy an equation identical to
equation~(\ref{hooktau}) for the polymer contribution to the stress tensor in
a Hookean dumbbell model. The Rouse model, therefore, leads to a constitutive
equation that is a multimode generalization of the convected Jeffreys or
Oldroyd B model.
It is clear from the above discussion that the process of
transforming to normal coordinates enables one to derive a closed form
expression for the stress tensor, and to
gain the insight that the Rouse chain with $N$ beads has
$(N-1)$ independent relaxation times, which describe the different relaxation
processes in the chain, from the entire chain down to successively smaller
sub-chains. It is straightforward to show that for large $N$,
the longest relaxation times $\lambda_j$ scale with chain length as $N^2$.
A few important transport property predictions which
show the limitations of the Rouse model
are considered briefly below. It is worth noting that since the Rouse
model does not include the effect of excluded volume, its predictions
are restricted to dilute solutions of polymers in theta solvents. This
restriction is in fact applicable to all the models of hydrodynamic interaction
treated here.
In steady simple shear flow, with $ {\mbox {\boldmath $\kappa$}}(t)$ given by
equation~(\ref{ssf1}), the three independent material
functions that characterise such flows are~\cite{birdb},
\begin{equation}
\eta_{p} = n_p k_{\rm B} T \, {\sum_j \, } \lambda_j \, ; \quad
\Psi_{1} = 2 n_p k_{\rm B} T \, {\sum_j \, } \lambda^2_j \, ; \quad
\Psi_{2} = 0
\label{etap}
\end{equation}
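A small numerical illustration (Python, in units with $n_p k_{\rm B} T = 1$
and $\lambda_H = \zeta/4H = 1$): summing the Rouse spectrum reproduces the
closed-form result $\sum_j \lambda_j = \lambda_H (N^2-1)/3$ and makes the
chain-length scaling of the viscometric functions explicit:

```python
import numpy as np

def rouse_times(N, lamH=1.0):
    # Rouse relaxation times lambda_j = zeta/(2 H a_j) = 2*lamH/a_j, lamH = zeta/4H
    j = np.arange(1, N)
    a = 4.0*np.sin(j*np.pi/(2.0*N))**2
    return 2.0*lamH/a

# viscometric functions per unit n_p*kB*T; the sum has the closed form (N^2 - 1)/3
eta_p = {N: rouse_times(N).sum() for N in (10, 100, 1000)}
Psi1 = {N: 2.0*(rouse_times(N)**2).sum() for N in (10, 100, 1000)}
# eta_p grows as N^2, so [eta]_0 ~ eta_p/(N n_p) ~ N, while Psi1 grows as N^4
```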
It is clear that the Rouse model accounts for the presence of viscoelasticity
through the prediction of a nonzero first normal stress difference in
simple shear flow. However, it predicts neither a nonvanishing
second normal stress difference nor any shear rate dependence of the
viscometric functions.
From the definition of intrinsic viscosity~(\ref{invis}) and the fact
that $\rho_p \sim N \, n_p$, it follows from equation~(\ref{etap})
that for the Rouse model,
$\lbrack \eta \rbrack_0 \sim N$. This is at variance with the
experimental results discussed earlier, and displayed in equation~(\ref{sc1}).
It is also straightforward to see that the Rouse model predicts that the
characteristic relaxation time scales as the square of the chain length,
$\lambda_p \sim N^2$.
In small amplitude oscillatory shear, $ {\mbox {\boldmath $\kappa$}}(t)$ is given by
equation~(\ref{usf3}), and expressions for the material functions
$G^\prime$ and $G^{\prime \prime}$ in terms of the relaxation times
$\lambda_j$ can be easily obtained~\cite{birdb}. In the intermediate
frequency range, where as discussed earlier, experimental results
indicate that both $G^{\prime}$ and $G^{\prime\prime}$
scale as $\omega^{2/3}$, the Rouse model predicts a scaling
$\omega^{1/2}$~\cite{larson}.
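This can be checked directly from the Rouse spectrum. Assuming the standard
Maxwell-mode superposition
$G^\prime(\omega) = n_p k_{\rm B} T \sum_j (\omega\lambda_j)^2/
\lbrack 1 + (\omega\lambda_j)^2 \rbrack$ referred to above, the sketch below
recovers an effective intermediate-frequency exponent close to $1/2$:

```python
import numpy as np

N = 1000
j = np.arange(1, N)
lam = 2.0/(4.0*np.sin(j*np.pi/(2.0*N))**2)     # Rouse spectrum, lambda_H = 1

def G_prime(w):
    # Maxwell-mode superposition, per unit n_p*kB*T (assumed standard form)
    x = w*lam
    return np.sum(x**2/(1.0 + x**2))

# frequencies chosen well inside 1/lambda_1 << omega << 1/lambda_{N-1}
w1, w2 = 1.0e-2, 1.0e-1
slope = np.log(G_prime(w2)/G_prime(w1))/np.log(w2/w1)
# slope is close to the Rouse intermediate-frequency exponent 1/2
```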
The translational diffusion coefficient $D$ for a bead-spring chain
at equilibrium can be obtained by finding the average friction
coefficient $Z$ for the entire chain in a quiescent solution,
and subsequently using the Nernst-Einstein equation,
$D=k_{\rm B} T \, Z^{-1}$~\cite{birdb}. It can be shown that
for the Rouse model $Z = \zeta \, N$, {\it i.e.,} the
total friction coefficient of the chain is a sum of the individual bead friction
coefficients. As a result, the Rouse model predicts that the diffusion
coefficient scales as the inverse of the molecular weight. This is not
observed in dilute solutions. Instead experiments indicate the scaling
depicted in equation~(\ref{sc2}).
The serious shortcomings of the Rouse model highlighted above have been
the motivation for the development of more refined molecular theories.
The scope of this chapter is restricted to reviewing recent advances
in the treatment of hydrodynamic interaction.
\section{HYDRODYNAMIC INTERACTION}
Hydrodynamic interaction, as pointed out earlier, is a long range
interaction between the beads
which arises because of the solvent's capacity to propagate one bead's
motion to another through perturbations in
its velocity field. It was first introduced into the framework of polymer
kinetic theory by Kirkwood and Riseman~\cite{kirkrise}. As we have seen in
the development of the general diffusion equation above, it is reasonably
straightforward to include hydrodynamic interaction in the framework of the
molecular theory. However, it renders the resultant equations analytically
intractable, and as a result various attempts have been made to solve
them approximately.
In this section, we review the various approximation schemes introduced
over the years. The primary test of an approximate model is of
course its capacity to
predict experimental observations. The accuracy of the approximation
however, can only be assessed by checking the proximity of the
approximate results to the exact numerical results obtained by
Brownian dynamics simulations. Finally, the usefulness of an approximation
depends on its computational intensity. The individual features and
deficiencies of the different approximations will be examined in the
light of these observations.
In the presence of hydrodynamic interaction, and with excluded volume
and internal viscosity neglected, a bead-spring chain with Hookean
springs has a configurational distribution function $\psi$ that must
satisfy the following simplified form of the diffusion equation~(\ref{diff}),
\begin{equation}
{\partial \psi \over \partial t} = - {\sum_j \, } {\partial \over \partial { {{\mbox {\boldmath $Q$}}}}_j}
\cdot \biggl( {\mbox {\boldmath $\kappa$}} \cdot { {{\mbox {\boldmath $Q$}}}}_j - {H \over \zeta}
\, {\sum_k \,} {\widetilde {{\mbox {\boldmath $A$}}} }_{jk} \cdot { {{\mbox {\boldmath $Q$}}}}_k \biggr) \psi
+ {k_B T \over \zeta} \, \sum_{j, \, k}
{\partial \over \partial { {{\mbox {\boldmath $Q$}}}}_j} \cdot {\widetilde {{\mbox {\boldmath $A$}}} }_{jk} \cdot
{\partial \psi \over \partial { {{\mbox {\boldmath $Q$}}}}_k}
\label{hidiff}
\end{equation}
while the second moment equation~(\ref{secmom}) assumes the form,
\begin{equation}
{d \over dt} {\bigl\langle} { {{\mbox {\boldmath $Q$}}}}_j{ {{\mbox {\boldmath $Q$}}}}_k {\bigr\rangle} =
{\mbox {\boldmath $\kappa$}} \cdot {\bigl\langle} { {{\mbox {\boldmath $Q$}}}}_j{ {{\mbox {\boldmath $Q$}}}}_k {\bigr\rangle}+ {\bigl\langle} { {{\mbox {\boldmath $Q$}}}}_j
{ {{\mbox {\boldmath $Q$}}}}_k {\bigr\rangle} \cdot {\mbox {\boldmath $\kappa$}}^\dagger
+ {2 k_B T \over \zeta}\, {\bigl\langle} {\widetilde {{\mbox {\boldmath $A$}}} }_{jk} {\bigr\rangle}
- {H \over \zeta} \, {\sum_m \,} \biggl\lbrack \, {\bigl\langle} { {{\mbox {\boldmath $Q$}}}}_j{ {{\mbox {\boldmath $Q$}}}}_m \cdot
{\widetilde {{\mbox {\boldmath $A$}}} }_{mk} {\bigr\rangle} + {\bigl\langle} {\widetilde {{\mbox {\boldmath $A$}}} }_{jm} \cdot { {{\mbox {\boldmath $Q$}}}}_m{ {{\mbox {\boldmath $Q$}}}}_k {\bigr\rangle} \,
\biggr\rbrack
\label{hisecmom}
\end{equation}
Equation~(\ref{hisecmom}) is not a
closed equation for the second moments since it involves more complicated
moments on the right hand side. This is the central problem of
all molecular theories which attempt to predict the rheological properties of
dilute polymer solutions and that incorporate hydrodynamic
interaction. The different approximate treatments of hydrodynamic interaction,
which are discussed roughly chronologically below,
basically reduce to finding a suitable
closure approximation for the second moment equation.
\subsection{ The Zimm model}
The Zimm model was the first attempt at improving the Rouse model by
introducing the effect of hydrodynamic interaction in a
preaveraged or equilibrium-averaged form.
The preaveraging approximation has been very frequently used
in polymer literature since its introduction by Kirkwood and
Riseman~\cite{kirkrise}. The approximation consists of evaluating the
average of the hydrodynamic tensor with the equilibrium
distribution function~(\ref{equidis}), and
replacing the hydrodynamic interaction tensor $ {\widetilde {{\mbox {\boldmath $A$}}} }_{jk}$, wherever
it occurs in the governing equations,
with its equilibrium average ${\widetilde A}_{jk}$.
(Note that the incorporation of the effect of hydrodynamic interaction
does not alter the equilibrium distribution function, which is still given
by~(\ref{equidis}) for bead-spring chains with Hookean springs.)
The matrix ${\widetilde A}_{jk}$ is called the modified Rouse matrix,
and is given by,
\begin{equation}
{\widetilde A}_{jk}=A_{jk}+\sqrt{2} \, h^*
\Biggl({2 \over \sqrt{\vert j-k \vert}}-{1 \over \sqrt{\vert j-k-1 \vert}}
-{1 \over \sqrt{\vert j-k+1 \vert}} \Biggr)
\end{equation}
where, $h^*=a {\sqrt {(H / \pi k_B T)}}$ is the hydrodynamic interaction
parameter. The hydrodynamic interaction parameter is approximately equal
to the ratio of the bead radius to the equilibrium root mean square
length of a single spring of the bead-spring chain.
This implies that $h^* < 0.5$, since the beads cannot overlap.
Typical values used for $h^*$ are in the range
$0.1 \le h^* \le 0.3$~\cite{ottca2}.
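As a numerical sketch (Python; the tridiagonal Rouse matrix and the convention
that terms with a vanishing denominator are dropped are assumed, and the value
$h^* = 0.2424$ used below, which reappears later in this section, is here
merely an illustrative choice), diagonalising ${\widetilde A}_{jk}$ shows the
longest relaxation time growing roughly as $N^{3/2}$, in contrast to the
Rouse $N^2$ behaviour:

```python
import numpy as np

def invsqrt(m):
    # 1/sqrt(|m|), with the singular m = 0 entries set to zero
    # (analogous to the H(sigma_hat_jj)/0 = 0 convention used later)
    m = np.abs(m).astype(float)
    out = np.zeros_like(m)
    np.divide(1.0, np.sqrt(m), out=out, where=m > 0)
    return out

def zimm_matrix(N, hstar):
    # modified Rouse matrix A~_jk; the tridiagonal Rouse form is assumed
    M = N - 1
    d = np.subtract.outer(np.arange(M), np.arange(M))
    A = 2.0*np.eye(M) - np.eye(M, k=1) - np.eye(M, k=-1)
    return A + np.sqrt(2.0)*hstar*(2.0*invsqrt(d) - invsqrt(d-1) - invsqrt(d+1))

hstar = 0.2424        # illustrative choice of hydrodynamic interaction parameter
lam1 = {N: 1.0/np.linalg.eigvalsh(zimm_matrix(N, hstar)).min()
        for N in (200, 800)}          # longest time in units zeta/(2H) = 1

# effective exponent; tends to the Zimm value 3/2 for long chains
# (finite-N corrections make it slightly larger at these chain lengths)
exponent = np.log(lam1[800]/lam1[200])/np.log(4.0)
```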
By including the hydrodynamic interaction in an averaged form,
the diffusion equation remains linear in the connector vectors,
and consequently is satisfied by a Gaussian distribution~(\ref{gauss})
as in the Rouse case. However, the covariance tensors $ {\bs_{jk}}$ are
now governed by the set of differential equations~(\ref{rsm})
with the Rouse matrix $A_{jk}$ replaced with the modified Rouse
matrix ${\widetilde A}_{jk}$. Note that this modified second moment equation
is also a closed set of equations for the second moments.
As in the Rouse case, it is possible to simplify the solution of the
Zimm model by carrying out a diagonalisation procedure. This is
achieved by mapping the connector vectors to normal coordinates, as
in~(\ref{normap}), but in this case the Zimm orthogonal matrix
$\Pi_{jk}$, which diagonalises the modified Rouse matrix,
\begin{equation}
\sum_{j, \, k} \, \Pi_{ji} {\widetilde A}_{jk} \Pi_{kl} =
{\widetilde a}_l \delta_{il}
\label{zimeig}
\end{equation}
must be found numerically for $N > 4$. Here, ${\widetilde a}_l$ are
the so-called Zimm eigenvalues. The result of this procedure is to
render the diffusion equation
solvable by the method of separation of variables.
Thus, as in the Rouse case, only the $(N-1)$ transformed coordinate
variances $ {{\mbox {\boldmath $\sigma \! $}}}^\prime_{j} $ are non-zero, and differential equations governing
these variances can be derived by manipulating the uncoupled
diffusion equations.
The diagonalisation procedure enables the polymer contribution to
the stress tensor $ {{\mbox {\boldmath $\tau$}}}^p$ in the Zimm model to be expressed
as a sum of partial stresses $ {{\mbox {\boldmath $\tau$}}}_j^p$ as in equation~(\ref{tausum}),
but the $ {{\mbox {\boldmath $\tau$}}}_j^p$ now satisfy equation~(\ref{rautau})
with the `Rouse' relaxation times $\lambda_j$ replaced with `Zimm'
relaxation times ${\widetilde \lambda}_j$. The Zimm relaxation times
are defined by ${\widetilde \lambda}_j = (\zeta /2 H \, {\widetilde a}_j)$.
From the discussion above, it is clear that the Zimm model differs from
the Rouse model only in the spectrum of relaxation times. As we shall
see shortly, this leads to a significant improvement in the prediction
of linear viscoelastic properties and the scaling of
transport properties with molecular weight in theta solvents.
The Zimm model therefore establishes
unequivocally the importance of the microscopic phenomenon of
hydrodynamic interaction. On the other hand, it does not lead to any
improvement in the prediction of nonlinear properties, and consequently
subsequent treatments of hydrodynamic interaction have concentrated
on improving this aspect of the Zimm model.
By considering the long chain limit of the Zimm model,
{\it i.e.,} $N \to \infty$, it is possible to discuss the universal
properties predicted by the model. The various power law dependences
of transport properties on molecular weight, characterised by universal
exponents, and universal ratios formed from the prefactors of these
dependences can be obtained. These predictions are
ideal for comparison with experimental data on high molecular weight
polymer solutions since they are parameter free. We shall discuss some
universal exponents predicted by the Zimm model below, while universal
ratios are discussed later in the chapter.
As mentioned above, the first noticeable change upon
the introduction of hydrodynamic interaction is the change
in the relaxation spectrum. In the long chain limit, the longest relaxation
times ${\widetilde \lambda}_j$ scale with
chain length as $N^{3/2}$~\cite{ottbook},
whereas we had found earlier that the chain length dependence of the
longest relaxation times in the Rouse model was $N^2$.
In steady simple shear flow, the Zimm model, like the Rouse model, fails
to predict the experimentally observed occurrence of a non-zero
second normal stress difference and the experimentally observed
shear rate dependence of the
viscometric functions. It does, however, lead to an improved prediction of the
scaling of the zero shear rate intrinsic viscosity with molecular
weight, $\lbrack \eta \rbrack_0 \sim N^{1/2}$. This prediction is in
agreement with experimental results for the Mark-Houwink exponent
in theta solvents (see equation~(\ref{sc1})). As with the longest
relaxation times, the characteristic relaxation time
$\lambda_p \sim N^{3/2}$.
In small amplitude oscillatory shear, the Zimm model predicts that
the material functions $G^\prime$ and $G^{\prime \prime}$
scale with frequency as $\omega^{2/3}$ in the intermediate
frequency range. This is in exceedingly good agreement with experimental
results~\cite{birdb,larson}.
The translational diffusion coefficient $D$ for chainlike molecules
at equilibrium, with preaveraged hydrodynamic interaction,
was originally obtained by Kirkwood~\cite{kirkrise}. Subsequently,
several workers obtained a correction to the Kirkwood diffusion
coefficient for the Zimm model~\cite{otttd}. The exact results differ
by less than 2\% from the Kirkwood value for all values of the chain
length and $h^*$. Interestingly, three different approaches
to obtaining the diffusion coefficient, namely, the Nernst-Einstein equation,
the calculation of the mean-square displacement caused by Brownian forces, and
the study of the time evolution of concentration gradients, lead to
identical expressions for the diffusion coefficient~\cite{otttd}. In
the limit of very long chains, it can be shown that $D \sim N^{- 1/2}$.
The Zimm model therefore gives the correct dependence of translational
diffusivity on molecular weight in theta solvents.
The Zimm result for the translational diffusivity has been
traditionally interpreted to mean that the polymer coil in
a theta solvent behaves like a rigid
sphere, with radius equal to the root mean square end-to-end distance.
This follows from the fact that the diffusion coefficient for a
rigid sphere scales as the inverse of the radius of the sphere, and
in a theta solvent,
$ {\bigl\langle} \, r^2 \, {\bigr\rangle}_{\rm eq}$ scales with chain length as $N$.
The solvent inside the coil is believed to be dragged along with the
coil, and the innermost beads of the bead-spring chain
are considered to be shielded from the velocity field due to the presence of
hydrodynamic interaction~\cite{yamakawa,larson}.
This intuitive notion has been used to point out the difference between
the Zimm and the Rouse model, where all the $N$ beads of the polymer
chain are considered to be exposed to the applied velocity field.
Recently, by explicitly calculating the velocity field inside a
polymer coil in the Zimm model, {\"{O}ttinger $\!\!$ }~\cite{ottvel} has shown
that the solvent motion inside a polymer coil is different
from that of a rigid sphere throughout the polymer coil, and that
shielding from the velocity field occurs only to a certain extent.
\subsection{ The consistent averaging approximation}
The first predictions of shear thinning were obtained when
hydrodynamic interaction was treated in a more precise manner than
that of preaveraging the hydrodynamic interaction tensor.
In order to make the diffusion equation~(\ref{hidiff}) linear in the
connector vectors, as pointed out earlier,
it is necessary to average the hydrodynamic interaction
tensor. However, it is not necessary to
preaverage the hydrodynamic interaction tensor with the equilibrium
distribution. Instead, the average can be carried out with
the non-equilibrium distribution function~(\ref{gauss}). The linearity
of the diffusion equation ensures that its solution is a
Gaussian distribution. {\"{O}ttinger $\!\!$ }~\cite{ottca1,ottca2} suggested that the
hydrodynamic interaction tensor occurring in the diffusion equation
be replaced with its non-equilibrium average. Since it is necessary to know
the averaged hydrodynamic interaction tensor in order to find the
non-equilibrium distribution function, both the averaged
hydrodynamic interaction tensor and the non-equilibrium distribution
function must be obtained in a {\it self-consistent} manner.
Several years ago, Fixman~\cite{fixman} introduced an iterative
scheme (beginning with the equilibrium distribution function)
for refining the distribution function with which to carry out
the average of the hydrodynamic interaction tensor. The
self-consistent scheme of {\"{O}ttinger $\!\!$ } is recovered if the iterative procedure
is repeated an infinite number of times. However, Fixman carried out the
iteration only up to one order higher than the preaveraging stage.
The average of the hydrodynamic interaction tensor evaluated with
the Gaussian distribution~(\ref{gauss})
is an $(N-1) \times (N-1)$ matrix with tensor components, $ {\bAb_{jk}}$, defined by,
\begin{equation}
{\bAb_{jk}} = A_{jk}\, {{\mbox {\boldmath $1$}}} +
\sqrt{2} h^* \, \Biggl\lbrack
\, { {{\mbox {\boldmath $H$}}}( {\hat \bs}_{j,k})\over\sqrt{\vert j-k \vert}}
+ { {{\mbox {\boldmath $H$}}}( {\hat \bs}_{j+1,k+1})\over\sqrt{\vert j-k \vert}}
- { {{\mbox {\boldmath $H$}}}( {\hat \bs}_{j,k+1})\over\sqrt{\vert j-k-1 \vert}}
- { {{\mbox {\boldmath $H$}}}( {\hat \bs}_{j+1,k})\over\sqrt{\vert j-k+1 \vert}}
\, \Biggr\rbrack
\label{Abar}
\end{equation}
where the tensors $ {\hat \bs_{\mu \nu}}$ are given by,
\begin{equation}
{\hat \bs_{\mu \nu}}= {\hat \bs_{\mu \nu}}^\dagger = {\hat \bs}_{\nu \mu} = {1 \over \vert \mu - \nu \vert }\,
{H \over k_B T }
\,\, \sum_{j,k \,= \,\min(\mu,\nu)}^{ \max(\mu,\nu)-1} \, {\bs_{jk}}
\end{equation}
and the function of the second moments, ${ {{\mbox {\boldmath $H$}}} }( {{\mbox {\boldmath $\sigma \! $}}})$ is,
\begin{equation}
{{\mbox {\boldmath $H$}}}( {{\mbox {\boldmath $\sigma \! $}}})={3 \over 2 (2 \pi)^{3/2}} \int d { {{\mbox {\boldmath $k$}}}} \,{1 \over k^2}\,
\biggl( {{\mbox {\boldmath $1$}}} - {{ {{\mbox {\boldmath $k$}}}}{ {{\mbox {\boldmath $k$}}}} \over k^2} \biggr) \,\exp(- {1 \over 2} { {{\mbox {\boldmath $k$}}}}
\cdot {{\mbox {\boldmath $\sigma \! $}}} \cdot { {{\mbox {\boldmath $k$}}}})
\label{hfunc}
\end{equation}
Note that the convention $ {{\mbox {\boldmath $H$}}}( {\hat \bs}_{jj})/0=0$ has been adopted in
equation~(\ref{Abar}) above.
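For an isotropic argument, $ {{\mbox {\boldmath $\sigma \! $}}} = s \, {{\mbox {\boldmath $1$}}}$, the angular integration in
equation~(\ref{hfunc}) can be performed analytically (the angular average of
$( {{\mbox {\boldmath $1$}}} - { {{\mbox {\boldmath $k$}}}}{ {{\mbox {\boldmath $k$}}}}/k^2)$ is $(2/3)\, {{\mbox {\boldmath $1$}}}$), leaving a Gaussian radial
integral with the closed form $ {{\mbox {\boldmath $H$}}}(s\, {{\mbox {\boldmath $1$}}}) = s^{-1/2}\, {{\mbox {\boldmath $1$}}}$. In particular,
$ {{\mbox {\boldmath $H$}}}( {{\mbox {\boldmath $1$}}}) = {{\mbox {\boldmath $1$}}}$ at equilibrium, consistent with $ {\bAb_{jk}}$ reducing to the
modified Rouse matrix there. A quick numerical check of the radial integral
(Python sketch):

```python
import numpy as np

def H_iso(s, kmax=50.0, n=200001):
    # radial integral of equation (hfunc) for sigma = s*1: the 1/k^2 weight
    # cancels the k^2 of the volume element, and the angular average of
    # (1 - kk/k^2) contributes a factor (2/3)*(4*pi)
    k = np.linspace(0.0, kmax, n)
    f = np.exp(-0.5*s*k**2)
    radial = np.sum(0.5*(f[1:] + f[:-1])*np.diff(k))   # trapezoidal rule
    return 3.0/(2.0*(2.0*np.pi)**1.5) * (2.0/3.0) * 4.0*np.pi * radial

# closed form: H(s*1) = (1/sqrt(s)) * 1
```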
The self-consistent closure approximation therefore consists of replacing
the hydrodynamic interaction tensor $ {\widetilde {{\mbox {\boldmath $A$}}} }_{jk}$ in
equation~(\ref{hisecmom}) with its non-equilibrium
average $ {\bAb_{jk}}$. As in the earlier approximations, this leads to
a system of $(N-1)^2$
coupled ordinary differential equations for the components of the covariance
matrix $ {\bs_{jk}}$. Their solution permits the evaluation of the
stress tensor through the Kramers expression~(\ref{rkram}), and as a
consequence all the relevant material functions.
Viscometric functions in steady simple shear flows
were obtained by {\"{O}ttinger $\!\!$ }~\cite{ottca1} for chains with $N \leq 25$ beads,
while material functions in start-up of steady shear flow, cessation
of steady shear flow, and stress relaxation after step-strain were
obtained by Wedgewood and {\"{O}ttinger $\!\!$ }~\cite{wedgeott} for chains with
$N \leq 15$ beads. The latter authors also include consistently-averaged
FENE springs in their model.
Shear rate dependent viscometric functions, and a nonzero
{\it positive} second normal stress difference are predicted by the
self-consistent averaging approximation; a marked improvement over
the predictions of the Zimm model. Both the reduced
viscosity and the reduced first normal stress difference initially
decrease with increasing shear rate. However, for long enough chains,
they begin to rise monotonically at higher values of the reduced shear
rate $\beta$. This rise is a consequence of the weakening of
hydrodynamic interaction in strong flows due to an increase in
the separation between the beads of the chain.
With increasing shear rate, the material functions
tend to the shear rate independent Rouse values,
which (for long enough chains), are higher than the zero shear rate
consistently-averaged values. The prediction of
shear thickening behavior is not in agreement with the shear thinning that is
commonly observed experimentally. However, as mentioned earlier,
some experiments with very high molecular weight systems seem to suggest
the existence of shear thinning followed by shear thickening followed
again by shear thinning as the shear rate is increased. While only
shear thickening at high shear rates is predicted with Hookean springs, the
inclusion of consistently-averaged FENE springs in the model leads to
predictions which are qualitatively in agreement with these observations,
with the FENE force becoming responsible for the shear thinning at very
high shear rates~\cite{wedgeott,kish}.
The means of examining the accuracy of various approximate treatments of
hydrodynamic interaction was established when the problem was solved
exactly with the help of Brownian dynamics simulations with full
hydrodynamic interaction included~\cite{zylottga,zylkaga}. These
simulations reveal that while the
predictions of the shear rate dependence of the viscosity and
first normal stress difference by the self-consistent averaging procedure
are in qualitative agreement with the Brownian dynamics simulations, they
do not agree quantitatively. Further, in contrast to the
consistent-averaging prediction, at low shear rates, a
negative value for the second normal stress difference is obtained.
As noted earlier, the sign of the second normal stress difference
has not been conclusively established~\cite{birdott}.
The computational intensity of the consistent-averaging approximation
leads to an upper bound on the length of chain that can be examined. As a
result, it is not possible to discuss the universal shear rate
dependence of the viscometric functions predicted by it. On the other hand,
it is possible to come to certain general conclusions regarding the
nature of the stress tensor in the long chain limit, and to predict
the zero shear rate limit of certain universal ratios~\cite{ottca1}.
Thus, it is possible to show the important result that the polymer
contribution to the stress tensor depends only on a
length scale and a time scale, and not on the strength of the hydrodynamic
interaction parameter $h^*$. In the long chain limit,
$h^*$ can be absorbed into the basic time constant, and it does not occur
in any of the non-dimensional ratios. Indeed this is also true of the
finite extensibility parameter $b$, which can also be shown to have
no influence on the long chain rheological properties~\cite{ottca2}.
The long chain limit of the consistent-averaging
approximation is therefore a parameter free model.
It is possible to obtain an explicit representation of the modified
Kramers matrix for infinitely long chains by introducing continuous
variables in place of discrete indices~\cite{ottca1}. This enables the
analytical calculation of various universal ratios predicted by the
consistent-averaging approximation. These predictions are discussed
later in this chapter. However, two results are worth highlighting here.
Firstly, it can be shown explicitly that the leading order corrections
to the large $N$ limit of the various universal ratios are of
order $(1/\surd N)$, and secondly, there is a special value
of $h^*=0.2424...,$ at which the leading order corrections are
of order $(1/ N)$. These results have proven
to be very useful for subsequent numerical exploration of the long
chain limit in more accurate models of the hydrodynamic interaction.
Short chains with consistently-averaged hydrodynamic interaction,
as noted earlier, do not show shear thickening behavior;
this aspect is revealed only with increasing chain length. Furthermore,
it is not clear, for the chain lengths that can be
examined, whether the minima in the viscosity and first normal stress
curves continue to exist in the long chain limit~\cite{ottca1}.
The examination of long chain behavior is therefore important since
aspects of polymer solution behavior might be revealed that are
otherwise hidden when only short chains are considered.
The introduction of the {\it decoupling approximation}
by Magda, Larson and Mackay~\cite{magda} and Kishbaugh and McHugh~\cite{kish}
made the examination of the shear rate dependence of long chains feasible. The
decoupling approximation retains the accuracy of the self-consistent
averaging procedure, but is much more computationally efficient.
\subsection{The decoupling approximation}
The decoupling approximation introduced by Magda {\it et al.}~\cite{magda}
and Kishbaugh and McHugh~\cite{kish} (who use FENE springs in place of
the Hookean springs of Magda {\it et al.})
consists of extending the `diagonalise and decouple' procedure
of the Rouse and Zimm theories to the case of the self consistently
averaged theory. They first transform the connector vectors $ {{\mbox {\boldmath $Q$}}}_j$
to a new set of coordinates $ {{\mbox {\boldmath $Q$}}}_j^\prime$
using the time-invariant Rouse orthogonal matrix $ {\Pi_{jk}}$.
(Kishbaugh and McHugh also use the Zimm orthogonal matrix).
The same orthogonal matrix $ {\Pi_{jk}}$ is then assumed to
diagonalise the matrix of {\it tensor} components $ {\bAb_{jk}}$.
While the process of diagonalisation was exact in the Rouse and Zimm theories,
it is an approximation in the case of the decoupling approximation.
It implies that even in the self consistently averaged theory the
diffusion equation can be solved by the method of separation of variables,
and only the $(N-1)$ transformed coordinate variances $ {{\mbox {\boldmath $\sigma \! $}}}^\prime_{j} $
are non-zero. The differential equations governing these variances
can be derived from the uncoupled diffusion equations and solved
numerically. The appropriate material functions are then obtained
using the Kramers expression in terms of the transformed coordinates, namely,
equations~(\ref{tausum}) and~(\ref{tauj}).
The decrease in the number of differential equations to be solved,
from $(N-1)^2$ for the covariances $ {\bs_{jk}}$ to $(N-1)$ for the variances
$ {{\mbox {\boldmath $\sigma \! $}}}^\prime_j$, is suggested by Kishbaugh and McHugh as the
reason for the great reduction in computational time achieved by the
decoupling approximation. Prakash and {\" O}ttinger~\cite{prakott}
discuss the reasons why this argument is incomplete, and point out
the inconsistencies in the decoupling procedure. Furthermore,
since the results are
only as accurate as the consistent averaging approximation, the decoupling
approximation is not superior to the consistent averaging method.
However, these papers are important
since they were the first to propose a means by which a reduction in
computational intensity could be achieved without any significant
sacrifice in accuracy. Further, the persistence of the
minimum in the viscosity and first normal stress curves even for
very long chains, and the necessity of including
FENE springs in order to generate predictions in qualitative agreement
with experimental observations in high molecular weight systems, is clearly
elucidated in these papers.
\subsection{The Gaussian approximation}
The closure problem for the second moment equation is solved in the
preaveraging assumption of Zimm, and in the self consistent averaging
method of {\" O}ttinger, by replacing the tensor $ {\widetilde {{\mbox {\boldmath $A$}}} }_{jk}$ with an average.
As a result, fluctuations in the hydrodynamic interaction are neglected.
The Gaussian approximation~\cite{ottga,zylottga,wedgega,zylkaga}
makes no assumption with regard to the hydrodynamic interaction,
but assumes that the solution of the diffusion equation~(\ref{hidiff})
may be approximated by a Gaussian distribution~(\ref{gauss}).
Since all the complicated averages on the right hand side of the
second moment equation~(\ref{hisecmom}) can be reduced to functions
of the second moment with the help of the Gaussian distribution,
this approximation makes it a closed equation for the second moments.
The evolution equation for the covariances $ {\bs_{jk}}$ is given by,
\begin{eqnarray}
{d \over dt} {\bs_{jk}} &=& {\mbox {\boldmath $\kappa$}} \cdot {\bs_{jk}}
+ {\bs_{jk}} \cdot {\mbox {\boldmath $\kappa$}}^T
+{2 k_B T \over \zeta}\,\, {\bAb_{jk}} - {H \over \zeta} \, {\sum_m \,} \, \lbrack
{{\mbox {\boldmath $\sigma \! $}}}_{jm} \cdot {\overline {{\mbox {\boldmath $A$}}} }_{mk} + {\overline {{\mbox {\boldmath $A$}}} }_{jm} \cdot {{\mbox {\boldmath $\sigma \! $}}}_{mk} \rbrack
\nonumber \\
&-& {H \over \zeta}\,{H \over k_B T}\, \sum_{m, \, l, \, p} \, \lbrack
{{\mbox {\boldmath $\sigma \! $}}}_{jl} \cdot {\bf \Gamma}_{lp,mk} : {{\mbox {\boldmath $\sigma \! $}}}_{pm} + {{\mbox {\boldmath $\sigma \! $}}}_{mp} : {\bf \Gamma}_{lp,jm} \cdot
{{\mbox {\boldmath $\sigma \! $}}}_{lk} \rbrack
\label{gasecmom}
\end{eqnarray}
where, the $(N-1)^2 \times (N-1)^2$ matrix with fourth rank
tensor components, $ {\bf \Gamma}_{lp,jk}$, is defined by,
\begin{eqnarray}
{\bf \Gamma}_{lp,jk}&=&{3 \sqrt{2} \, h^* \over 4} \,\Biggl\lbrack \,
{\theta(j,l,p,k)\, {{\mbox {\boldmath $K$}}}( {\hat \bs}_{j,k})+\theta(j+1,l,p,k+1)\, {{\mbox {\boldmath $K$}}}( {\hat \bs}_{j+1,k+1})
\over \sqrt{{\vert j-k \vert}^3}} \nonumber \\
&-& {\theta(j,l,p,k+1)\, {{\mbox {\boldmath $K$}}}( {\hat \bs}_{j,k+1})\over\sqrt{{\vert j-k-1 \vert}^3}}
-{\theta(j+1,l,p,k) \, {{\mbox {\boldmath $K$}}}( {\hat \bs}_{j+1,k})\over\sqrt{{\vert j-k+1 \vert}^3}}
\, \Biggr\rbrack
\label{Gam}
\end{eqnarray}
while the function $ {{\mbox {\boldmath $K$}}}( {{\mbox {\boldmath $\sigma \! $}}})$ is defined by the equation,
\begin{equation}
{{\mbox {\boldmath $K$}}}( {{\mbox {\boldmath $\sigma \! $}}})={-2 \over (2 \pi)^{3/2}} \int d { {{\mbox {\boldmath $k$}}}} \,{1 \over k^2} \,{ {{\mbox {\boldmath $k$}}}}\,
\biggl( {{\mbox {\boldmath $1$}}} - {{ {{\mbox {\boldmath $k$}}}}{ {{\mbox {\boldmath $k$}}}} \over k^2} \biggr) \,{ {{\mbox {\boldmath $k$}}}} \,\,
\exp( - { 1 \over 2} \, { {{\mbox {\boldmath $k$}}}} \cdot {{\mbox {\boldmath $\sigma \! $}}} \cdot { {{\mbox {\boldmath $k$}}}})
\label{kfunc}
\end{equation}
The function $\theta(j,l,p,k)$ is unity if $l$ and $p$ lie
between $j$ and $k$, and zero otherwise,
\begin{equation}
\theta(j,l,p,k)=\cases{1& if $j \leq l,p < k$ \quad or\quad
$k \leq l,p < j$\cr
\noalign{\vskip3pt}
0& otherwise\cr}
\end{equation}
The convention $ {{\mbox {\boldmath $K$}}}( {\hat \bs}_{jj})/0=0$ has been adopted in equation~(\ref{Gam}).
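The selector $\theta(j,l,p,k)$ is purely combinatorial and can be written down in a few lines. A minimal Python sketch (illustrative only; the function name is ours):

```python
def theta(j, l, p, k):
    """theta(j, l, p, k): 1 if both l and p lie in [j, k) (or in [k, j)
    when k < j), 0 otherwise; the selector appearing in Gamma_{lp,jk}."""
    if j <= l < k and j <= p < k:
        return 1
    if k <= l < j and k <= p < j:
        return 1
    return 0
```

For instance, `theta(2, 3, 4, 5)` is 1, while `theta(2, 5, 3, 5)` is 0, since the upper index is excluded.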
Both the hydrodynamic interaction functions $ {{\mbox {\boldmath $H$}}}( {{\mbox {\boldmath $\sigma \! $}}})$ and $ {{\mbox {\boldmath $K$}}}( {{\mbox {\boldmath $\sigma \! $}}})$
can be evaluated analytically in terms of elliptic integrals.
The properties of these functions are discussed in great detail in the
papers by {\"{O}ttinger $\!\!$ } and coworkers~\cite{ottca2,ottga,ottrab,zylkaga,zylottrg}.
All the approximations discussed earlier (with the exception of
the decoupling approximation) can be derived by a process of
successive simplification of the explicit results for the Gaussian
approximation given above. The equations that govern the self
consistently averaged theory can be obtained by dropping the last term
in equation~(\ref{gasecmom}), which accounts for the presence of
fluctuations. Replacing $ {{\mbox {\boldmath $H$}}}( {{\mbox {\boldmath $\sigma \! $}}})$ by $ {{\mbox {\boldmath $1$}}} $ in these truncated equations
leads to the governing equations of the Zimm model, while setting
$h^* = 0$ leads to the Rouse model.
Material functions predicted by the Gaussian approximation
in any arbitrary homogeneous flow may be obtained
by solving the system of $(N-1)^2$ coupled ordinary differential equations
for the components of the covariance matrix $ {\bs_{jk}}$~(\ref{gasecmom}).
Small amplitude oscillatory shear flows and steady shear flow in the
limit of zero shear rate have been examined by {\"{O}ttinger $\!\!$ }~\cite{ottga} for
chains with $N \leq 30$ beads, while Zylka~\cite{zylkaga} has
obtained the material functions in steady shear flow for chains
with $N \leq 15$ beads and compared his results with those
of Brownian dynamics simulations (the comparison was made for chains with
$N=12$ beads).
The curves predicted by the Gaussian approximation for the storage and
loss modulus, $G^\prime$ and $G^{\prime\prime}$, as a function of the
frequency $\omega$, are nearly indistinguishable from the
predictions of the Zimm theory, suggesting that the Zimm approximation
is quite adequate for the prediction of linear visco-elastic properties.
There is, however, a significant difference in the prediction of the
relaxation spectrum. While the Zimm model predicts a set of $(N-1)$
relaxation times with equal relaxation weights, the Gaussian approximation
predicts a much larger set of relaxation times than the number of springs
in the chain, with relaxation weights that are different and
dependent on the strength of the hydrodynamic interaction~\cite{ottzylspk}.
These results indicate that entirely different relaxation spectra can
lead to similar curves for $G^\prime$ and $G^{\prime\prime}$, and call into
question the common practice of obtaining the relaxation spectrum
from experimentally measured curves for $G^\prime$ and
$G^{\prime\prime}$ (see also the discussion in~\cite{prakcamb}).
The zero shear rate viscosity and first normal stress difference
predicted by the Gaussian approximation are found to be smaller than the
Zimm predictions for all chain lengths. By extrapolating finite chain
length results to the infinite chain limit, {\"{O}ttinger $\!\!$ } has shown that the
reduced values are 72\% -- 73\% of the Zimm predictions, independent of the
strength of the {hydrodynamic interaction } parameter. Other universal ratios predicted by the Gaussian
approximation in the limit of zero shear rate are discussed later
in the chapter.
A comparison of the predicted shear rate dependence of
material functions in simple shear flow with the results of Brownian dynamics
simulations reveals that of all the approximate treatments of
hydrodynamic interaction introduced so far, the Gaussian approximation
is the most accurate~\cite{zylkaga}. Indeed, at low shear rates, the
{\it negative} second normal stress difference predicted by the
Gaussian approximation is in accordance with the simulation results.
In spite of the accuracy of the Gaussian approximation, its main drawback
is its computational intensity, which renders it difficult to
examine chains with large values of $N$. Apart from the need to
examine long chains for the reason cited earlier, it is also necessary
to do so in order to obtain the universal predictions of the model.
A recently introduced approximation which
enables the evaluation of universal viscometric functions
in shear flow is discussed in the section below.
Before doing so, however, we first discuss the significant difference
that a refined treatment of hydrodynamic interaction makes to the prediction
of translational diffusivity in dilute polymer solutions.
The correct prediction of the scaling of the diffusion coefficient
with molecular weight upon introduction of pre-averaged hydrodynamic
interaction in the Zimm model demonstrates the significant influence that
{hydrodynamic interaction } has on the translational diffusivity of the macromolecule. While the
pre-averaging assumption appears adequate at equilibrium, it predicts a
shear rate independent {\it scalar} diffusivity even in the presence of a
flow field. On the other hand, both the improved treatments of hydrodynamic
interaction, namely, consistent averaging and the Gaussian
approximation, reveal that the translational diffusivity of a Hookean
dumbbell in a flowing homogeneous solution is described by an
anisotropic diffusion tensor which is flow rate
dependent~\cite{otttdca,otttdca2,otttdga}.
Indeed, unlike in the Zimm case, the three different approaches
mentioned earlier for calculating the translational diffusivity do not
lead to identical expressions for the diffusion
tensor~\cite{otttdca,otttdca2}. Insight into the
origin of the anisotropic and flow rate dependent behavior of the
translational diffusivity is obtained when the link between the
polymer diffusivity and the shape of the polymer molecule
in flow~\cite{hoag} is explored~\cite{prakmas1,prakmas2}. It is found that
the solvent flow field alters the distribution of mass about the centre of
the dumbbell. As a consequence, the dumbbell experiences an
average friction that is anisotropic and flow rate dependent. The discussion of
the influence of improved treatments of hydrodynamic interaction on the
translational diffusivity has so far been confined to the Hookean dumbbell
model. This is because the concept of the center of resistance, which is
very useful for simplifying calculations for bead-spring chains
in the Zimm case, cannot be employed in these improved
treatments~\cite{otttdca}.
\subsection{The twofold normal approximation}
The twofold normal approximation borrows ideas from the decoupling
approximation of Magda {\it et al.}~\cite{magda} and Kishbaugh and
McHugh~\cite{kish} in order to reduce the computational intensity of
the Gaussian approximation. As in the case of the
Gaussian approximation, and unlike in the case of the consistent-averaging and
decoupling approximations where it is neglected, fluctuations in the
hydrodynamic interaction are included.
In a sense, the twofold normal approximation is to the Gaussian
approximation, what the decoupling approximation is to
the consistent-averaging approximation. The computational efficiency of
the decoupling approximation is due both to the reduction in
the set of differential equations that
must be solved in order to obtain the stress tensor, and to the
procedure that is adopted to solve them~\cite{prakott}. These aspects
are also responsible for the computational efficiency of the twofold
normal approximation. However, the derivation of the reduced set of
equations in the twofold normal approximation is significantly different
from the scheme adopted in the decoupling approximation; it is more
straightforward, and avoids the inconsistencies that are present in the
decoupling approximation.
Essentially, the twofold normal approximation (a) assumes that the
configurational distribution function $\psi$ is Gaussian, (b) uses the
Rouse or the Zimm orthogonal matrix $ {\Pi_{jk}}$ to map
$ {{\mbox {\boldmath $Q$}}}_j$ to `normal' coordinates $ {{\mbox {\boldmath $Q$}}}_j^\prime$, and (c) assumes that the
covariance matrix $ {\bs_{jk}} \,$ is diagonalised by the same orthogonal matrix,
{\it i.e.} $\sum_{j, \, k} \, \Pi_{jp}\, {\bs_{jk}}\, \Pi_{kq}= {\bigl\langle} {{\mbox {\boldmath $Q$}}}_p^\prime
{{\mbox {\boldmath $Q$}}}_q^\prime {\bigr\rangle}= {{\mbox {\boldmath $\sigma \! $}}}_p^\prime \, \delta_{pq}. $
This leads to the following equations for the
$(N-1)$ variances $ {{\mbox {\boldmath $\sigma \! $}}}_j^\prime$,
\begin{equation}
{d \over dt} {{\mbox {\boldmath $\sigma \! $}}}_j^\prime = {\mbox {\boldmath $\kappa$}} \cdot {{\mbox {\boldmath $\sigma \! $}}}_j^\prime
+ {{\mbox {\boldmath $\sigma \! $}}}_j^\prime \cdot {\mbox {\boldmath $\kappa$}}^T
+{2 k_B T \over \zeta}\,\, {\bf \Lambda}_{j} - {H \over \zeta} \, \biggl\lbrack
{{\mbox {\boldmath $\sigma \! $}}}_j^\prime \cdot {\bf \Lambda}_{j} + {\bf \Lambda}_{j} \cdot {{\mbox {\boldmath $\sigma \! $}}}_j^\prime
\biggr\rbrack
- {H \over \zeta}\,{H \over k_B T}\, {\sum_k \,} \, \biggl\lbrack
{{\mbox {\boldmath $\sigma \! $}}}_j^\prime \cdot {\bf \Delta}_{jk} : {{\mbox {\boldmath $\sigma \! $}}}_k^\prime + {{\mbox {\boldmath $\sigma \! $}}}_k^\prime : {\bf \Delta}_{jk} \cdot
{{\mbox {\boldmath $\sigma \! $}}}_j^\prime \biggr\rbrack
\label{tfnsecmom}
\end{equation}
where, $ {\bf \Lambda}_j \equiv {\widetilde {\bf \Lambda}_{jj}}$ are the diagonal tensor
components of the matrix ${\widetilde {\bf \Lambda}_{jk}}$,
\begin{equation}
{\widetilde {\bf \Lambda}_{jk}}=\sum_{l, \, p} \, \Pi_{lj}\, {\bAb_{lp}}\, \Pi_{pk}
\label{Lam}
\end{equation}
and the matrix $ {\bf \Delta}_{jk}$ is given by,
\begin{equation}
{\bf \Delta}_{jk}=\sum_{l, \, m, \, n, \, p} \, \Pi_{lj}\,\Pi_{pk}\,
{\bf \Gamma}_{lp,mn}\, \Pi_{mj}\,\Pi_{nk}
\label{Del}
\end{equation}
In equations~(\ref{Lam}) and~(\ref{Del}), the tensors $ {\bAb_{jk}}$ and
$ {\bf \Gamma}_{lp,mn}$ are given by equations~(\ref{Abar}) and~(\ref{Gam}),
respectively. However, the argument
of the hydrodynamic interaction functions is now given by,
\begin{equation}
{\hat \bs_{\mu \nu}}={1 \over \vert \mu - \nu \vert }\, {H \over k_B T }
\,\, \sum_{j,k \, = \, \min(\mu,\nu)}^{ \max(\mu,\nu)-1} \,\, {\sum_m \,}
\Pi_{jm}\, \Pi_{km}\, {{\mbox {\boldmath $\sigma \! $}}}_m^\prime
\end{equation}
The decoupling approximation is recovered from the twofold normal
approximation when the last term in equation~(\ref{tfnsecmom}),
which accounts for fluctuations in hydrodynamic interaction,
is dropped. Thus the two different routes for finding
governing equations for the quantities $ {{\mbox {\boldmath $\sigma \! $}}}_j^\prime$
lead to the same result. However, Prakash and
{\" O}ttinger~\cite{prakott} have shown that this is in
some sense a fortuitous result, and indeed the key assumption made in the
decoupling approximation regarding the diagonalisation of $ {\bAb_{jk}}$
is not tenable. The Zimm model in terms of normal modes may be obtained from
equation~(\ref{tfnsecmom}) by dropping the last term, and substituting
${\widetilde A}_{jk}$ in place of $ {\bAb_{jk}}$. Of course the Zimm orthogonal
matrix must be used to carry out the diagonalisation in equation~(\ref{Lam}).
The diagonalised Zimm model reduces to the diagonalised Rouse model
upon using the Rouse orthogonal matrix and on setting $h^* = 0 $.
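In the simplest member of this hierarchy, the diagonalised Rouse model (equation~(\ref{tfnsecmom}) with the fluctuation term dropped and $h^*=0$, so that $ {\bf \Lambda}_j = a_j {{\mbox {\boldmath $1$}}}$), each mode equation can be solved in closed form in steady simple shear. The sketch below (Python, in units where $k_BT = H = 1$; an illustration under these assumptions, not the authors' code) writes down the steady-state solution for one mode and verifies that it annihilates the right-hand side of the mode equation:

```python
def steady_mode_variance(gdot, r):
    """Steady-state variance tensor of one decoupled Rouse mode in simple
    shear (kappa = gdot * e_x e_y), in units k_B T = H = 1.
    r = 2 H a_j / zeta is the relaxation rate of mode j."""
    s_yy = 1.0                        # equilibrium value, unperturbed by shear
    s_xy = gdot * s_yy / r            # grows linearly with the shear rate
    s_xx = 1.0 + 2.0 * gdot * s_xy / r
    return [[s_xx, s_xy, 0.0],
            [s_xy, s_yy, 0.0],
            [0.0,  0.0,  1.0]]

def mode_rhs(sig, gdot, r):
    """Right-hand side of the decoupled variance equation (fluctuation term
    dropped): kappa.sigma + sigma.kappa^T + r*(identity - sigma)."""
    kappa = [[0.0, gdot, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]
    rhs = [[0.0] * 3 for _ in range(3)]
    for a in range(3):
        for b in range(3):
            conv = sum(kappa[a][c] * sig[c][b] + sig[a][c] * kappa[b][c]
                       for c in range(3))
            rhs[a][b] = conv + r * ((1.0 if a == b else 0.0) - sig[a][b])
    return rhs

sig = steady_mode_variance(gdot=1.3, r=0.7)
res = mode_rhs(sig, 1.3, 0.7)   # should vanish at steady state
```

The viscometric functions then follow by summing the mode contributions through the Kramers expression, equations~(\ref{tausum}) and~(\ref{tauj}).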
The evolution equations~(\ref{tfnsecmom}) have been solved to obtain the
zero shear rate properties for chains with $ N \le 150$, when the
Zimm orthogonal matrix is used for the purpose of diagonalisation,
and for chains with $ N \le 400$, when the Rouse orthogonal matrix is
used. Viscometric functions at finite shear rates in simple shear flows
have been obtained for chains with $ N \le 100$~\cite{prakott}. The results
are very close to those of the Gaussian approximation; this implies that
they must also lie close to the results of exact
Brownian dynamics simulations. The reasons for the reduction in
computational intensity of the twofold normal approximation
are discussed in some detail in~\cite{prakott}. The most important
consequence of introducing the twofold normal approximation is
that rheological data accumulated for chains
with as many as 100 beads can be extrapolated to the limit $N \to
\infty$, and as a result, universal predictions may be obtained.
\begin{figure}[t]
\centerline{ \epsfysize=4.5in
\epsfbox{bkc.ps} }
\caption{Universal viscometric functions in theta solvents.
Reproduced from~\protect\cite{prakott}. }
\label{unifig}
\end{figure}
\subsection{Universal properties in theta solvents}
One of the most important goals of examining the influence of hydrodynamic
interactions on polymer dynamics in dilute solutions is the calculation
of universal ratios and master curves. These properties do not depend
on the mechanical model used to represent the polymer molecule.
Consequently, they reflect the most general consequence of the way in which
{hydrodynamic interaction } has been treated in the theory. They are also the best means to compare
theoretical predictions with experimental observations since they
are parameter free.
There appear to be two routes by which the
universal predictions of models with hydrodynamic interaction
have been obtained so far, namely, by extrapolating finite chain
length results to the limit of infinite
chain length where the model predictions become parameter free, and
by using renormalisation group theory methods.
In the former method, there are two essential requirements.
The first is that rheological data for finite chains must
be generated for large enough values of $N$ so as to be
able to extrapolate reliably, {\it i.e.} with
small enough error, to the limit $N \to \infty$. The second is that
some knowledge of the leading order corrections to the infinite chain
length limit must be obtained in order to carry out the extrapolation in an
efficient manner. It is clear from the discussion of the various
approximate treatments of {hydrodynamic interaction } above that it is possible to obtain
universal ratios in the zero shear rate limit in all the cases.
Four universal ratios that are
frequently used to represent the rheological behavior of dilute
polymer solutions in the limit of zero shear rate are~\cite{ottbook},
\begin{eqnarray}
U_{\eta \lambda } &=& { \eta_{p, 0} \over {n k_B T \lambda_1} }
\quad \quad \quad \quad \quad \quad
U_{\eta R}=\lim_{n \to 0} \, {\eta_{p, 0} \over {n \eta_s (4 \pi R_g^3/3)}}
\nonumber \\
U_{\Psi \eta } &=& {n k_B T \Psi_{1, 0} \over { \eta_{p, 0}^2}}
\quad \quad \quad \quad \quad \, U_{\Psi \Psi }={\Psi_{2, 0}
\over \Psi_{1, 0} }
\label{unirat}
\end{eqnarray}
where, $\lambda_1$ is the longest relaxation time, and
$R_g$ is the root-mean-square radius of gyration at equilibrium.
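As a concrete illustration of how such a ratio becomes parameter free, consider $U_{\eta\lambda}$ for the free-draining (Rouse) chain, for which $\eta_{p,0} = nk_BT\sum_j \lambda_j$ with $\lambda_j \propto 1/\bigl(4\sin^2(j\pi/2N)\bigr)$, so that $U_{\eta\lambda} \to \pi^2/6 \approx 1.645$ as $N\to\infty$. A short Python sketch of this finite-chain-to-infinite-chain convergence (illustrative; the table below concerns models {\it with} hydrodynamic interaction):

```python
import math

def rouse_U_eta_lambda(N):
    """U_{eta,lambda} = eta_{p,0} / (n k_B T lambda_1) = sum_j lambda_j / lambda_1
    for a Rouse chain of N beads, with lambda_j proportional to
    1 / (4 sin^2(j pi / 2N))."""
    lam = [1.0 / (4.0 * math.sin(j * math.pi / (2.0 * N)) ** 2)
           for j in range(1, N)]
    return sum(lam) / lam[0]          # lam[0] is the longest relaxation time

for N in (10, 100, 1000):
    print(N, rouse_U_eta_lambda(N))   # approaches pi^2/6 = 1.6449...
```

The same strategy, generating $U(N)$ for increasing $N$ and extrapolating, underlies the universal predictions quoted below for the approximations with hydrodynamic interaction.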
With regard to the leading order corrections to these ratios, it has been
possible to obtain them explicitly only in the consistently-averaged
case~\cite{ottca1}. In both the Gaussian approximation and
the twofold normal approximation it is
assumed that the leading order corrections are of the same order,
and extrapolation is carried out numerically by plotting the data as
a function of $(1/\surd N)$. Because of their computational intensity,
it is not possible to obtain the universal shear rate dependence of
the viscometric functions predicted by the consistent-averaging and
Gaussian approximations. However, it is possible to obtain these
master curves with the twofold normal approximation.
Table~\ref{unitab} presents the prediction of the universal
ratios~(\ref{unirat}) by the various approximate treatments.
Miyaki {\it et al.}~\cite{miyaki} have experimentally obtained
a value of $U_{\eta R} = 1.49 \, (6)$ for polystyrene in cyclohexane
at the theta temperature. Figure~\ref{unifig} displays the viscometric
functions predicted by the two fold normal approximation.
The coincidence of the curves for the different values of $h^*$
indicates the parameter-free nature of these results.
The divergence of the curves at high
shear rates implies that the data accumulated for chains
with $N \le 100$ is insufficient to carry out an accurate
extrapolation at these shear rates. The incorporation of
the effect of hydrodynamic interaction into kinetic theory clearly
leads to the prediction of
shear thickening at high shear rates even in the long chain limit.
\begin{table}[bt]
\caption{Universal ratios in the limit of zero shear rate.
The exact Zimm values and the Gaussian approximation (GA) values
for $U_{\eta R}$ and $ U_{\Psi \eta }$ are reproduced
from~\protect\cite{ottbook},
the exact consistent-averaging values
from~\protect\cite{ottca1}, and
the renormalisation group (RG) results
from~\protect\cite{ottrab}.
The twofold normal approximation values with the Zimm orthogonal
matrix (TNZ) and the remaining GA values are reproduced
from~\protect\cite{prakott}.
Numbers in parentheses indicate the uncertainty in the last figure. }
\label{unitab}
\begin{tabular*}{\textwidth}{@{}l@{\extracolsep{\fill}}llll}
& & & & \\
\hline
&$U_{\eta \lambda}$ & $U_{\eta R}$ & $U_{\Psi \eta}$ &
$U_{\Psi \Psi}$ \\
\hline
& & & & \\
Zimm & 2.39 & 1.66425 & 0.413865 & 0.0 \\
CA & 2.39 & 1.66425 & 0.413865 & 0.010628 \\
GA &$1.835 \, (1)$ & $1.213 \, (3) $ & $0.560 \, (3) $ &
$ - 0.0226 \, (5)$ \\
RG & - & 1.377 & 0.6096 & $- 0.0130$ \\
TNZ &$1.835 \, (1)$ & $1.210 \, (2) $ & $0.5615 \, (3)$ &
$ - 0.0232 \, (1)$ \\
& & & & \\
\hline
\end{tabular*}
\vskip10pt
\end{table}
In both table~\ref{unitab} and figure~\ref{unifig}, the results of
renormalisation group calculations (RG) are also
presented~\cite{ottrab,zylottrg}. As mentioned earlier, the
renormalisation group theory approach is an alternative procedure for
obtaining universal results. It is essentially a method for refining
the results of a low-order perturbative treatment of hydrodynamic
interaction by introducing higher order effects so as to remove the
ambiguous definition of the bead size. All the infinitely many
interactions for long chains are brought in through the idea of
self-similarity. It is a very useful procedure by which a
low-order perturbation result, which can account
for only a few interactions, is turned into something meaningful.
However, systematic results can only be obtained near four dimensions,
and one cannot estimate the errors in three dimensions reliably.
The Gaussian and twofold normal approximations on the other hand are
non-perturbative in nature, and are essentially `uncontrolled' approximations
with an infinite number of higher order terms.
It is clear from the figures that the two methods lead to significantly
different results at moderate to high shear rates. A minimum in the
viscosity and first normal stress difference curves is not predicted
by the renormalisation group calculation, while the twofold normal
approximation predicts a small decrease from the zero shear rate value
before the monotonic increase at higher shear rates. The good
comparison with the results of Brownian dynamics simulations for
short chains indicates that the twofold normal approximation
is likely to be more accurate than the renormalisation group calculations.
\section{CONCLUSIONS}
This chapter discusses the development of a unified basis for the treatment
of non-linear microscopic phenomena in molecular theories of dilute
polymer solutions and reviews the recent advances in the treatment of
hydrodynamic interaction. In particular, the successive refinements
which ultimately lead to the prediction of universal viscometric
functions in theta solvents have been highlighted.
\section*{Acknowledgement}
I would like to express my gratitude to Prof.~H.~Kleinert for kindly
suggesting this work to me and for his kind hospitality at the
Institut f\"ur Theoretische Physik of the Freie Universit\"at Berlin.
Thanks are also due to Prof.~P.~S.~Letelier for helpful discussions on the
subject of this Letter. Special thanks go to Prof.~M.~O.~Katanaev and
Prof.~James Sethna for sending me their papers. Financial support from
CNPq (Brazil) and DAAD (Bonn) is gratefully acknowledged.
\newpage
\section{Analytic results}
The Dirac-Kogut-Susskind operator of QCD at finite chemical potential can be
written as
\begin{equation}\label{1}
2 \Delta = 2mI + e^\mu G + e^{-\mu} G^\dagger + V
\end{equation}
\noindent
where $G$ ($G^+$) contains all forward (backward) temporal links and V all
space-like links.
The determinant of $\Delta$ in the integration
measure can be replaced, at large fermion masses $m$, by
\begin{equation}\label{2}
\det \Delta = m^{3V_sL_t}\det \left( I + \frac{e^\mu}{2m} G \right)
\end{equation}
If the fugacity $e^\mu$ is much smaller than $2m$, the second factor of (2)
can be replaced by 1 and the theory is independent of the chemical potential.
Therefore, in order to get a non trivial $\mu$ dependence, we need to go
to a region of large chemical potential in which the fugacity is of the
order of $2m$ \cite{TOUS}.
Since all space-like links have disappeared in equation (2), the determinant
of $\Delta$ factorizes as a product of $V_s$ determinants for the single
temporal chains. A straightforward calculation allows us to write
\begin{equation}\label{3}
\det \Delta = e^{3V_sL_t\mu} \prod_{i=1}^{V_s} \det (c + L_i )
\end{equation}
\noindent
with $c=({2m\over{e^\mu}})^{L_t}$, where $L_t$ is the lattice
temporal extent and $L_i$ is
the SU(3) variable representing the forward straight Polyakov loop starting
from the spatial site $i$ and winding once around the lattice in the temporal
direction. The determinants in (3) are gauge invariant quantities which can
therefore be written as functions of the trace and the determinant of $L_i$.
Since the gauge group is a unitary group, $\det(L_i)=1$
and therefore the only contributions depending on the gauge configuration
will be functions of $Tr(L_i)$. In fact simple algebra allows us to write
\begin{equation}\label{4}
\det (c + L_i ) = c^3 + c^2 Tr (L_i) + c Tr (L_i^*) + 1
\end{equation}
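Equation (4) is just the characteristic polynomial of an SU(3) matrix evaluated at $-c$: since $\det(L_i)=1$ and the eigenvalues lie on the unit circle, the second elementary symmetric function of the eigenvalues equals $Tr(L_i^*)$. A quick numerical check in Python (illustrative; the SU(3) matrix is built by Gram--Schmidt, which gives a unitary matrix with unit determinant, although not a Haar-distributed one):

```python
import math
import random

def det3(M):
    """Determinant of a complex 3x3 matrix."""
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def random_su3(seed=1):
    """A random SU(3) matrix: Gram-Schmidt on random complex vectors gives a
    unitary matrix; dividing one column by its determinant (a pure phase)
    then makes det = 1."""
    rng = random.Random(seed)
    vecs = [[complex(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(3)]
            for _ in range(3)]
    ortho = []
    for v in vecs:
        for u in ortho:
            ip = sum(ui.conjugate() * vi for ui, vi in zip(u, v))
            v = [vi - ip * ui for ui, vi in zip(u, v)]
        norm = math.sqrt(sum(abs(vi) ** 2 for vi in v))
        ortho.append([vi / norm for vi in v])
    L = [[ortho[j][i] for j in range(3)] for i in range(3)]  # columns = ortho
    d = det3(L)                    # |d| = 1 for a unitary matrix
    return [[L[i][0] / d, L[i][1], L[i][2]] for i in range(3)]

L = random_su3()
c = 0.8
lhs = det3([[c * (i == j) + L[i][j] for j in range(3)] for i in range(3)])
trL = L[0][0] + L[1][1] + L[2][2]
rhs = c ** 3 + c ** 2 * trL + c * trL.conjugate() + 1.0   # Tr(L*) = conj(Tr L)
```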
In the infinite gauge coupling limit, the integration over the gauge group
is trivial since we get factorization \cite{LAT98}.
The final result for the partition
function at $\beta=0$ is
\begin{equation}\label{5}
{\cal Z} =
V_G e^{3V_sL_t\mu}
\left( \left(\frac{2m}{e^\mu}\right)^{3L_t} +1 \right)^{V_s}
\end{equation}
\noindent
where $V_G$ is a constant irrelevant factor diverging exponentially
with the lattice
volume which accounts for the gauge group volume. Equation (5)
gives for the free energy density $f={1\over{3V_sL_t}}\log{\cal Z}$
\begin{equation}\label{6}
f = \mu +
\frac{1}{3L_t} \log \left( \left(\frac{2m}{e^\mu}\right)^{3L_t} +1 \right)
\end{equation}
The first contribution in (6) is an analytical function of $\mu$. The second
contribution has, in the limit of infinite temporal lattice extent,
a non analyticity at $\mu_c=\log(2m)$ which induces in the number density
a step jump, indication of a saturation transition of first order at the
value of $\mu_c$ previously given.
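Explicitly, differentiating equation (6) gives the number density $n = \partial f/\partial\mu = 1/\bigl(1 + (2m\,e^{-\mu})^{3L_t}\bigr)$: a smooth crossover at finite $L_t$ that sharpens into a step at $\mu_c=\log(2m)$ as $L_t\to\infty$. A small Python sketch (illustrative):

```python
import math

def number_density(mu, m, Lt):
    """n = df/dmu from equation (6): n = 1 / (1 + (2m e^{-mu})^{3 Lt})."""
    expo = 3.0 * Lt * (math.log(2.0 * m) - mu)
    if expo > 700.0:               # deep in the low-density phase: avoid overflow
        return 0.0
    return 1.0 / (1.0 + math.exp(expo))

mu_c = math.log(2.0 * 10.0)        # m = 10  ->  mu_c = log(20)
for Lt in (4, 16, 64):
    print(Lt, number_density(mu_c - 0.1, 10.0, Lt),
              number_density(mu_c + 0.1, 10.0, Lt))   # jump sharpens with Lt
```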
This is an expected result on physical grounds. In fact in the infinite
fermion mass limit baryons are point-like particles, and
pion exchange interaction vanishes, since pions are also very heavy.
Therefore we are dealing with a system of very heavy free
fermions (baryons) and by increasing the
baryon density in such a system we expect an onset at
$\mu_c={1\over3}m_b$, i.e., $\mu_c=\log(2m)$ since $3\log(2m)$ is the baryon
mass at $\beta=0$ for large $m$ \cite{SACLAY}.
Let us now discuss the relevance of the phase of the
fermion determinant at $\beta=0$. The standard wisdom based on random matrix
model results is that the phase of the fermion determinant plays a fundamental
role in the thermodynamics of $QCD$ at finite baryon density \cite{RMT}
and that if
the theory is simulated by replacing the determinant by its absolute value,
one neglects a contribution to the free energy density which could be
fundamental in order to understand the critical behavior of this model.
We are going to show now that, contrary to this wisdom, the phase of the
determinant can be neglected in the large $m$ limit at $T=0$.
Equations (3) and (4) imply that an upper bound for the absolute
value of the fermion determinant is given by the determinant of the free
gauge configuration. Therefore the mean value of the phase factor in the
theory defined taking the absolute value of the determinant
\begin{equation}\label{7}
\left\langle e^{i\phi} \right\rangle_\| =
\frac{\int [dU] e^{-\beta S_G(U)}\det\Delta}
{\int [dU] e^{-\beta S_G(U)} | \det\Delta |}
\end{equation}
\noindent
is, at $\beta=0$, bounded from below by the ratio
\begin{equation}\label{8}
\left( \frac
{\left( \frac{2m}{e^\mu}\right)^{3L_t} + 1 }
{\left( \left( \frac{2m}{e^\mu}\right)^{L_t} + 1 \right)^3 }
\right)^{V_s}
\end{equation}
At zero temperature $(L_t=L, V_s=L^3)$, and letting $L\rightarrow\infty$,
it is straightforward to verify that the ratio (8)
goes to 1 except at $\mu_c=\log(2m)$
(at $\mu=\mu_c$ the ratio
goes to zero but it is bounded from below by $(1/4)^{V_s}$).
Therefore the mean value of
the cosine of the phase in the theory where the fermion determinant
is replaced by its absolute value gives zero contribution.
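The behaviour of the bound (8) is easy to check numerically: writing $c=(2m/e^\mu)^{L_t}$, the per-site ratio is $(c^3+1)/(c+1)^3$, which equals $1/4$ at $\mu=\mu_c$ (where $c=1$) and tends to $1$ as $c\to 0$ or $c\to\infty$. A Python sketch (illustrative):

```python
import math

def phase_bound(m, mu, Lt, Vs):
    """Lower bound (8) on the mean phase factor:
    [(c^3 + 1) / (c + 1)^3]^Vs  with  c = (2m / e^mu)^Lt."""
    c = (2.0 * m / math.exp(mu)) ** Lt
    return ((c ** 3 + 1.0) / (c + 1.0) ** 3) ** Vs

# T = 0: Lt = L, Vs = L^3; away from mu_c = log(2m) the bound approaches 1
for L in (20, 60, 100):
    print(L, phase_bound(1.0, 1.0, L, L ** 3))

# at mu = mu_c the per-site ratio is exactly 1/4, so the bound is (1/4)^Vs
print(phase_bound(1.0, math.log(2.0), 5, 10))
```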
At $T \neq 0$, i.e. taking the infinite $V_s$ limit by keeping
fixed $L_t$, the lower bound
(8) for the mean value of the phase factor (7) goes to zero exponentially
with the spatial lattice volume $V_s$. This suggests that the
phase will contribute in finite temperature $QCD$.
In fact, it
is easy to convince oneself that expression (7), at $\beta=0$, vanishes
also exponentially with the lattice spatial volume at finite temperature
(see fig. 1).
The contribution of the phase is therefore non zero (in the limit considered
here) in simulations of $QCD$ at finite temperature.
The free energy density at finite temperature (equation (6)) is an analytic
function of the fermion mass and chemical potential. It develops a
singularity only in the limit of zero temperature $(T={1\over{L_t}})$.
Therefore $QCD$ at large $m$ and finite temperature does not show
phase transition in the
chemical potential but a crossover at $\mu=\log(2m)$ which becomes a true
first order phase transition at $T=0$.
The standard way to define the theory at zero temperature is to consider
symmetric lattices.
However a more natural way to define the theory at $T=0$
is to take the limit of finite temperature $QCD$ when the physical
temperature $T\rightarrow 0$.
In other words, we should take first the infinite spatial
volume limit and then the infinite temporal extent limit.
We will show here that, as expected, physical results are independent of
the procedure chosen.
The free energy density of the
model can be written as the sum of two contributions $f=f_1+f_2$. The first
contribution $f_1$ is the free energy density of the theory where the fermion
determinant in the integration measure is replaced by its absolute value.
The second contribution $f_2$, which comes from the phase of the fermion
determinant, can be written as
\begin{equation}\label{9}
f_2 = {1\over{V_sL_t}}\log\left\langle e^{i\phi} \right\rangle_\|.
\end{equation}
\noindent
Since the mean value of the phase factor (7) is less than or equal to 1,
$f_2$ is bounded from above by zero and from below by
\begin{equation}\label{10}
{1\over{L_t}}\log{\left( \frac
{\left( \frac{2m}{e^\mu}\right)^{3L_t} + 1 }
{\left( \left( \frac{2m}{e^\mu}\right)^{L_t} + 1 \right)^3 }
\right)}
\end{equation}
When $L_t$ goes to infinity, expression (10) goes to zero for all the
values of $\mu$ and therefore the
only contribution to the free energy density which survives in the zero
temperature limit is $f_1$.
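The vanishing of this bound is easy to verify directly. The following minimal Python sketch (our own illustration, with $m=1$ and a few arbitrary test values of $\mu$) evaluates expression (10) and checks that it is nonpositive and shrinks as $L_t$ grows:

```python
import math

def f2_bound(m, mu, Lt):
    # expression (10): lower bound on the phase contribution f_2
    a = 2.0 * m / math.exp(mu)
    return (math.log(a ** (3 * Lt) + 1.0)
            - 3.0 * math.log(a ** Lt + 1.0)) / Lt

# test chemical potentials below, at, and above the crossover mu = log(2m)
for mu in (0.2, math.log(2.0), 1.2):
    bounds = [f2_bound(1.0, mu, Lt) for Lt in (4, 16, 64)]
    assert all(b <= 0.0 for b in bounds)        # f_2 is bounded above by zero
    assert abs(bounds[-1]) < abs(bounds[0])     # the bound shrinks with L_t
    assert abs(bounds[-1]) < 0.05               # and is already tiny at L_t = 64
```

Note that even at the crossover point $a=2m/{\rm e}^\mu=1$ the bound is $-2\log 2/L_t$, which still vanishes as $L_t\rightarrow\infty$.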
Again, we conclude that zero temperature QCD in the strong coupling
limit at finite chemical potential
and for large fermion masses is well described by the theory obtained by
replacing the fermion determinant by its absolute value.
These results are not surprising: they follow from the fact that at $\beta=0$
and for large $m$ the system factorizes as a product of $V_s$ noninteracting
$0+1$ dimensional $QCD's$, and from the relevance (irrelevance) of the phase
of the fermion determinant in $0+1$ QCD at finite (zero) "temperature"
\cite{LAT97}.
Perhaps more surprising is that, as we will see in the following,
some of these results do not change when we switch on a finite gauge coupling.
The inclusion of a nontrivial pure gauge Boltzmann factor in the integration
measure of the partition function breaks the factorization property. The
effect of a finite gauge coupling is to induce correlations between the
different temporal chains of the determinant of the Dirac operator. The
partition function is given by
\begin{equation}\label{11}
{\cal Z} = \int [dU] e^{-\beta S_G(U)} \prod_{i=1}^{V_s}
(c^3 + 1 + c Tr (L_i) + c^2 Tr (L_i^*) )
\end{equation}
\noindent
and can be written as
\begin{equation}
{\cal Z}(\beta,\mu) = {\cal Z}_{pg}\cdot {\cal Z}(\beta=0,\mu)\cdot
R(\beta,\mu)
\end{equation}
\noindent
where ${\cal Z}_{pg}$ is the pure gauge partition function,
${\cal Z}(\beta=0,\mu)$ the strong coupling partition function
(equation (5)) and $R(\beta,\mu)$ is given by
\begin{equation}\label{12}
R(\beta,\mu) = \frac
{\int [dU] e^{-\beta S_G(U)} \prod_{i=1}^{V_s} \left(
1 + \frac{c Tr (L_i) + c^2 Tr (L_i^*)}{c^3 + 1} \right)}
{\int [dU] e^{-\beta S_G(U)}}
\end{equation}
In the zero temperature limit ($L_t=L, V_s=L^{3}, L\rightarrow\infty$) the
product in the numerator of (13) goes to 1 independently of the gauge
configuration. In fact each single factor has an absolute value equal to
1, up to corrections which vanish exponentially with the lattice size $L$,
and a phase which also vanishes exponentially with $L$. Since the total number
of factors is $L^3$, the product goes to 1 and therefore $R=1$ in the
zero temperature limit.
The contribution of $R$ to the free energy
density therefore vanishes in the infinite volume limit at zero temperature.
In such a case, the free energy density is the sum of the free energy density
of the pure gauge $SU(3)$ theory plus the free energy density of the model at
$\beta=0$ (equation (6)). The first order phase transition found at $\beta=0$
is also present at any $\beta$ and its location and properties do not
depend on $\beta$ since all $\beta$ dependence in the partition function
factorizes in the pure gauge contribution.
Again at finite gauge coupling
the phase of the fermion determinant is irrelevant at zero temperature.
At finite temperature and finite gauge coupling the first order phase
transition induced by the contribution (6) to the free energy density at
zero temperature disappears and becomes a crossover.
Furthermore, expression (13) also gives a nonvanishing contribution
to the free energy density if $L_t$ is finite.
The common physical interpretation of
the theory with the absolute value of the fermion determinant
is that it possesses
quarks in the {\bf 3} and {\bf 3}$^*$ representations of SU(3),
having baryonic states made up of two quarks, which would account for the
physical differences with respect to real QCD. We have proven analytically (at
$\beta=0$) that the relation between modulus and real QCD is
temperature dependent, $i.e.$ they are different only at $T \ne 0$,
a feature that does not support the above interpretation.
\section{Numerical results}
From the point of view of simulations, work has been done by
several groups, mainly to develop numerical algorithms capable of overcoming
the non-positivity of the fermionic determinant.
The most promising of these algorithms \cite{BAR}, \cite{NOI1}
are based on the GCPF formalism and try to calculate extensive quantities
(the canonical partition functions at fixed baryon number).
Usually they measure quantities that, with currently attainable statistics,
do not converge.
In a previous paper \cite{NOI2} we have given arguments to conclude that,
if the phase is relevant, statistics increasing exponentially
with the system volume
are necessary to appreciate its contribution to the observables (see also
\cite{BAR2}).
What happens if we consider a case where the phase is not relevant
($i.e.$ the large mass limit of QCD at zero temperature, as discussed
in the previous section)?
To answer this question we have reformulated the GCPF formalism by
writing the partition function as a polynomial in
$c$ and studied the convergence properties of the coefficients at $\beta=0$
using an ensemble of (several thousand) random configurations.
This has been done as in standard numerical simulations ({\it i.e.}
without using the factorization property) for lattices $4^4$ (fig. 2a),
$4^3\times 20$ (fig. 2b), $10^3\times 4$ (fig. 2c) \cite{LAT98} and the
results compared with the analytical predictions (\ref{5}) (solid lines
in the figures).
From these plots we can see that, unless we consider a large lattice temporal
extent, our averaged coefficients in the infinite coupling limit still
suffer from sign ambiguities, $i.e.$ not all of them are positive.
For large $L_t$ the {\it sign problem}
tends to disappear because the determinant of the one dimensional system
(\ref{4}) becomes an almost real quantity for each gauge configuration and
obviously the same happens to the determinant of the Dirac operator
(\ref{3}) in the four dimensional lattice.
It is also interesting to note that the sign of the averaged
coefficients is very stable and a different set of random configurations
produces almost the same shape.
However, the sign of the determinant is not the only problem: in fact,
as one can read from fig. 2, even considering the modulus of
the averaged coefficients we do not get the correct result.
We used the same configurations to calculate the average of the modulus of
the coefficients. We expect this quantity to be larger
than the analytic results reported in fig. 2.
The data, however, contrast with this scenario:
the averages of the modulus are always smaller (on a logarithmic scale)
than the analytic results from formula (\ref{5}).
In fact these averages are indistinguishable from the absolute values of
the numerical results reported in fig. 2.
In conclusion, even if the phase of the fermion
determinant is irrelevant in QCD at finite density ($T=0$ and heavy quarks)
the numerical evaluation of the Grand Canonical Partition Function
still suffers from sampling problems.
A last interesting feature which can be discussed in the light of our results
concerns the validity of the quenched approximation in finite density $QCD$.
A substantial body of experience in this field \cite{KOGUT} suggests that,
contrary to what happens in $QCD$ at zero chemical potential (at both finite
and zero temperature), the quenched
approximation does not give correct results in $QCD$ at finite chemical
potential. Even though
the zero flavour limits of the theory with the absolute value of the
fermion determinant and of real $QCD$ are the same (the quenched
approximation),
the failure of this approximation has been assigned in the past \cite{RMT}
to the fact that it corresponds to the zero flavour limit of the theory
with $n$ quarks in the fundamental and $n$ quarks in the complex conjugate
representation of
the gauge group. In fig. 3 we have plotted the number density at $\beta=0$ and
for heavy quarks in three interesting cases: real $QCD$, the theory
with the absolute value of the fermion determinant, and quenched $QCD$.
It is obvious that the quenched approximation produces results far from those
of real $QCD$, but also far from those of $QCD$ with the modulus of the
determinant of the Dirac operator. The results of the modulus theory are,
furthermore, very near to those of real $QCD$. In other words, even if the
phase is relevant at finite temperature, its contribution to the number
density is almost negligible.
In the light of these results, it seems implausible to assign the failure of
the quenched approximation to the feature previously discussed \cite{RMT}.
It seems more natural to speculate that it fails because it does not
correctly incorporate Fermi-Dirac statistics in the path integral, and
we expect the Pauli exclusion principle to play, by far, a more relevant
role in finite density $QCD$ than in finite temperature $QCD$.
\vskip 0.3truecm
\noindent
{\bf Acknowledgements}
\vskip 0.3truecm
This work has been partially supported by CICYT and INFN.
\vskip 1 truecm
\section{Introduction}
\label{sec:intro}
In the standard cosmological model galaxies and clusters form by
hierarchical clustering and merging of small density perturbations
that grow by gravitational instability. In this standard picture the
mass of the Universe is dominated by dissipationless dark matter which
collapses to form dark halos, inside of which the luminous galaxies
form. It was once assumed that the previous generation of substructure
was erased at each level of the hierarchy (White \& Rees 1978).
However, high resolution $N$-body simulations have recently shown that
some substructure is preserved at all levels (Moore, Katz \& Lake
1996; Klypin {et al.~} 1997; Brainerd, Goldberg \& Villumsen 1998; Moore
{et al.~} 1998; Ghigna {et al.~} 1998). This is consistent with observations
which reveal substructure in a variety of systems: globular clusters
within galaxies, distant satellites and globulars in the halos of
galaxies, and galaxies within clusters. This substructure evolves as
it is subjected to the forces that try to dissolve it: dynamical
friction, tides from the central objects and impulsive collisions with
other substructure. The timescales for survival and the nature of the
dissolution guide our understanding of the formation processes, most
of which depend on the nature of the orbits. Tides strip satellites
on elongated orbits. An elongated orbit and a circular one clearly
evolve differently owing to dynamical friction, especially if there is
a disk involved. The disk heating similarly depends on the orbits of
the satellites. The nature of the orbits also affects the nature and
persistence of tidal streams at breakup, and the mutual collisions of
structures as considered by galaxy harassment.
Since the clustering and merging of halo substructures is one of the
cornerstones of the hierarchical structure formation scenario, a
comprehensive understanding of their orbital properties is of
invaluable importance when seeking to understand the formation and
evolution of structure in the Universe. We have found that the
properties of orbits within spherical, fully relaxed systems have
received little attention and are often misrepresented. Hence, the
first goal of this paper is to derive a statistical characterization
of the orbits in potential/density distributions that describe
galaxies within clusters, globular clusters within galaxies, etc. We
find that orbits are far more elongated than typically characterized.
Recently, Ghigna {et al.~} (1998) used high resolution $N$-body
simulations to investigate the orbital properties of halos within
clusters. They found that orbits of the subhalos are strongly
elongated with a median apo-to-pericenter ratio of approximately $6$.
We compare our results on equilibrium spherical potentials with the
distribution of orbits in their cluster that was simulated in a
cosmological context, and show that the orbital eccentricities of the
subhalos are consistent with an isothermal halo that is close to
isotropic.
In the second part of this paper, we use high resolution $N$-body
simulations to calculate the dynamical friction on eccentric orbits.
Past studies have compared numerical simulations with Chandrasekhar's
dynamical friction formula (e.g., White 1978, 1983; Tremaine 1976,
1981; Lin \& Tremaine 1983; Tremaine \& Weinberg 1984; Bontekoe \& van
Albada 1987; Zaritsky \& White 1988; Hernquist \& Weinberg 1989; Cora,
Muzzio \& Vergne 1997). Comprehensive overviews of these studies with
discussions regarding the local versus global nature of dynamical
friction can be found in Zaritsky \& White (1988) and Cora {et al.~}
(1997). Most of the studies followed the decay of circular or only
slightly eccentric orbits. The two exceptions are Bontekoe \& van
Albada (1987) and Cora {et al.~} (1997). The former examined the orbital
decay of a `grazing encounter' of a satellite on an elliptical orbit
that grazes a larger galaxy at its pericenter. In this case,
dynamical friction occurs only near pericenter. The pericentric
radius remains nearly fixed with significant circularization of the
orbit in just a few dynamical times (see also Colpi (1998) for an
analytical treatment based on linear response theory). Cora {et al.~}
followed satellites on eccentric orbits that were completely embedded
in a dark halo, but they did not discuss the dependence of decay time
on orbital eccentricity or the change in orbital eccentricity with
time. We use fully self-consistent $N$-body simulations with $50,000$
halo particles to calculate dynamical friction on eccentric orbits.
We show that the timescale for dynamical friction is shorter for more
eccentric orbits, but that the dependence is considerably weaker than
claimed previously by Lacey \& Cole (1993) based on an analytical
integration of Chandrasekhar's formula. In addition, we show that,
contrary to common belief, dynamical friction does not lead to
circularization. All in all, dynamical friction only leads to a very
moderate change in the distribution of orbital eccentricities over
time.
In Section~\ref{sec:orbecc} we derive the distributions of orbital
eccentricities for a number of spherical densities/potentials using
both analytical and numerical methods. Section~\ref{sec:friction}
describes our $N$-body simulations of dynamical friction on eccentric
orbits in an isothermal halo. In Section~\ref{sec:applic} we discuss
a number of applications. Our results and conclusions are presented in
Section~\ref{sec:conc}.
\section{Orbital eccentricities}
\label{sec:orbecc}
\subsection{The singular isothermal sphere}
\label{sec:sing}
The flat rotation curves observed for spiral galaxies suggest that
their dark halos have density profiles that are not too different from
isothermal. Hence, we start our investigation with the singular
isothermal sphere, whose potential, $\Phi$, and density, $\rho$, are
given by
\begin{equation}
\label{potdens}
\Phi(r) = V_c^2 \; {\rm ln}(r),\;\;\;\;\;\;\;\;\;\;\;\;\;
\rho(r) = {V_c^2 \over 4 \pi G r^2}.
\end{equation}
Here $V_c$ is the circular velocity, which is constant with radius.
\subsubsection{Analytical method}
\label{sec:analyt}
For non-Keplerian potentials, in which the orbits are not simple
ellipses, it is customary to define a generalized orbital eccentricity
\footnote{Throughout this paper we use $e$ to denote the orbital
eccentricity, ${\rm e}$ refers to the base of the natural logarithm,
and $E$ refers to the energy.}, $e$, as:
\begin{equation}
\label{eccen}
e = {r_{+} - r_{-} \over r_{+} + r_{-}}.
\end{equation}
Here $r_{-}$ and $r_{+}$ are the peri- and apocenter respectively.
For an orbit with energy $E$ and angular momentum $L$ in a spherical
potential, $r_{-}$ and $r_{+}$ are the roots for $r$ of
\begin{equation}
\label{rooteq}
{1 \over r^2} + {2[\Phi(r) - E] \over L^2} = 0,
\end{equation}
(Binney \& Tremaine 1987). For each energy, the maximum angular
momentum is, for a singular isothermal sphere, given by $L_c(E) =
r_c(E) V_c$. Here $r_c(E)$ is the radius of the circular orbit with
energy $E$ and is given by
\begin{equation}
\label{circrad}
r_c(E) = \exp\left[{{E - V_c^2/2} \over V_c^2}\right].
\end{equation}
Upon writing $L = \eta L_c(E)$ ($0 \leq \eta \leq 1$)\footnote{The
quantity $\eta$ is generally called the orbital circularity.}, one
can rewrite equation (\ref{rooteq}) for a singular isothermal sphere
such that the apo- and pericenter are given by the roots for $x =
r/r_c$ of
\begin{equation}
\label{rootred}
{1 \over x^2} + {2 \over \eta^2} {\rm ln}(x) - {1 \over \eta^2} = 0.
\end{equation}
As might be expected in this scale free case, the ratio $r_{+}/r_{-}$
depends only on the orbital circularity $\eta$ and is independent of
energy. This dependence is shown in Figure~\ref{fig:eps}.
\placefigure{fig:eps}
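Equation (\ref{rootred}) has exactly two roots bracketing $x=1$ for $0<\eta<1$, so it is straightforward to solve numerically. The following Python sketch (our own illustration, not part of the paper's machinery; simple bisection) returns the turning points and the generalized eccentricity for a given circularity:

```python
import math

def turning_points(eta, iters=80):
    """Roots x = r/r_c of eq. (rootred): 1/x^2 + (2 ln x - 1)/eta^2 = 0."""
    f = lambda x: 1.0 / x**2 + (2.0 * math.log(x) - 1.0) / eta**2
    def bisect(a, b):
        for _ in range(iters):
            m = 0.5 * (a + b)
            a, b = (a, m) if f(a) * f(m) <= 0 else (m, b)
        return 0.5 * (a + b)
    # f > 0 as x -> 0, f(1) = 1 - 1/eta^2 < 0, f > 0 at large x
    return bisect(1e-9, 1.0), bisect(1.0, 1e6)   # pericenter, apocenter

def ecc(eta):
    xm, xp = turning_points(eta)
    return (xp - xm) / (xp + xm)
```

For example, `ecc(0.5)` gives $e \approx 0.72$, i.e. $r_+/r_- \approx 6$: even a fairly circular value of $\eta$ corresponds to a strongly elongated orbit.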
At a certain radius, the average of any quantity $S$ is determined by
weighting it by the distribution function (hereafter DF) and
integrating over phase space. For the singular isothermal sphere this
yields
\begin{equation}
\label{avs1}
\overline{S}(r) = {4 \pi \over r^2 \rho(r)}
\int\limits_{\Phi(r)}^{\infty} dE \int\limits_0^{r\sqrt{2(E-\Phi)}}
\; f(E,L) \; S(E,L) {L \; dL \over \sqrt{2(E-\Phi) - L^2/r^2}}.
\end{equation}
In what follows we consider the family of quasi-separable DFs
\begin{equation}
\label{quasidf}
f(E,L) = g(E) \, h_a(\eta).
\end{equation}
This approach makes the solution of equation~(\ref{avs1}) analytically
tractable. The general properties of spherical galaxies with this
family of DFs are discussed in detail by Gerhard (1991, hereafter
G91). We adopt a simple parameterization for the function
$h_a(\eta)$:
\begin{equation}
\label{anisotropy}
h_a(\eta) = \left\{ \begin{array}{lll}
\tanh \bigl( {\eta \over a} \bigr) /
\tanh \bigl( {1 \over a} \bigr) & \mbox{$a > 0$}\\
1 & \mbox{$a = 0$}\\
\tanh \bigl( { 1-\eta \over a} \bigr) /
\tanh \bigl( {1 \over a} \bigr) & \mbox{$a < 0$}
\end{array}
\right.
\end{equation}
For $a=0$, the DF is isotropic. Radially anisotropic models have $a <
0$, whereas positive $a$ correspond to tangential anisotropy. For a
quasi-separable DF of the form~(\ref{quasidf}), and for the
eccentricity $e$ which depends on $\eta$ only, equation (\ref{avs1})
yields
\begin{equation}
\label{avs2}
\overline{e}(r) = {4 \pi \over r \rho(r)}
\int\limits_{\Phi(r)}^{\infty} dE \; g(E) \; L_c(E)
\int\limits_0^{\eta_{\rm max}} h_a(\eta) \; e(\eta)
{\eta \; d\eta \over \sqrt{\eta_{\rm max}^2 - \eta^2}},
\end{equation}
where
\begin{equation}
\label{etamax}
\eta_{\rm max} = {r \sqrt{2 (E - \Phi)} \over L_c(E)}.
\end{equation}
For a singular isothermal sphere, G91 has shown that the energy
dependence of the DF is given by
\begin{equation}
\label{df}
g(E) = { {\rm e} \over {16 \, \pi^2 \, G \, V_c \, u_H} }
\exp\left[-{2 E \over V_c^2}\right],
\end{equation}
where
\begin{equation}
\label{uh}
u_H = \int\limits_{0}^{\infty} du \; {\rm e}^{-u}
\int\limits_{0}^{\eta_{\rm max}} h_a(\eta) \; {\eta \; d\eta \over
\sqrt{\eta_{\rm max}^2 - \eta^2}}.
\end{equation}
Here $\eta_{\rm max}$ depends on $u$ only and is given by
\begin{equation}
\label{etamax_u}
\eta_{\rm max} = \sqrt{2 {\rm e}} \; \sqrt{u} \; {\rm e}^{-u}.
\end{equation}
Substitution of (\ref{df}) and (\ref{uh}) in (\ref{avs2}) yields (upon
substituting $u = (E - \Phi)/V_c^2$)
\begin{equation}
\label{avs3}
\overline{e}(r) = \overline{e} = {1 \over u_H}
\int\limits_{0}^{\infty} du \; {\rm e}^{-u} \int\limits_0^{\eta_{\rm
max}} h_a(\eta) \; e(\eta) {\eta \; d\eta \over \sqrt{\eta_{\rm
max}^2 - \eta^2}}.
\end{equation}
Note that, due to the scale-free nature of the problem, this average
is independent of radius.
\placefigure{fig:integral}
We have numerically solved this integral as a function of the
anisotropy parameter $a$. The results are shown in
Figure~\ref{fig:integral}. As expected, the average eccentricity
decreases going from radial to tangential anisotropy. The average
orbital eccentricity of an isotropic, singular isothermal sphere is
$\overline{e} = 0.55$. An orbit with this eccentricity has
$r_{+}/r_{-} = 3.5$.
\subsubsection{Numerical method}
\label{sec:numer}
To examine the actual distribution of eccentricities, rather than just
calculate moments of the distribution, we Monte Carlo sample the
quasi-separable DF of equation~(\ref{quasidf}) for orbits in a
singular isothermal potential and calculate their eccentricities. We
provide a detailed description of this method in the Appendix.
\placefigure{fig:anis}
The normalized distribution functions of orbital eccentricities for
three different values of the anisotropy parameter $a$ are shown in
Figure~\ref{fig:anis} (upper panels). The lower panels show the
corresponding distributions of apo-to-pericenter ratios. Each
distribution is computed from a Monte Carlo simulation with $10^6$
orbits. The thin vertical lines in Figure~\ref{fig:anis} show the 20th
(dotted lines), 50th (solid lines), and 80th (dashed lines) percentile
points of the distributions. The average eccentricity for the
isotropic case ($a=0$) computed from the Monte Carlo simulations is
$\overline{e} \simeq 0.55$, in excellent agreement with the value
determined in Section~\ref{sec:analyt}. About 15 percent of the orbits
in the isotropic singular isothermal sphere have apo- to pericenter
ratios larger than $10$, whereas only $\sim 20$ percent of the orbits
have $r_{+}/r_{-} < 2$. Note however that these numbers depend
strongly on the velocity anisotropy.
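For the isotropic case ($h_a \equiv 1$) the sampling can be condensed considerably: draw $u$ from ${\rm e}^{-u}$, set $\eta = \eta_{\rm max}(u)\sin\theta$ with $\theta$ uniform on $(0,\pi/2)$, and weight each orbit by $\eta$ (the Jacobian of the substitution in eq.~[\ref{avs3}]). The sketch below (plain Python, our own condensed illustration rather than the phase-space sampling described in the Appendix) reproduces the statistics quoted above:

```python
import math, random

def ecc(eta, iters=60):
    # eccentricity from eq. (rootred), solved by bisection
    f = lambda x: 1.0 / x**2 + (2.0 * math.log(x) - 1.0) / eta**2
    def bisect(a, b):
        for _ in range(iters):
            m = 0.5 * (a + b)
            a, b = (a, m) if f(a) * f(m) <= 0 else (m, b)
        return 0.5 * (a + b)
    xm, xp = bisect(1e-9, 1.0), bisect(1.0, 1e6)
    return (xp - xm) / (xp + xm)

random.seed(1)
sqrt2e = math.sqrt(2.0 * math.e)
wsum = esum = hi = lo = 0.0
for _ in range(20000):
    u = random.expovariate(1.0)                  # energy variable, weight e^{-u}
    eta_max = min(1.0, sqrt2e * math.sqrt(u) * math.exp(-u))   # eq. (etamax_u)
    eta = eta_max * math.sin(0.5 * math.pi * random.random())
    e = ecc(eta) if eta > 1e-6 else 1.0          # eta -> 0 is a radial orbit
    w = eta                                      # orbit weight (Jacobian)
    wsum += w; esum += w * e
    hi += w * (e > 9.0 / 11.0)                   # r+/r- > 10  <=>  e > 9/11
    lo += w * (e < 1.0 / 3.0)                    # r+/r- < 2   <=>  e < 1/3

mean_e = esum / wsum
```

With $2\times 10^4$ samples this gives a mean eccentricity close to $0.55$, with roughly 15 percent of the weight at $r_+/r_- > 10$ and roughly 20 percent at $r_+/r_- < 2$.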
\subsection{Tracer populations}
\label{sec:tracers}
In the previous two sections, we concentrated on the self-consistent
case of a singular isothermal halo and the corresponding density
distribution that follows from the Poisson equation. Tracer
populations, however, do not necessarily follow the self-consistent
density distribution. Consider a tracer population in a singular
isothermal sphere potential with a density distribution given by
\begin{equation}
\label{tracerho}
\rho_{\rm trace}(r) = \rho_0 \left({r \over r_0}\right)^{-\alpha},
\end{equation}
(the self-consistent case corresponds to $\alpha = 2.0$). If we
consider the same quasi-separable DF as in Section~\ref{sec:sing}
(i.e., equation~[\ref{quasidf}]), the energy dependence of the DF
becomes
\begin{equation}
\label{tracedf}
g(E) = {\sqrt{\rm e} \, \rho_0 \, r_0^{\alpha} \over 4 \, \pi \, V_c^3
\, u_{H,\alpha}} \; {\rm exp}\left[- {\alpha E \over V_c^2}\right],
\end{equation}
with
\begin{equation}
\label{uhalpha}
u_{H,\alpha} = \int\limits_{0}^{\infty} du \; {\rm e}^{(1-\alpha) u}
\int\limits_{0}^{\eta_{\rm max}} h_a(\eta) \; {\eta \; d\eta \over
\sqrt{\eta_{\rm max}^2 - \eta^2}}.
\end{equation}
The average eccentricity can be computed by substituting
equations~(\ref{tracedf}) and~(\ref{uhalpha}) in~(\ref{avs2}), thereby
using the expression of $\Phi(r)$ given by equation~(\ref{potdens}).
In Figure~\ref{fig:trace} we plot the average eccentricity thus
computed as a function of the power-law slope $\alpha$. As can be
seen, the orbital eccentricities depend only mildly on the slope of
the density distribution. Note that for $\alpha > 3$, the mass within
any radius $r>0$ is infinite, whereas in the case $\alpha < 3$, the
mass outside any such radius is infinite. These differing properties
in the two regimes are apparent in the behavior of the median
eccentricity seen in Figure~\ref{fig:trace}, with the intermediate
case of $\alpha = 3$ yielding a minimum. Due to the infinities
involved, the cases examined may not accurately represent true dark
halos. After giving an example of how such infinities cause problems,
in the subsequent sections we examine other, more realistic, spherical
potentials with finite masses.
\placefigure{fig:trace}
\subsection{Stellar hydrodynamics and the virial theorem in the
isothermal potential}
\label{sec:virial}
The difference between infinite and finite samples is evident when
comparing the equation of stellar hydrodynamics to the virial theorem.
For a sphere with an isotropic DF, the former takes the simple form
for the one dimensional velocity dispersion $\sigma$ :
\begin{equation}
\label{stellhydro}
{d~ \over dr} \left( \rho \sigma^2 \right) = -\rho { d\Phi \over dr}
\end{equation}
For the tracer population of equation~(\ref{tracerho}) and an
isothermal potential, this simplifies to:
\begin{equation}
\label{hydroans}
\sigma^2 = -V_c^2 \left( {d\,{\rm ln} \rho \over d\, {\rm ln}
r}\right)^{-1} = {V_c^2 \over \alpha}
\end{equation}
In contrast, the virial theorem states that twice the kinetic energy
is equal to the virial. Adopting particle masses of unity yields:
\begin{equation}
\label{virialthm}
\sum v^2 = \sum \vec F \cdot \vec r = \sum {V_c^2 \over r} r
\end{equation}
Since the expectation value of $v^2$ is $3 \sigma^2$, this reduces
to:
\begin{equation}
\label{virialans}
\sigma^2 = {V_c^2 \over 3}
\end{equation}
This difference is due to the assumption of finite versus infinite
tracers. The answers match only for $\alpha = 3$, where the divergence
in mass is only logarithmic at both $r \rightarrow 0$ and $r
\rightarrow \infty $. Even the ``self-consistent'' case of $\alpha
=2$ has a problem that is pointed out in problem [4-9] of Binney \&
Tremaine (1987). The kinetic energy per particle is $V_c^2$ in a
model with only circular orbits and $ 3 V_c^2 /2$ if they are
isotropic. Yet, they have the same potential and must satisfy the
virial theorem. In Section~\ref{sec:omegacen}, we return to this
issue as we examine a case where the equations of stellar
hydrodynamics have been used on astrophysical objects where a
subsample of a finite number of global tracers has been observed. We
now turn to the calculation of the distribution of eccentricities for
finite sets of tracers.
\subsection{Truncated isothermal sphere with core}
\label{sec:trunc}
Given the infinite mass and central singularity of the singular
isothermal sphere, dark halos are often modeled as truncated,
non-singular, isothermal spheres with a core:
\begin{equation}
\label{trunciso}
\rho(r) = {M \over 2 \, \pi^{3/2} \, \gamma \, r_t} \;
{\exp(-r^2/r_t^2) \over r^2 + r_c^2}.
\end{equation}
Here $M$, $r_t$, and $r_c$ are the mass, truncation radius and core
radius, respectively, and $\gamma$ is a normalization constant given
by
\begin{equation}
\label{alpha}
\gamma = 1 - \sqrt{\pi} \; \left( {r_c \over r_t} \right)
\exp(r_c^2 / r_t^2) \left[ 1 - {\rm erf}(r_c/r_t) \right].
\end{equation}
Since for this density distribution the DF is not known analytically
(i.e., this requires the knowledge of $\rho(\Phi)$ in order to solve
the Eddington equation), and since this density distribution is no
longer scale-free, we have to use a different approach in order to
determine the distribution of orbital eccentricities. We employ the
method described by Hernquist (1993). We randomly draw positions
according to the density distribution. The radial velocity dispersion
is computed from the second-order Jeans equations, assuming isotropy,
i.e. assuming $f = f(E)$:
\begin{equation}
\label{radv}
\overline{v_r^2}(r) = {1 \over \rho(r)} \int_r^{\infty} \rho(r') {d\Phi
\over dr'}\, dr'.
\end{equation}
We compute local velocities $v$ by drawing randomly a unit vector and
then a magnitude from a Gaussian whose second moment is equal to
$\overline{v_r^2}$, truncated at the local escape speed. Once the six
phase-space coordinates are known, the energy and angular momentum are
calculated, providing the apo- and pericenter of the orbit by solving
equation~(\ref{rooteq}). The resulting distributions of orbital
eccentricities are not rigorous, since the velocity field has not been
obtained from a stationary DF, but is based only on its second
moments. However, as demonstrated by Hernquist (1993), $N$-body
simulations run from these initial conditions are nearly in
equilibrium (see also Section~\ref{sec:simul}), which suggests that
the eccentricity distributions derived are sufficiently close to the
actual equilibrium distributions.
\placefigure{fig:trunc}
In Figure~\ref{fig:trunc}, we plot the 20th (dotted lines), 50th
(solid lines), and 80th (dashed lines) percentile points of the
distributions of eccentricity (left panel) and apo- to pericenter
ratios (right panel), as functions of $r_t/r_c$. For $r_c = r_t$ the
distribution of eccentricities is almost symmetric, with the median
equal to $0.50$. When $r_t/r_c$ increases, the distribution becomes
more and more skewed towards high eccentricity orbits; the
distribution closely approaches that of the singular isothermal sphere
in the limit $r_t/r_c \rightarrow \infty$.
\placefigure{fig:jafhern}
\subsection{Steeper halo profiles}
\label{sec:nfw}
To examine the dependence of the orbital eccentricities on the actual
density distribution of the halos we determine the eccentricity
distributions of two well-known models with steeper outer density
profiles ($\rho \propto r^{-4}$):
\begin{equation}
\label{rhojaf}
\rho_J(r) = {M \over 4 \pi} \; {a \over r^2 (r + a)^2},
\end{equation}
and
\begin{equation}
\label{rhohern}
\rho_H(r) = {M \over 2 \pi} \; {a \over r (r + a)^3},
\end{equation}
where $M$ is the total mass. These profiles differ only in the
steepness of the central cusp: the former one, known as the Jaffe
(1983) profile, has a $r^{-2}$-cusp, whereas the latter, known as the
Hernquist (1990) profile, has a shallower $r^{-1}$-cusp. We use the
technique described in Section~\ref{sec:trunc} to compute the
distributions of orbital eccentricities for isotropic DFs $f(E)$.
The results are shown in Figure~\ref{fig:jafhern}, where we plot the
normalized eccentricity distributions for the Jaffe and Hernquist
spheres. For comparison the results for the singular isothermal sphere
with isotropic DF are reproduced as well. The three distributions are
remarkably similar (the differences are best appreciated by comparing
the thin lines indicating the percentile points). The distributions
are progressively skewed toward higher eccentricities in the sequence
isothermal, $\rho_H(r)$, $\rho_J(r)$, but only moderately so.
Navarro, Frenk \& White (1995, 1996, 1997) have used cosmological
simulations to argue that the outer density profiles of dark halos
decline as $r^{-3}$. Their profiles are likely created by a variety
of numerical artifacts (Moore {et al.~} 1998). However, our results
suggest that such debates will not significantly alter the expected
distributions of eccentricities; velocity anisotropy is far more
important than the details of the density profile.
\section{Orbital decay of eccentric orbits by dynamical friction}
\label{sec:friction}
The orbits of substructures within halos change owing to dynamical
friction. Chandrasekhar (1943) derived the local deceleration of a
body with mass $M$ moving through an infinite and homogeneous medium
of particles with mass $m$. The deceleration is proportional to the
mass $M$, such that more massive sub-halos sink more rapidly. As long
as $M \gg m$ the frictional drag is proportional to the mass density
of the medium, but independent of the mass $m$ of the constituents.
For an isotropic, singular isothermal sphere the deceleration is
\begin{equation}
\label{dvdt}
{d \vec v \over d t} = -{G M_s \over r^2} \; {\rm ln}\Lambda \;
\left({v \over V_c}\right)^{-2} \left\{ {\rm erf}\left({v\over
V_c}\right) - {2 \over \sqrt{\pi}} \left({v\over V_c}\right)
{\rm exp}\left[ -\left({v\over V_c}\right)^2 \right] \right\}
\vec{e}_v
\end{equation}
with $M_s$ and $v$ the mass and velocity of the object being
decelerated, $r$ the distance of that object from the center of the
halo, ${\rm ln} \Lambda$ the Coulomb logarithm, and $\vec{e}_v$ the
unit velocity vector (Tremaine 1976; White 1976a).
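Before turning to the simulations, the qualitative behaviour can be previewed with a semi-analytic toy integration (not the self-consistent $N$-body calculation of this paper): a test body moves in a rigid singular isothermal potential with the drag of equation~(\ref{dvdt}) applied directly. Units are $G = V_c = 1$, and the combination $G M_s \ln\Lambda = 0.05$ is an arbitrary choice that merely sets the sinking rate:

```python
import math

GM_lnL = 0.05                 # G M_s ln(Lambda), arbitrary test value (V_c = 1)
dt, T = 5.0e-4, 15.0

def accel(x, y, vx, vy):
    r = math.hypot(x, y)
    v = math.hypot(vx, vy)
    ax, ay = -x / r**2, -y / r**2              # halo: dPhi/dr = Vc^2 / r
    if v > 1e-8:
        F = math.erf(v) - 2.0 / math.sqrt(math.pi) * v * math.exp(-v * v)
        adf = GM_lnL / r**2 * F / v**2         # eq. (dvdt), with X = v/Vc = v
        ax -= adf * vx / v                     # drag opposes the velocity
        ay -= adf * vy / v
    return ax, ay

# start at the apocenter of an eccentric orbit: r = 1, tangential v = 0.5 Vc
x, y, vx, vy = 1.0, 0.0, 0.0, 0.5
radii, t = [], 0.0
while t < T:
    ax, ay = accel(x, y, vx, vy)
    vx += ax * dt; vy += ay * dt               # semi-implicit Euler (kick, drift)
    x += vx * dt; y += vy * dt
    radii.append(math.hypot(x, y))
    t += dt
    if radii[-1] < 0.1:                        # stop once the orbit has sunk deep
        break

n = len(radii)
early_apo = max(radii[: n // 5])
late_apo = max(radii[-(n // 5):])
```

Over a few orbital periods the apocenter shrinks by tens of percent, while the shape of the orbit changes far less dramatically.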
As mentioned earlier, remarkably little attention has been paid to the
effects of dynamical friction on eccentric orbits---our goal for the
rest of this section. Our main objective is to use both analytical
and numerical tools to investigate the change of orbital eccentricity
with time, and the dependence of the dynamical friction timescale on
the (intrinsic) eccentricity. Unlike most previous studies, we will
not focus on testing Chandrasekhar's dynamical friction formula, or on
studying the exact cause of the deceleration (i.e., local or global),
as this has been the topic of discussion in many previous papers.
\subsection{The time-dependence of orbital eccentricity}
We investigate the rate at which orbital eccentricity changes due to
dynamical friction. For simplicity, we focus on the evolution of
orbital eccentricity in the singular isothermal sphere, for which
\begin{equation}
\label{dedt}
{d e \over d t} = {d \eta \over d t} \; {d e \over d \eta},
\end{equation}
with $\eta = L/L_c(E)$ (see Section~\ref{sec:sing}). Using
equation~(\ref{circrad}), we find
\begin{equation}
\label{detadt}
{d \eta \over d t} = \eta \left\{ {1 \over L} \; {d L\over d t} - {1
\over V_c^2} \; {d E \over d t}\right\}.
\end{equation}
Because of dynamical friction the energy and angular momentum are no
longer conserved, and
\begin{equation}
\label{denergydt}
{d E \over d t} = v \; {d v \over d t}
\end{equation}
and
\begin{equation}
\label{dangmomdt}
{d L \over d t} = r \; {d v_{\bot} \over d t}.
\end{equation}
Since the frictional force acts in the direction opposite to the
velocity,
\begin{equation}
\label{dvperpdt}
{d v_{\bot} \over d t} = {v_{\bot} \over v} \; {d v \over d t}.
\end{equation}
Combining equations~(\ref{dedt})--(\ref{dvperpdt}), we find
\begin{equation}
\label{deccdt}
{d e \over d t} = {\eta \over v} \; {d e \over d \eta} \;
\left[ 1 - \left({v \over V_c}\right)^2 \right] \; {d v \over d t}.
\end{equation}
Here $d v / d t$ is the frictional deceleration given by
equation~(\ref{dvdt}). Since both $d v / d t < 0$ and $d e / d \eta <
0$ (see Figure~\ref{fig:eps}), we can immediately derive the sign of
$d e / d t$ at apo- and pericenter. At apocenter $v < V_c$, such that
$d e / d t > 0$, whereas at pericenter $v > V_c$ and thus $d e / d t <
0$. This explains the circularization found for `grazing encounters'
(Bontekoe \& van Albada 1987), as dynamical friction happens only near
pericenter. Equation~(\ref{dvdt}) shows that $d v / d t \propto
r^{-2}$, and the change in eccentricity is thus larger at pericenter
than at apocenter. However, the time spent near pericenter is shorter
than near apocenter, so the net evolution of the eccentricity cannot
be determined by inspection alone; we therefore turn to numerical
simulations.
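The reduction from equations~(\ref{dedt})--(\ref{dvperpdt}) to equation~(\ref{deccdt}) can be spot-checked numerically; a Python sketch with arbitrary illustrative values (none taken from the simulations):

```python
# Spot-check: (1/eta) deta/dt from equation (detadt), with
# dE/dt = v dv/dt and dL/dt = (v_perp / v) r dv/dt, reduces to
# (1/v)(1 - (v/V_c)^2) dv/dt, as used in equation (deccdt).
V_c, r, v_perp, v, vdot = 1.0, 2.0, 0.4, 0.9, -0.05  # illustrative values
L = r * v_perp
lhs = (r * (v_perp / v) * vdot) / L - (v * vdot) / V_c**2
rhs = (1.0 / v) * (1.0 - (v / V_c)**2) * vdot
assert abs(lhs - rhs) < 1e-12
```

The check is independent of the specific drag law, since only the direction of the frictional force enters.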
\subsection{$N$-body simulations}
\label{sec:simul}
We perform a set of fully self-consistent $N$-body simulations with a
large number of particles in order to examine the effects of dynamical
friction on eccentric orbits in a massive halo. The halo is modeled by
a truncated isothermal sphere (equation~[\ref{trunciso}]), with total
mass of unity, a core-radius $r_c = 1$, and a truncation radius $r_t =
50$. Scaled to the Milky Way, we adopt a unit of mass of
$10^{12}\>{\rm M_{\odot}}$ and a unit of length of $4$ kpc. With the
gravitational constant set to unity, the units of velocity and time
are $1037 \>{\rm km}\,{\rm s}^{-1}$ and $3.8$ Myr, respectively.
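The quoted conversions follow from dimensional analysis alone; a Python sketch (the numerical value of $G$ in mixed units is the only external input):

```python
G = 4.30091e-6            # gravitational constant, kpc (km/s)^2 / Msun
KPC_IN_KM = 3.0857e16
YR_IN_S = 3.1557e7

M_unit = 1.0e12           # Msun
L_unit = 4.0              # kpc

V_unit = (G * M_unit / L_unit) ** 0.5                       # km/s
T_unit_Myr = L_unit * KPC_IN_KM / V_unit / YR_IN_S / 1.0e6  # Myr
```

This recovers the stated units of $1037 \>{\rm km}\,{\rm s}^{-1}$ and $3.8$ Myr.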
The initial velocities of the halo particles are set up as described
in Section~\ref{sec:trunc}, following the procedure of Hernquist
(1993). Because of this particular method the halo is not necessarily
in equilibrium, nor is it expected to be virialized. In order to
remove effects of the halo's virialization on the decay of the
orbiting substructure, we first evolve the halo in isolation for 10
Gyr, by which time it has settled into virial equilibrium.
Figure~\ref{fig:densprof} shows the initial density profile compared
to that after 10 Gyr. As can be seen, and as already pointed out by
Hernquist (1993), the density profile has not changed significantly
after 10 Gyr.
\placefigure{fig:densprof}
We are interested in the effects of dynamical friction on galactic
objects that range from $\sim 10^6 \>{\rm M_{\odot}}$ (globular clusters) to $\sim
10^{10} \>{\rm M_{\odot}}$ (a massive satellite). To simulate dynamical friction
on an object of mass $M_s$ orbiting in a halo of mass $M_h$, we
require $M_s \gg m$. In our simulations, we insist that $N \gtrsim 10
M_h/M_s$. A simulation of the orbital decay of a globular cluster in
a galactic halo of $10^{12}\>{\rm M_{\odot}}$ thus requires $N \gtrsim 10^7$, clearly
too large a number for practical purposes. We run our simulations
with $N=50,000$ particles, roughly an order of magnitude increase over
most previous work. The highest resolution, self-consistent
simulation aimed at investigating dynamical friction has so far been
performed by Hernquist \& Weinberg (1989) using $N=20,000$ and
focusing only on circular orbits. We discuss the influence of the
number of halo particles in Section~\ref{sec:npart}. With $N=50,000$,
$M_h = 10^{12}\>{\rm M_{\odot}}$, and the requirement $N \gtrsim 10 M_h/M_s$, our
minimum satellite mass is $2\times 10^8 \>{\rm M_{\odot}}$. A list of the
parameters for each of our simulations is presented in
Table~\ref{tab:param}. Models~1, 2, and 3 have an initial apocenter
of 160 kpc, and start on an orbit with $e=0.8$. The initial
pericenter for these orbits is located at $17.9$ kpc from the center
of the halo, well outside the halo's core radius ($r_c = 4$ kpc).
Satellites on orbits of lower eccentricity (Models~4, 5, and 6) are
started from smaller apocentric radii, such that the initial specific
energy of all satellites in all models is equal.
The satellite is initially positioned at $(x,y,z) = (r_{+},0,0)$, with
$v_x = v_z = 0$ and $v_y$ chosen such as to obtain an initial orbit
with the desired eccentricity: we randomly draw velocities $0 \leq v_y
\leq v_{\rm escape}$ and determine the orbital eccentricity of the
satellite as described in Section~\ref{sec:sing}. We repeat this
procedure until we find an eccentricity within one percent of the
desired value.  At the start of each simulation, the satellite is
introduced instantaneously into the virialized halo potential.
The simulations use PKDGRAV (Dikaiakos \& Stadel 1996; Stadel \& Quinn
1998), a stable and well-tested, both spatially and temporally
adaptive tree code, optimized for massively parallel processors. It
uses an open-ended, variable-timestep criterion based upon the local
acceleration (Quinn {et al.~} 1997). Forces are computed using terms up
to hexadecapole order and a tolerance parameter of $\theta = 0.8$.
The code uses spline kernel softening, for which the forces become
completely Newtonian at 2 softening lengths (see Hernquist \& Katz
1989 for details). In terms of where the force is 50 percent
Newtonian, the equivalent Plummer softening length would be $0.67$
times the spline softening length. The softening length of the halo
particles is set to $\epsilon = 0.05$, or $200$ pc. Particle
trajectories are computed using a standard second order symplectic
leap-frog integrator, with a maximum time-step $\Delta t = 1$
(corresponding to $3.77$ Myr in our adopted physical units). Because
of the multi-time stepping, some particles are integrated with smaller
timesteps. For a typical simulation, approximately $20$ percent of
the particles are advanced with $\Delta t = 0.5$, and $\sim 0.05$
percent with $\Delta t = 0.25$.
The satellite is modeled as a single particle with mass $M_s$ and
softening length $\epsilon_s$. Beyond $2 \epsilon_s$ the satellite
potential falls as $r^{-1}$, so this radius approximates the
satellite's tidal radius, which is mainly determined by conditions at
pericenter (King 1962).  In principle, we could fix the softening
using an appropriate tidal radius. However, the pericentric distance
evolves and we would then have to include time evolution of the
softening and some mass loss. We opt for a simpler approach and fix
the mean density of each satellite:
\begin{equation}
\label{softsat}
\epsilon_s = 2.73 \, {\rm kpc} \; \biggl[ {M_s \over 10^{10} \>{\rm M_{\odot}}}
\biggr]^{1/3}.
\end{equation}
The scaling is set so that a satellite with mass similar to the Large
Magellanic Cloud (hereafter LMC) has a softening length comparable to
the LMC's effective radius (de Vaucouleurs \& Freeman 1972). This
choice is somewhat arbitrary, and we discuss its influence on
dynamical friction timescales in Section~\ref{sec:ressize}.
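A Python sketch of equation~(\ref{softsat}) (illustrative, not part of the simulation code):

```python
def softening_kpc(M_s):
    """Satellite softening length from equation (softsat): all
    satellites share the same mean density.  M_s in solar masses."""
    return 2.73 * (M_s / 1.0e10) ** (1.0 / 3.0)
```

Fixing the mean density means $\epsilon_s^3 \propto M_s$, so an eight-fold increase in mass doubles the softening length.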
All simulations are run for $15$ Gyr on $2$ or $3$ DEC Alpha
processors, each requiring about 48 hours of wallclock time. Energy
conservation was typically of the order of one percent over the total
length of the simulation.
\placefigure{fig:orbits}
\subsection{Results}
\label{sec:resfric}
We determine the eccentricity in two ways. We compute the center of
mass of the halo particles and use the galactocentric distance of the
satellite ($r_s$), and its energy and angular momentum to solve
equation~(\ref{rooteq}) for the orbital eccentricity (shown as solid
lines in Figures~\ref{fig:resa} and~\ref{fig:resb}). This
eccentricity is only accurate if the potential of the halo has not
changed significantly owing to the satellite's decay. Hence, we also
determine the radial turning points of the orbit and compute the
approximate eccentricity which we assign to a time midway between the
turning points (shown as open circles in Figures~\ref{fig:resa}
and~\ref{fig:resb}).
\placefigure{fig:resa}
In Figure~\ref{fig:orbits} we plot the trajectories of the satellites
for models 1 to 6. Both the $x$--$y$ (upper panels) and $x$--$z$
projections (smaller lower panels) are shown. The three trajectories
plotted on the top vary in satellite mass, whereas those at the bottom
vary in their initial orbital eccentricity (see
Table~\ref{tab:param}). The time-dependences of galactocentric
radius, eccentricity, energy, and angular momentum for various models
are shown in Figures~\ref{fig:resa} and~\ref{fig:resb}. Energies are
scaled by the central potential $\Phi_0$, and angular momenta by their
value at $t=0$. Eccentricities are only plotted up to $t_{0.8}$ (see
below), after which the satellite has virtually reached the halo's
center. Models~1 and~2 reveal an almost constant orbital
eccentricity. In Models~3 to~9, in which the satellite mass is equal
to $2 \times 10^{10} \>{\rm M_{\odot}}$, the eccentricity reveals a saw-tooth
behavior, such that eccentricities decrease near pericenter, and
increase near apocenter. This is in perfect agreement with
equation~(\ref{deccdt}). It is remarkable that the net effect of $d e
/ d t$ is nearly zero: the eccentricity does not change significantly.
The alternative definition of eccentricity based on observed turning
points (open circles) shows similar results. Small deviations are due
to a change of the halo potential induced by the decaying satellite,
and the heuristic assignment of a time to the value found from
monitoring the radial turning points. The change in energy with time
reveals a step-wise behavior, indicating that the pericentric passages
dominate the satellite's energy loss. Note that the energy of the
satellites in Models~3 to 9 does not become equal to $\Phi_0$ once the
satellite reaches the center of the potential well.  This is due to
the deposition of energy into the halo particles by the satellite.  The
details of this process will be examined in a future paper (van den
Bosch {et al.~} in preparation).
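The absence of net circularization is not special to the self-consistent treatment: a toy integration of a point mass in a singular isothermal sphere subject to the drag of equation~(\ref{dvdt}) shows the same behavior. The Python sketch below uses arbitrary code units; the drag strength $\mu = G M_s \ln\Lambda$ and the initial conditions are illustrative choices, not the simulation parameters of Section~\ref{sec:simul}:

```python
import math

MU = 0.005   # G * M_s * ln(Lambda) in code units (arbitrary, small)
V_C = 1.0    # circular speed of the singular isothermal sphere

def drag(vx, vy, r):
    """Chandrasekhar deceleration of equation (dvdt) as a vector."""
    v = math.hypot(vx, vy)
    x = v / V_C
    f = math.erf(x) - (2.0 / math.sqrt(math.pi)) * x * math.exp(-x * x)
    a = (MU / r**2) * f / x**2
    return -a * vx / v, -a * vy / v

def turning_points(x, y, vx, vy, dt=1.0e-3, t_max=30.0):
    """Semi-implicit Euler integration in Phi = V_C^2 ln r with drag;
    returns the radii of successive radial turning points."""
    turns, going_out = [], None
    for _ in range(int(t_max / dt)):
        r = math.hypot(x, y)
        fx, fy = drag(vx, vy, r)
        vx += (-V_C**2 * x / r**2 + fx) * dt
        vy += (-V_C**2 * y / r**2 + fy) * dt
        x += vx * dt
        y += vy * dt
        out = math.hypot(x, y) > r
        if going_out is not None and out != going_out:
            turns.append(r)
        going_out = out
    return turns

# launched from apocenter r = 1 with tangential speed 0.35 (e ~ 0.7)
turns = turning_points(1.0, 0.0, 0.0, 0.35)
```

Since the orbit starts at apocenter, the even-indexed entries of `turns` are pericenters and the odd-indexed entries apocenters; the apocenters shrink steadily while the eccentricity inferred from consecutive turning points drifts little.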
\placefigure{fig:resb}
Because of the elongation of the orbits, the galactocentric distance
is not a meaningful parameter to use to characterize the decay times.
Instead we use both the energy and the angular momentum. While the
energy is well defined, it changes almost stepwise (see
Figures~\ref{fig:resa} and~\ref{fig:resb}). The angular momentum
depends on the precise position of the halo's center which may be
poorly determined when the satellite induces an $m=1$ mode. We define
the following characteristic times: $t_{0.4}$, $t_{0.6}$, and
$t_{0.8}$, the times when the satellite's energy reaches
$40$, $60$, and $80$ percent of $\Phi_0$, respectively, and $t_{1/4}$,
$t_{1/2}$, and $t_{3/4}$ when the angular momentum is reduced to a
quarter, a half and three-quarters of its initial value. These
timescales are listed in Table~\ref{tab:timescale}. Because of the
instantaneous introduction of the satellite in the virialized halo
potential, the absolute values of these timescales may be off by a few
percent. However, we are mainly interested in the variation of the
dynamical friction time as a function of the orbital eccentricity.  We
believe that the instantaneous introduction of the satellite will not
have a significant influence on this behavior, as this is only a
second order effect.
\subsubsection{Influence of satellite mass}
\label{sec:resmass}
Models~1, 2, and 3 start on the same initial orbit, but vary in both
the mass and size of the satellite. Satellite masses correspond to $2
\times 10^8 \>{\rm M_{\odot}}$, $2 \times 10^9 \>{\rm M_{\odot}}$, and $2 \times 10^{10}
\>{\rm M_{\odot}}$ for Models~1, 2, and~3 respectively. The sizes of the
satellites, e.g., their softening lengths, are set by
equation~(\ref{softsat}), such that all satellites have the same mean
density. The mass and size of the satellite in model 3 correspond
closely to that of the LMC.  It is clear from a comparison of
Models~1, 2, and 3 that dynamical friction by the galactic halo is
negligible for satellites with masses $\lesssim 10^{9} \>{\rm M_{\odot}}$. Thus,
globular clusters and the dwarf spheroidals in the galactic halo are
not expected to have undergone significant changes in their orbital
properties induced by dynamical friction by the halo; the only two
structures in the galactic halo that have experienced significant
amounts of dynamical friction by the halo are the LMC and the SMC,
with masses of $\sim 2 \times 10^{10} \>{\rm M_{\odot}}$ and $\sim 2 \times 10^9
\>{\rm M_{\odot}}$ respectively (Schommer {et al.~} 1992). Note that we have
neglected the action of the disk and bulge, which apply a strong
torque on objects passing nearby. This can result in an enhanced
decay of the orbit, not taken into account in the simulations
presented here.
\subsubsection{Influence of orbital eccentricity}
\label{sec:resecc}
\placefigure{fig:halftime}
Models~3, 4, 5, and 6 differ only in the eccentricity of their initial
orbit; satellites on less eccentric orbits are started from smaller
apocentric radii, such that the initial energy is the same in each
case. The characteristic friction timescales (defined in
Section~\ref{sec:resfric} and listed in Table~\ref{tab:timescale}) as
a function of initial eccentricity are shown in
Figure~\ref{fig:halftime}. The decay time decreases with increasing
eccentricity, the exact rate of which depends on the characteristic
time employed. The timescales for the most eccentric orbit (Model~3)
are a factor $1.5$ to $2$ shorter than for the circular orbit
(Model~6). This exemplifies the importance of a proper treatment of
orbital eccentricities, since decay times based on circular orbits
alone overestimate the timescales of dynamical friction for typical
orbits in a dark halo by up to a factor two.
Using Chandrasekhar's dynamical friction formula, and integrating over
the orbits, Lacey \& Cole (1993) found that, for a singular isothermal
sphere, the dynamical friction time is proportional to $\eta^{0.78}$
(with $\eta$ the orbital circularity defined in
Section~\ref{sec:analyt}). The dotted lines in
Figure~\ref{fig:halftime} correspond to this dependence, normalized to
$t_{0.6}$ and $t_{1/2}$ for the orbit with an intrinsic eccentricity
of $0.6$. As can be seen, our results seem to suggest a somewhat
weaker dependence of the dynamical friction time on orbital
eccentricity. Combining all the different characteristic decay times
listed in Table~\ref{tab:timescale}, we obtain the best fit to our
$N$-body results for $t \propto \eta^{0.53 \pm 0.01}$. This
difference with respect to the results of Lacey \& Cole is most likely
due to the fact that their computations ignore the global response of
the halo to the decaying satellite.
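The exponent quoted above comes from a power-law fit of decay time against circularity, which in log-log space is ordinary least squares. A Python sketch with placeholder data (the synthetic arrays below are not the values of Table~\ref{tab:timescale}):

```python
import math

def powerlaw_slope(etas, times):
    """Least-squares slope beta of t = A * eta**beta in log-log space."""
    xs = [math.log(e) for e in etas]
    ys = [math.log(t) for t in times]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return sxy / sxx

# synthetic check: an exact power law is recovered
etas = [0.31, 0.46, 0.62, 1.0]            # placeholder circularities
times = [10.0 * e ** 0.53 for e in etas]  # placeholder decay times
```

In practice one would fit all characteristic times of Table~\ref{tab:timescale} simultaneously, as done for the quoted $\eta^{0.53 \pm 0.01}$.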
\subsubsection{Influence of satellite size}
\label{sec:ressize}
\placefigure{fig:halfeps}
Since the adopted softening length of a satellite, ${\epsilon_s}$, is
somewhat arbitrary (see Section~\ref{sec:simul}), we examine the
effect of this choice by running a set of models (3, 7, 8, and 9) that
differ only in this quantity. The characteristic friction timescales
as a function of ${\epsilon_s}$ are shown in Figure~\ref{fig:halfeps}.
In Chandrasekhar's formalism, the dynamical friction timescale depends
on the inverse of the Coulomb logarithm ${\rm ln} \Lambda$. This is
often approximated by the logarithm of the ratio of the maximum and
minimum impact parameters for the satellite. Taking $b_{\rm max} =
r_t = 200$ kpc and $b_{\rm min} = \epsilon_s$, the expected ratios
between the timescales of Models~3, 7, 8, and 9 can be determined.
The dotted lines in Figure~\ref{fig:halfeps} shows these predictions
scaled to the characteristic times of Model~3. It is apparent,
however, that these simulations are not accurately represented by this
simple approximation. This is not surprising as the choices for the
minimum and maximum impact parameters are somewhat arbitrary. Because
of the radial density profile of the halo, $b_{\rm max}$ is likely to
depend on radius. Furthermore, approximating $b_{\rm min}$ by the
satellite's softening length is only appropriate for large values of
$\epsilon_s$; in the limit where the satellite becomes a point mass
$b_{\rm min}$ should be taken as $G M_s/\langle v^2 \rangle$, where
$\langle v^2 \rangle^{1/2}$ is the rms velocity of the background, a
quantity which again depends on the galactocentric distance of the
satellite (White 1976b). The predictions in Figure~\ref{fig:halfeps}
are thus as good as we might have hoped for. The important point we
wish to make here is that deviations from the softening lengths given
by equation~(\ref{softsat}), within reasonable bounds, do not
significantly modify the characteristic dynamical friction timescales
presented here.
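Under this prescription the dotted-line predictions reduce to ratios of Coulomb logarithms; a Python sketch (with $b_{\rm max} = 200$ kpc and $b_{\rm min} = \epsilon_s$, as assumed in the text):

```python
import math

def timescale_ratio(eps, eps_ref=2.73, b_max=200.0):
    """Predicted friction-timescale ratio t(eps)/t(eps_ref) when only
    b_min = eps changes: t is proportional to 1/ln(b_max/b_min)."""
    return math.log(b_max / eps_ref) / math.log(b_max / eps)
```

A larger softening lowers $\ln \Lambda$ and hence lengthens the predicted decay time, which sets the trend of the dotted lines.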
\subsubsection{Influence of number of halo particles}
\label{sec:npart}
\placefigure{fig:comp}
In order to address the accuracy of our simulations we have repeated
Model~4 with both 20,000 and 100,000 halo particles. We henceforth
refer to these simulations as Models~10 and~11 respectively (see
Table~\ref{tab:param}).  Both models are first evolved for 10
Gyr without a satellite to let the halo virialize and reach
equilibrium.  As for the halo with 50,000 particles, the density
distribution after 10 Gyr has not changed significantly.
A comparison of the results of Models~4, 10 and~11 is shown in
Figure~\ref{fig:comp}. The solid lines correspond to Model~4, the
dotted lines to Model~10 ($N = 20,000$), and the dashed lines to
model~11 ($N = 100,000$). Clearly our results are robust, with only a
negligible dependence on the number of halo particles. The main
differences occur at later times, when the satellite has virtually
reached the center of the halo. In Models~4 and 11 the satellite
creates a core in the halo, which causes $E/\Phi_0 < 1$ in the center.
Model~10 has insufficient particles to resolve this effect, which
explains the differences in both the satellite's energy and
eccentricity at later times with respect to models~4 and~11. In
Table~\ref{tab:timescale} we list the different characteristic
timescales for Model~10 and~11. They are similar to those of Model~4
to an accuracy of $\sim 4$ percent for Model~10 and $\sim 1$ percent
for Model~11. Finally we note that our main conclusion, that there is
no net amount of circularization, holds for different numbers of halo
particles.
\section{Applications}
\label{sec:applic}
\placefigure{fig:prob}
\subsection{Orbits of globular clusters}
\label{sec:globulars}
Odenkirchen {et al.~} (1997, hereafter OBGT) used Hipparcos data to
determine the proper motions of 15 globular clusters so that all six
of their phase-space coordinates are known. Their velocity dispersions
show a slight radial anisotropy with $(\sigma_r, \sigma_{\theta},
\sigma_{\phi}) = (127 \pm 24, 116 \pm 23, 104 \pm 20)$ $\>{\rm km}\,{\rm s}^{-1}$. OBGT
integrated the orbits using a model for the galactic potential (Allen
\& Santillan 1991) that approaches an isothermal at large radii. They
find a median eccentricity of $e = 0.62$ and conclude that the
globulars are {\it preferentially on orbits of high eccentricity}.
We can compare their results with the eccentricity distributions
expected for a power-law tracer population in a singular isothermal
halo. We use the technique described in the Appendix to determine the
distribution of orbital eccentricities given a logarithmic slope of
the density distribution $\alpha$ and an anisotropy parameter $a$. We
compare the cumulative distribution to OBGT's sample using the K-S
test (e.g., Press {et al.~} 1992). We determine the probabilities that the
OBGT sample is drawn randomly from such a distribution using a $100
\times 100$ grid in $(\alpha,a)$-space with $\alpha \in [2,6]$ and $a
\in [-2,2]$ and show the contour plot of these probabilities in
Figure~\ref{fig:prob}.  The contours correspond to the
$10, 20, \ldots, 90$ percent probability levels, the last of which is
plotted as a thick contour.  Clearly, the slope of the density
distribution, $\alpha$, is poorly constrained as the dependence is
mild (see Section~\ref{sec:tracers}). However, the velocity
anisotropy is well constrained, and we find that a small amount of
radial anisotropy is required in order to explain the observed
eccentricities of the globular clusters. This is in excellent
agreement with the velocity dispersions obtained directly from the
data. The solid dot in Figure~\ref{fig:prob} corresponds to the best
fitting model with $\alpha = 3.5$, the value preferred by Harris
(1976) and Zinn (1985). In Figure~\ref{fig:ksplot} we plot the
cumulative distribution of the eccentricities from the OBGT sample
(thin lines) with the cumulative distribution for the model
represented by the dot in Figure~\ref{fig:prob}. The K-S test yields
a probability of $94.3$ percent that the OBGT sample of orbits is
drawn randomly from the probability distribution represented by the
thick line.
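The comparison relies on the standard one-sample K-S statistic; a minimal Python sketch (the model CDF, which follows from the Appendix machinery, is left abstract):

```python
def ks_statistic(sample, model_cdf):
    """One-sample Kolmogorov-Smirnov statistic D = sup |F_n(x) - F(x)|,
    evaluated at the jumps of the empirical CDF F_n."""
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = model_cdf(x)
        d = max(d, abs((i + 1) / n - f), abs(i / n - f))
    return d
```

Applied to the 15 OBGT eccentricities against a model cumulative distribution, the probability then follows from the asymptotic Kolmogorov distribution evaluated near $\sqrt{n}\,D$.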
\placefigure{fig:ksplot}
We conclude that the distribution of eccentricities is just what one
expects from a population with a mild radial velocity anisotropy;
there is no ``preference'' for high eccentricity orbits as suggested
by OBGT. The potential used by OBGT to integrate their orbits
deviates significantly from a spherical isothermal in the center,
where the disk and bulge dominate. Since the OBGT sample is limited
to nearby globulars within approximately 20 kpc of the Sun, the
deviations are likely significant. However, the good agreement between
the distribution of eccentricities for the OBGT sample, based on the
axisymmetric potential of Allen \& Santillan (1991), and a spherical
isothermal potential suggests that the differences between these two
potentials have only a mild influence on the distribution of orbital
eccentricities.
\subsection{Kinematics in $\omega\;$Centauri}
\label{sec:omegacen}
Norris {et al.~} (1997) examined the dependence of kinematics on metal
abundance in the globular cluster $\omega\;$Centauri (hereafter
$\omega\;$Cen) and found that the characteristic velocity dispersion
of the most calcium rich stars is $\sim 8 \>{\rm km}\,{\rm s}^{-1}$, while that of the
calcium poor stars is $\sim 13 \>{\rm km}\,{\rm s}^{-1}$. The metal rich stars are
located closer to the middle where the velocity dispersion is largest
and the authors note that there is evidence for rotation in the
metal-poor sample (at $\sim 5 \>{\rm km}\,{\rm s}^{-1}$), but not in the metal rich
sample. They use all these facts to conclude that ``{\it The more
metal-rich stars in $\omega\;$Cen are kinematically cooler than the
metal-poorer objects}''. The metal-rich stars in their sample live
in the part of the cluster where the inferred circular velocity is
greatest. Hence, they have an average value of $\vec F \cdot \vec r$
that is greater than for the metal-poor stars. By the virial theorem,
their mean kinetic energy must be higher (equation~[\ref{virialthm}]),
yet Norris {et al.~} find just the opposite with the average kinetic
energy of a metal-poor star being more than twice that of a metal-rich
star.
Since we are not about to abandon the virial theorem, we can only
conclude that {\it if the measurements of the dispersions are correct,
the kinetic energy per metal-rich star must be at least that of the
metal-poor stars implying a much greater kinetic energy in the plane
perpendicular to the line of sight than observed along the line of
sight}. The straightforward way for this to occur is a rotating
disk seen face-on. The rotation must be large enough that $v_{rot}^2
+ 3 \sigma^2$ is at least as large as seen for the metal-poor stars.
This implies that the metal-rich stars have a rotation velocity of
$\gtrsim 18 \>{\rm km}\,{\rm s}^{-1}$. The metal-rich component of $\omega\;$Cen must be a
rotating disk that is more concentrated than the metal-poor stars.
This is exactly the signature that Norris {et al.~} would have ascribed to
self-enrichment of the cluster (Morgan \& Lake 1989). The rotation
signature would be visible as a proper motion of $0.064 ''$/century.
Rotation of a smaller magnitude has been detected in M22 by Peterson
\& Cudworth (1994). Note that a face-on disk in $\omega\;$Cen, which
is the Galactic globular with the largest projected flattening,
implies a triaxial potential.
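The quoted rotation limit is simple arithmetic; a Python sketch using the dispersions quoted above:

```python
# require v_rot^2 + 3 * sigma_rich^2 >= 3 * sigma_poor^2, i.e. the
# metal-rich kinetic energy per star matches that of the metal-poor stars
sigma_rich = 8.0    # km/s, calcium-rich line-of-sight dispersion
sigma_poor = 13.0   # km/s, calcium-poor line-of-sight dispersion
v_rot_min = (3.0 * (sigma_poor**2 - sigma_rich**2)) ** 0.5
```

This gives $v_{\rm rot} \gtrsim 18 \>{\rm km}\,{\rm s}^{-1}$, as stated.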
Norris {et al.~} did not realize that their observations presented this
dynamical puzzle. Instead, they believed that the difference could
result from the relative radial profiles of the two components as
might be seen in the equations of stellar hydrodynamics
(equation~[\ref{hydroans}]); i.e., the density distribution of the
metal rich component must fall off more rapidly with radius than for
the metal poor component. They used the observations to argue that
$\omega\;$Cen was the product of a merger of previous generations of
substructure. However, we argue that such a merger would not have the
signatures that they see. The metal-poor stars are the overwhelming
majority of the stars. Conservation of linear momentum thus implies
that the mean radius of the stars that were in the small (metal-rich)
object will be greater than those that were in the large (metal-poor)
one. Conservation of angular momentum furthermore implies that the
rotation velocity of the stars that were in the small (metal-rich)
object will be greater than those that were in the large (metal-poor)
one. These signatures are exactly opposite of those claimed by Norris
{et al.~} to be consistent with the merger model.
\subsection{Sinking satellites and the heating of galaxy disks}
\label{sec:satellite}
The sinking and subsequent merging of satellites in a galactic halo
with an embedded thin disk has been studied by numerous groups (e.g.,
Quinn \& Goodman 1986; T\'oth \& Ostriker 1992; Quinn, Hernquist \&
Fullagar 1993; Walker, Mihos \& Hernquist 1996; Huang \& Carlberg
1997).  The timescale for merging clearly depends on the eccentricity
of the satellite's orbit.  Once the satellite interacts with the disk, the sinking
accelerates. Several of the above studies assumed that
circularization owing to dynamical friction by the halo is efficient
and examined satellites that started on circular orbits at the edge of
the disk. However, we have shown that circularization is largely a
myth; satellites will have large eccentricities when they reach the
disk. Quinn \& Goodman (1986) followed a satellite with a ``typical''
eccentricity in an isotropic singular isothermal sphere. They derived
$e \simeq 0.47$ or $r_{+}/r_{-} = 2.78$ for this ``characteristic''
orbit based on some poorly founded arguments. This ratio, however, is
significantly smaller than the true median value of $r_{+}/r_{-} \sim
3.5$; approximately $63$ percent of the orbits have $e > 0.47$. Huang
\& Carlberg (1997), in an attempt to be as realistic as possible when
choosing the initial orbital parameters, started their satellite well
outside the disk on an eccentric orbit. However, the eccentricity is
only 0.2, clearly too low to be considered a typical orbit.
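Eccentricity and apo-to-pericenter ratio are used interchangeably here; the conversion, as a Python sketch:

```python
def ecc_from_ratio(q):
    """Eccentricity e = (r_+ - r_-)/(r_+ + r_-) from q = r_+/r_-."""
    return (q - 1.0) / (q + 1.0)

def ratio_from_ecc(e):
    """Inverse: r_+/r_- = (1 + e)/(1 - e)."""
    return (1.0 + e) / (1.0 - e)
```

Thus $r_{+}/r_{-} = 2.78$ corresponds to $e \simeq 0.47$ (Quinn \& Goodman's orbit), while the median ratio of $\sim 3.5$ corresponds to $e \simeq 0.56$.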
The eccentricity of the orbit has two effects: more eccentric orbits
decay more rapidly in the halo, and they touch the disk at an earlier
time.  The sinking caused by the satellite's interaction with the disk
is more rapid when the difference in velocity between the satellite
and the disk stars is smaller;
satellites on prograde, circular orbits in the disk decay fastest (see
Quinn \& Goodman 1986 for a detailed discussion). Thus whereas more
eccentric orbits reach the disk sooner, they are less sensitive to the
disk interaction because of their high velocities at pericenter. Less
eccentric orbits, on the other hand, require a longer time to reach
the disk, but once they do their onward decay is rapid (if the orbit
is prograde). The exact dependence of the timescales and disk heating
on the orbital eccentricities awaits simulations (van den Bosch {et al.~},
in preparation).
We can compare our results to the few satellite orbits determined from
proper-motions. Johnston (1998) integrated the orbits of the LMC,
Sagittarius, Sculptor, and Ursa Minor in the galactic potential, and
found apo- to pericenter ratios of $2.2$, $3.1$, $2.4$, and $2.0$
respectively. Using the K-S test, we find that these eccentricities
are so small that there is only an $8.7$ percent probability that this
sub-sample is drawn from the isotropic model of an isothermal sphere.
As shown in previous sections the distribution is relatively
insensitive to the profiles of the underlying potential and the
density distribution of the tracer population. This therefore implies
that the velocity distribution is very strongly
tangential\footnote{Even if we adopt the maximum amount of tangential
anisotropy allowed by our simple parameterization of $h_a(\eta)$
(i.e., $a \rightarrow \infty$) the K-S probability does not exceed
20 percent.}. Since we expect that collapsed halos would produce
states with dispersions that are preferentially radial, we suspect
that {\it the system of galactic satellites has been strongly altered
with satellites on more eccentric orbits having been destroyed owing
to their faster dynamical friction timescales}. In
Section~\ref{sec:friction}, we found that more eccentric orbits have
smaller dynamical friction timescales, but the effect is only modest
(less than a factor two). Furthermore, all the satellites except the
Magellanic clouds have masses $\lesssim 10^9 \>{\rm M_{\odot}}$, and the dynamical
friction owing to the halo is almost negligible for these systems. To
get the strong effect that is seen, we have to appeal to the Milky Way
disk to accelerate the decay and/or tidally disrupt the satellites on
the more eccentric orbits. The problem with this solution, however, is
that most studies of sinking satellites have shown that they lead to
substantial thickening of the disk. Further studies are needed to
investigate whether a disk can indeed yield the observed distribution
of orbital eccentricities for the (surviving) satellites without being
disrupted itself. We are currently using high resolution $N$-body
simulations to investigate this in detail (van den Bosch {et al.~}, in
preparation). Finally, we must note that the sample of satellites
with proper motions is small and the proper motions themselves have
large errors which bias results toward large transverse motions.
Hence, we close this section with the all too common lament of the
need for more data with more precision as well as better simulations
that include the disk.
\subsection{Tidal streams}
\label{sec:streams}
The tidal disruption of satellites orbiting in a galactic halo creates
tidal streams (see e.g., McGlynn 1990; Moore \& Davis 1994; Oh, Lin \&
Aarseth 1995; Piatek \& Pryor 1995; Johnston, Spergel \& Hernquist
1995). These streams are generally long-lived features outlining the
satellite's orbit (Johnston, Hernquist \& Bolte 1996). Clearly, a
proper understanding of the orbital properties of satellites is of
great importance when studying tidal streams and estimating the effect
they might have on the statistics of micro-lensing (e.g., Zhao
An interesting result regarding tidally disrupted satellites was
reached by Kroupa (1997) and Klessen \& Kroupa (1998). They simulated
the tidal disruption of a satellite without dark matter orbiting a
halo. After tidal disruption, a stream sighted along its orbit can
have a spread in velocities that mimics a bound object with a
mass-to-light ratio that can be orders of magnitude larger than the
actual, stellar mass-to-light ratio. They show that one can only
sight down such a stream if the satellite was on an eccentric orbit.
The chance of inferring such a large mass-to-light ratio thus
increases with eccentricity. Klessen \& Kroupa conclude that the high
inferred mass-to-light ratios in observed dwarf spheroidal galaxies
could occur from tidal streams (rather than from a dark matter halo
surrounding the satellite) {\it if the orbital eccentricities exceed
$\sim 0.5$}. We find that $\sim 60$\% of the orbits obey this
criterion even without any radial anisotropy (e.g.,
Figure~\ref{fig:anis}). We can thus not rule out satellites without
dark matter based on the distribution of orbits. However, Klessen \&
Kroupa (1998) did not examine the kinematics in detail. The
velocity spread is due to a systematic gradient in velocity along the
stream. The spurious dwarf will appear to rotate rapidly if it is
not perfectly aligned with the line of sight, as is clear in the
simulations of the Sagittarius dwarf performed by Ibata \& Lewis
(1998).
\placefigure{fig:ghigna}
\subsection{Clusters of galaxies}
\label{sec:clusters}
Using very high resolution cosmological $N$-body simulations, Ghigna
{et al.~} (1998) were able to resolve several hundred subhalos within a
rich cluster of galaxies. They examined, amongst others, the orbital
properties of these dark matter halos within the larger halo of the
cluster, and found that the subhalos followed the same distribution of
orbits as the dark matter particles (i.e., those particles in the
cluster that are not part of a subhalo). Ghigna {et al.~} report a median
apo-to-pericenter ratio of six. We have used the orbital parameters of
the subhalos of the cluster analyzed by Ghigna {et al.~} to examine the
orbital properties in some more detail. When we only consider subhalos
whose apocenters are less than the virial radius of the cluster,
$r_{200}$, we obtain a sample of $98$ halos with a median
apo-to-pericenter ratio of $3.98$. Using the K-S test as in
Section~\ref{sec:globulars} we obtain a best fit to the distribution
of orbital eccentricities for an anisotropy parameter of $a \simeq
-0.04$: the virialized region of the cluster is very close to
isotropic. Throughout we consider only a tracer population with
$\alpha = 2$, since this is in reasonable agreement with the observed
number density of subhalos. Furthermore, the results presented here
are very insensitive to the exact value of $\alpha$, similar to what
we found in Section~\ref{sec:globulars}. In Figure~\ref{fig:ghigna}
we plot the cumulative distribution of the orbital eccentricities of
the 98 halos with $r_{+} < r_{200}$ (thin line) together with the same
distribution for an isothermal sphere with our best fitting anisotropy
parameter $a = -0.04$. The K-S test yields a probability of $61.3$
percent that the two data sets are drawn from the same distribution.
When analyzing all subhalos with $r_{+} < 2 \, r_{200}$, we obtain a
sample of 311 halos with a median apo-to-pericenter ratio of $4.64$
and a best fitting value for the anisotropy parameter of $a = -0.27$.
Clearly, the periphery of the cluster, which is not yet virialized, is
more radially anisotropic, with a larger fraction of highly eccentric orbits.
The $N$-body simulations in Section~\ref{sec:simul} are easily scaled
to clusters of galaxies, considering the virialized regions of both
galaxies and clusters. If we adopt a cluster mass of $10^{15} \>{\rm M_{\odot}}$
and take the timescale to be the same as for the Milky Way simulation,
the truncation radius becomes $r_t = 2$ Mpc. In
Section~\ref{sec:resmass} we found that only objects with masses
greater than $\sim 0.1$ percent of the halo mass, $M_{\rm gal} \gtrsim
10^{12} \>{\rm M_{\odot}}$ in the cluster, experience significant dynamical
friction in a Hubble time. So, only the most massive galaxies
experience any orbital decay.
Recently, Moore {et al.~} (1996) showed that high speed encounters
combined with global cluster tides---galaxy harassment---causes the
morphological transformation of disk systems into spheroids (see also
Moore, Lake \& Katz 1998). Moore {et al.~} limited themselves to mildly
eccentric orbits with $r_{+}/r_{-} = 2$, but they correctly noted
that this was a low value compared to the typical eccentricity. Their
choice was made in order to be conservative and to underestimate the
effect of harassment, as the effect increases with orbital
eccentricity. They also felt that any larger value would stretch the
reader's credulity as they could refer to no clear presentation of the
likely distribution of orbital eccentricities. Our results imply that
the effects of harassment were underestimated in their study.
\subsection{Semi-analytical modeling of galaxy formation}
\label{sec:sam}
Over the past couple of years several groups have developed
semi-analytical models for galaxy formation within the framework of a
hierarchical clustering scenario of structure formation (e.g.,
Kauffmann, White \& Guiderdoni 1993; Cole {et al.~} 1994; Heyl {et al.~} 1995;
Baugh, Cole \& Frenk 1996; Somerville \& Primack 1998). The general
method of these models is to use the extended Press-Schechter
formalism (Bower 1991; Bond {et al.~} 1991; Lacey \& Cole 1993) to create
merging histories of dark matter halos. Simplified yet physical
treatments are subsequently used to describe the evolution of the
baryonic gas component in these halos. Using simple recipes for star
formation and feedback, coupled to stellar population models, finally
allows predictions for galaxies to be made in an observational
framework.
A crucial ingredient of these semi-analytical models is the treatment
of mergers of galaxies. When two dark halos merge, the fate of their
baryonic cores, i.e., the galaxies, depends on a number of factors.
First of all, dynamical friction causes the galaxies to spiral to the
center of the combined dark halo, thus enhancing the probability that
the baryonic cores collide. Secondly, whether or not such a collision
results in a merger depends on the ratio of the internal velocity
dispersion of the galaxies involved to the encounter velocity. Both
depend critically on the masses involved and on the orbital
parameters. The dependence on the orbital eccentricities is addressed
by Lacey \& Cole (1993) who concluded that observations of merger
rates and the thinness of galactic disks seem to argue against
strongly elongated orbits. This would be problematic in the light of
the typical distributions of orbital eccentricities presented here.
However, as we pointed out in Section~\ref{sec:resecc}, Lacey \& Cole
have likely overestimated the dependence of dynamical friction times
on orbital eccentricity. Furthermore, as emphasized in
Section~\ref{sec:satellite} our current understanding of the damaging
effect of sinking satellites on thin disks is not well established and
may have been overestimated in the past (Huang \& Carlberg 1997).
\placefigure{fig:avertime}
In the actual semi-analytical modeling the merger time-scales are
defined by simple scaling laws that depend on the masses only, but
that ignore the orbital parameters. The eccentricity distributions
presented here, coupled with the $\eta^{0.53}$ dependence of dynamical
friction times, may prove helpful in improving the accuracy of the
merging timescales in the semi-analytical treatments of galaxy
formation. As an illustrative example, we calculate the average
dynamical friction time
\begin{equation}
\label{frictime}
\langle t/t_0 \rangle = \int\limits_{0}^{1} d\eta \; \eta^{0.53} \,
p_a(\eta),
\end{equation}
where $t_0$ is the friction time for a circular orbit and $p_a(\eta)$
is the normalized distribution function of orbital circularities in a
singular isothermal sphere with anisotropy parameter $a$. This
average time is plotted as function of $a$ in
Figure~\ref{fig:avertime} (solid line). For comparison we also plotted
the average times obtained by assuming an $\eta^{0.78}$ dependence
(dashed line). The average dynamical friction time is typically of
the order of $70$ to $80$ percent of that of the circular orbit. The
stronger dependence of Lacey \& Cole underestimates the typical
friction times by approximately $10$ percent.
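Equation (\ref{frictime}) is a one-dimensional quadrature. A minimal numerical sketch, assuming the illustrative circularity distribution $p(\eta) = 2\eta$ (a stand-in only; the actual isothermal-sphere distribution is not reproduced here):

```python
import numpy as np

def avg_friction_time(p, n=20001):
    """<t/t0> = integral_0^1 d(eta) eta**0.53 * p(eta), trapezoid rule."""
    eta = np.linspace(0.0, 1.0, n)
    f = eta**0.53 * p(eta)
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(eta)))

# Illustrative stand-in for the circularity distribution p_a(eta): the
# normalized linear form p(eta) = 2*eta (NOT the actual isothermal-sphere
# distribution).  Analytically the average is then 2/2.53.
avg = avg_friction_time(lambda eta: 2.0 * eta)
```

For this toy choice the average comes out as $2/2.53 \simeq 0.79$, in the $70$ to $80$ percent range quoted above.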
\section{Conclusions \& Discussion}
\label{sec:conc}
This paper has presented the distributions of orbital eccentricities
in a variety of spherical potentials. In a singular isothermal
sphere, the median eccentricity of an orbit is $e = 0.56$,
corresponding to an apo- to pericenter ratio of 3.55. About $15$
percent of the orbits have $r_{+}/r_{-} > 10$, whereas only $\sim 20$
percent have moderately eccentric orbits with $r_{+}/r_{-} < 2$.
These values depend strongly on the velocity anisotropy of the halo.
Collapse is likely to create radially biased velocity anisotropies
that skew the distribution to even higher eccentricities. We also
examined the distributions of orbital eccentricities of isotropic
tracer populations with a power-law density distribution ($\rho_{\rm
tracer} \propto r^{-\alpha}$) and found only modest dependence on
$\alpha$. Due to the unphysical nature of the singular isothermal
sphere, we examined more realistic models and found that they differed
only slightly from the isothermal case.
We stress that these eccentricity distributions apply only to systems
in equilibrium. If a tracer population has not yet fully virialized
in the halo's potential, its orbital eccentricities can be
significantly different from the virialized case. Cosmological
simulations of galaxy clusters and satellite systems around galaxies
show prolonged infall, and recent mergers can produce correlated
motions. Hence, care must be taken in applying our results to such
systems.
Objects with mass fractions greater than 0.1\% experience significant
orbital decay owing to dynamical friction. We used high resolution
$N$-body simulations with $50,000$ particles to examine the sinking
and (lack of) circularization of eccentric orbits in a truncated,
non-singular isothermal halo. We derived, and numerically verified, a
formula that describes the change of eccentricity with time; dynamical
friction increases the eccentricity of an orbit near pericenter, but
decreases it again near apocenter, such that no net amount of
circularization occurs. The energy loss owing to dynamical friction is
dominated by the deceleration at pericenter resulting in moderately
shorter sinking timescales for more eccentric orbits. We find a
dependence of the form $t \propto \eta^{0.53}$; the average orbital
decay time for an isotropic, isothermal sphere is $\sim 75$ percent of
that of the circular orbit. This dependence is weaker than the
predictions of Lacey \& Cole (1993), who found $t \propto \eta^{0.78}$
based on analytical integrations of Chandrasekhar's dynamical friction
formula. Since this analytical treatment ignores the global response
of the halo to the decaying satellite we believe our results to be
more accurate. This relatively weak dependence of decay times on
eccentricity, together with the absence of any significant amount of
circularization, implies that dynamical friction does not lead to
strong changes in the distribution of orbital eccentricities. When
scaling the simulations to represent the orbiting of satellites in the
galactic halo, we find that the LMC and the SMC are the only objects
in the outer Milky Way halo that have experienced significant amounts
of energy and angular momentum loss by dynamical friction.
The distribution of orbital eccentricities is important for several
physical processes including: timescales for the sinking and
destruction of galactic satellites, structure and evolution of streams
of tidal debris, harassment in clusters of galaxies, and mass
estimates based on the dynamics of the system of globular clusters.
The results presented here may prove particularly useful for improving
the treatment of galaxy mergers in semi-analytical models of galaxy
formation. In Section~\ref{sec:applic} we showed that the
distribution of orbital eccentricities of a subsample of the galactic
globular cluster system is consistent with that of a slightly radially
anisotropic $r^{-3.5}$ tracer population in an isothermal potential.
A similar result was found for the subhalos orbiting a large cluster
of galaxies in a high resolution, cosmological $N$-body simulation
presented by Ghigna {et al.~} (1998). However, the Milky Way satellites
are not consistent with this distribution, but show a bias toward
circularity that may have been caused by dynamical friction and/or
tidal disruption by the galactic disk. We expect that additional data
together with simulations that include the disk will lead to stringent
constraints on the formation and evolution of substructure in the
Milky Way.
\acknowledgments
We are grateful to Sebastiano Ghigna for sending us his data in
electronic form. We are indebted to Derek Richardson, Jeff Gardner
and Thomas Quinn for their help and support with the $N$-body
simulations, and to the anonymous referee for his helpful comments
that improved the paper. FvdB was supported by a Hubble Fellowship,
\#HF-01102.11-97A, awarded by STScI.
\clearpage
\section{Introduction}
The Ingelman-Schlein (IS) model \cite{ingsc}, the first approach proposing the
idea of hard diffraction, predicted that dijets could be produced in $\bar{p}p$ diffractive interactions. This kind of reaction was supposed to occur as a two-step process in which:
1) a pomeron is emitted from the quasi-elastic vertex;
2) partons of the pomeron interact with partons of the incoming
proton producing jets.
\noindent One notes that the first step refers to a soft process while the second one is typically hard. In the expression proposed to calculate the dijet diffractive cross-section, the interplay between the soft and hard parts is simply conceived as a product. One assumes that factorization
applies to these two steps so that
\begin{equation}
\frac{d^{2}\sigma_{jj}}{dtdx_{\bf I\! P}}=\frac{d^{2}\sigma_{sd}}{dtdx_{\bf I\! P}}
\ \frac{\sigma_{p \bf I\! P \rightarrow jj}}{\sigma_{p \bf I\! P \rightarrow X}},
\label{sigjj}
\end{equation}
where $d^{2}\sigma_{sd}/{dtdx_{\bf I\! P}}$ is the cross section for single diffraction with $x_{\bf I\! P}=M^2_X/s$.
The soft term in Eq.(\ref{sigjj}),
$(d^{2}\sigma_{sd}/dtdx_{\bf I\! P})(1/\sigma_{p \bf I\! P \rightarrow X}),$
has become known as the {\it pomeron flux factor} and is usually obtained from the Triple Pomeron Model \cite{pdbcollins}, while $\sigma_{p \bf I\! P \rightarrow jj}$, the pomeron-proton cross section for dijet production, is calculated from the parton model and QCD. In order to perform these calculations one has to know the pomeron structure function. This is the subject of Section 2.
The idea of hard diffractive production proposed by the IS model gave rise to a new branch of hadron physics, inspiring many phenomenological studies as well as motivating new experimental projects. On the phenomenological side, this concept was extended to processes like the diffractive production of heavy flavours, $W$, $Z$, and Drell-Yan pairs (see, for instance, refs.\cite{fritz,DL}). In \cite{DL} appears the
suggestion of the flux factor as a ``distribution of pomerons in the proton". A particular form of the standard flux is given there by
\begin{equation}
f(x_{\bf I\! P},t)=\frac{9b^{2}}{4\pi^{2}}[F_{1}(t)]^{2}x_{\bf I\! P}^{1-2\alpha(t)},
\label{dl}
\end{equation}
which is usually referred to as Donnachie-Landshoff flux factor.
Most of these processes were incorporated into the event generator POMPYT
created by
Bruni and Ingelman \cite{bruni}. A more recent analysis of diffractive dijets and W production can be found in \cite{collins,jim}.
As for experimental results, the UA8 Collaboration has recorded the first observations of diffractive jets \cite{ua8}. This success has inspired other experimental efforts in this direction. However, subsequent analysis revealed a disagreement between data and theoretical predictions which was referred to as ``discrepancy factor".
Goulianos has suggested \cite{dino} that this discrepancy factor observed in hard diffraction has to do with the well known unitarity violation that occurs
in soft diffractive dissociation and that it is caused by the flux factor given by the Triple Pomeron Model. In order to overcome this difficulty, he proposed a procedure \cite{dino} which consists of the flux factor ``renormalization", that is, the {\it renormalized} flux is defined as
\begin{equation}
f_{N}(x_{\bf I\! P},t)=\frac{f(x_{\bf I\! P},t) }{N(x_{\bf I\! P_{min}})},
\label{ren}
\end{equation}
where
\begin{equation}
N(x_{\bf I\! P_{min}})=\int_{x_{\bf I\! P_{min}}}^{x_{\bf I\! P_{max}}}dx_{\bf I\! P}
\int^{t=0}_{t=-\infty}f(x_{\bf I\! P},t)dt.
\end{equation}
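The renormalization of Eq. (\ref{ren}) amounts to dividing by the integral $N$. A sketch, assuming a $t$-integrated flux $g(x_{\bf I\! P}) \propto x_{\bf I\! P}^{1-2\alpha(0)}$ with the soft-pomeron intercept $\alpha(0) = 1.08$ (the $t$ dependence of $\alpha$ is dropped here for illustration):

```python
import numpy as np

def renormalize(flux, x_min, x_max, n=200001):
    """Return flux/N with N = integral of flux over [x_min, x_max]
    (trapezoid rule), i.e. the renormalized flux of Goulianos."""
    x = np.linspace(x_min, x_max, n)
    f = flux(x)
    N = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))
    return (lambda xx: flux(xx) / N), N

# Assumed t-integrated standard flux g(x) ~ x**(1 - 2*alpha(0)), with
# alpha(0) = 1.08; the intercept and the integration limits are
# illustrative choices, not values fixed by the text.
g = lambda x: x**(1.0 - 2.0 * 1.08)
g_ren, N = renormalize(g, 1e-4, 0.1)
```

By construction the renormalized flux integrates to unity over the chosen $x_{\bf I\! P}$ range.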
Meanwhile, new data from the HERA experiment have put the problem of the pomeron structure function on a much more precise basis through measurements of the {\it diffractive} structure function, {\it i.e.} the proton structure function tagged with rapidity gaps \cite{h1} - \cite{novos}. More recently, new diffractive production rates have become available from experiments performed at the Tevatron by the CDF and D0 collaborations \cite{wcdf} - \cite{d0180}.
This paper consists of a phenomenological analysis in which theoretical predictions of dijets and W diffractively produced are presented and compared to the experimental rates. These predictions take for the pomeron structure function results of an analysis performed previously \cite{nois} by using HERA data.
In such an analysis both possibilities of flux factor, standard and renormalized, are considered.
\section{Pomeron Structure Function from HERA }
The measurements of {\it diffractive} deep inelastic scattering performed by the H1 and the ZEUS collaborations \cite{h1,zeus} at HERA experiment are given in terms of the diffractive cross section
\begin{equation}
\frac{d^{4}\sigma_{ep \rightarrow epX}}{dxdQ^{2}dx_{ \bf I\! P}dt}=\frac{4\pi
\alpha^{2}}{xQ^{4}}[1-y+\frac{y^{2}}{2[1+R^{D}(x,Q^{2},x_{ \bf I\! P},t)]}] F^{D(4)}_{2}(x,Q^{2},x_{\bf I\! P},t),
\end{equation}
where $F^{D(4)}_{2}(x,Q^{2},x_{\bf I\! P},t)$ is the {\it diffractive} structure function (details on the notation and kinematics can be found in \cite{nois}).
In these measurements $R^{D}$ was neglected and $t$ was not measured, so that the obtained data were given in terms of
\begin{equation}
F^{D(3)}_{2}(Q^{2},x_{ \bf I\! P}, \beta)= \int{F^{D(4)}_{2}(Q^{2},x_{\bf I\! P},
\beta,t)\ dt}.
\end{equation}
The diffractive pattern exhibited by the $F^{D(3)}_{2}$ data \cite{h1,zeus} strongly suggested that the following factorization would apply,
\begin{equation}
F_{2}^{D(3)}=g(x_{ \bf I\! P})\ F_{2}^{ \bf I\! P}(\beta, Q^{2}).
\label{Fdiff}
\end{equation}
This property is not exhibited by data obtained more recently in an extended kinematical region by the H1 Collaboration \cite{novos}, but in that case the violation of factorization takes place mainly in the region not covered by the previous measurements and can be attributed to the existence of contributions other than the pomeron.
Based on the IS model, one can interpret the quantities given in the above equation as $g(x_{ \bf I\! P})$, the integrated-over-$t$ flux factor, and $F_{2}^{ \bf I\! P}$, the pomeron structure function.
Our procedure to extract $F_{2}^{\bf I\! P}$ from HERA data is basically the following \cite{nois}:
\begin{itemize}
\item{We assume that the factorized expression (\ref{Fdiff}) (and, consequently, pomeron dominance) applies to the kinematical range covered by data given in \cite{h1,zeus};}
\item{For the integrated flux factor, that is for $g(x_{ \bf I\! P}) = \int f(x_{ \bf I\! P}, t)\ dt$, both forms, the standard (\ref{dl}) and the renormalized one (\ref{ren}), are considered;}
\item{The pomeron structure function is given by $ F_{2}^{\bf I\! P}(\beta,Q^{2})=\sum_{i} e_{i}^{2}\ \beta q(\beta, Q^{2}) = 2/9\ S(\beta, Q^2)$, where $S(\beta, Q^2) = \sum_{i=u,d,s} \ [q_i(\beta, Q^2) + {\bar q_i} (\beta, Q^2)]$ with $q_{u, {\bar u}} = q_{d, {\bar d}} = q_{s, {\bar s}}$;}
\item{The quark and gluon distributions are evolved in $Q^2$ from an initial scale by the DGLAP equations;}
\item{For the distributions at initial scale $Q^{2}_{0}=4$ GeV$^2$, three possibilities were considered:}
\end{itemize}
\noindent{\bf{1) hard-hard}}:
\begin{eqnarray*}
S(\beta,Q^{2}_{0})&=&a_{1}\ \beta\ (1- \beta)\\
g( \beta,Q^{2}_{0})&=&b_{1}\ \beta\ (1- \beta)
\end{eqnarray*}
\noindent{\bf{2) hard-free:}}
\begin{eqnarray*}
S( \beta,Q^{2}_{0})&=&a_{1}\ \beta\ (1- \beta)\\
g( \beta,Q^{2}_{0})&=&b_{1}\ \beta^{b_{2}}\ (1- \beta)^{b_{3}}
\end{eqnarray*}
\noindent{\bf{3) free-delta:}}
\begin{eqnarray*}
S( \beta,Q^{2}_{0})&=&a_{1}\ \beta^{a_{2}}\ (1- \beta)^{a_{3}}\\
g( \beta,Q^{2}_{0})&=&b_{1}\ \delta(1- \beta).
\end{eqnarray*}
The detailed description of these fits and results can be found in \cite{nois}.
Since for the renormalized flux it was difficult to establish the gluon component, a fourth possibility was used in which the initial gluon distribution was set to zero. Four of these fits were selected from \cite{nois} to calculate the diffractive rates presented here.
The parameters used in such calculations are shown in Table \ref{um}.
\begin{table}
\begin{center}
\begin{tabular}{ccccc}
\hline \hline \\
& D\&L & D\&L & REN & REN \\
& hard-hard & free-delta & hard-hard & free-zero\\ \hline \\
$a_1$ & 2.55 & 1.51 & 5.02 & 2.80\\
$a_2$ & 1 & 0.51 & 1 & 0.65\\
$a_3$ & 1 & 0.84 & 1 & 0.58\\
$b_1$ & 12.08 & 2.06 & 0.98 & $-$ \\
$b_2$ & 1 & $-$ & 1 & $-$ \\
$b_3$ & 1 & $-$ & 1 & $-$ \\
\hline \hline
\end{tabular}
\vspace{0.2cm}
\caption{\sf{Fit parameters for the pomeron structure function. The procedure used to establish these parametrizations can be found in {\protect{\cite{nois}}}.}}
\label{um}
\end{center}
\end{table}
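The initial-scale singlet distribution and the resulting $F_2^{\bf I\! P} = (2/9)\, S$ can be evaluated directly from the parameters of Table \ref{um}; a minimal sketch:

```python
def S_init(beta, a1, a2, a3):
    """Singlet quark distribution at the input scale Q0^2 = 4 GeV^2:
    S(beta) = a1 * beta**a2 * (1 - beta)**a3."""
    return a1 * beta**a2 * (1.0 - beta)**a3

def F2_pomeron(beta, a1, a2, a3):
    """F2^P(beta, Q0^2) = (2/9) S(beta, Q0^2) for three light flavours."""
    return (2.0 / 9.0) * S_init(beta, a1, a2, a3)

# "D&L hard-hard" column of Table 1: a1 = 2.55, a2 = a3 = 1.
f2 = F2_pomeron(0.5, 2.55, 1.0, 1.0)
```

The DGLAP evolution away from $Q_0^2$ is of course not captured by this snippet.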
\section{Diffractive Parton Model}
In this section, we present the expressions we have used to calculate the rates for diffractive production of W and jets. From the parton model, the generic expression for the cross section of these processes is
\begin{equation}
\label{partonmodel}
d \sigma_{W/jj} = f_A(x_a, Q^2)dx_a\ f_B(x_b,Q^2)dx_b\ (d{\hat{\sigma}}_{ab})_{W/jj}
\end{equation}
where the parton {\it a} of the hadron {\it A} interacts with the parton {\it b} of the hadron {\it B} to give a W or a pair of partons $(c, d)$ in the case of dijets.
\subsection{W production}
With the elementary cross section
\begin{equation}
{\hat{\sigma}_{ab\rightarrow W}}= \frac{2}{3} \pi {g_W}^2\ \delta(x_a\ x_b\ s-M^2_W)
\end{equation}
in equation (\ref{partonmodel}), the integrated cross section is given by
\begin{equation}
\sigma(AB \rightarrow W^{\pm})= \frac{2}{3} \pi \frac{{g_W}^2}{s} \sum_{a, b} \int_{x_{a_{min}}}^{1}\frac{dx_a}{x_a}
f_A(x_a)\ f_B(x_b).
\label{www}
\end{equation}
with $x_b={M^2_W}/{x_a\ s}$.
For $W^+$ production, the interacting partons are $a=u$ and
$b={\bar d}_{\theta_C}$, and for $W^-$ production, $a=\bar{u}$ and $b=d_{\theta_C}$, with $d_{\theta_C}=d\cos{\theta_C}+s\sin{\theta_C}$ where $\theta_C$ is the Cabibbo angle ($\theta_C \cong 13^o$). The kinematical limit is determined by
$x_a\ x_b\ s = \hat{s}=M^2_W$, that is $x_{a_{min}} = {M^2_W}/{s}$.
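The structure of Eq. (\ref{www}) is a single convolution over $x_a$, with $x_b = \tau/x_a$ fixed by the delta function. A numerical sketch with toy flat parton densities (not realistic distributions), for which the integral reduces to $\ln(1/\tau)$:

```python
import math
import numpy as np

def w_convolution(fA, fB, tau, n=200001):
    """Dimensionless convolution of the W cross section:
    integral_tau^1 (dx_a/x_a) f_A(x_a) f_B(tau/x_a), with x_b = tau/x_a."""
    xa = np.linspace(tau, 1.0, n)
    integrand = fA(xa) * fB(tau / xa) / xa
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(xa)))

# Toy flat densities f_A = f_B = 1 (NOT realistic parton distributions):
# the integral then reduces to ln(1/tau).
tau = (80.4 / 1800.0)**2          # M_W ~ 80.4 GeV, sqrt(s) = 1.8 TeV
I = w_convolution(lambda x: np.ones_like(x), lambda x: np.ones_like(x), tau)
```

For a physical prediction the flat densities would be replaced by the actual parton distributions and the result multiplied by the prefactor $\frac{2}{3}\pi g_W^2/s$.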
\subsection{Dijets production}
In the case of dijets generated from partons {\it c} and {\it d}, their
transverse energy is
\begin{equation}
E_T = |p_c|\sin{\theta_c} = |p_d|\sin{\theta_d}.
\end{equation}
By using the definition of rapidity,
\begin{eqnarray*}
y=\frac{1}{2} \ln{\frac{E+E_L}{E-E_L}}
\end{eqnarray*}
one can get
\begin{equation}
e^{-y}=\frac{E_T}{|p_c|(1+\cos{\theta_c})}
\ \ \ \ {\rm and} \ \ \ \
e^{-y'}=\frac{E_T}{|p_d|(1+\cos{\theta_d})}.
\end{equation}
Defining the Mandelstam variables for the parton system as
\begin{equation}
\hat{s}=x_a x_b s
\end{equation}
and
\begin{eqnarray*}
\hat{t}=-2p_a.p_c &=& -x_a \ \sqrt{s}\ E_T\ e^{-y}
= -x_b\ \sqrt{s}\ E_T\ e^{y'},
\end{eqnarray*}
one can write the Bjorken variables $x_a$ and $x_b$ as
\begin{equation}
x_a=\frac{E_T}{\sqrt{s}}(e^{y}\ +\ e^{y'})
\ \ \ \
{\rm and} \ \ \ \
x_b=\frac{E_T}{\sqrt{s}}(e^{-y}\ +\ e^{-y'}).
\end{equation}
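A quick numerical check of these definitions: for equal rapidities $y = y'$ the parton invariant $\hat{s} = x_a x_b s$ must reduce to the familiar $4E_T^2$, and more generally to $E_T^2\,(2 + 2\cosh(y - y'))$.

```python
import math

def bjorken_x(ET, y, yp, sqrt_s):
    """x_a and x_b from the jet transverse energy and rapidities y, y'."""
    xa = ET / sqrt_s * (math.exp(y) + math.exp(yp))
    xb = ET / sqrt_s * (math.exp(-y) + math.exp(-yp))
    return xa, xb

# Back-to-back jets with equal rapidities: s_hat = x_a x_b s = 4 E_T^2.
ET, y, sqrt_s = 20.0, -2.5, 1800.0     # illustrative values
xa, xb = bjorken_x(ET, y, y, sqrt_s)
shat = xa * xb * sqrt_s**2
```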
Now, making use of the transformation
$dx_a\ dx_b\ d\hat{t} \rightarrow 2E_T\ dE_T\ x_a\ x_b\ dy'\ dy$
in Eq. (\ref{partonmodel}), one obtains
\begin{eqnarray}
\frac{d\sigma}{dy}=\sum_{a, b}\int_{E_{T_{min}}}^{E_{T_{max}}} dE_{T}^2 \int_{y'_{min}}^{y'_{max}} dy' x_a f_A(x_a, Q^2) x_b f_B(x_b,Q^2)(\frac{d\hat{\sigma}}{d\hat{t}})_{jj}.
\label{dijet}
\end{eqnarray}
\noindent In this case, the kinematical limits are
\begin{eqnarray*}
\ln{\frac{E_T}{\sqrt{s}-E_T\ e^{-y}}} \leq &y'& \leq \ln{\frac{\sqrt{s}-E_T\ e^{-y}}{E_T}}, \\ \\
E_{T_{min}}&=&{\rm experimental\ cut} \ \ \ \ {\rm and} \ \ \ \
E_{T_{max}}=\frac{\sqrt{s}}{e^{-y}+e^{y}}.
\end{eqnarray*}
\subsection{Diffractive Dijets and W production}
In order to calculate the diffractive cross sections, we use the
Pomeron structure function defined as
\begin{eqnarray}
xf_{\bf I\! P}(x, Q^2)=\int dx_{\bf I\! P} \int d\beta\ g(x_{\bf I\! P})\ \beta
f_{\bf I\! P}(\beta, Q^2) \delta(\beta-\frac{x}{x_{\bf I\! P}}).
\end{eqnarray}
Introducing this expression in Eq.(\ref{dijet}), we obtain the cross section
for diffractive dijet production,
\begin{eqnarray}
\frac{d\sigma}{dy}=\sum_{a, b}\int_{E_{T_{min}}}^{E_{T_{max}}} dE_{T}^2 \int_{y'_{min}}^{y'_{max}} dy' \int_{x_{\bf I\! P min}}^{x_{\bf I\! P max}} dx_{\bf I\! P} g(x_{\bf I\! P})\ x_a f_p(x_a, Q^2)\ \beta f_{\bf I\! P}(\beta,Q^2)\ (\frac{d\hat{\sigma}}{d\hat{t}})_{jj},
\label{ddijet}
\end{eqnarray}
where the scale is given by $Q^2 = E_T^2$.
As for diffractive W production, the expression obtained is
\begin{eqnarray}
\sigma(p\bar{p} \rightarrow W^{\pm})=\sum_{a, b}\frac{2}{3} \pi \frac{{g_W}^2}{s} \int_{x_{\bf I\! P min}}^{x_{\bf I\! P max}} dx_{\bf I\! P} \int_{\beta_{min}}^{1}\frac{d\beta}{x_{\bf I\! P} \beta}\ g(x_{\bf I\! P})
f_{\bf I\! P}(\beta,Q^2) f_p(\frac{\tau}{x_{\bf I\! P} \beta},Q^2),
\label{dwww}
\end{eqnarray}
where $\tau=M^2_W / s$ , $\beta_{min}=\tau /x_{\bf I\! P}$ and $Q^2 = M^2_W$.
In all of these calculations, the parametrizations used for the proton structure function were taken from ref.\cite{gluck}.
\section{Results and discussion}
The experimental rate for diffractive production of W is \cite{wcdf} $R_W\ =\ (1.15\pm 0.55)\ \%$. Table \ref{dataprod} summarizes the experimental data from \cite{rapgap,roman,d0180} referring to the diffractive production rates of dijets as well as the kinematical cuts used to obtain these
data.
\begin{table}
\vspace{-1.5cm}
\begin{tabular}{|c|c|c|c|c|}
\hline \hline
& & & & \\
CUTS & CDF (Rap-Gap) & CDF (Roman Pots) & D0 1800 & D0 630 \\
& & & & \\ \hline
& & & & \\
rapidity & $-3.5 \leq y \leq -1.8$ & $-3.5 \leq y \leq -1.8$ ~ &
$-4.1 \leq y \leq -1.6$ ~ & $-4.1 \leq y \leq -1.6$ ~ \\
& & & & \\ \hline
& & & & \\
$x_{\bf I\! P}$ & $ x_{\bf I\! P} \leq 0.1 $ & $ 0.05 \leq x_{\bf I\! P} \leq 0.1$ & $ x_{\bf I\! P} \leq 0.1 $ & $ x_{\bf I\! P} \leq 0.1 $ \\
& & & & \\ \hline
& & & & \\
$E_{T}(min)$ & $20\ GeV$ & $10\ GeV$ & $12\ GeV$ & $12\ GeV$ \\
& & & & \\ \hline
& & & & \\
RATES & $0.75\pm 0.10$ (2j+3j) ~ & $0.109\pm 0.016$ & & \\
& & & $0.67\pm 0.05$ & 1-2 \\
(\%) & 1.53* (2j) & (2j) & & \\
& & & & \\
\hline \hline
\end{tabular}
\vspace{0.3cm}
\caption{\sf{Experimental data on diffractive dijet production and the kinematical cuts. In the first column, the rate includes the contribution of a third jet; the number below it, marked with an asterisk, is the rate corrected to dijets only.
\label{dataprod}}}
\end{table}
In Figs.1-4, we present the rapidity distributions of jet cross section obtained with different parametrizations for the pomeron structure function
and for both flux factors.
The experimental data are shown again in Table \ref{resultrates} in comparison with the rates obtained from our theoretical calculations. The results obtained with the standard (Donnachie-Landshoff) flux are indicated by D \& L, while the columns indicated as REN give the results obtained with the renormalized (Goulianos) flux. By looking at these results, we can note the following:
\begin{itemize}
\item{The rates obtained with the standard flux are much larger than the experimental values, with the discrepancies being more pronounced for hard gluon distributions;}
\item{Generally speaking, the rates obtained with the renormalized flux are very close to the experimental data;}
\item{The experimental rate for the CDF dijets obtained with rapidity gaps increases when one excludes the contamination from third jets (see Table \ref{dataprod}, second column); thus we see that the renormalized flux generally underestimates the rates, except for the CDF jets obtained with Roman pots, for which the opposite happens;}
\item{A lack of W's is noticed in the renormalized case in spite of the fact that the pomeron structure function for this case implies that
the quark component is practically double the gluon component \cite{nois}.}
\end{itemize}
\section{Concluding remarks}
The W and dijet production rates presented in this paper show that, in order to make the theoretical predictions obtained with the pomeron structure function extracted from HERA data compatible with the experimental rates, a renormalization procedure (or something similar) is indispensable. Of course, this conclusion is conditional on the assumptions that underlie the approach used here, that is the Ingelman-Schlein model.
\section*{Acknowledgement}
We would like to thank the Brazilian governmental agency FAPESP for the financial support.
\begin{table}
\begin{tabular}{|c|c||c|c|c|c|}
\hline \hline
& & & & & \\
& & D \& L & D \& L & REN & REN \\
EXPERIMENT & RATES & & & & \\
& & hard-hard ~ & free-delta ~ & hard-hard ~ & free-zero ~ \\
& & & & & \\ \hline
& & & & & \\
{\it Jets} - CDF (Rap-Gap) & $0.75\pm 0.10$ & 15.3 & 6.33 & 0.62 & 0.52 \\ & (2j+3j) & & & & \\ & & & & & \\ \hline
& & & & & \\
{\it Jets} - CDF (Roman Pots) & $0.109\pm 0.016$ ~ & 3.85 & 1.13 & 0.15 & 0.16 \\ & & & & & \\ \hline & & & & & \\
{\it Jets} - D0 $630\ GeV$ & 1-2 & 15.4 & 6.41 & 0.87 & 0.71 \\
& & & & & \\ \hline & & & & & \\
{\it Jets} - D0 $ 1800\ GeV$ & $0.67\pm 0.05$ & 16.6 & 6.14 & 0.65 & 0.57 \\ & & & & & \\ \hline & & & & & \\
{\it W's} - CDF (Rap-Gap) & $1.15\pm 0.55$ & 3.12 & 3.54 & 0.53 & 0.58 \\
& & & & & \\
\hline \hline
\end{tabular}
\vspace{0.2cm}
\caption{\sf{Production rates - all values are given in percentages.}}
\label{resultrates}
\end{table}
\section{Introduction}
The search for the Higgs boson and the study of its properties will be among
the most important tasks of elementary particle physics at future $e^+e^-$
linear colliders at the TeV scale (NLC)~\cite{lc}.
The research carried on at present colliders can explore an interval of Higgs
masses below $\sim 100$~GeV at LEP2~\cite{mcn} or possibly $120$--$130$ GeV at the
upgraded Tevatron~\cite{tevatron}. The remaining mass range, up to
$\sim 800$ GeV, will be in the reach of the future colliders LHC~\cite{LHC} and
NLC. In particular the precision studies that will be possible in the clean
environment of NLC will be of great help in the determination of the Higgs
boson properties.
A range of particular interest for the Higgs mass is between $100$ and
$200$ GeV, as indicated by several arguments of both experimental and
theoretical nature. Indeed a lower limit of $\sim 90$ GeV is given by recent results
in the direct search at LEP2~\cite{mcn}, while from fits to electroweak
precision data an upper limit of $\sim 280$ GeV at $95\%$ C.L. is
obtained~\cite{karlen}.
In this range two mass intervals may be considered: for $m_H\leq 140$ GeV the
Higgs decays mainly into $b\overline b$ pairs, while for $m_H\geq 140$ GeV the
decays into $WW$ and $ZZ$ pairs become dominant.
Therefore in the first case the mechanisms of Higgs production relevant to
$e^+e^-$ colliders, Higgs-strahlung and $VV(V=W,Z)$ fusion, give rise to
signatures that contain four fermions in the final state, which have been
extensively studied in the recent past~\cite{lep2}--\cite{dpeg}.
In the second case, in which $m_H\geq 140$ GeV, six fermions are produced in the
final state.
More generally, six-fermion ($6f$) final states come from other relevant
processes at NLC, such as $t\bar t$ and three-gauge-boson
production~\cite{keystone}.
As shown by complete four-fermion calculations for $WW$ and light Higgs physics
at LEP2~\cite{lep2}--\cite{dpeg,wwwg,wweg}, full calculations of $6f$
production
processes allow one to keep phenomenologically relevant issues under control, such
as off-shellness and interference effects, background contributions and spin
correlations.
Some calculations of such $e^+ e^- \to 6f$ processes
have recently been performed~\cite{to1}--\cite{gmmnp97},
with regard to {\it top}-quark, Higgs boson and $WWZ$ physics at NLC.
Moreover, recent progress in the calculation of processes
$e^+ e^- \to 6$~jets~\cite{sixj} and of QCD amplitudes for
$2\to$ up to $8$ partons~\cite{alpha1,dkp}
should be mentioned for its relevance in QCD tests
at lepton and hadron machines.
These calculations rely upon different computational techniques, such
as helicity amplitude methods for the evaluation of the very large number
of Feynman diagrams associated with the process under
examination, or iterative numerical algorithms, in which the transition
amplitudes are computed numerically without using Feynman diagrams.
Concerning Higgs physics, an analysis of the processes
$e^+ e^- \to \mu^+ \mu^- u \bar d \tau^- \bar \nu_\tau$ and
$e^+ e^- \to \mu^+ \mu^- u \bar d e^- \bar \nu_e$ has been performed
in ref.~\cite{sixfzpc},
where the Higgs can be produced by Higgs-strahlung and
the subsequent decay proceeds through $W^+ W^-$ pairs.
Special attention has been devoted to the calculation of the Higgs boson
signal and of its Standard Model background, with special emphasis on the
determination and analysis of angular correlation variables, particularly
sensitive to the presence and to the spinless nature of the Higgs particle.
The $6f$ final states, where the Higgs signal gives
rise to two charged currents, have also been considered in
ref.~\cite{to1}, studying cross sections and invariant mass
distributions for the processes
$e^+ e^- \to f \bar f q \bar q' f' \bar f''$.
The case of the Higgs boson decay in neutral currents has been
briefly addressed for the signal alone in ref.~\cite{gmmnp97} with
the study of the reaction $e^+ e^- \to e^+ e^- \nu_e \bar \nu_e
u \bar u$. The aim of the present paper is to complete and extend
the analysis of ref.~\cite{gmmnp97} to include general $q \bar q$
neutral currents contributions and the effect of the contributions from
undetectable different-flavour neutrinos, in such a way as to provide
realistic predictions for processes
$e^+ e^- \to l^+ l^- \nu \bar \nu q \bar q$ at the parton level.
In the following, $b$-quark tagging will be assumed, leaving aside
$b\bar b$ final states, which lead to an interplay between Higgs and
{\it top} physics and will be studied elsewhere.
Since their final states contain only two jets, the processes considered in the
present paper are free from QCD backgrounds.
The outline of the paper is as follows. In Section 2 the physical process is
presented and the main technical issues of the calculation are explained.
In Section 3 several numerical results are shown and discussed and the
potentials of full $6f$ calculations are stressed. Finally,
Section 4 contains the conclusions.
\section{Physical process and computing technique}
The production of an intermediate mass Higgs boson is studied in the
process $e^+e^-\to q\overline q l^+ l^- \nu\overline\nu$, where a sum is
made over the contributions of the $u,d,c$ and $s$ quarks, of the three
neutrinos and of $l=e,\mu,\tau$.
Particular attention will be devoted to the signature
$q\overline q e^+ e^- \nu\overline\nu$.
One of the interesting features of this process is the presence of both
charged current and neutral current contributions~\cite{to1}, a situation
not studied before, since all the six-fermion signatures analysed in the
literature~\cite{to1}--\cite{sixfzpc} involve only charged current
contributions. Moreover,
this class of processes receives contributions from diagrams with up to three
$t$-channel gauge bosons. This feature is of particular interest
because of the large centre-of-mass (c.m.) energy, $\sqrt s$, at which the NLC
will operate. These topologies enhance the cross section with growing $s$.
The capability to provide predictions for processes with many electrons and
electron-neutrinos in the final state is therefore crucial to discuss NLC
physics.
The present study demonstrates that it is possible to deal successfully
with the dynamics calculation and the phase-space integration
for this class of final states.
Another important property is that the process is
free from QCD backgrounds because only two jets are produced.
As a drawback, the total cross section is smaller than
in the $6f$ processes with four or six jets.
However, the sums over quark, charged-lepton and neutrino flavours, as well as
the combined action of different production mechanisms (see fig.~\ref{fig:6fd}),
give the cross section a significant size, so that, assuming an
integrated luminosity of
$500$ fb$^{-1}$/yr and a Higgs mass of, say, $185$ GeV, more than $1000$
events can be expected at a c.m.
energy of $360$ GeV and more than $2000$ at
$800$ GeV (see fig.~\ref{fig:6fsscan20}). In particular, as will be seen in the
numerical results, the presence of the $t$-channel contributions of vector
boson fusion gives an enhancement of the cross sections at very high energies.
The diagrams containing a resonant Higgs boson coupled to gauge bosons for the
$q\overline q e^+ e^- \nu\overline\nu$ final state are shown in
fig.~\ref{fig:6fd}. As can be seen, there are four terms of the Higgs-strahlung
type and two of the fusion type. At relatively low energies, $\sqrt{s}\le 500$ GeV,
the process of Higgs-strahlung dominates and, in particular, the charged
current term is the most important one. As the energy is increased, the
$t$-channel terms of vector boson fusion become more and more important, as
they grow with increasing $s$, and they are dominant above $500$ GeV.
The diagrams for the processes with $\mu^+\mu^-$ or $\tau^+\tau^-$ instead of
$e^+e^-$ and/or $\bar\nu_{\mu,\tau} \nu_{\mu,\tau}$
instead of $\bar\nu_e \nu_e$
in the final state are a subset of those illustrated here.
\begin{figure}
\begin{center}
\epsfig{file=fig1.eps,height=8.cm,width=10.cm}
\caption{\small Feynman diagrams for the process
$e^+e^-\to q\overline q e^+ e^- \nu\overline\nu$ with a resonant Higgs boson.}
\label{fig:6fd}
\end{center}
\end{figure}
The full set of diagrams containing a physical Higgs boson coupled to
gauge bosons includes also those shown in fig.~\ref{fig:6fdht}. These
contributions are non-resonant, as the Higgs is exchanged in the $t$-channel,
and their size can be expected to be negligible at low energies; at high
energies, however, they play
an important r\^ole in preserving gauge invariance and unitarity of the
$S$-matrix. This point will be discussed in more detail in the next section.
\begin{figure}
\begin{center}
\epsfig{file=fig2.eps,height=4.cm,width=10.cm}
\caption{\small Feynman diagrams with a non-resonant Higgs boson (two similar
diagrams are also obtained from these by particle exchanges).}
\label{fig:6fdht}
\end{center}
\end{figure}
The total number of tree-level Feynman diagrams for the process under
examination is of the order of one thousand, which makes both the determination
of the transition matrix element and the phase-space integration
very demanding.
The transition matrix element is calculated by means of ALPHA
~\cite{alpha}, an algorithm that allows the calculation of scattering
amplitudes at the tree level without the use of Feynman diagrams.
The results produced by this algorithm in a large number of calculations of
multi-particle production are in full agreement with those obtained by programs
using traditional techniques~\cite{smwg,wweg,alpha}--\cite{mmnp}.
This fact may be considered a significant test of ALPHA.
In the present work further checks have been made: some of the results given by
ALPHA for the Higgs ``signal'' (which is defined below) have been reproduced by
means of the helicity amplitude method~\cite{hel}, with perfect agreement.
For the integration over the phase space, as was already done in
refs.~\cite{sixfzpc,gmmnp97}, a code has been developed by adapting to $6f$
calculations the Monte Carlo program HIGGSPV/WWGENPV~\cite{higgspv,wwgenpv},
originally designed for four-fermion production.
The code can be used to perform Monte Carlo integrations
and obtain cross sections, or to generate samples of unweighted events.
Kinematical cuts can be introduced to simulate realistic experimental
conditions. The effects of initial-state radiation (ISR)~\cite{sf} and
beamstrahlung~\cite{circe} are taken into account by means of the standard
convolution formula
\begin{equation}
\sigma =\int dz_1dz_2D_{BS}(z_1,z_2;s)\int dx_1dx_2D(x_1,s)D(x_2,s)
\hat\sigma (z_1,z_2;x_1,x_2;s)\ .
\end{equation}
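As an illustration, the convolution above can be estimated by Monte Carlo: sample the energy fractions and average the reduced-energy cross section. The sketch below is schematic, with toy unit-normalized samplers standing in for the actual ISR structure functions $D$ and the beamstrahlung spectrum $D_{BS}$, and $\hat\sigma$ assumed to depend only on the reduced invariant $s$:

```python
import random

def convolved_xsec(sigma_hat, sample_isr_x, sample_bs_z, s,
                   n_events=200000, seed=1):
    """Monte Carlo estimate of the ISR/beamstrahlung convolution:
    draw beamstrahlung fractions (z1, z2) and ISR fractions x1, x2
    from unit-normalized spectra and average the cross section at
    the reduced invariant z1*z2*x1*x2*s (a simplifying assumption)."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n_events):
        z1, z2 = sample_bs_z(rng)
        x1, x2 = sample_isr_x(rng), sample_isr_x(rng)
        acc += sigma_hat(z1 * z2 * x1 * x2 * s)
    return acc / n_events
```

With trivial samplers that always return 1, the estimate reduces to the Born cross section, which is a convenient consistency check of the folding.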
An accurate importance sampling procedure is required in the Monte Carlo
integration to take care of the complicated structure of ``singularities'' in
the integrand. This structure results from the presence of several mechanisms of
Higgs production, and also from additional sources of variance among the very
large number of background diagrams present in the matrix element.
The ``singularities'' given by different terms correspond to different regions
of the (14-dimensional) phase space and in general must be treated with
different sets of integration variables. As a consequence, a multichannel
importance sampling technique is needed. If $n$ channels are introduced, the
integral is written as
\begin{equation}
\int f({\bf x})d\mu({\bf x}) = \sum_{i=1}^n\int {f({\bf x}^{(i)})\over
p({\bf x}^{(i)})}p_i({\bf x}^{(i)})
d\mu_i({\bf x}^{(i)}),
\qquad p({\bf x})=\sum_{i=1}^n p_i({\bf x}),
\end{equation}
where each ${\bf x}^{(i)}$ is a set of integration variables with a
corresponding measure $d\mu_i$, and $p_i$ is a suitably normalized
distribution function for the importance sampling in the $i$-th channel.
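As a one-dimensional illustration of this formula (toy integrand and channels, not those of the actual code), the sketch below combines two Breit--Wigner-like channels, each matched to one peak of the integrand, and weights every event by $f/p$ with $p$ the sum of all channel densities:

```python
import math
import random

def make_bw_channel(m, g):
    """One sampling channel on [0, 1]: a truncated Cauchy
    (Breit-Wigner-like) peak at m with width g.
    Returns (sampler, unit-normalized density)."""
    lo, hi = math.atan(-m / g), math.atan((1.0 - m) / g)
    def sample(rng):
        return m + g * math.tan(lo + rng.random() * (hi - lo))
    def density(x):
        return g / ((hi - lo) * ((x - m) ** 2 + g ** 2))
    return sample, density

def multichannel_integrate(f, channels, n_events=200000, seed=12345):
    """Estimate int_0^1 f(x) dx with p(x) = (1/n) sum_i p_i(x):
    pick a channel uniformly, draw x from it, weight by f(x)/p(x)."""
    rng = random.Random(seed)
    n_ch = len(channels)
    acc = 0.0
    for _ in range(n_events):
        sample, _ = channels[rng.randrange(n_ch)]
        x = sample(rng)
        p = sum(dens(x) for _, dens in channels) / n_ch
        acc += f(x) / p
    return acc / n_events
```

When each channel density matches one peak of the integrand, the weight $f/p$ is nearly flat and the variance collapses; this mirrors the role of the Breit--Wigner channels in the 14-dimensional case.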
The choice of integration variables is made within two kinds of phase-space
decompositions, corresponding to two diagram topologies: $s$-channel,
based on the Higgs-strahlung terms, and $t$-channel, based on the fusion terms,
as illustrated in fig.~\ref{fig:6fcamp}.
\begin{figure}
\begin{center}
\epsfig{file=fig3.eps,height=4.cm,width=10.cm}
\caption{\small $s$-channel and $t$-channel topologies considered in the
importance sampling.}
\label{fig:6fcamp}
\end{center}
\end{figure}
For the $s$-channel topology the phase-space decomposition reads:
\begin{eqnarray}
\nonumber
&d\Phi_6(P;q_1,\ldots ,q_6)=&(2\pi)^{12}d\Phi_2(P;Q_1,Q_2)d\Phi_2(Q_1;q_1,q_2)\\
\nonumber
&&d\Phi_2(Q_2;Q_3,Q_4)d\Phi_2(Q_3;q_3,q_4)d\Phi_2(Q_4;q_5,q_6)\\
&&dQ_1^2dQ_2^2dQ_3^2dQ_4^2,
\end{eqnarray}
where $P$ is the initial total momentum, $q_i$ are the momenta of the outgoing
particles, while $Q_i$ are those of the internal particles. The notation
$d\Phi_2(Q_i;Q_j,Q_k)$ indicates the two-particle phase space;
the momenta of the final particles $Q_j,Q_k$
are first generated in the rest frame of the particle of momentum $Q_i$
and then boosted to the laboratory
frame. The integration variables are the four invariant masses
$Q_1^2,\ldots ,Q_4^2$ and five pairs of angular variables $\cos\theta_i,\phi_i$,
one for each $d\Phi_2$ term. The invariant masses are sampled according to
Breit--Wigner distributions of the form
\begin{equation}
{N\over (M^2 - Q^2)^2 + \Gamma^2M^2},
\end{equation}
given by the propagators of the Higgs or gauge bosons in the internal lines
($N$ is a normalization factor).
For the angular variables a flat distribution is assumed. The various
$s$-channel terms differ by permutations of the external momenta and by
the parameters $\Gamma ,M$ in the importance sampling distributions.
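The Breit--Wigner sampling of an invariant mass can be implemented by inverting the cumulative distribution, which for this density is an arctangent mapping. A minimal sketch (the function name and the illustrative $W$ parameters in the test are not tied to the actual generator):

```python
import math
import random

def sample_bw_mass2(m, gamma, q2_min, q2_max, rng=random):
    """Draw Q^2 from N / ((M^2 - Q^2)^2 + Gamma^2 M^2) restricted to
    [q2_min, q2_max] by inverse-CDF sampling: the substitution
    Q^2 = M^2 + Gamma*M*tan(t) makes the density flat in t."""
    t_min = math.atan((q2_min - m * m) / (gamma * m))
    t_max = math.atan((q2_max - m * m) / (gamma * m))
    t = t_min + rng.random() * (t_max - t_min)
    return m * m + gamma * m * math.tan(t)
```

Events drawn this way concentrate where the propagator peaks, which is precisely what the importance sampling requires.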
In the case of $t$-channel diagrams, the phase space is
\begin{eqnarray}
\nonumber
&d\Phi_6(P;q_1,\ldots ,q_6)=&(2\pi)^{9}d\Phi_3(P;Q_1,q_1,q_2)\\
\nonumber
&&d\Phi_2(Q_1;Q_2,Q_3)d\Phi_2(Q_2;q_3,q_4)d\Phi_2(Q_3;q_5,q_6)\\
&&dQ_1^2dQ_2^2dQ_3^2,
\end{eqnarray}
where, as before, the $q_i$ are the outgoing momenta, while the $Q_i$ are
internal time-like momenta. The integration variables are the three invariant
masses, $Q_1^2,Q_2^2,Q_3^2$, three pairs of angular variables $\cos\theta,
\phi$ relative to the three $d\Phi_2$ terms, and, for the three-body phase
space $d\Phi_3$, one energy, $q_1^0$, and four angular variables,
$\cos\theta_1,\phi_1,\cos\theta_2,\phi_2$.
The invariant masses are sampled, as in the $s$-channel case, according to
Breit--Wigner distributions; one angular variable in $d\Phi_3$, say
$\cos\theta_1$, is sampled by means of the distribution
\begin{equation}
{N\over (M_V^2 + \sqrt{s}q_1^0(1 - \cos\theta_1))^2 + \Gamma_V^2M_V^2},
\end{equation}
corresponding to the propagator of one space-like gauge boson ($V=W,Z$)
emitted by the initial electron or positron (typically one of the bosons
participating in the fusion into Higgs); in some channels, corresponding to
background diagrams, also another angular variable $\cos\theta$ relative to a
two-body term $d\Phi_2$, is sampled in a similar way, in order to take into
account the ``singularity'' associated with a boson propagator. All other
variables have flat distributions.
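Both decompositions are built from the two-particle phase space $d\Phi_2$: the decay products are generated isotropically in the rest frame of the decaying momentum and then boosted to the frame in which that momentum is given, as described above. A hedged sketch of this building block (plain Python lists as four-vectors; not the actual implementation):

```python
import math
import random

def boost_to_lab(p, Q):
    """Boost four-vector p = [E, px, py, pz] from the rest frame of Q
    to the frame in which Q itself is given."""
    M = math.sqrt(Q[0]**2 - Q[1]**2 - Q[2]**2 - Q[3]**2)
    qp = p[1] * Q[1] + p[2] * Q[2] + p[3] * Q[3]
    e = (p[0] * Q[0] + qp) / M
    c = (qp / (Q[0] + M) + p[0]) / M
    return [e, p[1] + c * Q[1], p[2] + c * Q[2], p[3] + c * Q[3]]

def two_body_phase_space(Q, m1, m2, rng=random):
    """One d(Phi_2) step: decay a state of four-momentum Q into masses
    m1, m2, isotropically in its rest frame, then boost to the lab."""
    M2 = Q[0]**2 - Q[1]**2 - Q[2]**2 - Q[3]**2
    # momentum of the products in the rest frame (Kallen function)
    k = math.sqrt((M2 - (m1 + m2)**2) * (M2 - (m1 - m2)**2)) \
        / (2.0 * math.sqrt(M2))
    cos_t = 2.0 * rng.random() - 1.0        # flat in cos(theta)
    sin_t = math.sqrt(1.0 - cos_t**2)
    phi = 2.0 * math.pi * rng.random()      # flat in phi
    p1 = boost_to_lab([math.sqrt(k * k + m1 * m1),
                       k * sin_t * math.cos(phi),
                       k * sin_t * math.sin(phi),
                       k * cos_t], Q)
    p2 = [Q[i] - p1[i] for i in range(4)]   # momentum conservation
    return p1, p2
```

Nesting such calls according to either topology reproduces the full 14-dimensional phase-space generation, with the invariant masses and angles supplied by the sampling distributions discussed above.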
\section{Numerical results and discussion}
The numerical results presented in this section are obtained with the same
set of phenomenological parameters as adopted in ref.~\cite{sixfzpc}.
Namely, the input parameters are $G_\mu$, $M_W$ and $M_Z$, and other quantities,
such as $\sin^2\theta_W$, $\alpha$ and the widths of the $W$ and $Z$ bosons,
are computed at tree level in terms of these constants.
The Higgs width includes
the fermionic contributions $h\to \mu\mu,\tau\tau,cc,bb$, with running masses
for the quarks (to take into account QCD corrections~\cite{hwg}),
the gluonic contribution
$h\to gg$ ~\cite{hwg}, and the two-vector boson channel, according
to ref.~\cite{kniel}.
The denominators of the bosonic propagators are of the form
$p^2 - M^2 + i\Gamma M$, with fixed widths $\Gamma$.
As already discussed in ref.~\cite{sixfzpc}, the aim of this choice is to
minimize the possible sources of gauge violation in the computation~\cite{bhf}.
Such gauge violations have been studied with the same
methods as in ref.~\cite{sixfzpc}. In particular, as far as $SU(2)$
gauge symmetry is concerned, comparisons have been made with results in the
so-called ``fudge scheme''~\cite{fudge}.
A disagreement has been found at the level of a few per cent
at a c.m. energy of $360$ GeV. The disagreement vanishes at higher energies.
By careful inspection of the various contributions, it has been checked
that the deviation at lower energies is due to the well-known fact that a given
fudge factor, close to a resonance, mistreats the contributions that do not
resonate in the same channel.
Concerning $U(1)$ invariance, a test has been performed by using different
forms of the photon propagator and finding perfect agreement,
up to numerical precision, in the values
of the squared matrix element.
The first group of results discussed in this section refers to cross-section
calculations, performed by using the program as an integrator of weighted
events. The signature considered in the first plots of total cross section
is $q\overline ql^+l^-\nu\overline\nu$, where, in addition to the sums over
quark and neutrino flavours already mentioned, there is a sum over
$l=e,\mu,\tau$. All other results refer instead to the signature
$q\overline qe^+e^-\nu\overline\nu$.
Some samples of unweighted events, obtained by using the code as a generator,
are then analysed in the remaining part of this section.
\subsection{Total cross sections}
In fig.~\ref{fig:6fsscan} the total cross section (including the contribution
of all the tree-level Feynman diagrams) is shown for three values of the Higgs
mass in the intermediate range,
$165$, $185$ and $255$ GeV, at energies between $360$ and $800$ GeV. For
a first analysis, the following kinematical cuts are adopted: the invariant
masses of the quark--antiquark pair and of the charged lepton pair are
required to be
greater than $70$ GeV, and the angles of the two charged leptons with respect to
the beam axis must lie between $5^\circ$ and $175^\circ$.
This choice
is applied to the quantities shown in figs.~\ref{fig:6fsscan},
\ref{fig:6f185isr} and~\ref{fig:6ft-spbg}.
Another set of cuts, with a lower limit of $20$ GeV
on the $l^+l^-$ invariant mass, is adopted in fig.~\ref{fig:6fsscan20} and
in the study of event samples.
The increase with energy, common to all three curves in
fig.~\ref{fig:6fsscan}, is due, at high energies, to the $t$-channel
contributions; in the case of $m_H=255$ GeV, the steep rise near
$\sqrt{s}=360$ GeV is related to the existence of a threshold
effect for the Higgs-strahlung process at an energy $\sqrt{s}\sim m_H + M_Z$.
\begin{figure}
\begin{center}
\epsfig{file=fig4.eps,height=8.cm,width=8.cm}
\caption{\small Total cross section for the process
$e^+e^-\to q\overline ql^+l^-\nu\overline\nu$ in the Born approximation,
as a function of $\sqrt s$ for three different values of the Higgs mass
$m_H$. The angles $\theta(l^+)$, $\theta(l^-)$
of the charged leptons with the beam axis are in the interval
$5^\circ$--$175^\circ$;
the $l^+l^-$ and the $q\bar q$ invariant masses are larger than $70$~GeV.}
\label{fig:6fsscan}
\end{center}
\end{figure}
In fig.~\ref{fig:6fsscan20} the total cross section is plotted, with the cut on
the invariant mass of the charged lepton pair reduced to $20$ GeV. The effect
of this modification, as expected, is an enhancement of the cross section in
the low-energy region: indeed the most important contribution at energies below
$500$ GeV is given by the Higgs-strahlung diagram with the Higgs decaying into
two $W$ bosons (indicated from now on as the charged-current
Higgs-strahlung diagram), which is characterized by a broad distribution of
the $l^+l^-$ invariant mass extending well below $70$ GeV. The $20$ GeV cut is
still sufficient to reduce to a negligible size the contribution of virtual
photon conversion into $l^+l^-$ pairs. The behaviour of the cross section as the Higgs
mass is varied depends on the interplay of the various production mechanisms
and of the decay branchings involved; this behaviour can be better observed in
the ``signal'' contribution that will be defined below
(see fig.~\ref{fig:sigmh}).
\begin{figure}
\begin{center}
\epsfig{file=fig5.eps,height=8.cm,width=8.cm}
\caption{\small The same as in fig.~\ref{fig:6fsscan} with the cut on
the $l^+l^-$ invariant mass reduced to $20$ GeV.}
\label{fig:6fsscan20}
\end{center}
\end{figure}
The effect of ISR is illustrated in fig.~\ref{fig:6f185isr}, for a Higgs mass of
$185$ GeV and for the signature $q\overline qe^+e^-\nu\overline\nu$, to which
all the remaining results refer. Here the $e^+e^-$ invariant mass is again
greater than $70$ GeV.
\begin{figure}
\begin{center}
\epsfig{file=fig6.eps,height=7.5cm,width=7.5cm}
\caption{\small Effect of initial-state radiation on the total cross section
of the process $e^+e^-\to q\overline qe^+e^-\nu\overline\nu$ as a function
of $\sqrt s$ for a Higgs mass $m_H=185$ GeV. Cuts are the same as in
fig.~\ref{fig:6fsscan}.}
\label{fig:6f185isr}
\end{center}
\end{figure}
The cross section in the presence of ISR is lowered by about
$10\%$ with respect to the Born approximation. This behaviour is
easily understood: initial-state radiation reduces the effective c.m. energy
and thus shifts the process towards energy values where the cross
section is smaller.
\subsection{Definition and study of the Higgs signal}
The results discussed so far, as stated above, are given by the sum of all the
tree-level Feynman diagrams. Strictly speaking, this is the only meaningful
procedure. On the other hand, there are a number of reasons to
consider a subset of diagrams that can be defined as the Higgs signal and to
define a corresponding background. In the first place this is of great interest
from the point of view of the search for the Higgs boson in the experiments.
Moreover, as will be shown, such a definition allows one to make a comparison
with results obtained in the narrow width approximation
(NWA)~\cite{hstr}--\cite{zervast},
which are the only available
estimates unless a complete $6f$ calculation is performed.
In principle, whenever a subset of diagrams is singled out, gauge invariance
may be lost and unitarity problems may arise. However, in the following, an
operative definition of signal and background is considered and its reliability
is studied for various Higgs masses and c.m. energies.
The signal is defined as the sum of the six graphs containing a resonant Higgs
boson, shown in fig.~\ref{fig:6fd}. The background is defined as the sum of all
the diagrams without a Higgs boson. In this definition the diagrams with a
non-resonant Higgs boson coupled to gauge bosons, shown in fig.~\ref{fig:6fdht},
are missing both in the signal and in the background. Such a choice has been
dictated by the fact that these non-resonant contributions cannot correctly be
included in the signal, since they cannot find a counterpart in the NWA, and
because of gauge cancellations with background
contributions at high energies; however, as they depend on the Higgs mass,
they should not be included in the background as well.
In order to give a quantitative estimate of the validity of this definition,
the total cross section (sum of all the tree-level
$6f$ Feynman diagrams) is compared in fig.~\ref{fig:6ft-spbg} with the
incoherent sum of signal and background.
The cuts are as in fig.~\ref{fig:6fsscan}, in particular
with the $e^+e^-$ invariant mass greater than $70$ GeV. In order to understand
the meaning of these results, it is important to note that, as observed above,
the contributions of the diagrams of fig.~\ref{fig:6fdht} are absent both from
the signal and from the background: thus if we indicate these contributions to
the scattering amplitude as $A_{ht}$, and the signal and background amplitudes
as $A_s$ and $A_b$ respectively, the total squared amplitude is
\begin{equation}
\vert A\vert^2 =\vert A_s + A_b + A_{ht}\vert^2 .
\end{equation}
The terms neglected in the incoherent sum of signal and background are
$\vert A_{ht}\vert^2$ and all the interference terms. Among these, the
interferences of $A_{ht}$ with the rest are dominant at high energies
as they involve gauge cancellations.
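This bookkeeping can be checked with toy complex amplitudes (the numbers in the test below are purely illustrative): the difference between the coherent square and the incoherent sum is exactly $\vert A_{ht}\vert^2$ plus the interference terms.

```python
def squared_pieces(A_s, A_b, A_ht):
    """Return (full, incoherent, neglected) for toy complex amplitudes:
    full       = |A_s + A_b + A_ht|^2   (all diagrams, coherent),
    incoherent = |A_s|^2 + |A_b|^2      (signal + background, no interference),
    neglected  = full - incoherent
               = |A_ht|^2 + 2 Re(A_s A_b*) + 2 Re[(A_s + A_b) A_ht*]."""
    full = abs(A_s + A_b + A_ht) ** 2
    incoherent = abs(A_s) ** 2 + abs(A_b) ** 2
    return full, incoherent, full - incoherent
```

The `neglected` piece collects precisely the terms stated in the text: $\vert A_{ht}\vert^2$ and all the interference contributions.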
\begin{figure}
\begin{center}
\epsfig{file=fig7.eps,height=11.cm,width=11.cm}
\caption{\small Full six-fermion cross section compared with the incoherent sum
of ``signal'' (S) and ``background'' (B).
A detailed discussion and an operative definition of ``signal''
and ``background'' are given in the text.}
\label{fig:6ft-spbg}
\end{center}
\end{figure}
The curves of fig.~\ref{fig:6ft-spbg} show that up to $500$ GeV the total cross
section and the incoherent sum of signal and background are indistinguishable
at a level of accuracy of $1\%$,
and the definition of the signal may be considered meaningful; at higher
energies, this separation of signal and background starts to be less reliable,
since it requires neglecting effects that are relevant at this level of
accuracy. In particular, at $800$ GeV the deviation is of the order of a few
per cent and it
decreases when the Higgs mass passes from $165$ to $185$ and to $255$ GeV.
The above results are obtained with the set of kinematical cuts in which the
$e^+e^-$ invariant mass is greater than $70$ GeV, but when this cut is at
$20$ GeV, the difference between full cross section and incoherent sum of
signal and background is significantly reduced (about $3$--$4\%$ at $800$ GeV).
The analysis of event samples
presented in the following is made within this latter set of cuts, so that, up
to $800$ GeV, speaking of a ``background'', as will be done, can be considered
reliable at the level of a few per cent.
On the other hand the problems arising when a definition of signal and
background is attempted show the importance of a calculation involving the full
set of tree-level Feynman diagrams to obtain reliable results, especially at
high energies.
A comparison with the NWA is shown in
fig.~\ref{fig:6ff-nwa} for the processes
$e^+e^-\to q\overline q e^+ e^- \nu\overline\nu$ and
$e^+e^-\to q\overline q \mu^+ \mu^- \nu\overline\nu$, where, for the sake of
comparison, no kinematical cuts are applied and the results are in the Born
approximation. Here $\sigma_{sig}$
is the signal cross section, containing the contributions of the six diagrams of
fig.~\ref{fig:6fd} (or the suitable subset of these for the case of the final
state $q\overline q \mu^+ \mu^- \nu\overline\nu$) and their interferences.
The cross section in the NWA, $\sigma_{NWA}$, is obtained in the following way
(for definiteness the case with $e^+e^-$ in the final state is considered): the
known cross sections for the processes of real Higgs production
$e^+e^-\to h\nu\overline\nu,he^+e^-$~\cite{almele,zervast} and
$e^+e^-\to Zh$~\cite{hstr} are multiplied by the appropriate branching ratios,
so as to obtain six terms corresponding to the diagrams of fig.~\ref{fig:6fd};
then the incoherent sum of these terms is taken.
Thus the comparison between $\sigma_{sig}$ and $\sigma_{NWA}$
gives a measure of interference between the different production mechanisms
and of off-shellness effects together.
\begin{figure}
\begin{center}
\epsfig{file=fig8.eps,height=14.cm,width=14.cm}
\caption{\small Comparison between the signal cross section obtained by a
diagrammatic six-fermion calculation and the one calculated in the
narrow width approximation (see the discussion in the text),
as a function of $\sqrt s$ (upper row) and of the Higgs mass
(lower row).}
\label{fig:6ff-nwa}
\end{center}
\end{figure}
As can be seen in fig.~\ref{fig:6ff-nwa}, the relative difference $R$ is of the
order of
a few per cent, depending on the Higgs mass and the c.m. energy; in some cases
it reaches values of more than $10\%$, with no substantial difference between
the two final states considered. In particular the off-shellness effects
are much more important than the interference ones. In fact the relative size
of the interferences has been separately evaluated by means of a comparison
between $\sigma_{sig}$ and the incoherent sum of the six diagrams of
fig.~\ref{fig:6fd} and has been found to be at most $2\%$, but generally less
than $1\%$ for the c.m. energies and Higgs masses considered here.\par
The size of the off-shellness effects, comparable with that of the ISR
reduction, indicates the importance of a full
$6f$ calculation in order to obtain sensible phenomenological
predictions.
In fig.~\ref{fig:sigmh} the signal cross section is shown as a function of the
Higgs mass for different c.m. energies.
The behaviour is related to the branching ratios of the decays of
the Higgs boson into gauge bosons and the differences between the three energy
values considered are due to the variations in the relative sizes of the
different signal contributions at different energies.
\begin{figure}
\begin{center}
\epsfig{file=fig9.eps,height=8.cm,width=8.cm}
\caption{\small Signal cross section as a function of the Higgs mass for three
different c.m. energies.}
\label{fig:sigmh}
\end{center}
\end{figure}
\vskip 5 truemm
\subsection{Distributions}
The results presented in the following refer to samples of unweighted events
for a Higgs mass of $185$ GeV at energies of $360$ and $800$ GeV, with or
without the effect of ISR and beamstrahlung. The cuts adopted in all cases are
the following: the invariant mass of the $q\overline q$ pair greater than $70$
GeV, the invariant mass of the $e^+e^-$ pair greater than $20$ GeV, and the
angles of the electron and positron with respect to the beam axis between
$5^\circ$ and $175^\circ$; further cuts, applied in the analysis of particular
cases, will be described later. The numbers of events in all the samples are
normalized to the same luminosity.
In fig.~\ref{fig:nt185ibs4} the invariant masses of two different systems of
four momenta are studied at c.m. energies of $360$ and $800$ GeV in the Born
approximation (dashed histograms) as well as with ISR and beamstrahlung
(solid histograms). The first set is given by
$e^+e^-+$ missing momentum, where the missing momentum is defined as
$q_{miss}=p^+_{in}+p^-_{in}-q_{e^+}-q_{e^-}-q_q-q_{\overline q}$.
In the Born approximation this set of momenta corresponds to the system
$e^+e^-\nu\overline\nu$. The other set considered is that corresponding to the
four-fermion system $q\overline qe^+e^-$.
As can be seen in fig.~\ref{fig:6fd}, these are two of the possible sets of
four fermions produced by the decay of the Higgs boson in the process under
consideration; there is also a third set, $q\overline q\nu\overline\nu$, whose
invariant mass distribution, however, does not contain any new feature.
The presence of the Higgs boson can be revealed by a peak in the
distributions of these invariant masses. Indeed, in the Born approximation
(dashed histograms), a sharp peak around $185$ GeV can be seen in each of the
histograms of fig.~\ref{fig:nt185ibs4}. At a c.m. energy of $360$ GeV, the most
remarkable one is that of $e^+e^-+$ missing momentum, associated
with the system $e^+e^-\nu\overline\nu$, as it receives contributions from the
charged current Higgs-strahlung diagram, which is dominant at this energy.
In the presence of ISR and beamstrahlung, this peak is considerably lowered and
broadened, while the other distribution, not involving the missing momentum,
is not significantly affected by radiative effects.
At $800$ GeV this phenomenon is even more evident: the peak in the first
distribution is completely eliminated by radiative effects, as a consequence of
the small size of the charged-current Higgs-strahlung term at this energy,
while the second distribution turns out to be very sensitive to the presence of
the Higgs, since around $185$ GeV it receives contributions from the diagram of
$WW$ fusion into the Higgs, which is the dominant signal term at high energies;
moreover, the presence of ISR and beamstrahlung does not modify the shape of the
histogram. Thus, at high energies, a very clean signal of the
Higgs boson is provided by the process under study through this distribution.
\begin{figure}
\begin{center}
\epsfig{file=fig10.eps,height=11.cm,width=11.cm}
\caption{\small Invariant-mass distributions for four-fermion systems in the
Born approximation (dashed histograms) and with ISR and beamstrahlung (solid
histograms) at $\sqrt s=360$ GeV (upper row) and $\sqrt s=800$ GeV (lower row).}
\label{fig:nt185ibs4}
\end{center}
\end{figure}
The quantities analysed above are useful to reveal the presence of the Higgs
boson and to determine its mass. Other variables can be considered to study
the properties of this particle, such as spin and parity. Some examples are
considered in figures~\ref{fig:th12b},~\ref{fig:cthzz} and~\ref{fig:thzw0}.
When the process $e^+e^-\to HZ$ is considered, a variable that can give
evidence of the scalar nature of the Higgs is the angle $\theta_Z$ of the $Z$
particle direction with respect to the beam in the laboratory frame.
It is well known~\cite{hstr} that the differential cross section
$d\sigma/d\cos\theta_Z$
goes as $\sin^2\theta_Z$
at energies much greater than $M_Z$ and away from the threshold for Higgs
production. A similar situation is expected to occur for the $6f$
process under study when the Higgs-strahlung contributions are dominant.
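A toy Monte Carlo (illustrative only; the paper's histograms come from the full $6f$ calculation) shows how the asymptotic $\sin^2\theta_Z$ density in $\cos\theta_Z$ appears in the variable $\theta_Z$ itself, where the Jacobian $\sin\theta_Z$ yields a $\sin^3\theta_Z$ shape, still peaked at $\theta_Z=\pi/2$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample cos(theta_Z) from the asymptotic sin^2(theta_Z) density by
# accept-reject (toy distribution, not the full 6f matrix element).
n = 200_000
c = rng.uniform(-1, 1, n)
keep = rng.uniform(0, 1, n) < (1 - c**2)        # sin^2 = 1 - cos^2
theta = np.arccos(c[keep])

# In theta the density acquires a Jacobian sin(theta):
# dN/dtheta ~ sin^3(theta), so it still peaks at theta = pi/2.
hist, edges = np.histogram(theta, bins=20, range=(0, np.pi))
peak_bin = np.argmax(hist)
```

The histogram maximum lands in the bins around $\theta_Z=\pi/2$, mirroring the behaviour expected for the Higgs-strahlung-dominated region.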
The distribution $d\sigma/d\theta_Z$ is shown, at the c.m. energies of $360$
and $800$ GeV,
in fig.~\ref{fig:th12b}, where the $Z$ particle is reconstructed as
the sum of the quark and antiquark momenta (indeed this is the case for the
dominant diagram). The contribution from the background alone (dashed histogram)
is also shown. The shape of the solid histogram shows
the expected behaviour at $360$ GeV, where Higgs-strahlung dominates.
\begin{center}
\begin{figure}
\begin{center}
\epsfig{file=fig11.eps,height=8.cm,width=14.cm}
\caption{\small Distribution of the angle $\theta_Z$
of the $q\bar q$ pair with respect to the beam in the
laboratory frame at $\sqrt s=360,800$ GeV.
The solid histogram represents the full calculation, the dashed histogram is
the contribution of the background.}
\label{fig:th12b}
\end{center}
\end{figure}
\end{center}
At the c.m. energy of $800$ GeV, where the dominant signal diagram is
$WW$ fusion into Higgs, the situation is substantially different, since the
process of Higgs production is of the $t$-channel type. One variable that proves
very sensitive to the presence of the Higgs boson is shown in
fig.~\ref{fig:cthzz} and indicated as $\cos\theta_{ZZ}$; $\theta_{ZZ}$ is the
angle between the three-momenta in the laboratory frame
of the $q\bar q$ and $e^+e^-$ pairs, which
correspond, in the diagram of $WW$ fusion, to the $Z$ particles coming from the
Higgs. The full distribution (solid line) and the contribution from the
background alone (dashed line) differ markedly in the region
near $1$. This is a clear signal of the presence of the Higgs, and such a
variable can be used to impose kinematical cuts to single out signal
contributions. This effect is, however, of a kinematical nature: it is not
directly related to the scalar nature of the Higgs, but is rather a
consequence of the smallness of background diagrams with the same topology
as the $WW$ fusion.
\begin{center}
\begin{figure}
\begin{center}
\epsfig{file=fig12.eps,height=8.cm,width=8.cm}
\caption{\small Distribution of the angle $\theta_{ZZ}$
between the $q\bar q$ and the $e^+e^-$ pairs in the
laboratory frame at $\sqrt s=800$ GeV. The solid histogram is the full
calculation, the dashed histogram is the background.}
\label{fig:cthzz}
\end{center}
\end{figure}
\end{center}
Another variable has been considered at the energy of $800$ GeV in
fig.~\ref{fig:thzw0}. It is the angle $\theta^*_Z$ of the $Z$ particle,
reconstructed as the sum of the electron and positron momenta, with respect to
the beam, in the rest frame of the system $q\bar qe^+e^-$. The reference diagram
is always the $WW$ fusion: this may be regarded asymptotically as an $s$-channel
$WW$ scattering into $ZZ$, and, in the rest frame of the incoming $WW$ pair,
the angular distribution of the produced $ZZ$ pair is determined by the
scalar nature of the exchanged particle.
In the first row of fig.~\ref{fig:thzw0} the plot on
the left is made without additional cuts, while the plot on the right is
obtained with the requirement $\cos\theta_{ZZ}>0$, so as to reduce the
background. In the
second row the invariant mass of $q\bar qe^+e^-$ is required
to be smaller than 250 GeV (left) and
within 20 GeV around the Higgs mass (right) in order to further suppress the
background.
A clear difference between the shape of the full distribution and
that of the background can in fact be seen, and in the last three plots the
behaviour is very similar to the $\sin\theta^*_Z$ distribution expected on the
basis of the above observations.
\begin{center}
\begin{figure}
\begin{center}
\epsfig{file=fig13.eps,height=14.cm,width=14.cm}
\caption{\small Distribution of the angle $\theta^*_{Z}$
of the $e^+e^-$ pair with respect to the beam axis in the
rest frame of the $q\bar qe^+e^-$ system at $\sqrt s=800$ GeV. The solid
histogram is the full calculation, the dashed histogram is the
background. First row: no additional cuts (left), $\cos\theta_{ZZ}>0$ (right),
where the angle $\theta_{ZZ}$ is defined in the text and shown in
fig.~\ref{fig:cthzz}. Second row: $q\bar qe^+e^-$ invariant mass smaller than
250 GeV (left) and within 20 GeV around the Higgs mass (right).}
\label{fig:thzw0}
\end{center}
\end{figure}
\end{center}
\section{Conclusions}
The processes $e^+e^-\to q\overline q l^+ l^- \nu\overline\nu$ have been studied
in connection with the search for an intermediate-mass Higgs boson. The study,
which extends a previous analysis of $6f$ signatures with only two jets,
is characterized by the presence of neutral current contributions that were
never considered before and by the fact that several mechanisms
of Higgs production are simultaneously active.
The tool used for the numerical calculations is a Fortran code based on the
algorithm ALPHA, for the determination of the scattering amplitude, and on a
development of the Monte Carlo program HIGGSPV/WWGENPV, for the phase-space
integration.
The total cross section, including all the tree-level Feynman diagrams, has
been calculated with various kinematical cuts and taking into account the
effects of ISR and beamstrahlung.
A definition of signal and background has been considered and its reliability
has been studied. To this end the incoherent sum of ``signal'' and
``background'' has
been compared with the full cross section, and this has shown deviations that,
up to a c.m. energy of $500$ GeV, are negligible at the $1\%$ accuracy level, but may
reach several per cent at $800$ GeV (fig.~\ref{fig:6ft-spbg}).
These deviations are, however, reduced when the
kinematical selection criteria become more inclusive.
A comparison of the ``signal'' cross section with results in the NWA
has shown that off-shellness effects have a relative size of
several per cent (fig.~\ref{fig:6ff-nwa}).
The results of figs.~\ref{fig:6ft-spbg} and \ref{fig:6ff-nwa} show the
importance of a complete $6f$ calculation to produce reliable results
at the TeV scale.
In the study of generated events the problem of finding observables that are
sensitive to the presence of the Higgs and to its properties has been
addressed. The
presence of several mechanisms of Higgs production, whose relative importance
varies with energy, requires that different variables be considered according
to the energy range studied.
The invariant masses of two sets of four fermions have been analysed first
(fig.~\ref{fig:nt185ibs4}): one, relative to the system $e^+e^-+$missing
momentum, is relevant to the detection of the Higgs boson at a c.m. energy of
$360$ GeV, but, at $800$ GeV, the effects of ISR and beamstrahlung
prevent a study of the Higgs by means of this distribution.
The other invariant mass, relative to the system $e^+e^- q\bar q$, is instead
particularly useful at high energies and is almost completely unaffected by
radiative effects.
Three angular variables have then been studied: the angle $\theta_Z$
(fig.~\ref{fig:th12b}) is suited
to reveal the spin zero nature of the Higgs at $360$ GeV, where the
Higgs-strahlung dominates, but it gives no information at $800$ GeV.
The angles $\theta_{ZZ}$ (fig.~\ref{fig:cthzz}) and $\theta^*_Z$
(fig.~\ref{fig:thzw0}) are very useful at $800$ GeV: the first one is very
effective in singling out the signal, but cannot distinguish the spin
nature of the Higgs; the second one has a distribution whose shape is very
different from that of the background, and is related to the spinless nature
of the Higgs particle.
The computing strategy and the corresponding computer code
developed in this work have been applied to study
intermediate-mass Higgs physics.
However, the variety
of diagram topologies present in the matrix element and
taken into account in the Monte Carlo integration,
as well as the possibility provided by ALPHA of dealing with any
kind of process, including now also QCD amplitudes, give the opportunity to
examine other topics relevant to physics at future colliders, where
$6f$ production is involved.
\vspace{1.truecm}
\noindent
{\bf Acknowledgements}\\
We wish to thank A.~Ballestrero and T.~Ohl for discussions and
for their interest in our work. The work of M.~Moretti is funded
by a Marie Curie fellowship (TMR-ERBFMBICT 971934).
F.~Gangemi thanks the INFN, Sezione di Pavia, for the use of
computing facilities.
\section{Introduction}
The description of deuteron properties and of reactions involving the
deuteron forms a large part of Arenh\"ovel's work. Relying
also on his analyses~\cite{arenhoevel} of experimental data, the
discussion of relativistic issues in reactions involving the deuteron
has become more and more important in recent years. There are well
known examples of clear experimental evidence, in particular in the
electromagnetic disintegration of the deuteron~\cite{cam82,sch92}.
Two relativistic approaches to the nucleon nucleon system
have received special attention in the last few years. One is based on
the Bethe-Salpeter equation~\cite{sal51} and its various
three-dimensional reductions. Based on quantum field theory the
Bethe-Salpeter formalism is four-dimensional and explicitly Lorentz
covariant. Interactions (e.g. electromagnetic currents) are
consistently treated via the Mandelstam formalism leading to Feynman
diagrams and the corresponding rules. The second approach considered
here is based on light front dynamics~\cite{dir49}. The state vector
describing the system is expanded in Fock components defined on a
hyperplane in the four-dimensional space-time. This approach is
intuitively appealing since it is formally close to the
nonrelativistic conception in terms of Hamiltonians, and state vectors
may be directly interpreted as wave functions.
The equivalence between these field theoretic and light front
approaches has been a subject of recent discussions, see
e.g.~\cite{equiv} and references therein. A comparison of both
approaches for the deuteron as a two body system clarifies the
structure of the different components of the amplitude. It is also
useful in the context of three particle dynamics where the proper
covariant and/or light front construction of the nucleon amplitude in
terms of three valence quarks (including spin dependence and
configuration mixing) is presently discussed~\cite{bkw98,karm98}.
Although the relation between light front and Bethe-Salpeter
amplitudes for the two nucleon system has been spelled out to some
extent in a recent report~\cite{car98}, we provide here some useful
details and additionally discuss the use of the Nakanishi
representation.
To proceed we first present different ways to construct a complete
(covariant) Bethe-Salpeter amplitude, see, e.g. Ref.~\cite{BBBD}. In
particular, we consider the so called direct product and the matrix
representation form. Besides the spin structure of the wave function
(amplitudes) we also present a comparison of the ``radial'' part of
the amplitude on the basis of the Nakanishi integral
representation~\cite{nakanishi}. This integral representation -- well
known and elaborated for the scalar case, see, e.g.
Ref.~\cite{williams}, however not so frequently used -- allows us to
establish a connection between the different approaches also for the
weight functions (or densities), which has not been done so far and is
relevant for a treatment of the Bethe-Salpeter equation in Minkowski
space.
In the following we present different ways to construct a complete
(covariant) Bethe-Salpeter amplitude. In this context the so called
direct product representation used in the rest frame of the nucleon
nucleon system using the $\rho$-spin notation is close to the
nonrelativistic coupling scheme and provides states of definite
angular momentum. To construct the covariant basis this form is
transformed into a matrix representation which will then be expressed
in terms of Dirac matrices. A generalization to arbitrary deuteron
momenta finally leads to the covariant representation of the
Bethe-Salpeter amplitude. This will be explained in the next section
along with an explicit construction of the deuteron ($J=1$) and the
$J=0$ nucleon nucleon state.
The construction of the light front form from the Bethe-Salpeter
amplitude will be given in Section~\ref{sect:lfbs}. The presentation of
the light front approach will be kept concise here, since it has been
presented at length in a recent report~\cite{car98}. Again we show the
results for $J=1$ and $J=0$ deuteron and scattering states. Finally,
we present the analysis in terms of the Nakanishi integral
representation.
\section{The Bethe-Salpeter approach to the two nucleon system}
Commonly, two forms are utilized to describe the Bethe-Salpeter wave
functions (amplitudes) known as direct product form and matrix form.
They will be explained in the following.
For convenience, we introduce the $\rho$-spin notation for Dirac
spinors with momentum \mbf{p}, and spin projection $\mu $,
\begin{equation}
U_{\mu}^{\rho}(\mbf{p})=
\left\{\begin{array}{ll} u_{\mu}(\mbf{p}),&\rho=+\\
v_{-\mu}(-\mbf{p}),&\rho=-\end{array}\right.
\end{equation}
The Dirac spinors $u_\mu(\mbf {p})$, $v_\mu(\mbf{p})$
are defined according to Ref. \cite{itzik}, viz.
\begin{eqnarray}
u_{\mu}(\mbf{p}) =
{\cal L}(\mbf{p}) u_{\mu} (\mbf{0}), \quad
v_{\mu}(\mbf{p}) =
{\cal L}(\mbf{p}) v_{\mu} (\mbf{0}).
\label{trans}
\end{eqnarray}
where the boost of a spin-${\textstyle\frac 12}$ particle with mass $m$ is given by
\begin{eqnarray}
{\cal L}(\mbf{p}) =
\frac {m+ p\cdot\gamma \gamma_0}{\sqrt{2E(m+E)}},
\label{lor}
\end{eqnarray}
where the nucleon four momentum is $p=(E,\mbf{p})$, and
$E=\sqrt{\mbf{p}^2+m^2}$. In the rest frame of the particles the
spinors are given by
\begin{eqnarray}
u_{\mu}(\mbf{0}) =
\left(\begin{array}{c} \chi_{\mu} \\ 0\ \end{array} \right), \quad
v_{\mu}(\mbf{0})=
\left(\begin{array}{c} 0 \\ \chi_{-\mu} \ \end{array} \right).
\nonumber
\end{eqnarray}
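As a numerical cross-check of Eqs.~(\ref{trans}) and (\ref{lor}) (a sketch with gamma matrices in the Dirac representation; the values of $m$ and $\mbf{p}$ are illustrative), the boosted spinor ${\cal L}(\mbf{p})\,u_{\mu}(\mbf{0})$ indeed solves the free Dirac equation and is normalized to $u^{\dagger}u=1$:

```python
import numpy as np

# Gamma matrices in the Dirac representation
I2 = np.eye(2); Z2 = np.zeros((2, 2))
sig = [np.array([[0, 1], [1, 0]], complex),
       np.array([[0, -1j], [1j, 0]], complex),
       np.array([[1, 0], [0, -1]], complex)]
g0 = np.block([[I2, Z2], [Z2, -I2]]).astype(complex)
gk = [np.block([[Z2, s], [-s, Z2]]) for s in sig]

def slash(p):
    # p . gamma = g0 p0 - gamma^k p^k  (metric +,-,-,-)
    return g0 * p[0] - sum(gk[i] * p[i + 1] for i in range(3))

m = 0.938                                # nucleon mass in GeV, illustrative
pvec = np.array([0.1, -0.2, 0.3])        # illustrative three-momentum
E = np.sqrt(pvec @ pvec + m**2)
p = np.array([E, *pvec])

# Boost matrix L(p) = (m + (p.gamma) g0) / sqrt(2E(m+E))
L = (m * np.eye(4) + slash(p) @ g0) / np.sqrt(2 * E * (m + E))

u0 = np.array([1, 0, 0, 0], complex)     # rest-frame spinor u_{+1/2}(0)
u = L @ u0

# u(p) must satisfy the free Dirac equation: (p.gamma - m) u = 0
residual = np.linalg.norm((slash(p) - m * np.eye(4)) @ u)
```

The same check works for the $v$ spinors after flipping the sign of the mass term.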
In the direct product form the basis of the two particle spinor in
the rest frame is represented by
\begin{equation}
{U_{\mu_1}^{\rho_1}}(\mbf{p})\;
{U_{\mu_2}^{\rho_2}}(-\mbf{p}).
\end{equation}
Which of the combinations $\rho_1\rho_2$ are actually present in a
particular amplitude depends on the parity and permutation symmetries
required. For example, since $U^+U^-$ is parity odd, in the deuteron this
combination could only appear for $L=1$. Therefore the appearance of
$P$-states is a typically relativistic effect, because $U^-\rightarrow
0$ in the nonrelativistic limit.
The spin-angular momentum part ${\cal Y}_M^{\alpha}(\mbf{p})$ of the
two nucleon amplitude is then given by
\begin{eqnarray}
{\cal Y}_M^{\alpha}(\mbf{p})=
i^L \sum \limits_{\mu_1 \mu_2 m_L}\;
\langle L m_L S m_S | J M\rangle \;
\langle{\textstyle\frac 12} \mu_1 {\textstyle\frac 12} \mu_2 | S m_S\rangle\;
Y_{L m_L}({\hat{\mbf{p}}})\;
U_{\mu_1}^{\rho_1}(\mbf{p})\;
U_{\mu_2}^{\rho_2}(-\mbf{p}),
\label{Ydecomp}
\end{eqnarray}
where $\langle\cdot|\cdot\rangle$ denotes the Clebsch-Gordan
coefficient, and ${\hat {\mbf{p}}} =
{\mbf{p}}/{|\mbf{p}|}$. The decomposition is according to the quantum
numbers of relative orbital angular momentum $L$, total spin $S$,
total angular momentum $J$ with projection $M$, and $\rho$-spin
$\rho_1$, $\rho_2$, collectively denoted by $\alpha$ \cite{kubis}. The
Bethe-Salpeter amplitude of the deuteron with mass $M_d$ is then
written in the following way (see also ~\cite{honzava})
\begin{eqnarray}
\Phi_{JM}(\stackrel{\circ}{P},p)=\sum \limits_{\alpha} \;
g_{\alpha} (p_0,|\mbox{\boldmath $p$}|)\;
{\cal Y}_M^{\alpha}(\mbf{p}),
\label{reldp}
\end{eqnarray}
where $\stackrel{\circ}{P}=(M_d,\mbf{0})$. The radial parts of the
wave function are denoted by $g_{\alpha} (p_0,|\mbox{\boldmath $p$}|)$.
The matrix representation of the Bethe-Salpeter
amplitude~\cite{nakanishi} is obtained from the above expression
Eq.~(\ref{reldp}) by transposing the spinor of the second particle. In
the rest frame of the system this reads for the basis spinors
\begin{equation}
{U_{\mu_1}^{\rho_1}}(\mbf{p})\;
{U_{\mu_2}^{\rho_2}}(-\mbf{p})\;
\longrightarrow \;
{U_{\mu_1}^{\rho_1}}(\mbf{p})\;
{U_{\mu_2}^{\rho_2\top}}(-\mbf{p}),
\label{replace}
\end{equation}
which is now a $4\times 4$ matrix in the two particle spinor space.
The nucleon nucleon Bethe-Salpeter wave function in this basis
is then represented by
\begin{eqnarray}
\Psi_{JM}(\stackrel{\circ}{P},p)=\sum \limits_{\alpha}\;
g_{\alpha} (p_0,|\mbox{\boldmath $p$}|)\;
\Gamma_M^{\alpha}(\mbf{p})\;C,
\label{reldmatrix}
\end{eqnarray}
where $C$ is the charge conjugation matrix, $C=i\gamma_2\gamma_0$, and
$\Gamma_M^{\alpha}$ is defined as ${\cal Y}_M^{\alpha}$ where the
replacement Eq.~(\ref{replace}) is used.
As an illustration we give an example of how to calculate the
spin-angular momentum part of the vertex function for the $^3
S_1^{++}$ state, where we use the spectroscopic notation
$^{2S+1}L^{\rho_1 \rho_2}_{J}$ of Ref.~\cite{kubis}. In this case
\begin{eqnarray}
\lefteqn{\sqrt{4 \pi}\; {\Gamma}^{^3 S_1^{++}}_M( \mbf{p})
= \sum \limits _{\mu_1 \mu_2}\;
\langle{\textstyle\frac 12} \mu_1 {\textstyle\frac 12} \mu_2 |1M\rangle\; u_{\mu_1}( \mbf{p})\;
u_{\mu_2}^\top(-\mbf{p})}&&\nonumber\\
&=&{{{\cal L}}}( \mbf{p})\sum \limits _{\mu_1 \mu_2}
\;\langle {\textstyle\frac 12} \mu_1 {\textstyle\frac 12} \mu_2 |1M\rangle
\;\left(\begin{array}{c} \chi_{\mu_1} \\ 0 \end{array} \right)
\;\left(\chi_{\mu_2}^\top\; 0\right )
\;{{{\cal L}}}^\top(-\mbf{p})\nonumber\\
&=&{{{\cal L}}}( \mbf{p})
\;\left( \begin{array}{cc}
\sum \limits_{\mu_1 \mu_2}
\langle{\textstyle\frac 12} \mu_1 {\textstyle\frac 12} \mu_2 |1M\rangle
\chi_{\mu_1} \chi^\top_{\mu_2} &0 \\ 0&0 \end{array}\right)
\;{{{\cal L}}}^\top(-\mbf{p})\nonumber\\
&=&{{{\cal L}}}( \mbf{p})
\;\frac {1+\gamma_0}{2} \frac 1{\sqrt2}
\;\left( \begin{array}{cc} 0 & -\mbf{\sigma}\cdot \mbf{\epsilon}_M\\
\mbf{\sigma}\cdot\mbf{\epsilon}_M &0 \end{array}\right)
\;\left( \begin{array}{cc} 0 & -i \sigma_2\\
-i\sigma_2 &0 \end{array}\right)
\;{{{\cal L}}}^\top(-\mbf{p})\nonumber\\
&=&{{{\cal L}}}( \mbf{p})
\;\frac {1+\gamma_0}{2} \frac 1{\sqrt2}
\;(-\mbf{\gamma}\cdot\mbf{\epsilon}_M)\;{{{\cal L}}}( \mbf{p})
C\nonumber\\
& =&\frac{1}{2E(m+E)}\frac{1}{\sqrt{2}}\;(m+{\gamma\cdot p_1})\;
\frac{1+\gamma_0}{2}\; \gamma\cdot\epsilon_M\;(m-{\gamma\cdot p_2})\;C,
\nonumber
\end{eqnarray}
Here we make use of $\sqrt{2}\sum_{\mu_1 \mu_2} \langle{\textstyle\frac 12} \mu_1 {\textstyle\frac 12}
\mu_2 |1M\rangle \chi_{\mu_1} \chi^\top_{\mu_2}=
(\mbf{\sigma}\cdot\mbf{\epsilon}_M) \;({i \sigma_2})$, where $
\mbf{\epsilon}_M$ is the polarization vector of the spin-1 composite
system with the components in the rest frame given by
\begin{eqnarray}
\mbf{\epsilon}_{+1}=(-1,-i,0)/\sqrt{2}, \quad
\mbf{\epsilon}_{-1}=(1,-i,0)/\sqrt{2}, \quad
\mbf{\epsilon}_{0}=(0,0,1),
\label{vecpol}
\end{eqnarray}
and the four-vector $\epsilon_M = (0,\mbf{\epsilon}_M)$. This
replacement can be done for all Clebsch-Gordan coefficients, which in
turn allows us to write the basis in terms of Dirac matrices.
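The identity $\sqrt{2}\sum\langle{\textstyle\frac 12}\mu_1{\textstyle\frac 12}\mu_2|1M\rangle\chi_{\mu_1}\chi^\top_{\mu_2}=(\mbf{\sigma}\cdot\mbf{\epsilon}_M)(i\sigma_2)$ and the vectors of Eq.~(\ref{vecpol}) can be verified numerically; in the sketch below the spin projections $\pm\frac12$ are stored as doubled integers $\pm1$:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]], complex)
sz = np.array([[1, 0], [0, -1]], complex)
chi = {+1: np.array([1, 0]), -1: np.array([0, 1])}   # mu = +-1/2, doubled

def cg(mu1, mu2, M):
    # Clebsch-Gordan <1/2 mu1 1/2 mu2 | 1 M>, mu doubled, M = -1, 0, +1
    if mu1 + mu2 != 2 * M:
        return 0.0
    return 1.0 if abs(M) == 1 else 1.0 / np.sqrt(2)

# Polarization vectors of the spin-1 system, Eq. (vecpol)
eps = {+1: np.array([-1, -1j, 0]) / np.sqrt(2),
       -1: np.array([ 1, -1j, 0]) / np.sqrt(2),
        0: np.array([ 0,   0, 1], complex)}

def lhs(M):
    s = sum(cg(m1, m2, M) * np.outer(chi[m1], chi[m2])
            for m1 in (+1, -1) for m2 in (+1, -1))
    return np.sqrt(2) * s

def rhs(M):
    sdot = eps[M][0] * sx + eps[M][1] * sy + eps[M][2] * sz
    return sdot @ (1j * sy)                           # (sigma.eps) (i sigma_2)
```

Both sides agree for all three projections $M$, and the polarization vectors satisfy $\mbf{\epsilon}^*_M\cdot\mbf{\epsilon}_{M'}=\delta_{MM'}$.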
To keep the notation short the $\rho$-spin dependence is taken out of
the matrices and therefore the spin-angular momentum
functions ${\Gamma}^\alpha_M(\mbox {\boldmath$p$})$ are
replaced in the following way
\begin{eqnarray}
{\Gamma}^{\tilde\alpha,\, ++}_M(\mbox {\boldmath$p$}) &=
&\frac{\gamma\cdot p_1 + m}{\sqrt{2E(m+E)}}\;
\frac{1+\gamma_0}{2}\;
{\tilde \Gamma}^{\tilde\alpha}_M(\mbox {\boldmath$p$})\;
\frac{\gamma\cdot p_2 - m}{\sqrt{2E(m+E)}},
\nonumber\\
{\Gamma}^{\tilde\alpha,\, --}_M(\mbox {\boldmath$p$}) &=
&\frac{\gamma\cdot p_2 - m}{\sqrt{2E(m+E)}}\;
\frac{-1+\gamma_0}{2}\;
{\tilde \Gamma}^{\tilde\alpha}_M(\mbox {\boldmath$p$})\;
\frac{\gamma\cdot p_1 + m}{\sqrt{2E(m+E)}},
\nonumber\\
{\Gamma}^{\tilde\alpha,\, +-}_M(\mbox {\boldmath$p$}) &=
&\frac{\gamma\cdot p_1 + m}{\sqrt{2E(m+E)}}\;
\frac{1+\gamma_0}{2}\;
{\tilde \Gamma}^{\tilde\alpha}_M(\mbox {\boldmath$p$})\;
\frac{\gamma\cdot p_1 + m}{\sqrt{2E(m+E)}},
\nonumber\\
{\Gamma}^{\tilde\alpha,\, -+}_M(\mbox {\boldmath$p$}) &=
&\frac{\gamma\cdot p_2 - m}{\sqrt{2E(m+E)}}\;
\frac{1-\gamma_0}{2}\;
{\tilde \Gamma}^{\tilde\alpha}_M(\mbox {\boldmath$p$})\;
\frac{\gamma\cdot p_2 - m}{\sqrt{2E(m+E)}},
\label{gf}
\end{eqnarray}
with $\tilde\alpha=~^{2S+1}L_J$. The matrices ${\tilde
\Gamma}^{\tilde\alpha}$ for $J=0,1$ states are given later in
Tabs.~\ref{tab:1s0} and \ref{tab:3s1}.
To conclude this paragraph we give the following useful relations.
The adjoint functions are defined through
\begin{eqnarray}
{\bar {\Gamma}_M^{\alpha}}(\mbox {\boldmath$p$})
=\gamma_0 \;\left[{{\Gamma}_M^{\alpha}}
(\mbox {\boldmath$p$})\right]^{\dagger}\;\gamma_0,
\label{conj}
\end{eqnarray}
and the orthogonality condition is given by
\begin{eqnarray}
\int d^2{\hat{\mbox {\boldmath$p$}}}\;
{\rm Tr} \left\{ \left[{\Gamma}_M^{\alpha}(\mbox {\boldmath$p$})\right]^{\dagger}\,
{\Gamma}_{M^{\prime}}^{\alpha^{\prime}}(\mbox {\boldmath$p$})
\right\} = \delta_{M {M^{\prime}}}\,
\delta_{\alpha {\alpha}^{\prime}}.
\label{ortm}
\end{eqnarray}
In addition, for identical particles the Pauli principle holds, which
reads
\begin{equation}
\Psi_{JM}(\stackrel{\circ}{P},p)=-P_{12}\Psi_{JM}(\stackrel{\circ}{P},p)
=(-1)^{I+1}C\left[\Psi_{JM}(\stackrel{\circ}{P},-p)\right]^\top C,
\end{equation}
where $I$ denotes the channel isospin.
This induces a definite transformation property of the
radial functions $g_{\alpha} (p_0,|\mbox{\boldmath $p$}|)$
on replacing $p_0 \rightarrow -p_0$, which is even or odd,
depending on $\alpha $. Also,
since the $P^{\rho_1\rho_2}$ amplitudes
do not have a definite symmetry we use instead
\begin{eqnarray}
\Gamma^{P^e}_M &=&
\frac {1}{\sqrt 2}(\Gamma^{P,+-}_M
+ \Gamma^{P,-+}_M),\nonumber \\
\Gamma^{P^o}_M &=&
\frac {1}{\sqrt 2}(\Gamma^{P,+-}_M - \Gamma^{P,-+}_M).
\label{Geven}
\end{eqnarray}
These functions have definite even(e) or odd(o) $\rho$-parity, which
allows us to define a definite symmetry behavior under particle
exchange.
We now discuss the $^1S_0$
channel and the deuteron channel in some detail.
\subsection{The $^1S_0$ channel}
For the two nucleon system in the $J=0$ state the relativistic wave
function consists of four states, i.e. $^1S_0^{++}$, $^1S_0^{--}$,
$^3P_0^{e}$, $^3P_0^{o}$, labeled by $1,\dots,4$ in the following.
The Dirac matrix representation of the spin structures are shown in
Table~\ref{tab:1s0}.
\begin{table}[h]
\caption{\label{tab:1s0} Spin angular momentum parts
$\tilde \Gamma_0^{\tilde\alpha}$
for the $J=0$ channel}
\[
\begin{array}{cc}
\hline\hline
\tilde\alpha&{\sqrt{8\pi} \;\;\tilde \Gamma}_0^{\tilde\alpha}\\[1ex]
\hline
^1S_0&-\gamma_5\\[1ex]
^3P_0&|\mbox{\boldmath $p$}|^{-1} ({\gamma\cdot p_1}-{\gamma\cdot p_2}) \gamma_5\\
\hline\hline
\end{array}
\]
\end{table}
Note the formally covariant relations for $|\mbox{\boldmath $p$}|$, and also for $p_0$
and $E$, used in the following:
\begin{equation}
p_0=\frac{P\cdot p}{M}, \quad E=\sqrt{\frac {(P\cdot p)^2}{M^2}-p^2+m^2},
\quad |\mbox{\boldmath $p$}|= \sqrt{\frac {(P\cdot p)^2}{M^2}-p^2}.
\nonumber
\end{equation}
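These formally covariant relations can be checked in the rest frame; a minimal numerical sketch (metric $(+,-,-,-)$; the values of $m$, $M$ and the relative momentum are illustrative):

```python
import numpy as np

def mdot(a, b):
    # Minkowski product with metric (+,-,-,-)
    return a[0] * b[0] - np.dot(a[1:], b[1:])

m, M = 0.938, 1.876                 # illustrative nucleon and two-nucleon masses (GeV)
p0, pvec = 0.05, np.array([0.1, -0.2, 0.3])

P = np.array([M, 0.0, 0.0, 0.0])    # rest frame of the two-nucleon system
p = np.array([p0, *pvec])

p0_cov  = mdot(P, p) / M
E_cov   = np.sqrt(mdot(P, p)**2 / M**2 - mdot(p, p) + m**2)
mag_cov = np.sqrt(mdot(P, p)**2 / M**2 - mdot(p, p))
```

In the rest frame the three invariants reduce, as they must, to $p_0$, $\sqrt{\mbf{p}^2+m^2}$ and $|\mbf{p}|$.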
Eq.~(\ref{gf}) along with Table~\ref{tab:1s0} may now be used as a
guideline to construct covariant expressions for the $J=0$ nucleon
nucleon Bethe-Salpeter amplitude. This will be achieved by allowing
the momenta involved to be off-shell. Introducing four Lorentz
invariant functions $h_i(P\cdot p,p^2)$, this amplitude is given by
\begin{eqnarray}
\Psi_{00}(P,p) &=
&h_1\gamma_5 +
h_2\frac {1}{m} (\gamma\cdot {p}_1\gamma_5 +\gamma_5 \gamma\cdot {p}_2)
\nonumber\\&&
+h_3\left(\frac{\gamma\cdot {p}_1-m}{m}
\gamma_5 -\gamma_5 \frac {\gamma\cdot {p}_2+m}{m}\right)\nonumber\\
&&+h_4 \frac{\gamma\cdot {p}_1-m}{m} \gamma_5 \frac {\gamma\cdot
{p}_2+m}{m}
\label{covarj0}
\end{eqnarray}
The connection between the invariant functions $h_i(P\cdot p,p^2)$ and the
functions $g_i (p_0,|{\bf p}|)$ given before is achieved by expanding
the Dirac matrices appearing in Eq.~(\ref{covarj0}) into the
$\Gamma^\alpha$. The resulting
relation is
\begin{eqnarray}
h_1 &= &- \sqrt{2}D_1\; ( g_1+ g_2)
- \mu p_0 |\mbox{\boldmath $p$}|^{-1}\; g_3 - 4m|\mbox{\boldmath $p$}|^{-1}D_0 \; g_4 \nonumber\\
h_2 &=&{\textstyle\frac 14} m |\mbox{\boldmath $p$}|^{-1} \; g_3 \nonumber\\
h_3 &=& 8 a_0m^2 (g_1+g_2) - {\textstyle\frac 12}\mu p_0 |\mbox{\boldmath $p$}|^{-1}\;g_3
- 8a_0m|\mbox{\boldmath $p$}|^{-1}\varepsilon (m-E) \; g_4 \nonumber\\
h_4 &=& -4a_0\sqrt{2}m^2(g_1+g_2) + 8a_0m^3|\mbox{\boldmath $p$}|^{-1}\;g_4
\end{eqnarray}
where $a_0=1/(16ME)$, $\varepsilon =2m+E$, $\mu=m/M$, $M=\sqrt{(p_1+p_2)^2}$,
and
\begin{eqnarray}
D_0&=&a_0(4p_0^2+16m^2-12E^2-M^2),\\
D_1&=&a_0(-M^2/4+p_0^2-E^2+16m^2+ME).
\end{eqnarray}
Note that only $h_2$ and $g_3$ are odd with respect to
$p_0\rightarrow -p_0$; all other functions are even.
\subsection{$^3S_1- ^3D_1$ channel}
In the deuteron channel the relativistic wave function consists of
eight states, i.e. $^3S_1^{++}$, $^3S_1^{--}$,$^3D_1^{++}$,
$^3D_1^{--}$, $^3P_1^{e}$, $^3P_1^{o}$, $^1P_1^{e}$, $^1P_1^{o}$,
labeled by $1,\dots,8$ in the following. The Dirac matrix
representation of the spin structures ${\tilde
\Gamma}^{\tilde\alpha}_M$ is shown in Table~\ref{tab:3s1}.
\begin{table}[h]
\caption{\label{tab:3s1} Spin angular momentum parts
$\tilde \Gamma_M^{\tilde\alpha}$
for the deuteron channel}
\[
\begin{array}{cc}
\hline\hline
\tilde\alpha&{\sqrt{8\pi}\;\;\tilde \Gamma}_M^{\tilde\alpha}\\[1ex]
\hline
^3S_1&{\gamma\cdot \epsilon_M}\\
^3D_1& -\frac{1}{\sqrt{2}}
\left[ {\gamma\cdot \epsilon_M}+\frac{3}{2}
({\gamma\cdot p_1}-{\gamma\cdot
p_2})p\cdot\epsilon_M|\mbox{\boldmath $p$}|^{-2}\right]\\
^3P_1& \sqrt{\frac{3}{2}}
\left[ \frac{1}{2} {\gamma\cdot \epsilon_M}
({\gamma\cdot p_1}-{\gamma\cdot p_2})
-p\cdot\epsilon_M \right]|\mbox{\boldmath $p$}|^{-1}\\
^1P_1&\sqrt{3} p\cdot\epsilon_M|\mbox{\boldmath $p$}|^{-1}\\
\hline\hline
\end{array}
\]
\end{table}
Again, generalizing the Dirac representation it is possible to achieve
a covariant form of the Bethe-Salpeter amplitude with eight Lorentz
invariant functions $h_i(P\cdot p,p^2)$,
\begin{eqnarray}
\Psi_{1M}(P,p) &=
&h_1 \gamma\cdot {\epsilon}_M
+h_2 \frac{p \cdot\epsilon_M}{m} \nonumber\\
&&+h_3 \left (\frac {\gamma\cdot p_1-m}{m} \gamma\cdot {\epsilon}_M +
\gamma\cdot {\epsilon}_M \frac{\gamma\cdot p_2+m}{m}\right)
\nonumber\\
&&+h_4 \left(\frac {\gamma\cdot p_1 + \gamma\cdot p_2}{m}\right)
\frac {p\cdot \epsilon_M}{m}\nonumber\\&&+
h_5 \left(\frac {\gamma\cdot p_1-m}{m} \gamma\cdot {\epsilon}_M -
\gamma\cdot {\epsilon}_M \frac{\gamma\cdot p_2+m}{m}\right)
\nonumber\\
&&+h_6 \left(\frac {\gamma\cdot p_1 - \gamma\cdot p_2-2m}{m}\right)
\frac {p\cdot \epsilon_M}{m}
\nonumber \\
&&+\frac {\gamma\cdot p_1-m}{m}\left(h_7 {\gamma\cdot \epsilon_M}
+h_8 \frac{p\cdot \epsilon_M}{m} \right )\frac{\gamma\cdot p_2+m}{m}
\label{covar}
\end{eqnarray}
For the deuteron, the functions $h_i(P\cdot p,p^2)$ and $g_i(p_0,|\mbox{\boldmath $p$}|)$ are
connected via
\begin{eqnarray}
h_1 &= &D^+_1\; (g_3-\sqrt{2}g_1)
+ D^-_1\; (g_4-\sqrt{2}g_2)
+ {\textstyle\frac 12}\sqrt{6}\mu p_0|\mbox{\boldmath $p$}|^{-1} \;g_5
+ \sqrt{6}m D_0 |\mbox{\boldmath $p$}|^{-1} \;g_6 \nonumber \\
h_2 &= &\sqrt{2}(D^-_2\;g_1 + D_2^+\;g_2)
- D_3^+\;g_3
- D_3^-\;g_4 \nonumber\\
&&-{\textstyle\frac 12}\sqrt{6} \mu p_0|\mbox{\boldmath $p$}|^{-1}\;g_5
-\sqrt{6} D_4 m|\mbox{\boldmath $p$}|^{-1} \;g_6
+\sqrt{3} m^2 |\mbox{\boldmath $p$}|^{-1} E^{-1}\;g_7\nonumber \\
h_3 &= &-{\textstyle\frac 14}\sqrt{6}m |\mbox{\boldmath $p$}|^{-1}\; g_5\nonumber \\
h_4 &= &8a_1\sqrt{2}mp_0\;(g_1-g_2)
+8a_2\varepsilon \;mp_0(g_3-g_4)\nonumber \\&&
-16a_0\sqrt{3}m^2 |\mbox{\boldmath $p$}|^{-1}\;(p_0g_7-Eg_8)\nonumber \\
h_5 &= &16a_0m^2\;[g_4+g_3-\sqrt{2}\,(g_1+g_2)]\nonumber \\&&
+8a_0\sqrt{6}m |\mbox{\boldmath $p$}|^{-1} \;[p_0E\;g_5+
(2m^2-E^2)\;g_6] \nonumber \\
h_6 &= &4a_1\sqrt{2}m[D_6^-\;g_1 + D_6^+\; g_2]
- 4 m^2 |\mbox{\boldmath $p$}|^{-2} [D_5^+ \;g_3 + D_5^- \;g_4]\nonumber \\&&
-16a_0\sqrt{6}m^2 |\mbox{\boldmath $p$}|^{-1}\;(mg_6-M_dg_7)\nonumber \\
h_7 &=& 4a_0 m^2\;[\sqrt{2}(g_1+g_2)-(g_3+g_4)]
-4a_0\sqrt{6}m^3 |\mbox{\boldmath $p$}|^{-1}\; g_6 \nonumber \\
h_8 &=& 4a_0m^3 |\mbox{\boldmath $p$}|^{-2}\;
[\sqrt{2}(m-E)(g_1+g_2)-(2E+m)(g_3+g_4)]\nonumber \\&&
+4a_0\sqrt{6}m^3 |\mbox{\boldmath $p$}|^{-1} \;g_6
\end{eqnarray}
with $a_0$, $\varepsilon $, $\mu$, $D_0$ given above, $a_1=a_0 m/(m+E)$,
$a_2=a_1/(m-E)$, and the dimensionless functions
\begin{eqnarray}
D^\pm_1&=&a_0(4p_0^2+16m^2-M_d^2-4E^2\pm 4M_dE)\\
D^\pm_2&=&a_1(16m^2+16mE+4E^2+M_d^2-4p_0^2\pm 4M_d\varepsilon )\\
D^\pm_3&=&a_2[-12mE^2+2M_d^2E-8p_0^2E+16m^3+mM_d^2-4mp_0^2+8E^3\nonumber\\
&& \pm(16m^2M_d+ 4mM_dE- 8E^2M_d)]\\
D_4&=&a_0(16m^2-4E^2-p_0^2+m^2)\\
D^\pm_5&=&a_0(-2E^2+4m^2+4mE\pm \varepsilon M_d)\\
D_6^\pm&=&a_0(2\varepsilon \pm M_d)
\end{eqnarray}
Note now that $h_3$, $h_4$ and $g_5$, $g_8$ are odd, and all other
functions are even under $p_0\rightarrow -p_0$.
\section{Construction of
the light-front wave function of the two nucleon system
from the Bethe-Salpeter amplitude}
\label{sect:lfbs}
We now compare the covariant amplitudes of the Bethe-Salpeter approach
given above to the covariant light front form. The four-vector
defining the light-front plane is denoted by $\omega$, where
$\omega=(1,0,0,-1)$ leads to the standard light front formulation
defined on the plane $t+z=0$. The formal relation between the
light-front wave functions $\Phi(k_1,k_2,p,\omega \tau )$, depending
on the on-shell momenta $k_1$, $k_2$, and $p=k_1+k_2-\omega\tau$, and
the Bethe-Salpeter amplitude $\Psi (p_1, p_2)$, where $p_1$ and $p_2$
are off-shell momenta has been given in Ref.~\cite{car98},
\begin{equation} \label{bs7}\Phi
(k_1,k_2,p,\omega \tau )=
\frac{k_1\cdot\omega \,k_2\cdot\omega}{\pi \omega\cdot p
\sqrt{m}}\int_{-\infty }^{+\infty }\Psi
(k_1-\omega\tau/2+\omega\beta, k_2-\omega\tau/2-\omega\beta )\,d\beta.
\end{equation}
In the theory on the null plane the integration of Eq.~(\ref{bs7})
corresponds to an integration over $dk^{-}$. Since $k_1$ and $k_2$
are on the mass shell it is possible to use the Dirac equation after
making the replacement of the arguments indicated in Eq.~(\ref{bs7}).
This will be done explicitly for the $J=0$ and the deuteron channel in
the following.
\subsection{$^1S_0$ channel}
Using the Dirac equations $\bar u(k_1)(\gamma\cdot k_1 -m) =0$, and
$(\gamma\cdot k_2 +m) C u(k_2)^\top=0$ one obtains the following form
of the light front wave function from the Bethe-Salpeter amplitude
using Eq.~(\ref{bs7})
\begin{equation}
\Psi_{00}\rightarrow
H^{(0)}_1 \gamma_5 +
2 H^{(1)}_2 \frac {\beta \gamma\cdot {\omega}}{m\omega\cdot
P}\gamma_5,
\label{1s0lf}
\end{equation}
The functions $H_1(s,x)$ and $H_2(s,x)$, depending now on
$x=\omega\cdot k_1/\omega\cdot P$ and $s=(k_1+k_2)^2=4(q^2+m^2)$ are
obtained from the functions $h_i(P\cdot p,p^2)$ through the remaining
integrals over $\beta$ implied in Eq.~(\ref{bs7}),
\begin{eqnarray}
H^{(0)}_i(s,x)&=& N \int {h_i((1-2x)(s-M^2)+\beta \omega\cdot P,
-s/4+m^2+(2x-1)\beta)\, \omega\cdot P d\beta} \nonumber\\
&\equiv&N \int {\tilde h_i (s,x,\beta^{\prime})\,
d\beta^{\prime} },\nonumber\\
H^{(k)}_i(s,x)&\equiv&N \int { \tilde h_i(s,x,\beta^{\prime}) \,
(\beta^{\prime})^k d\beta^{\prime} }
\label{eqn:H}
\end{eqnarray}
where the variable $\beta^{\prime}=\beta \omega\cdot P$ has been
introduced, and $N=x(1-x)$,
$1-x=\omega\cdot k_2/\omega\cdot P$.
The functions $h_3$ and $h_4$ do not
contribute. Instead of the four structures appearing in the
Bethe-Salpeter wave function, the light front wave function contains
only two. Note that the second term in Eq.~(\ref{1s0lf}) is determined
by the purely relativistic component of the Bethe-Salpeter amplitude.
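As a numerical illustration of the moments defined in Eq.~(\ref{eqn:H}), the following sketch evaluates $H^{(k)}_i(s,x)$ for a model integrand $\tilde h_i$; the Lorentzian profile and all numerical parameters are illustrative assumptions, not a solution of the Bethe-Salpeter equation.

```python
import numpy as np

# Sketch of the moments in Eq. (eqn:H): H^(k)_i(s,x) as weighted beta'
# integrals of a model integrand h~_i.  The Lorentzian profile and the
# numerical parameters are illustrative assumptions.
def h_tilde(s, x, beta, width=1.0):
    # even toy profile, decaying fast enough for the k = 0, 1 moments
    return 1.0 / (beta**2 + width**2)**2

def H_moment(k, s, x, cutoff=50.0, npts=200001):
    beta = np.linspace(-cutoff, cutoff, npts)
    y = h_tilde(s, x, beta) * beta**k
    trap = float(np.sum((y[1:] + y[:-1]) * np.diff(beta)) / 2.0)  # trapezoid
    return x * (1.0 - x) * trap          # prefactor N = x(1-x)

H0 = H_moment(0, s=4.5, x=0.3)   # zeroth moment, here N * pi/2
H1 = H_moment(1, s=4.5, x=0.3)   # odd integrand: first moment vanishes
```

For an even profile the first moment vanishes identically, consistent with the parity statements made above for the even/odd functions.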
\subsection{$^3S_1$-$^3D_1$ case}
In the deuteron case, starting from formula Eq.~(\ref{bs7}), replacing
the momenta $p_i$, and applying the Dirac equation we arrive at
\begin{eqnarray}
\Psi_{1M}&\rightarrow &
H^{(0)}_1 \gamma\cdot {\epsilon_M} +H^{(0)}_2 \frac {k\cdot \epsilon}{m}
+[H^{(1)}_2+2H^{(1)}_5]
\frac{\omega\cdot \epsilon}{m\omega\cdot P} \nonumber \\
&&+2H^{(1)}_6 \frac{k\cdot\epsilon \gamma\cdot {\omega}}{m^2
\omega\cdot P}
+2 H^{(1)}_3 \frac { \gamma\cdot {\epsilon} \gamma\cdot {\omega}
-\gamma\cdot {\omega} \gamma\cdot {\epsilon} }{\omega\cdot P}
\nonumber\\&&
+[2H^{(2)}_6+2H^{(2)}_7] \frac { \omega\cdot \epsilon \gamma\cdot
{\omega} }
{m^2(\omega\cdot P)^2},
\label{bsd5}
\end{eqnarray}
where $H_i^{(k)}$ are defined in eq.~(\ref{eqn:H}). In this case the
functions $h_4$ and $h_8$ do not contribute. The expression
$(\gamma\cdot {\epsilon}\gamma\cdot {\omega}-\gamma\cdot
{\omega}\gamma\cdot {\epsilon})$ in the term with $H_3$ in
Eq.~(\ref{bsd5}) can be transformed to a different one to compare
directly to the light front form given in Ref.~\cite{car98}. Using in
addition the on-shellness of the momenta $k_1$ and $k_2$ the resulting
form is
\begin{eqnarray}
\label{bsd7}
\bar u_1(\gamma\cdot {\epsilon}\gamma\cdot {\omega}
-\gamma\cdot {\omega}\gamma\cdot {\epsilon})C\bar u_2^\top &=&
\frac{4}{s}\bar u_1[-i\gamma_5e_{\mu\nu\rho\gamma}
\epsilon_{\mu}k_{1\nu}k_{2\rho}\omega_{\gamma}\nonumber\\&&
+k\cdot\epsilon\;\omega\cdot P-m\;\gamma\cdot {\epsilon} \;\omega\cdot P
\nonumber\\
&&-\frac{1}{2}(s-M^2)(x-\frac{1}{2})\omega\cdot\epsilon
\nonumber\\&&
+\frac{1}{2}m(s-M^2)\,\frac{\gamma\cdot {\omega} \;\omega\cdot\epsilon}
{\omega\cdot P}] C\bar u_2^\top
\end{eqnarray}
The final form of light front wave function then is
\begin{eqnarray}
\Psi_{1M} &\rightarrow &
H_1^{\prime} \gamma\cdot {\epsilon}_M +H_2^{\prime} \frac {k\cdot
\epsilon}{m}
+H_3^{\prime} \frac{\omega\cdot \epsilon}{m\,\omega\cdot P}
+H_4^{\prime} \frac{k\cdot \epsilon\; \gamma\cdot {\omega}}
{m^2 \omega\cdot P} \nonumber \\&&
+H_5^{\prime} i \gamma_5 e_{\mu \nu \rho
\sigma}\epsilon_{\mu}{k_1}_{\nu}
{k_2}_{\rho}{\omega}_{\sigma}
+H_6^{\prime} \frac { \omega\cdot \epsilon\;
\gamma\cdot {\omega} }{m^2(\omega\cdot P)^2},
\end{eqnarray}
with the functions
\begin{eqnarray}
H_1^{\prime} &=& H^{(0)}_1-\frac{4}{s}2H^{(1)}_3, \nonumber \\
H_2^{\prime} &=& H^{(0)}_2+\frac{4}{s}2H^{(1)}_3, \nonumber \\
H_3^{\prime} &=& [H^{(1)}_2+2H^{(1)}_5]
-\frac{ (s-M^2)}{s}(2x-1) 2H^{(1)}_3, \nonumber \\
H_4^{\prime} &=& 2H^{(1)}_6, \nonumber \\
H_5^{\prime} &=& \frac{4}{ms}2H^{(1)}_3, \nonumber \\
H_6^{\prime} &=& [2H^{(2)}_6+2H^{(2)}_7]
+2 \frac{s-M^2}{s}m^2 2H^{(1)}_3.
\label{eqn:conn}
\end{eqnarray}
Provided the invariant functions $h_i$ are given from a solution of
the Bethe-Salpeter equation the above relations allow us to directly
calculate the corresponding light front components of the wave
functions.
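The linear relations of Eq.~(\ref{eqn:conn}) are straightforward to implement; the sketch below maps a given set of moments $H^{(k)}_i$ to the light front components $H^{\prime}_i$. The dictionary keys and all numerical inputs are placeholders for illustration, not computed amplitudes.

```python
# Sketch of the linear relations of Eq. (eqn:conn) mapping the moments
# H^(k)_i to the light front components H'_i.  Keys ('i', k) label
# H^(k)_i; the numerical inputs below are placeholders.
def lf_components(H, s, M, m, x):
    c = (4.0 / s) * 2.0 * H[('3', 1)]      # recurring 2 H^(1)_3 piece
    Hp = {}
    Hp[1] = H[('1', 0)] - c
    Hp[2] = H[('2', 0)] + c
    Hp[3] = (H[('2', 1)] + 2.0 * H[('5', 1)]
             - (s - M**2) / s * (2.0 * x - 1.0) * 2.0 * H[('3', 1)])
    Hp[4] = 2.0 * H[('6', 1)]
    Hp[5] = (4.0 / (m * s)) * 2.0 * H[('3', 1)]
    Hp[6] = (2.0 * H[('6', 2)] + 2.0 * H[('7', 2)]
             + 2.0 * (s - M**2) / s * m**2 * 2.0 * H[('3', 1)])
    return Hp

# placeholder moments; with H^(1)_3 = 0 the map leaves H_1, H_2
# unchanged and H'_5 vanishes, as Eq. (eqn:conn) implies
H = {('1', 0): 1.0, ('2', 0): 2.0, ('2', 1): 0.5, ('3', 1): 0.0,
     ('5', 1): 0.25, ('6', 1): 0.3, ('6', 2): 0.1, ('7', 2): 0.2}
Hp = lf_components(H, s=4.2, M=1.876, m=0.939, x=0.4)
```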
Thus, the projection of the Bethe-Salpeter amplitudes to the light
front reduces the number of independent functions to six instead of
eight for the $^3S_1$-$^3D_1$ channel and to two instead of four for the
$^1S_0$ channel. The reduction is because the nucleon momenta $k_1$
and $k_2$ are on-mass-shell in the light front formalism. The result
is based on the application of the Dirac equation and the use of the
covariant form. Any other representation (e.g. spin orbital momentum
basis) also leads to a reduction of the number of amplitudes for the
two nucleon wave function that is however less transparent. For an
early consideration compare, e.g. Ref.~\cite{GT60}.
\section{Integral representation method}
A deeper insight into the connection of Bethe-Salpeter amplitudes and
light front wave functions will be provided within the integral
representation proposed by Nakanishi~\cite{nakanishi}. This method
has recently been fruitfully applied to solve the Bethe-Salpeter
equation both in ladder approximation and beyond within scalar
theories~\cite{williams}. In this framework the following ansatz for
radial Bethe-Salpeter amplitudes of orbital momentum $\ell$ has been
proposed,
\begin{equation}
\phi_\ell(P\cdot p,p^2)=\int_0^\infty
d\alpha\;\int_{-1}^{+1} dz \;\frac{g_\ell(\alpha,z)}
{(\alpha+\kappa^2-p^2-z\,P\cdot p-i\epsilon)^n},
\label{nak1}
\end{equation}
where $g_\ell(\alpha,z)$ are the densities or weight functions,
$\kappa^2=m^2-M_d^2/4$ and the integer $n \geq 2$. The weight functions
$g_\ell(\alpha,z)$ are continuous in $\alpha$ and vanish at the
boundary points $z=\pm 1$. The form of eq.~(\ref{nak1}) opens the
possibility to solve the Bethe-Salpeter amplitude in the whole
Minkowski space while commonly solutions are restricted to the
Euclidean space only. In fact the densities could be considered as the
main object of the Bethe-Salpeter theory, because knowing them allows
one to calculate all relevant amplitudes.
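As a minimal sketch of Eq.~(\ref{nak1}), the amplitude can be evaluated numerically at a spacelike point, where the denominator is strictly positive and the $i\epsilon$ prescription can be dropped. The toy density below is an assumption, chosen only to satisfy the stated requirements (continuous in $\alpha$, vanishing at $z=\pm 1$).

```python
import numpy as np

# Sketch of the Nakanishi representation, Eq. (nak1), at a spacelike
# point (p^2 < 0, real P.p).  The toy density g(alpha,z) is an
# illustrative assumption, not a solution of the equation.
def phi_ell(p2, Pdotp, kappa2=0.1, n=2, a_max=40.0, na=2000, nz=201):
    alpha = np.linspace(0.0, a_max, na)
    z = np.linspace(-1.0, 1.0, nz)
    A, Z = np.meshgrid(alpha, z, indexing="ij")
    g = np.exp(-A) * (1.0 - Z**2)               # toy weight function
    integrand = g / (A + kappa2 - p2 - Z * Pdotp)**n
    da, dz = alpha[1] - alpha[0], z[1] - z[0]
    return float(np.sum(integrand) * da * dz)   # crude rectangle rule

phi_shallow = phi_ell(p2=-1.0, Pdotp=0.0)
phi_deep = phi_ell(p2=-4.0, Pdotp=0.0)   # deeper spacelike point
```

The amplitude decreases monotonically for deeper spacelike $p^2$, as expected from the positivity of the denominator.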
For the realistic deuteron we need to extend the Nakanishi form to the
spinor case, which has not been done so far. The key point is to
choose the proper spin-angular momentum functions and to perform the
integration over angles in the Bethe-Salpeter equation. The choice of
the covariant form of the amplitude allows us to establish a system of
equations for the densities $g_{ij}(\alpha,z)$, suggesting the
following general form for the radial functions $h_i(P\cdot p,p^2)$ (even in
$P\cdot p$)
\begin{eqnarray}
h_i(P\cdot p,p^2)&=&\int_0^\infty
d\alpha\;\int_{-1}^{+1} dz\; \left\{\frac{g_{i1}(\alpha,z)}
{(\alpha+\kappa^2-p^2-z\,P\cdot p)^n}\right.\nonumber\\&&
\qquad\qquad\qquad
+\frac{g_{i2}(\alpha,z)\;p^2}
{(\alpha+\kappa^2-p^2-z\,P\cdot p)^{n+1}} \nonumber\\ &&
\qquad\qquad\qquad
\left.+\frac{g_{i3}(\alpha,z)\;
(P\cdot p)^2}{(\alpha+\kappa^2-p^2-z\,P\cdot p)^{n+2}}\right\}.
\label{nak2}
\end{eqnarray}
For the functions that are odd in $P\cdot p$ the whole integrand is
multiplied by a factor $P\cdot p$. Although the number of densities is
now larger, the total number of {\em independent} functions is still eight.
The form given in eq.~(\ref{nak2}) is valid only for the deuteron
case. The continuum amplitudes of the $^1S_0$ state, e.g., require a
different form.
The basic point now is that using this integral representation allows us
to perform the integration over $\beta^{\prime}$ in the expressions of
eq.~(\ref{bsd5}). Substituting the arguments of the functions $h_i$
into the integral representation eq.~(\ref{nak2}) leads to a
denominator of the form
\begin{equation}
{\cal D}^k(\alpha,z;x,s,\beta')=
\left(\alpha+\frac{s}{4}\,(1+(2x-1)z)-\beta^{\prime}\, (2x-1+z)-i\epsilon\right)^k
\nonumber
\end{equation}
Then, using the identity for an analytic function $F(z)$
\begin{equation}
\int_{-1}^{+1} dz\, \int_0^\infty d\beta^{\prime}\, \frac{F(z)}
{{\cal D}^k(\alpha,z;x,s,\beta')}
=\frac{i\pi}{(k-1)}\frac{F(1-2x)}
{(\alpha+sx(1-x))^{k-1}}
\end{equation}
allows us to express the radial amplitudes in terms of
Nakanishi densities. For example, $H_5(s,x)$ reads
\begin{equation}
H_5(s,x)=\frac{x(1-x)}{s} \int d\alpha
\left\{ \frac {g_{51}(\alpha,1-2x)}{(\alpha+sx(1-x))}+
\frac{g_{52}(\alpha,1-2x)sx(1-x) }{(\alpha+sx(1-x))^2}\right\}
\end{equation}
Note that the dependence of the amplitude on the light front argument
$x$ is fully determined by the dependence of the density on the
variable $z=1-2x$, which has also been noted in Ref.~\cite{car98} for
the Wick-Cutkosky model.
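A minimal numerical sketch of the $H_5(s,x)$ expression above, for assumed exponential toy densities $g_{51}$, $g_{52}$ (again continuous in $\alpha$ and vanishing at $z=\pm 1$), reads:

```python
import numpy as np

# Sketch of H_5(s,x) in terms of Nakanishi densities at z = 1-2x.
# The exponential toy densities g51, g52 are illustrative assumptions.
def H5(s, x, a_max=60.0, na=200000):
    alpha = np.linspace(0.0, a_max, na)
    da = alpha[1] - alpha[0]
    z0 = 1.0 - 2.0 * x
    g51 = np.exp(-alpha) * (1.0 - z0**2)     # toy density at z = 1-2x
    g52 = 0.5 * g51
    sx = s * x * (1.0 - x)
    integrand = g51 / (alpha + sx) + g52 * sx / (alpha + sx)**2
    return x * (1.0 - x) / s * float(np.sum(integrand) * da)

# the x-dependence enters only through z0 = 1-2x and s x(1-x), so for
# these densities H_5 is symmetric under x -> 1-x
v = H5(4.2, 0.3)
```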
This fully completes the connection between the Bethe-Salpeter
amplitude and the light front form. Evaluation of the Nakanishi
integrals does not lead to cancellations of functions: although some
functions drop out for the reasons given above, all spin-orbital
momentum functions (or all densities) in principle contribute to the
light front wave functions.
Once the Bethe-Salpeter
amplitudes are given (or the Nakanishi densities) the
light front wave function can explicitly be calculated. The reduction
of the number of amplitudes is due to $k_1$ and $k_2$ being on-shell
in the light front form. The Nakanishi spectral densities of the
Bethe-Salpeter amplitudes lead directly to the light front wave
function.
\section{Conclusion}
Even more than 60 years after its discovery the deuteron is still an
object of intense research. In recent years the focus on relativistic
aspects has increased, in particular in the context of $ed$ scattering
and relativistic $pd$ reactions. Meanwhile the relativistic description
of the deuteron (i.e. the nucleon-nucleon system) has achieved
considerable progress. Two successful relativistic approaches are
based on the Bethe-Salpeter equation or the light front dynamics. In
this paper we provide a detailed comparison of the different
approaches on the basis of the spin-orbital amplitudes and the radial
dependence on the basis of the Nakanishi integral representation. In
this context the $P$ waves of the deuteron play an important role. To
reach this conclusion the covariant form has been given in terms of
the partial wave representation using the $\rho$-spin notation.
We would like to stress that the two relativistic approaches have
shown qualitatively similar results in the description of the
electrodisintegration near threshold. The functions $f_5$ and $g_2$
(notation of Ref.~\cite{car98}) may be related to the pair current in
the light front approach whereas the functions $h_5$ and $h_2$ play
this role in the Bethe-Salpeter approach. The results presented here
allow us to specify this relation on a more fundamental level.
\section{Introduction}
Recent impressive progress in nanostructuring
(e-beam lithography, single atom manipulation with a
scanning tunneling microscope tip, etc.) has made it
possible to control the quantization effects in
nanostructures by varying their size and topology.
So far, semiconducting structures have been most
commonly used for this purpose; there, an elegant
approach developed by Landauer~\cite{Landauer}
relates mesoscopic transport directly to the quantum
transmission probability, which is determined by the
specific configuration of quantum levels and
available tunneling barriers. Changing the
confinement potential via nanostructuring makes it
possible to tune both the quantum levels and the
tunneling probabilities. Extended by
B\"{u}ttiker~\cite{Buttiker} to the multi-terminal
measurement geometry, the Landauer-B\"{u}ttiker
formalism has been widely and very successfully used
in the interpretation of numerous mesoscopic
experiments~\cite{Beenakker}.
In comparison to the semiconducting and normal
systems, {\it superconducting structures} have been
studied much less so far. In this report, it is
worth first to ask a few simple questions like: why
do we want to make such structures, what interesting
new physics do we expect, and why do we want to
focus on superconducting (and not, for example,
normal metallic) nanostructured materials?
First of all, by making low dimensional systems, one
creates an artificial potential in which charge
carriers or flux lines are confined. The confinement
length scale $L_{A}$ of an elementary "plaquette" A,
gives roughly the expected energy scale $E=\hbar^2
\pi^2 n^2/(2 m L_A^2)$. The concentration of charge
carriers or flux lines can be controlled by varying
the gate voltage in two-dimensional (2D) electron
gas systems~\cite{Ensslin} or the applied magnetic
field in superconductors~\cite{Pannetier84}. In this
situation, different commensurability effects
between the fixed number of elements A in an array
and a tunable number of charge or flux carriers are
observed.
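The confinement energy scale quoted above is easy to evaluate; the sketch below does so for a free electron, with plaquette sizes that are illustrative choices rather than sample parameters from the text.

```python
import math

# Sketch: confinement energy E = hbar^2 pi^2 n^2 / (2 m L_A^2) for a
# free electron.  The plaquette sizes are illustrative choices.
HBAR = 1.054571817e-34    # J s
M_E = 9.1093837015e-31    # kg
EV = 1.602176634e-19      # J per eV

def confinement_energy_eV(L, n=1, m=M_E):
    return HBAR**2 * math.pi**2 * n**2 / (2.0 * m * L**2) / EV

E_10nm = confinement_energy_eV(10e-9)     # a few meV
E_100nm = confinement_energy_eV(100e-9)   # tens of micro-eV
```

The $1/L_A^2$ scaling makes explicit why nanometer dimensions are needed to obtain sizeable level spacings in normal metals.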
Secondly, modifying the sample topology in those
systems creates a unique possibility to impose the
desired boundary conditions, and thus almost
"impose" the properties of the sample. A Fermi
liquid or a superconducting condensate confined
within such a system will be subjected to severe
constraints and, as a result, the properties of
these systems will be strongly affected by the
boundary conditions.
While a normal metallic system should be considered
quantum-mechanically by solving the Schr\"{o}dinger
equation:
\begin{equation}
\frac{1}{2 m}
\left( - \imath \hbar \vec{\nabla} - e \vec{A} \right)^{2} \Psi +U \: \Psi
= E \: \Psi \: ,
\label{e}
\end{equation}
a superconducting system is described by the two
coupled Ginzburg-Landau (GL) equations:
\begin{equation}
\frac{1}{2m^{\star}}(-i \hbar\vec{\nabla}-e^{\star}
\vec{A})^{2}\Psi_s+\beta |\Psi_s|^{2} \Psi_s = -\alpha \Psi_s
\label{GLFree1}
\end{equation}
\begin{equation}
\vec{j_{S}}=\vec{\nabla} \times \vec{h} = \frac{e^{\star}}{2 m^{\star}}
\left[ \Psi_s^{\star} (- \imath \hbar \vec{\nabla} - e^{\star} \vec{A})
\Psi_s +
\Psi_s ( \imath \hbar \vec{\nabla} - e^{\star} \vec{A}) \Psi_s^{\star}
\right] \: ,
\label{GL2A}
\end{equation}
with $\vec{A}$ the vector potential which
corresponds to the microscopic field
$\vec{h}=\vec{\nabla} \times \vec{A}/\mu_{0}$, $U$ the potential
energy, $E$ the total energy, $\alpha$ a temperature
dependent parameter changing sign from $\alpha > 0$
to $\alpha < 0$ as $T$ is decreased through $T_{c}$,
${\beta}$ a positive temperature independent
constant, $m^{\star}$ the effective mass which can
be chosen arbitrarily and is generally taken as
twice the free electron mass $m$.
Note that the first GL~equation
(Eq.~(\ref{GLFree1})), with the nonlinear term
$\beta |\Psi_s|^{2} \Psi_s$ neglected, is the
analogue of the Schr\"{o}dinger equation
(Eq.~(\ref{e})) with $U$ = 0, when making the
following substitutions: $\Psi_{s}\,(\Psi)$,
$e^{\star}\,(e)$, $-\alpha\,(E)$ and $m^{\star}\,(m)$. The
superconducting order parameter $\Psi_{s}$
corresponds to the wave function $\Psi$; the
effective charge $e^{\star}$ in the GL equations is
$2 e$, i.e. the charge of a Cooper pair; the
temperature dependent GL parameter $\alpha$
\begin{equation}
- \alpha = \frac{\hbar^{2}}{2 m^{\star} \: \xi^{2}(T)}
\label{GLAlpha}
\end{equation}
plays the role of $E$ in the Schr\"{o}dinger
equation. Here $\xi(T)$ is the temperature dependent
coherence length:
\begin{equation}
\xi(T)=\xi(0) \, \left( 1-\frac{T}{T_{c0}} \right)^{-1/2}.
\label{XiT}
\end{equation}
The boundary conditions for interfaces between
normal metal-vacuum and superconductor-vacuum are,
however, different (Fig.~\ref{FD}):
\begin{equation}
\left. \Psi\Psi^{\star} \right|_{b}=0
\label{EBound}
\end{equation}
\begin{equation}
\left. (- \imath \hbar \vec{\nabla} - e^{\star} \vec{A}) \Psi_{s}
\right|_{\perp , b}=0
\label{GLBound}
\end{equation}
i.e. for normal metallic systems the density is
zero, while for superconducting systems, the
gradient of $\Psi_s$ (for the case $\vec{A}=0$) has
no component perpendicular to the boundary. As a
consequence, the supercurrent cannot flow through
the boundary. The nucleation of the superconducting
condensate is favored at the superconductor/ vacuum
interfaces, thus leading to the appearance of
superconductivity in a surface sheet with a
thickness $\xi(T)$ at the third critical field
$H_{c3}(T)$.
\begin{figure}[h]
\centerline{\psfig{figure=lafig1.eps}}
\caption{Boundary conditions for interfaces between normal
metal-vacuum and
\mbox{superconductor}-vacuum.}
\label{FD}
\end{figure}
For bulk superconductors the surface-to-volume ratio
is negligible and therefore superconductivity in the
bulk is not affected by a thin superconducting
surface layer. For submicron superconductors with
antidot arrays, however, the boundary conditions
(Eq.~(\ref{GLBound})) and the surface
superconductivity introduced through them, become
very important if $L_{A}\leq\xi(T)$. The advantage
of superconducting materials in this case is that it
is not even necessary to go down to the nanometer
scale (like for normal metals), since for $L_{A}$ of
the order of 0.1-1.0~$\mu m$ the temperature range
where $L_{A}
\leq \xi(T)$ spreads over $0.01-0.1~K$ below
$T_{c}$ due to the divergence of $\xi(T)$ at
$T\rightarrow T_{c0}$ (Eq.~(\ref{XiT})).
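The width of this temperature window follows directly from Eq.~(\ref{XiT}); the sketch below checks it for illustrative numbers ($\xi(0)=110\:nm$, the value fitted later for the Al line, $L_A=0.5\:\mu m$ and $T_{c0}=1.2\:K$).

```python
# Sketch: width of the window below T_c0 where xi(T) >= L_A, from
# Eq. (XiT).  xi(0), L_A and T_c0 are illustrative numbers for Al.
def xi(T, Tc0, xi0):
    return xi0 * (1.0 - T / Tc0) ** -0.5

def window_below_Tc(L_A, Tc0, xi0):
    # xi(T) >= L_A  is equivalent to  Tc0 - T <= Tc0 * (xi0 / L_A)**2
    return Tc0 * (xi0 / L_A) ** 2

dT = window_below_Tc(L_A=0.5e-6, Tc0=1.2, xi0=110e-9)   # ~0.06 K
```

The resulting window of a few hundredths of a kelvin is consistent with the $0.01-0.1~K$ range quoted above.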
In principle, the mesoscopic regime
$L_{A}\leq\xi(T)$ can be reached even in bulk
superconducting samples with $L_{A}\sim$ $1~cm$,
since $\xi(T)$ diverges. However, the temperature
window where $L_{A}\leq\xi(T)$ is so narrow, not
more than $\sim$ $1~nK$ below $T_{c0}$, that one
needs ideal sample homogeneity and a perfect
temperature stability.
In the mesoscopic regime, $L_{A}\leq\xi(T)$, which
is easily realized in (perforated) nanostructured
materials, the surface superconductivity can cover
the whole available space occupied by the material,
thus spreading superconductivity all over the
sample. It is then evident that in this case surface
effects play the role of bulk effects.
Using the similarity between the linearized
GL~equation (Eq.~(\ref{GLFree1})) and the
Schr\"{o}dinger equation (Eq.~(\ref{e})), we can
formalize our approach as follows: since the
parameter
-$\alpha$ (Eqs.~(\ref{GLFree1}) and~(\ref{GLAlpha})) plays the role of
energy $E$ (Eq.~(\ref{e})), then {\it the highest
possible temperature $T_{c}(H)$ for the nucleation
of the super\-con\-duc\-ting state in the presence
of a magnetic field $H$ always corresponds to the
lowest Landau level $E_{LLL}(H)$} found by solving
the Schr\"{o}dinger equation (Eq.~(\ref{e})) with
"superconducting" boundary
conditions~(Eq.~(\ref{GLBound})).
Figure~\ref{FE} illustrates the application of this
rule to the calculation of the upper critical field
$H_{c2}(T)$. Indeed, take the well-known
classical Landau solution for the lowest level in
bulk samples, $E_{LLL}(H)=\hbar\omega/2$, where
$\omega = e^{\star} \mu_0 H / m^{\star}$ is the
cyclotron frequency. Then, from $-\alpha =
E_{LLL}(H)$ we have
\begin{equation}
\frac{\hbar^{2}}{2 m^{\star} \: \xi^{2}(T)}=
\left. \frac{\hbar\omega}{2}\right|_{H=H_{c2}}
\label{ha}
\end{equation}
and with the help of Eq.~(\ref{GLAlpha}), we obtain
\begin{equation}
\mu_{0} H_{c2}(T)=\frac{\Phi_{0}}{2 \pi \xi^{2}(T)}
\label{hc2}
\end{equation}
with $\Phi_0 = h/e^{\star} = h/2 e$ the
superconducting flux quantum.
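Numerically, Eq.~(\ref{hc2}) gives fields of a few tens of millitesla for the coherence lengths relevant here; the evaluation below at $\xi = 110\:nm$ (the zero-temperature value fitted later for the Al line) is an illustrative choice.

```python
import math

# Sketch of Eq. (hc2): mu0 Hc2(T) = Phi0 / (2 pi xi(T)^2), evaluated
# for xi = 110 nm as an illustrative choice.
PHI0 = 2.067833848e-15    # superconducting flux quantum h/2e, in Wb

def mu0_Hc2(xi_T):
    return PHI0 / (2.0 * math.pi * xi_T**2)

B_c2 = mu0_Hc2(110e-9)    # about 0.027 T
```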
\begin{figure}[h]
\centerline{\psfig{figure=lafig2.eps}}
\caption{Landau level scheme for a particle in a magnetic field. From the
lowest Landau level $E_{LLL}(H)$ the second critical field $H_{c2}(T)$
is derived (solid line).}
\label{FE}
\end{figure}
In nanostructured superconductors, where the
boundary conditions (Eq.~(\ref{GLBound})) strongly
influence the Landau level scheme, $E_{LLL}(H)$ has
to be calculated for each different confinement
geometry. By measuring the shift of the critical
temperature $T_c(H)$ in a magnetic field, we can
compare the experimental $T_{c}(H)$ with the
calculated level $E_{LLL}(H)$ and thus check the
effect of the confinement topology on the
superconducting phase boundary for a series of
nanostructured superconducting samples. The
transition between normal and superconducting states
is usually very sharp and therefore the lowest
Landau level can be easily traced as a function of
applied magnetic field. Unless stated otherwise,
we have taken the midpoint of the resistive
transition from the superconducting to the normal
state as the criterion to determine
$T_c(H)$.
\section{Flux confinement in individual structures:
line, loop and dot}
In this section we present the experimental $T_c(H)$
phase boundary measured in superconducting aluminum
mesoscopic structures with different topologies with
the same width of the lines ($w=0.15
\: \mu m$) and film thickness ($t= 25 \: nm$).
The magnetic field $H$ is always applied
perpendicular to the structures.
\subsection{Line structure}
In Fig.~\ref{MesLine}a the phase boundary $T_c(H)$
of a mesoscopic line is shown. The solid line gives
the $T_c(H)$ calculated from the well-known
formula~\cite{Tin63}:
\begin{equation}
T_{c}(H)=T_{c0} \left[ 1 - \frac{\pi^{2}}{3}
\left( \frac{w \: \xi(0) \mu_0 H}{\Phi_0}\right)^{2} \right]
\label{TCBLine}
\end{equation}
which, in fact, describes the parabolic shape of
$T_c(H)$ for a thin film of thickness $w$ in a
parallel magnetic field. Since the cross-section,
exposed to the applied magnetic field, is the same
for a film of thickness $w$ in a parallel magnetic
field and for a mesoscopic line of width $w$ in a
perpendicular field, the same formula can be used
for both configurations~\cite{VVM95}. Indeed, the
solid line in Fig.~\ref{MesLine}a is a parabolic fit
of the experimental data with Eq.~(\ref{TCBLine})
where $\xi (0)=110 \: nm$ was obtained as a fitting
parameter. The coherence length obtained using this
method, coincides reasonably well with the dirty
limit value $\xi(0)= 0.85 (\xi_0 \ell)^{1/2}= 132 \:
nm$ calculated from the known BCS coherence length
$\xi_0=1600 \: nm$ for bulk Al~\cite{dGABook} and
the mean free path $\ell = 15 \: nm$, estimated from
the normal state resistivity $\rho$ at $4.2 \:
K$~\cite{Rom82}.
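The parabolic boundary of Eq.~(\ref{TCBLine}) and the dirty-limit estimate $\xi(0)=0.85(\xi_0\ell)^{1/2}$ quoted above can be sketched as follows; $T_{c0}=1.39\:K$ is an illustrative value for Al films, not a fitted parameter from the text.

```python
import math

# Sketch of Eq. (TCBLine) for the mesoscopic line, plus the dirty-limit
# estimate xi(0) = 0.85 (xi_0 l)^(1/2).  T_c0 is an illustrative value.
PHI0 = 2.067833848e-15    # Wb

def Tc_line(B, Tc0, w, xi0):
    return Tc0 * (1.0 - (math.pi**2 / 3.0) * (w * xi0 * B / PHI0)**2)

xi_dirty = 0.85 * math.sqrt(1600e-9 * 15e-9)      # ~132 nm, as quoted
Tc_zero_field = Tc_line(0.0, Tc0=1.39, w=0.15e-6, xi0=110e-9)
```

The dirty-limit value indeed comes out close to the $132\:nm$ quoted above, and $T_c(H)$ decreases quadratically with field.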
We can use also another simple argument to explain
the parabolic relation $T_c(H) \propto H^2$, since
the expansion of the energy $E(H)$ in powers of $H$,
as given by the perturbation theory,
is~\cite{Wel38}:
\begin{equation}
E(H)=E_0+A_1 L H + A_2 S H^2 + \cdots
\label{Perturb}
\end{equation}
where $A_1$ and $A_2$ are constant coefficients, the
first term $E_0$ represents the energy levels in
zero field, the second term is the linear field
splitting with the orbital quantum number $L$ and
the third term is the diamagnetic shift with $S$,
being the area exposed to the applied field.
\begin{figure}[h]
\centerline{\psfig{figure=lafig3.eps}}
\caption{The measured superconducting/normal phase boundary
as a function of the reduced temperature
$T_c(H)/T_{c0}$ for a)~the line structure, and
b)~the loop and dot structure. The solid line in (a)
is calculated using Eq.~(\ref{TCBLine}) with $\xi
(0)=110 \: nm$ as a fitting parameter. The dashed
line represents $T_c(H)$ for bulk Al.}
\label{MesLine}
\end{figure}
For the topology of the line with a width $w$ much
smaller than the Larmor radius $r_H \gg w$, any
orbital motion is impossible due to the constraints
imposed by the boundaries onto the electrons inside
the line. Therefore, in this particular case $L=0$
and $E(H)=E_0 + A_2 S H^2$, which immediately leads
to the parabolic relation $T_c \propto H^2$. This
diamagnetic shift of $T_c(H)$ can be understood in
terms of a partial screening of the magnetic field
$H$ due to the non-zero width of the
line~\cite{Tinkham75}.
\subsection{Loop structure}
The $T_c(H)$ of the mesoscopic loop, shown in
Fig.~\ref{MesLine}b, demonstrates very distinct
Little-Parks (LP) oscillations~\cite{Litt62}
superimposed on a monotonic background. A closer
investigation leads to the conclusion that this
background is very well described by the same
parabolic dependence as the one which we just
discussed for the mesoscopic line~\cite{VVM95} (see
the solid line in Fig.~\ref{MesLine}a). As long as
the width of the strips $w$, forming the loop, is
much smaller than the loop size, the total shift of
$T_c(H)$ can be written as the sum of an oscillatory
part and the monotonic background given by
Eq.~(\ref{TCBLine})~\cite{VVM95,Gro68}:
\begin{equation}
T_c(H)=T_{c0} \left[ 1 - \frac{\pi^2}{3} \left(
\frac{w \: \xi(0) \mu_0 H}{\Phi_0} \right)^2 - \frac{\xi^2(0)}{R^2}
\left( n - \frac{\Phi}{\Phi_0} \right)^2 \right]
\label{TCBRing}
\end{equation}
where $R^2=R_1 \: R_2$ is the product of inner and
outer loop radius, and the magnetic flux threading
the loop $\Phi=\pi R^2 \mu_0 H$. The integer $n$ has
to be chosen so as to maximize $T_c(H)$ or, in other
words, selecting $E_{LLL}(H)$.
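The selection of the integer $n$ maximizing $T_c(H)$ can be sketched as below; $T_{c0}$ and the loop radius are illustrative numbers, not the sample parameters.

```python
import math

# Sketch of Eq. (TCBRing): Little-Parks phase boundary of the loop,
# with n chosen to maximize Tc(H), i.e. selecting E_LLL(H).
# T_c0 and the loop radius are illustrative numbers.
PHI0 = 2.067833848e-15    # Wb

def Tc_loop(B, Tc0, w, xi0, R):
    flux = math.pi * R**2 * B / PHI0          # Phi / Phi0
    n = round(flux)                           # integer maximizing Tc
    parabolic = (math.pi**2 / 3.0) * (w * xi0 * B / PHI0)**2
    osc = (xi0 / R)**2 * (n - flux)**2
    return Tc0 * (1.0 - parabolic - osc)

R = 0.5e-6
B_one = PHI0 / (math.pi * R**2)   # one flux quantum through the loop
t_zero = Tc_loop(0.0, 1.39, 0.15e-6, 110e-9, R)
t_one = Tc_loop(B_one, 1.39, 0.15e-6, 110e-9, R)         # osc part zero again
t_half = Tc_loop(B_one / 2.0, 1.39, 0.15e-6, 110e-9, R)  # maximal dip
```

At integer flux the oscillatory term vanishes and only the parabolic background remains; the deepest dip occurs at half-integer flux, reproducing the $\Phi_0$-periodic Little-Parks pattern.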
The LP~oscillations originate from the fluxoid
quantization requirement, which states that the
complex order parameter $\Psi_s$ should be a
single-valued function when integrating along a
closed contour
\begin{equation}
\oint \vec{\nabla} \varphi \cdot dl = n \: 2 \pi
\; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \;
n=\cdots \: , -2,-1,0,1,2, \: \cdots
\label{Fluxoid}
\end{equation}
where we have introduced the order parameter $\Psi_s
= |\Psi_s| \exp \: (\imath \varphi)$. Fluxoid
quantization gives rise to a circulating
supercurrent in the loop when $\Phi \neq n
\Phi_0$, which is periodic with the applied flux
$\Phi / \Phi_0$.
Using the sample dimensions and the value for
$\xi(0)$ obtained for the mesoscopic line (with the
same width $w=0.15 \: \mu m$), the $T_c(H)$ for the
loop can be calculated from Eq.~(\ref{TCBRing})
without any free parameter. As shown in
Fig.~\ref{MesLine}b, the agreement with the
experimental data is very good.
Another interesting feature of the mesoscopic loop
or other structures is the unique possibility they
offer for studying nonlocal effects~\cite{StrN96}.
In fact, a single loop can be considered as a 2D
artificial quantum orbit with a {\it fixed radius},
in contrast to Bohr's description of atomic
orbitals. In the latter case the stable radii are
found from the quasiclassical quantization rule,
stating that only an integer number of wavelengths
can be set along the circumference of the allowed
orbits. For a superconducting loop, however,
supercurrents must flow, in order to fulfill the
fluxoid quantization requirement
(Eq.~(\ref{Fluxoid})), thus causing oscillations of
$T_c$ versus $H$.
In order to measure the resistance of a mesoscopic
loop, electrical contacts have, of course, to be
attached to it, and as a consequence the confinement
geometry is changed. This "disturbing" or "invasive"
aspect can now be exploited for the study of
nonlocal effects~\cite{StrN96}. Due to the
divergence of the coherence length $\xi(T)$ at $T =
T_{c0}$ (Eq.~(\ref{XiT})) the coupling of the loop
with the attached leads is expected to be very
strong for $T
\rightarrow T_{c0}$.
\begin{figure}[h]
\centerline{\psfig{figure=lafig4.eps}}
\caption{Local ($V_1/V_2$) and nonlocal phase boundary
($V_1/V_3$ or $V_2/V_4$) measurements. The transport
current flows through $I_1/I_2$. The solid and
dashed lines correspond to the theoretical $T_c(H)$
of an isolated loop and a one-dimensional line,
respectively. The inset shows a schematic of the
mesoscopic loop with various contacts ($P=0.4\:\mu
m$).}
\label{NL1}
\end{figure}
Fig.~\ref{NL1} shows the results of these
measurements. Both "local" (potential probes
$V_1/V_2$ across the loop) and "nonlocal" (potential
probes $V_1/V_3$ or $V_2/V_4$ aside of the loop)
LP~oscillations are clearly observed. For the
"local" probes there is an unexpected and pronounced
increase of the oscillation amplitude with
increasing field, in disagreement with previous
measurements on Al microcylinders~\cite{Gro68}. In
contrast to this, for the "nonlocal" LP~effect the
oscillations rapidly vanish when the magnetic field
is increased.
When increasing the field, the background
suppression of $T_c$ (Eq.~(\ref{TCBLine})) results
in a decrease of $\xi(T)$. Hence, the change of the
oscillation amplitude with $H$ is directly related
to the temperature-dependent coherence length. As
long as the coherence of the superconducting
condensate extends over the nonlocal voltage probes,
the nonlocal LP~oscillations can be observed.
The importance of an "arm" attached to a mesoscopic
loop was already demonstrated theoretically by de
Gennes in 1981~\cite{dGA81}. For a perfect 1D loop
(vanishing width of the strips) adding an "arm" will
result in a decrease of the LP~oscillation
amplitude, what we indeed observed at low magnetic
fields, where $\xi(T)$ is still large. With these
experiments, we have proved that adding probes to a
structure considerably changes both the confinement
topology and the phase boundary $T_c(H)$.
The effect of topology on $T_c(H)$, related to the
presence of the sharp corners in a square loop, has
been considered by Fomin {\it et
al.}~\cite{FominSSC97,FominPRB}. In the vicinity of
the corners the superconducting condensate sustains
a higher applied magnetic field, since at these
locations the superfluid velocity is reduced, in
comparison with the ring. Consequently, in a
field-cooled experiment, superconductivity will
nucleate first around the corners~\cite{FominSSC97}.
Eventually, for a square loop, the introduction of a
{\it local} superconducting transition temperature
seems to be needed. As a result of the presence of
the corner, the $H_{c3}(T)$ of a wedge with an angle
$\theta$~\cite{Fomin98} will be strongly enhanced at
the corner resulting in the ratio $H_{c3}/H_{c2}
\approx 3.79$ for $\theta \approx 0.44 \,
\pi$~\cite{Fomin98}.
\subsection{Dot structure}
The Landau level scheme for a cylindrical dot with
"superconducting" boundary conditions
(Eq.~(\ref{GLBound})) is presented in
Fig.~\ref{DotCalc}. Each level is characterized by a
certain orbital quantum number $L$ where $\Psi_s =
|\Psi_s| \exp \: (\mp \imath L
\varphi)$~\cite{PME96}. The levels, corresponding to
the sign "+" in the argument of the exponent are not
shown since they are situated at energies higher
than the ones with the sign "-". The lowest Landau
level in Fig.~\ref{DotCalc} represents a cusp-like
envelope, switching between different $L$ values
with changing magnetic field. Following our main
guideline that $E_{LLL}(H)$ determines $T_c(H)$, we
expect for the dot the cusp-like superconducting
phase boundary with nearly perfect linear
background. The measured phase boundary $T_c(H)$,
shown in Fig.~\ref{MesLine}b, can be nicely fitted
by the calculated one (Fig.~\ref{DotCalc}), thus
proving that $T_c(H)$ of a superconducting dot
indeed consists of cusps with different
$L$'s~\cite{Bui90}. Each fixed $L$ describes a giant
vortex state which carries $L$ flux quanta $\Phi_0$.
The linear background of the $T_c(H)$ dependence is
very close to the third critical field $H_{c3}(T)
\simeq 1.69
\: H_{c2}(T)$~\cite{Saint-James65}. Contrary to the
loop, where the LP~oscillations are perfectly
periodic, the dot demonstrates a certain
aperiodicity~\cite{VVMScripta}, in very good
agreement with the theoretical
calculations~\cite{Bui90,Benoist}.
\begin{figure}[h]
\centerline{\psfig{figure=lafig5.eps}}
\caption{Energy level scheme versus normalized flux
$\Phi / \Phi_0$ for a superconducting cylinder in a
magnetic field parallel to the axis. The cusp-like
$H_{c3}(T)$ line is formed due to the change of the
orbital quantum number $L$.}
\label{DotCalc}
\end{figure}
The lower critical field of a cylindrical dot
$H_{c1}^{dot}$ corresponds to the change of the
orbital quantum number from $L=0$ to $L=1$, i.e. to
the penetration of the first flux
line~\cite{Benoist}:
\begin{equation}
\mu_0 H_{c1}^{dot}= 1.924 \, \frac{\Phi_0}{\pi \, R^2} \, .
\label{BeZw}
\end{equation}
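For submicron radii, Eq.~(\ref{BeZw}) gives fields of a few millitesla; the sketch below uses $R=0.5\:\mu m$ as an illustrative choice.

```python
import math

# Sketch of Eq. (BeZw): field of first flux entry into a cylindrical
# dot, mu0 Hc1 = 1.924 Phi0 / (pi R^2).  R is an illustrative choice.
PHI0 = 2.067833848e-15    # Wb

def mu0_Hc1_dot(R):
    return 1.924 * PHI0 / (math.pi * R**2)

B_c1 = mu0_Hc1_dot(0.5e-6)    # about 5 mT
```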
For a long mesoscopic cylinder described above,
demagnetization effects can be neglected. On the
contrary, for a thin superconducting disk, these
effects are quite
essential~\cite{Deo,Schweigert,Palacios}. For a
mesoscopic disk, made of a Type-I superconductor,
the phase transition between the superconducting and
the normal state is of the second order if the
expulsion of the magnetic field from the disk can be
neglected, i.e. when the disk thickness is
comparable with $\xi$ and $\lambda$. When the disk
thickness is larger than a certain critical value
first order phase transitions should occur. The
latter has been confirmed in ballistic Hall
magnetometry experiments on individual Al
disks~\cite{GeimNat,GeimAPL,GeimIMEC}. A series of
first order transitions between states with
different orbital quantum numbers $L$ have been seen
in magnetization curves $M(H)$~\cite{GeimNat} in the
field range corresponding to the crossover between
the Meissner and the normal states. Besides the
cusplike $H_{c3}(T)$ line, found earlier in
transport measurements~\cite{VVM95,Bui90},
transitions between the $L=2$ and $L=1$ states have
been observed~\cite{GeimNat} by probing the
superconducting state below the $T_c(H)$ line with
Hall micromagnetometry. Still deeper in the
superconducting area the recovery of the normal
$\Phi_0$-vortices and the decay of the giant vortex
state might be expected~\cite{Palacios}. The former
has been considered in Ref.~\cite{BuzBrison94} in
the London limit, by using the image method.
Magnetization and stable vortex configurations have
been recently analyzed in mesoscopic disks in
Refs.~\cite{Deo,Schweigert,Palacios}.
\section{Conclusions}
We have carried out a systematic study of confinement and
quantization phenomena in submicron structures of
superconductors. The main idea of this study was to
vary the boundary conditions for confining the
superconducting condensate by taking samples of
different topology and, through that, to modify the
lowest Landau level $E_{LLL}(H)$ and therefore the
critical temperature $T_{c}(H)$. Different types of
individual nanostructures were used: line, loop and
dot structures. We have shown that in all these
structures, the phase boundary $T_{c}(H)$ changes
dramatically when the confinement topology for the
superconducting condensate is varied. The induced
$T_{c}(H)$ variation is very well described by the
calculations of $E_{LLL}(H)$ taking into account the
imposed boundary conditions. These results
convincingly demonstrate that the phase boundary in
$T_{c}(H)$ of mesoscopic superconductors differs
drastically from that of corresponding bulk
materials. Moreover, since, for a known geometry
$E_{LLL}(H)$ can be calculated a priori, the
superconducting critical parameters, i.e.
$T_{c}(H)$, can be controlled by designing a proper
confinement geometry. While the optimization of the
superconducting critical parameters has been done
mostly by looking for different materials, we now
have a unique alternative: to improve the
superconducting critical parameters of {\it the same
material} through the optimization of {\it the
confinement topology} for the superconducting
condensate and for the penetrating magnetic flux.
\\
\noindent
{\it Acknowledgements}---The authors would like to
thank V.~Bruyndoncx, E.~Rosseel, L.~Van~Look,
M.~Baert, M.~J.~Van~Bael, T.~Puig, C.~Strunk,
A.~L\'{o}pez, J.~T.~Devreese and V.~Fomin for
fruitful discussions and R.~Jonckheere for the
electron beam lithography. We are grateful to the
Flemish Fund for Scientific Research (FWO), the
Flemish Concerted Action (GOA) and the Belgian
Inter-University Attraction Poles (IUAP) programs
for the financial support.
\oddsidemargin .20in
\evensidemargin .5in
\topmargin 0in
\input epsf
\textwidth 6.25in
\textheight 8.5in
\input{tcilatex}
\begin{document}
\null
\hfill CERN-TH/98-363
\vspace{50pt}
\begin{center}
{\LARGE Quantum Conformal Algebras}
\vskip .3truecm
{\LARGE and Closed
Conformal Field Theory}
\end{center}
\vspace{6pt}
\begin{center}
{\sl Damiano Anselmi}
{\it Theory Group, CERN, Geneva, Switzerland}
\end{center}
\vspace{8pt}
\begin{center}
{\bf Abstract}
\end{center}
We investigate the quantum conformal algebras of
N=2 and N=1 supersymmetric gauge theories.
Phenomena occurring at strong coupling are analysed
using the Nachtmann theorem and very general,
model-independent, arguments.
The results lead us to
introduce a novel class of conformal field theories,
identified by a
closed quantum conformal algebra. We conjecture that
they are the exact solution to the strongly coupled large-$N_c$ limit
of the open conformal field theories.
We study the basic properties
of closed conformal field theory
and work out the operator product
expansion of the conserved current multiplet ${\cal T}$.
The OPE structure is uniquely
determined by two central charges, $c$ and $a$.
The multiplet ${\cal T}$
does not contain just the stress-tensor,
but also $R$-currents and
finite mass operators. For this reason, the ratio $c/a$ is different
from 1. On the other hand,
an open algebra contains an infinite tower of non-conserved currents,
organized in pairs and singlets with respect to
renormalization mixing. ${\cal T}$ mixes with a second multiplet
${\cal T}^*$ and
the main consequence
is that $c$ and $a$ have different subleading corrections.
The closed algebra simplifies considerably at $c=a$, where it coincides
with the N=4 one.
\vskip 4.5truecm
\noindent CERN-TH/98-363
\noindent November, 1998.
\vfill\eject
Recently \cite{ics,high} we developed
techniques to study the operator product
expansion of the stress-energy tensor, with the purpose
of acquiring a deeper knowledge of conformal field theories in four dimensions
and quantum field theories interpolating between pairs of them.
These techniques are similar to those used,
in the context of the deep inelastic
scattering \cite{muta}, to study the parton-electron scattering
via the light-cone
operator product expansion of the electromagnetic current.
The investigation of the ``graviton-graviton'' scattering, i.e.
the $TT$ OPE, is convenient in a more theoretical context,
to study conformal windows and hopefully the low-energy limit
of quantum field theories in the conformal windows.
Furthermore, an additional ingredient, supersymmetry,
reveals that a nice algebraic structure \cite{ics}
governs the set of operators generated by
the $TT$ OPE. We called this structure
the {\it quantum conformal algebra} of the theory, since
it is the basic algebraic notion
identifying a conformal field theory
in more than two dimensions.
We have considered in detail the maximal supersymmetric
case in ref. \cite{ics} and in the present paper we extend our
investigation to N=2 and N=1 theories, with special attention to
the theories with vanishing beta function.
We believe that this interplay
between physical techniques and more theoretical
issues will be very fruitful for both.
It was observed in \cite{ics} that the relevant features
of the algebra do not change with the value of the coupling constant.
This was proved using a theorem due to Nachtmann \cite{nach}, found
in 1973 in the context of the theory of deep inelastic scattering.
Only at special values $g_*$ of the coupling constant
can the algebra change considerably.
One special point is of course
the free-field limit, where infinitely many currents
are conserved. Another remarkable point is the limit
in which the operator product expansion
closes with a finite set of conserved currents,
which means only the stress-tensor in the N=4 theory,
but more than the stress
tensor in the N=2 algebra, as we will see.
This special situation, we believe, deserves
attention {\sl per se}. It is the simplest conformal field theory
in four dimensions, simpler than free-field theory and yet non-trivial.
It can be viewed as the true
analogue of two-dimensional conformal field theory.
Because of its simplicity, it is suitable for an algebraic/axiomatic
investigation. It is expected to be relevant
both physically and mathematically.
For example, in \cite{high} (sect. 4.5) we argued, using the
AdS/CFT correspondence \cite{malda}, in particular the results of
\cite{klebanov}, that the limit in which the
$TT$ OPE closes should be the
strongly coupled large-$N_c$ limit.
In the present paper we argue something similar about
finite N=2 theories.
The plan of the paper is as follows.
In sections 1 and 2 we study the quantum conformal algebras of the N=2
vector multiplet and hypermultiplet, respectively. In sections 3 and 4
we combine the two of them into a finite N=2 theory and discuss the
most important phenomena that take place when the interaction
is turned on, like renormalization
splitting and renormalization mixing, anomalous dimensions
and so on. In the rest of the paper we argue, using the Nachtmann theorem and
very general arguments, that the algebra closes in the strongly
coupled large-$N_c$ limit (sect. 5).
We describe various properties of
closed conformal field theory (sect. 5), compare them to those
of open conformal field theory (sects. 5 and 6),
give the complete OPE algebra in the N=2 case (section 6)
and discuss aspects of the N=1 closed quantum conformal algebra.
For supersymmetry,
we use the notation of Sohnius \cite{Sohnius}, converted to the Euclidean
framework
via $\delta_{\mu\nu}\rightarrow-%
\delta_{\mu\nu}$, $T^{V,F,S}\rightarrow -T^{V,F,S}$
(these are the vector, spinor and scalar contributions to the stress-tensor),
$\varepsilon_{\mu\nu\rho%
\sigma} \rightarrow -i\varepsilon_{\mu\nu\rho\sigma}$ and $\gamma_\mu,
\gamma_5\rightarrow -i\gamma_\mu,-i\gamma_5$. Moreover, we multiply $A_i$
by a factor $\sqrt{2}$ and use ordinary Majorana spinors $\lambda_i$
instead of symplectic Majorana spinors $\lambda_s^i$
($\lambda_s^i={1\over 2}[(\delta_{ij}-i\varepsilon_{ij})-\gamma_5
(\delta_{ij}+i\varepsilon_{ij})]\lambda_j$).
For the current algebra we use the notations of \cite{ics,high}.
\section{Vector multiplet}
We begin our analysis with the N=2 vector multiplet and the
hypermultiplet,
repeating the steps of \cite{ics}.
The current multiplets have length 2 in spin units, but the important
point is that {\sl all} of them have length 2. We recall that
the stress-tensor
multiplet has length 0 in the N=4 algebra \cite{ics}.
Moreover, there is one multiplet for each spin, even or odd.
The vector, spinor and scalar contributions
to the currents of the N=2
vector multiplet are
\begin{eqnarray*}
{\cal J}^{V} &=&F_{\mu \alpha }^{+}\overleftrightarrow{\Omega }_{{\rm even}%
}F_{\alpha \nu }^{-}+{\rm impr}.,\quad \quad \quad \quad \quad \quad {\cal A}^{V} =F_{\mu \alpha }^{+}\overleftrightarrow{\Omega }_{{\rm odd}%
}F_{\alpha \nu }^{-}+{\rm impr}., \\
{\cal J}^{F}&=&\frac{1}{2}\bar{%
\lambda}_{i}\gamma _{\mu }\overleftrightarrow{\Omega }_{{\rm odd}}\lambda
_{i}+{\rm impr}.,\quad \quad \quad \quad \quad \quad {\cal A}^{F}=\frac{1}{2}\bar{%
\lambda}_{i}\gamma _{5}\gamma _{\mu }\overleftrightarrow{\Omega }_{{\rm even}%
}\lambda _{i}+{\rm impr}., \\
{\cal J}^{S} &=&M\overleftrightarrow{\Omega }_{{\rm even}%
}M+N\overleftrightarrow{\Omega }_{{\rm even}}N+%
{\rm impr}.,\quad {\cal A}^{S}=-2iM\overleftrightarrow{\Omega }_{{\rm odd}%
}N+{\rm impr}.,
\end{eqnarray*}
where $\overleftrightarrow{\Omega }_{{\rm even/odd}}$ denotes an even/odd
string of derivative operators $\overleftrightarrow{\partial }$ and
``impr.'' stands for the improvement terms \cite{high}.
A simple
set of basic
rules suffices to determine the operation $\delta _{\zeta
}^{2} $ which relates the currents of a given multiplet and is
a certain combination of
two supersymmetry transformations
(see \cite{ics} for details).
The result is
\begin{eqnarray*}
{\cal J}_{2s}^{S} &\rightarrow &-2~{\cal A}_{2s+1}^{F}+2~{\cal A}_{2s+1}^{S},
\qquad \qquad
{\cal A}_{2s-1}^{S}\rightarrow -2~{\cal J}%
_{2s}^{F}+2~{\cal J}_{2s}^{S}, \\
{\cal J}_{2s}^{F}&\rightarrow & -8~{\cal A}%
_{2s+1}^{V}+~{\cal A}_{2s+1}^{S},
\qquad \qquad
{\cal A}_{2s-1}^{F}\rightarrow -8~{\cal J}%
_{2s}^{V}+~{\cal J}_{2s}^{S}, \\
{\cal J}_{2s}^{V} &\rightarrow &-2~{\cal A}_{2s+1}^{V}+\frac{1}{4}~{\cal A}%
_{2s+1}^{F},
\qquad \qquad
{\cal A}_{2s-1}^{V} \rightarrow -2~{\cal J}_{2s}^{V}+\frac{1}{4}~{\cal J}%
_{2s}^{F}.
\end{eqnarray*}
As we see, the algebra is more symmetric than the N=4 one \cite{ics}.
In particular, there is an evident even/odd-spin
symmetry that was missing in \cite{ics}.
We have fixed the normalization of the scalar
axial current ${\cal A}^S$ (absent in N=4) in order to
exhibit this symmetry.
We recall that $T^{V}=-2{\cal J}_2^{V},$ $T^{F}=%
{\cal J}_2^{F}/2$ and $T^{S}=-{\cal J}_2^{S}/4$ are the various contributions
to the stress-tensor.
The list of current multiplets generated by the $TT$ OPE is easily
worked out and reads
\begin{equation}
\begin{tabular}{lllll}
${\cal T} _{0}={1\over 2}{\cal J}_{0}^{S}$ &
& \\
${\cal T}_1=-{\cal A}_{1}^{F}+{\cal A}_{1}^{S}$ & $\Lambda_{1}={1\over 4}{\cal A}_{1}^{F}+{1\over 4}{\cal A}_{1}^{S}$ & \\
${\cal T}_2=8{\cal J}_{2}^{V}-2{\cal J}_{2}^{F}+{\cal J}_{2}^{S}$ &
$\Lambda_{2}=-2{\cal J}_{2}^{V}-{1\over 2}{\cal J}_{2}^{F}+{3\over 4}{\cal J}_{2}^{S}$ & $%
\Xi _{2}=\frac{4}{15}{\cal J}_{2}^{V}+\frac{4}{15}{\cal J}_{2}^{F}
+{1\over 5}{\cal J}%
_{2}^{S}$ \\
$\Delta_{3}={3\over 7}{\cal A}_{3}^{V}+{15\over 56}{\cal A}_{3}^{F}+
{5\over 28}{\cal A}_{3}^{S}$ &
$\Lambda_{3}=8{\cal A}_{3}^{V}-2{\cal A}_{3}^{F}+{\cal A}_{3}^{S}$
& $\Xi _{3}=-{8\over 3}{\cal A}_{3}^{V}-{1\over 3}{\cal A}_{3}^{F}+
{2\over 3}{\cal A}_{3}^{S}$ \\
$\Delta _{4}=-{3}{\cal J}_{4}^{V}-{1\over 4}{\cal J}_{4}^{F}+
{5\over 8}{\cal J}_{4}^{S}$ & $\Upsilon_{4}=\frac{8}{15}{\cal J}_{4}^{V}+\frac{4}{15}
{\cal J}_{4}^{F}+{1\over 6}{\cal J}_{4}^{S} $ & $%
\Xi _{4}=8{\cal J}_{4}^{V}-2{\cal J}_{4}^{F}+{\cal J}%
_{4}^{S}$ \\
$\Delta_5=8{\cal A}_{5}^{V}-2{\cal A}_{5}^{F}+{\cal A}_{5}^{S}$
& $\Upsilon _{5}=-
\frac{16}{5}{\cal A}_{5}^{V}-{1\over 5}{\cal A}_{5}^{F}+
{3\over 5}{\cal A}_{5}^{S}$&
$\Omega _{5}=%
\frac{20}{33}{\cal A}_{5}^{V}+{35\over 132}{\cal A}_{5}^{F}+
{7\over 44}{\cal A}_{5}^{S}$
\\
$\ldots$ & $%
\Upsilon_{6}=8{\cal J}_{6}^{V}-2{\cal J}_{6}^{F}+%
{\cal J}_{6}^{S}$& $\Omega _{6}=-{10\over 3}{\cal J}_{6}^{V}-{1\over 6}{\cal J}_{6}^{F}+{7\over 12}{\cal J}_{6}^{S}$ \\
& $\ldots$ & $\Omega _{7}=8{\cal A}_{7}^{V}-2{\cal A}_{7}^{F}+{\cal A}_{7}^{S}$
\end{tabular}
\label{ulla}
\end{equation}
The lowest components of each current multiplet (${\cal T}_2$, $\Lambda_3$,
$\Xi_4$, $\Delta_5$, $\Upsilon_6$, $\Omega_7$)
have the same form.
The normalization is fixed in such a way that these components have also
the same overall factor.
In \cite{high} we used a different convention:
we fixed the normalization of each current
by demanding that the coefficients of ${\cal A}^{F}$ and ${\cal J}^{S}$
be 1. Here we have to be more precise and
keep track of the relative factors inside
current multiplets, since
we need to superpose the vector and matter quantum conformal
algebras in order to obtain the most general
N=2 structure (see section 3).
{\it Checks}. Scalar odd-spin currents appear in the algebra and
their two-point functions were not computed in \cite{high}.
We can combine
orthogonality checks with the indirect derivations
of the amplitudes of these
currents.
These currents are necessary to properly diagonalize the multiplet.
For example,
only the ${\cal A}^S_1$-independent
combination $-{1\over 2}{\cal T}_1+2\Lambda_1=
{\cal A}^F_1$ appears in the OPE,
but the scalar current ${\cal A}^S_1$ orthogonalizes
${\cal T}_1$ and $\Lambda_1$.
Indeed, the two-point function of the scalar
spin-1 current, easy to compute,
\[
\langle {\cal A}^S_\mu(x)\,{\cal A}^S_\nu(0)\rangle= {4\over 3}N_V \left( {1\over 4\pi^2}\right)^2
\,\pi_{\mu\nu} \left( {1\over |x|^4}\right),
\]
agrees with the orthogonality
relationship $\langle {\cal T}_1\,\Lambda_1\rangle=0$.
Similarly,
${\cal T}_2$ and $\Lambda_2$ are orthogonal
and this can be verified with the results of \cite{ics}.
$\Xi_2$ is then determined
by requiring that it is orthogonal to both
${\cal T}_2$ and $\Lambda_2$. Note that $\Xi_2$
has the same form as $\Xi_2$ in the
N=4 algebra \cite{ics}, apart from a factor
due to the different normalization.
Then $\Lambda_3$ and $\Xi_3$ are determined
via the transformation $\delta _{\zeta
}^{2}$. The two-point function of ${\cal A}^S_3$ is derived by the
orthogonality relationship $\langle \Lambda_3\, \Xi_3\rangle=0$. We obtain
\[
\langle {\cal A}^S_3(x)\,{\cal A}^S_3(0)\rangle={8\over 35}N_V
\left( {1\over 4\pi^2}\right)^2
\,{\prod}^{(3)} \left( {1\over |x|^4}\right),
\]
while $\langle {\cal A}^F_3\,{\cal A}^F_3\rangle$ and $\langle {\cal A}^V_3\,{\cal A}^V_3\rangle$
can be found in \cite{high}. Then we determine $\Delta_3$ via
the equations $\langle \Delta_3\,\Lambda_3 \rangle=\langle \Delta_3\, \Xi_3\rangle=0$
and $\Xi_4$, $\Delta_4$
via the $\delta _{\zeta
}^{2}$ operation. The amplitudes of \cite{high}
suffice to show that $\langle \Xi_4\, \Delta_4\rangle=0$, which is
a non-trivial numerical check of the values.
Finally, once $\Upsilon_4$ is found by solving $\langle \Upsilon_4\, \Xi_4\rangle
=\langle \Upsilon_4\, \Delta_4\rangle=0$,
we extract the two-point function of ${\cal A}^S_5$
via the orthogonality condition of $\Delta_5$ and $\Upsilon_5$, with the result
\[
\langle {\cal A}^S_5(x)\,{\cal A}^S_5(0)\rangle={2^5\over 3^2\cdot 7\cdot 11}N_V
\left( {1\over 4\pi^2}\right)^2
\,{\prod}^{(5)} \left( {1\over |x|^4}\right).
\]
$\Omega_5$ is determined by the conditions $\langle \Omega_5\, \Upsilon_5\rangle=0$
and $\langle \Omega_5\, \Delta_5\rangle=0$, and so on.
Every current multiplet has length 2 in spin units and has the form
\begin{eqnarray}
\Lambda _{s}&=&4~{a_s~{\cal J}_{s}^{V}+b_s~{\cal J}_{s}^{F}+c_s~{\cal J}
_{s}^{S}\over (a_s+8b_s+8c_s)} \qquad \rightarrow \qquad \nonumber \\
\Lambda_{s+1}&=&{4\over (a_s+8b_s+8c_s)}\left[-2(a_s+4b_s)~{\cal A}_{s+1}^{V}+{1\over 4} (a_s-8c_s)~{\cal A}
_{s+1}^{F}+ (b_s+2c_s)~{\cal A}
_{s+1}^{S}\right] \qquad \rightarrow \nonumber \\
\Lambda_{s+2}&=&8{\cal J}_{s+2}^{V}- 2~{\cal J}%
_{s+2}^{F}+~{\cal J}_{s+2}^{S},
\end{eqnarray}
for all $s$ (${\cal J}\leftrightarrow {\cal A}$ when $s$ is odd).
We stress again the most important novelty exhibited
by the N=2 algebra
with respect to the
N=4 one \cite{ics}: the multiplet of the stress-tensor
is not shorter than the other multiplets;
rather, it contains also a spin-1 current (the $R$-current)
and a spin-0 partner, on which we will have more to say later on.
The theory is not finite in the absence of hypermultiplets.
Nevertheless, it is meaningful to calculate the anomalous dimensions
of the operators to lowest order, since at one-loop order
around a free-field theory
conformality is formally preserved.
We give here the first few values of the anomalous dimensions
for illustrative purposes.
The procedure for the computation
is the same as the one of ref. \cite{ics} and
will be recalled in the next sections.
We find $h_{\cal T}=0$, $h_\Lambda=2N_c {\alpha\over \pi}$ and
$h_\Xi={5\over 2}N_c {\alpha\over \pi}$. These three values obey
the Nachtmann theorem \cite{nach}, which states that the
spectrum of anomalous dimensions is a
positive, increasing and convex function of the spin.
Actually, the Nachtmann theorem applies only to the lowest
anomalous dimension of the even-spin levels.
Nevertheless, it seems that the property is satisfied by all the
spin levels in this particular case. This is not true in the presence
of hypermultiplets, as we will see.
\section{Hypermultiplet}
The structure of the algebra is much simpler for the matter multiplet.
The currents are
\begin{eqnarray*}
{\cal J}^{F}&=&\bar
\psi\gamma _{\mu }\overleftrightarrow{\Omega }_{{\rm odd}}\psi
+{\rm impr}.,\quad \quad \quad \quad \quad \quad
{\cal A}^{F}=\bar
\psi\gamma _{5}\gamma _{\mu }\overleftrightarrow{\Omega }_{{\rm even}%
}\psi+{\rm impr}., \\
{\cal J}^{S} &=&2\bar A_i\overleftrightarrow{\Omega }_{{\rm even}%
}A_i+{\rm impr}.
\end{eqnarray*}
The basic operation $\delta^{2}_\zeta$
does not exhibit the even/odd spin symmetry
and is more similar to the N=4 one:
\begin{eqnarray*}
{\cal J}_{2s}^{S} &\rightarrow &-4~{\cal A}_{2s+1}^{F},
\qquad \qquad
\\
{\cal J}_{2s}^{F}&\rightarrow & -2~{\cal A}%
_{2s+1}^{F},
\qquad \qquad
{\cal A}_{2s-1}^{F}\rightarrow -2~{\cal J}%
_{2s}^{F}+~{\cal J}_{2s}^{S}.
\end{eqnarray*}
It gives the following list of multiplets
\begin{equation}
\begin{tabular}{lllll}
${\cal T} _{0}=-{1\over 4}~{\cal J}_{0}^{S}$ & &
& & \\
${\cal T}_1={\cal A}_{1}^{F}$ & & & & \\
${\cal T}_2=-2~{\cal J}_{2}^{F}+{\cal J}_{2}^{S}$ & &
$%
\Xi _{2}=-{1\over 5}~{\cal J}_{2}^{F}-{3\over 20}~{\cal J}%
_{2}^{S}$ & & \\
& &
$\Xi _{3}={\cal A}_{3}^{F}$ \\
& & $%
\Xi_{4}=-2~{\cal J}_{4}^{F}+{\cal J}%
_{4}^{S}~$ & & $\Upsilon_{4}=-\frac{2}{9}~%
{\cal J}_{4}^{F}-{5\over 36}~{\cal J}_{4}^{S} $ \\
& & & &$\Upsilon _{5}={\cal A}_{5}^{F}$
\\
& & & & $%
\Upsilon_{6}=-2~{\cal J}_{6}^{F}+~{\cal J}_{6}^{S}, $%
\end{tabular}
\label{cum}
\end{equation}
determined with the familiar procedure.
We see that no spin-1 scalar current appears and that,
again, the stress-tensor has two partners, the $R$-current
and a mass term. The general form of the current hypermultiplet
is particularly simple:
\[
\Lambda_{2s}=-{a_s~{\cal J}^F_{2s}+b_{s}~{\cal J}^S_{2s}
\over 2(a_s+2b_s)}\quad \rightarrow \quad
\Lambda_{2s+1}={\cal A}^F_{2s+1}\quad \rightarrow \quad
\Lambda_{2(s+1)}=-2~{\cal J}_{2(s+1)}^{F}+~{\cal J}_{2(s+1)}^{S}.
\]
There is no anomalous dimension to compute here,
since the hypermultiplet admits no renormalizable self-coupling.
In the next section we combine vector multiplets
and hypermultiplets to study in particular the finite N=2 theories.
\section{Combining the two multiplets into a finite N=2 theory}
In this section we work out the quantum conformal
algebra of finite N=2 supersymmetric theories.
We consider, as a concrete
example (the structure
is completely general), the theory
with group $SU(N_c)$ and $N_f=2N_c$ hypermultiplets in the fundamental
representation.
The beta-function is just one-loop owing to N=2
supersymmetry. Precisely, it is
proportional to $N_c-{1\over 2}N_f$,
so it vanishes identically for $N_f=2N_c$ \cite{appropriate}.
Combining the free-vector and free-hypermultiplet
quantum conformal algebras is not as straightforward as it might seem.
The algebra is much richer than the N=4 one and some non-trivial work
is required before singling out its nice properties.
To begin with, let us write down the
full multiplet ${\cal T}={\cal T}_v+{\cal T}_m$ of the stress-tensor:
\begin{eqnarray}
{\cal T}_0&=&{1\over 2}~{\cal J}_{0v}^S-{1\over 4}{\cal J}_{0m}^S=
{1\over 2}(M^2+N^2-\bar A_iA_i),
\nonumber\\
{\cal T}_1&=&-{\cal A}^F_{1v}+{\cal A}^F_{1m}+{\cal A}^S_{1v}=
-{1\over 2}\bar\lambda_i\gamma_5\gamma_\mu\lambda_i+
\bar\psi\gamma_5\gamma_\mu\psi-2iM\overleftrightarrow{\partial }_\mu N,
\nonumber\\
{\cal T}_2&=&8{\cal J}^V_{2v}-2({\cal J}^F_{2v}+{\cal J}^F_{2m})
+{\cal J}^S_{2v}+{\cal J}^S_{2m}=-4T_{\mu\nu},\nonumber
\label{mul}
\end{eqnarray}
where the additional subscripts $v$ and $m$ refer to the vector and
matter contributions, respectively (this heavy notation is
necessary, but fortunately temporary - we write down the explicit
formulas in order to facilitate the reading).
In general, the full ${\cal T}$-multiplet
appears in the quantum conformal algebra.
${\cal T}_1$ is the $SU(2)$-invariant $R$-current, and its anomaly
vanishes because it is proportional to the beta-function.
${\cal T}_0$ is one of the finite mass perturbations \cite{parkes}.
Our picture gives a nice argument for the finiteness of
such a mass term, which follows directly
from the finiteness of the stress-tensor.
The next observation is that the ${\cal T}$-multiplet has to be part of a pair
of multiplets having the same position in the algebra.
The general OPE structure
of \cite{high} shows that the singularity
$1/|x|^6$ carries the sum of the squared scalar fields with coefficient 1.
In our case it should be $M^2+N^2+2\bar A_iA_i$
and not just $M^2+N^2-\bar A_iA_i$. On the other hand,
the mass operator
$M^2+N^2+2\bar A_iA_i$ is not
finite and cannot stay with the stress-tensor.
Therefore it is split into a linear combination of two operators, precisely
${\cal T}_0$ and ${\cal T}_0^*={1\over 2}~
{\cal J}_{0v}^S+{1\over 4}~{\cal J}_{0m}^S={1\over 2}~(
M^2+N^2+\bar A_iA_i)$. These two operators are not
orthogonal:
they can freely mix under
renormalization, because
their current multiplets ${\cal T}$ and ${\cal T}^*$
have the same position in the
algebra.
This means that in the N=2 quantum conformal algebra
the $I$-degeneracy of \cite{high} survives.
Orthogonalization would be rather awkward, since the number
of components of $M$ and $N$ is proportional to $N_c^2-1$,
while the number of components of $A_i$ is proportional
to $N_f N_c=2N_c^2$. Coefficients of the form
$\sqrt{(N_c^2-1)/N_c^2}$ would appear and the diagonalization
would not survive once the interaction is turned on.
In the presence of mixing, there is no privileged basis
for the two currents, in general.
However, the ${\cal T}\,{\cal T}^*$-pair satisfies a further property, namely
${\cal T}$ and ${\cal T}^*$ do split in the large-$N_c$ limit
(we will present in sects. 5 and 6 an interesting
interpretation of this fact).
We have fixed ${\cal T}_0^*$ by imposing
$\langle {\cal T}_0\, {\cal T}_0^*\rangle=0$ in this limit.
The complete ${\cal T}^*$
multiplet is then ${\cal T}^*={\cal T}_v-{\cal T}_m$:
\begin{eqnarray}
{\cal T}^*_0&=&{1\over 2}~{\cal J}_{0v}^S+{1\over 4}~{\cal J}_{0m}^S=
{1\over 2}(M^2+N^2+\bar A_iA_i)
\nonumber\\
{\cal T}^*_1&=&-{\cal A}^F_{1v}-{\cal A}^F_{1m}+{\cal A}^S_{1v}=
-{1\over 2}\bar\lambda_i\gamma_5\gamma_\mu\lambda_i-
\bar\psi\gamma_5\gamma_\mu\psi-2iM\overleftrightarrow{\partial }_\mu N
\nonumber\\
{\cal T}^*_2&=&8{\cal J}^V_{2v}-2({\cal J}^F_{2v}-{\cal J}^F_{2m})
+{\cal J}^S_{2v}-{\cal J}^S_{2m}=
-4(T_v-T_m),
\nonumber
\label{mul2}
\end{eqnarray}
where $T_v$ and $T_m$ are the vector and matter contributions
to the stress-tensor.
Now we analyse the spin-1 level of the OPE.
The first observation is that
the scalar current contribution
${\cal A}^S_{1v}=-2iM\overleftrightarrow{\partial }_\mu N$
appears in ${\cal T}_1$ and ${\cal T}_1^*$.
We know that it does not appear in the general free-field
algebra \cite{high}.
Moreover, the relative coefficient of the fermionic contributions
${\cal A}^F_{1v}$ and ${\cal A}^F_{1m}$
(coming from vector multiplets and hypermultiplets) should be 1.
These two conditions cannot be satisfied by taking a
linear combination of ${\cal T}_1$
and ${\cal T}_1^*$, so that a new current should appear, precisely
the lowest-spin current of a new multiplet.
This is the multiplet $\Lambda$ of (\ref{ulla}), which is orthogonal
to both ${\cal T}$ and ${\cal T}^*$, and therefore unaffected by the
hypermultiplets (but only in the free-field limit - see below).
The scalar current $-2iM\overleftrightarrow{\partial }_\mu N$
is required to properly orthogonalize the multiplets,
as happens in the spin-0 case.
Some effects appear just when the interaction
is turned on:
the scalar current ${\cal A}^S$, which
cancels out at the level of the free-field algebra,
appears at non-vanishing $g$.
The current $\Lambda$ does not
depend on the hypermultiplets
at the free-field level, but receives hypermultiplet contributions
at non-vanishing $g$. The procedure
for determining the currents at the interacting level is worked out
in \cite{ics}.
In particular, after covariantizing the derivatives we have to take the
traces out. In the construction of $\Lambda$, such traces are proportional
to the vector multiplet field equations, which receive contributions
from the hypermultiplets at $g\neq 0$.
At the spin-2 level of the OPE, the situation is similar to the
spin-0 one. The basic
formulas for the squares are
\begin{eqnarray*}
\langle {\cal J}^V_{2v}\,{\cal J}^V_{2v}\rangle&=&1/20\, N_V,
\quad\quad\langle {\cal J}^F_{2v}\,{\cal J}^F_{2v}\rangle=2/5\, N_V,
\quad\quad\langle {\cal J}^S_{2v}\,{\cal J}^S_{2v}\rangle=8/15\, N_V,\\
\langle {\cal J}^F_{2m}\,{\cal J}^F_{2m}\rangle&=&4/5\, N_c^2,
\quad\quad\langle {\cal J}^S_{2m}\,{\cal J}^S_{2m}\rangle=32/15\, N_c^2,
\end{eqnarray*}
factorizing out the common factor
$1/(4\pi^2)^2\,{\prod}^{(2)}(1/|x|^4)$.
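As a quick cross-check (a sketch, not part of the derivation), these vector-multiplet squares reproduce the spin-2 orthogonality relations claimed in section 1, $\langle {\cal T}_2\,\Lambda_2\rangle=\langle {\cal T}_2\,\Xi_2\rangle=\langle \Lambda_2\,\Xi_2\rangle=0$, assuming the $V$, $F$, $S$ blocks are mutually orthogonal:

```python
from fractions import Fraction as F

# diagonal squares <J J> per N_V in the (V, F, S) basis at spin 2
sq = (F(1, 20), F(2, 5), F(8, 15))

# (V, F, S) coefficients of the spin-2 vector-multiplet currents, from the table
T2 = (F(8), F(-2), F(1))             # {\cal T}_2
L2 = (F(-2), F(-1, 2), F(3, 4))      # \Lambda_2
X2 = (F(4, 15), F(4, 15), F(1, 5))   # \Xi_2

def pair(a, b):
    """Two-point amplitude of two currents (per N_V, common x-dependence dropped)."""
    return sum(ca * cb * s for ca, cb, s in zip(a, b, sq))

print(pair(T2, L2), pair(T2, X2), pair(L2, X2))  # 0 0 0
```

Exact rational arithmetic makes the cancellations manifest rather than numerically approximate.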
Three spin-2 operators come from the previous multiplets, ${\cal T}_2$,
${\cal T}^*_2$
and $\Lambda_2$, and two new operators appear, $\Xi_2$ and $\Xi_2^*$.
These two mix under renormalization and do not split in the large-$N_c$ limit
(see next section).
They have the form
\[
\frac{4}{15}~{\cal J}_{2v}^{V}+\frac{4}{15}~{\cal J}_{2v}^{F}
+{1\over 5}~{\cal J}%
_{2v}^{S}+\alpha_\Xi
\left(2~{\cal J}_{2m}^{F}+{3\over 2}~{\cal J}%
_{2m}^{S}\right)=\Xi_{2v}+\alpha_\Xi \Xi_{2m}.
\]
We call $\alpha_\Xi$ the coefficient for
$\Xi_2$ and $\alpha_\Xi^*$ the one for
$\Xi_2^*$.
In order to proceed with the study of the
quantum conformal algebra, it is not necessary to fix
both $\alpha_\Xi$ and $
\alpha_\Xi^*$,
and we can treat any degenerate pair, such as $\Xi_2$ and $\Xi_2^*$,
as a whole.
Summarizing, the result is that the final algebra contains the multiplets
\begin{eqnarray}
{\cal T}&=&{\cal T}_{v}+\alpha_{\cal T} \,
{\cal T}_{m},\quad\quad\quad\quad \, \,
\Lambda \,\,= \,\,\Lambda_{v},\nonumber\\
\Xi&=&\Xi_{v}+\alpha_\Xi \, \Xi_{m},\quad\quad\quad\quad \,
\Delta \,\,= \, \,\Delta_{v},\nonumber\\
\Upsilon&=&\Upsilon_{v}+\alpha_\Upsilon \,\Upsilon_{m},\quad\quad\quad\quad
\Omega \,\,= \,\,\Omega_{v},\nonumber
\end{eqnarray}
and so on. We have $\alpha_{\cal T}=-\alpha_{\cal T}^*=1$,
while $\alpha_{\Xi}$ and
$\alpha_{\Upsilon}$ are undetermined. Fixing them by diagonalizing the
matrix of two-point functions is possible to the lowest order (and
in the next section we use this property to present the results
of our computations),
but in general it is not meaningful to all orders.
\section{Anomalous dimensions and degenerate pairs}
In this section we discuss the currents at non-vanishing $g$,
compute their anomalous dimensions and study the
degenerate multiplets.
We start with the spin-1 currents ${\cal T}_1$, ${\cal T}_1^*$
and $\Lambda_1$, which we call $\Sigma^i_\mu$, $i=1,2,3$, respectively.
The currents are easily defined at non-zero coupling $g$ by covariantizing the
derivative appearing in ${\cal A}^S_1$, i.e. ${\cal A}^S_1
\rightarrow -2i M \overleftrightarrow{D}_\mu N$.
The matrix of two-point functions has the form (see for example
\cite{ccfis})
\begin{equation}
\langle \Sigma^i_\mu(x)\,\Sigma_\nu^j(0)\rangle={1\over (|x|\mu)^{h_{ik}(g^2)}}\,\,
\pi_{\mu\nu}\left({c^{(1)}_{kl}(g^2)\over |x|^4}\right){1\over (|x|\mu)^
{h_{jl}(g^2)}}\,\left({1\over 4\pi^2}\right)^2.
\label{ij}
\end{equation}
To calculate the lowest order
of the matrix $h_{ij}(g^2)$ of anomalous dimensions
it is sufficient to take the zeroth-order $c^{(1)}_{ij}(0)$
of the matrix
of central charges $c^{(1)}_{ij}(g^2)$ (see \cite{high}).
We have, at finite $N_c$,
\begin{equation}
c^{(1)}_{ij}(0)={8\over 3}\left(\matrix{
2N_c^2-1 & -1 & 0 \cr
-1 & 2N_c^2-1 & 0 \cr
0 & 0 & {1\over 16}N_V
}\right),
\label{c1}
\end{equation}
which becomes diagonal only in the large-$N_c$ limit. Now, from
(\ref{ij}) we can compute the matrix of divergences
\begin{equation}
\langle \partial\Sigma^i(x)\,\partial\Sigma^j(0)\rangle=
-{3\over \pi^4}{(ch^t+hc)_{ij}\over |x|^8}.
\end{equation}
Calling $a$ the matrix $ch^t+hc$, the explicit computation gives $a=
N_cN_V\, {\alpha\over \pi}\,{\rm diag}
(0,64/3,1)$, whence we obtain
\begin{equation}
h=\left(
\matrix{
0 & {1\over N_c} & 0 \cr
0 & 2N_c-{1\over N_c} & 0 \cr
0 & 0 & 3N_c
}\right)\,{\alpha\over \pi}.
\label{h1}
\end{equation}
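One can check numerically (a sketch; the tiny residue reflects $1/N_c$-suppressed terms not tracked in the matrices as printed) that this $h$ reproduces $a=ch^t+hc=N_cN_V\,{\alpha\over\pi}\,{\rm diag}(0,64/3,1)$:

```python
import numpy as np

Nc = 200.0                # large but finite, for illustration
NV = Nc**2 - 1            # adjoint dimension of SU(Nc)
ap = 1.0                  # alpha/pi, set to 1 as an overall factor

c = (8/3) * np.array([[2*Nc**2 - 1, -1, 0],
                      [-1, 2*Nc**2 - 1, 0],
                      [0, 0, NV/16]])
h = ap * np.array([[0, 1/Nc, 0],
                   [0, 2*Nc - 1/Nc, 0],
                   [0, 0, 3*Nc]])

a = c @ h.T + h @ c
target = Nc * NV * ap * np.diag([0.0, 64/3, 1.0])
print(np.max(np.abs(a - target)) / (Nc * NV))  # subleading in 1/Nc
```

The third entry is matched exactly; the residue sits entirely in the upper $2\times2$ block.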
This matrix is in general triangular, with entries $(i,3)$
and $(3,i)$ equal to zero,
since the current multiplets ${\cal T}$ and $\Lambda$
are orthogonal. Moreover, the entry $h_{11}$ is zero
because of the finiteness of the stress-tensor.
Finally, we observe that the off-diagonal element is suppressed in the
large-$N_c$ limit, as we expected, and that in this limit
the anomalous dimension of ${\cal T}^*$ becomes
$h_{{\cal T}^*}=2N_c\,{\alpha\over \pi}< h_\Lambda=3N_c\,{\alpha\over \pi}$.
The diagonal form of the pair $({\cal T},{\cal T}^*)$ is given
by $({\cal T}^\prime,{\cal T}^{*\prime})=H\,({\cal T},{\cal T}^*)$
with
\[
H=\left(\matrix{1 & 0 \cr
{1\over 2 N_c^2} & 1-{1\over 2 N_c^2}}\right).
\]
One finds
$h^\prime={\rm diag}(0,h^*)$ with $h^*={\alpha\over \pi}
\left(2N_c-{1\over N_c}\right)$.
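This diagonalization is easy to verify numerically; the sketch below assumes the convention in which the dimension matrix transforms as $h\to (H^t)^{-1}\,h\,H^t$ under the basis change (our reading of the conventions, not spelled out in the text):

```python
import numpy as np

Nc = 3.0
h = np.array([[0.0, 1/Nc],
              [0.0, 2*Nc - 1/Nc]])   # T, T* block of h, in units of alpha/pi
H = np.array([[1.0, 0.0],
              [1/(2*Nc**2), 1 - 1/(2*Nc**2)]])

hp = np.linalg.inv(H.T) @ h @ H.T    # transformed dimension matrix
# hp is diagonal with entries 0 and h* = 2*Nc - 1/Nc
print(np.round(hp, 12))
```

With the rows of $H$ chosen as above, the off-diagonal mixing is removed exactly, not just at large $N_c$.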
Now we study the spin-2 level of the OPE. A new degenerate pair
$\{\Xi,\Xi^*\}$ appears and therefore we have five currents ${\cal J}^{(i)}_2$,
$i=1,\ldots,5$,
organized into two degenerate pairs and a ``singlet''. The matrix
$c^{(2)}$ of central charges, defined by
\[
\langle {\cal J}^{(i)}_{\mu\nu}(x)\,{\cal J}^{(j)}_{\rho\sigma}(0)\rangle=
{\frac{1}{%
60}}{1\over (|x|\mu)^{h_{ik}}}
{\prod}^{(2)}\left({{c^{(2)}_{kl}}\over |x|^4}\right)
{1\over (|x|\mu)^{h_{lj}}}\left({1\over 4\pi^2}\right)^2,
\]
is block-diagonal, $c^{(2)}={\rm diag}(120\,c^{(1)}_{\cal T}, 36 N_V,
c^{(2)}_\Xi)$,
where the first two blocks are proportional
to the spin-1 blocks, see formula
(\ref{c1}). The third block reads
\[
c^{(2)}_\Xi={16\over 5}\left(\matrix{
N_V+{3\over 2}~\alpha_\Xi^2 N_c^2
& N_V+{3\over 2}~\alpha_\Xi\alpha_\Xi^{*} N_c^2 \cr
N_V+{3\over 2}~\alpha_\Xi\alpha_\Xi^{*} N_c^2
& N_V+{3\over 2}~\alpha_\Xi^{*2} N_c^2
}\right).
\]
The matrix of divergences is
\begin{equation}
\langle \partial_\mu{\cal J}^{(i)}_{\mu\nu}(x)\, \partial_\rho{\cal J}%
^{(j)}_{\rho\sigma}(0)\rangle= {\frac{3}{4\pi^4}}(c^{(2)}h_2^t+h_2
c^{(2)})_{ij}\,{\frac{%
{\cal I}_{\nu\sigma}(x)}{|x|^{10}}}.
\label{ty}
\end{equation}
Again, the matrix $a^{(2)}=c^{(2)}h_2^t+h_2 c^{(2)}$ is block
diagonal and the first two blocks coincide with those of
the corresponding spin-1 matrix. This
correctly reproduces the known anomalous
dimension of ${\cal T}$, ${\cal T}^*$ and $\Lambda$.
Instead, the $\Xi$-block reads
\[
a^{(2)}_\Xi={16\over 5}\left(\matrix{
7+{11\over 2}~\alpha_\Xi^2-2~\alpha_\Xi
& 7+{11\over 2}~\alpha_\Xi\alpha_\Xi^*-\alpha_\Xi-\alpha_\Xi^* \cr
7+{11\over 2}~\alpha_\Xi\alpha_\Xi^*-\alpha_\Xi-\alpha_\Xi^*
&7+{11\over 2}~\alpha_\Xi^{*2}-2~\alpha_\Xi^*
}\right)N_cN_V{\alpha\over \pi}.
\]
The calculation that we have performed
is not sufficient to completely determine the matrix $h$, since
$h$ is not symmetric. However, at the lowest order, $a^{(2)}$
is sufficient for our purpose.
In particular, let us diagonalize $c^{(2)}$ and $a^{(2)}$ in
the large-$N_c$ limit. We have $\alpha,\alpha^*=
{1\over 3}(5\pm \sqrt{31})$ and $h=N_c\, {\alpha\over \pi}\,
{\rm diag}(1.7,3.6)$.
It appears that the entire pair acquires an anomalous dimension and
moves away.
We conclude that the issue of splitting the paired currents
in the large-$N_c$ limit is irrelevant to this case.
What is important is that the two currents
move together to infinity.
The other pairs of the quantum conformal algebra
($\Xi$, $\Upsilon$, etc.) exhibit a similar behaviour
and only the pair ${\cal T}$ is special.
The analysis of the present section
could proceed to the other multiplets
and multiplet pairs that appear in the algebra. However, the description
that we have given so far is sufficient to understand the general
properties of the algebra and proceed.
We now comment on the validity of the
Nachtmann theorem \cite{nach}, which states that the minimal anomalous
dimension
$h_{2s}$ of the currents of the
even spin-$2s$ level is a positive, increasing and
convex function of $s$. We have
$h_2=0$ and $h_4={1.7}N_c{\alpha\over \pi}$.
Moreover, $h_\Lambda=3N_c{\alpha\over \pi}>h_4$
and $h_{{\cal T}^*}\sim 2 N_c
{\alpha\over \pi}>h_4$. There is no contradiction with the Nachtmann
theorem, which is restricted to the minimal even-spin values of
the anomalous dimensions. Nevertheless, it is worth noting that
the nice regular behaviour predicted by this theorem
cannot be extended in general to the full spectrum of anomalous dimensions.
In particular an odd-spin value $h_{2s+1}$
can be greater than the even-spin value $h_{2(s+1)}$.
Although the spectrum is less regular than in the N=4 case, the
irregularity that we have just noted works in the direction of making certain
anomalous dimensions greater than would be expected.
This will still allow us to argue that
all anomalous dimensions that are non-zero
in perturbation theory become infinite in the strongly coupled
large-$N_c$ limit. In the rest
of the paper we discuss this
prediction and present various consistency checks of it.
Other operators
appear in the quantum conformal algebra besides those that
we have discussed in detail\footnotemark\footnotetext{I am grateful
to S. Ferrara for clarifying discussions of this point.}.
They can be grouped into
three classes:
i) symmetric operators with a non-vanishing
free-field limit; these are the ones that we have discussed;
ii) non-symmetric operators with a non-vanishing
free-field limit; these are not completely
symmetric in their indices;
iii) operators with a vanishing
free-field limit; these are turned on by the interaction.
Operators of classes ii) and iii) can often be derived from
those of class i) by using supersymmetry.
This is the case, for example,
of the N=4 quantum conformal algebra \cite{adrianopoli}.
The anomalous dimensions are of course
the same as those of their class i)-partners, so that
our discussion covers them and the conclusions that
we derive are unaffected.
\section{Closed conformal field theory}
The multiplet of the stress-tensor is the most important
multiplet of the algebra.
Since all of its components are conserved, it will survive
at arbitrary $g$ and in particular
in the large-$N_c$ limit. The OPE algebra generated by this multiplet
is in general not closed, but it might be closed in some
special cases.
We can classify conformal field theory
into two classes:
i) {\it open} conformal field theory, when the quantum conformal algebra
contains an infinite number of (generically non-conserved) currents;
ii) {\it closed} conformal field theory, when the quantum conformal algebra
closes within a finite set of conserved currents.
This section is devoted to a study of this classification.
We conjecture that closed conformal field theory is
the boundary of the set of open conformal field theories.
Roughly, one can think of a ball centred in
free-field theory. The boundary sphere is the set of closed
theories. The bulk is the set of open theories.
As a radius $r$ one can take the value of the minimal
non-vanishing anomalous dimension.
In the N=4 theory, $r$ is the anomalous dimension
of the Konishi multiplet, while in the N=2
finite theories $r$ is the minimal eigenvalue
of the matrix of anomalous dimensions of
the $(\Xi,\Xi^*)$-pair. The theory is free for $r=0$,
open for $0<r<\infty$
and closed for $r=\infty$. The function $r=r(g^2 N_c)$ can be taken
as the true coupling constant of the theory instead of $g^2 N_c$.
The Nachtmann theorem is completely general
(a consequence of unitarity and dispersion relations)
and does not make any use of supersymmetry, holomorphy,
chirality or any other special assumption
which would restrict its range of validity.
It states in particular
that if $r=0$, then $h_{2s}=0$ $\forall s>0$,
and if $r=\infty$, then $h_{2s}=\infty$ $\forall s>0$.
The considerations of the previous section, in particular the regularity
of the spectrum of critical exponents,
suggest that in the former
case all current multiplets are conserved and in the latter case all of them
have infinite anomalous dimensions and
decouple from the OPE.
Very precise properties of the strongly coupled limit of the theory
can be inferred from this.
It was pointed out in \cite{high}, using the
AdS/CFT correspondence \cite{malda}, in particular the results of
\cite{klebanov}, that the $TT$ OPE should close in the strongly coupled
(which means at large 't Hooft coupling, $g^2 N_c\gg 1$) large-$N_c$
limit of N=4 supersymmetric theory. In the weakly coupled limit,
the anomalous dimensions of the various
non-conserved multiplets are non-zero and
$r\sim g^2 N_c$.
The results of \cite{klebanov}
suggest that in the vicinity of the boundary sphere,
the anomalous dimension of the Konishi operator
changes to $r\sim (g^2 N_c)^{1\over 4}$,
but still tends to infinity.
The Nachtmann theorem then implies that
all the anomalous dimensions of the non-conserved operators
tend to infinity.
It is reasonable to expect a similar behaviour in the case of
N=2 finite theories (to which the
AdS/CFT correspondence does {\it not} apply, in general).
We expect that in the strongly coupled large-$N_c$ limit
the OPE closes just with the currents
(\ref{mul}) \footnotemark\footnotetext{We can
call ${\cal T}_0$ the ``spin-0 current'', with some
abuse of language.}. This appears to be the correct generalization of the
property exhibited in the N=4 case. Therefore we conjecture that
$\cdot$ {\it closed conformal field theory is the exact solution
to the strongly coupled large-$N_c$ limit of open conformal field
theory.}
The weakly coupled behaviour studied in the previous sections
is consistent with this picture.
We have observed that ${\cal T}$
and ${\cal T}^*$ split in the large-$N_c$ limit
already at weak coupling. This suggests that ${\cal T}^*$
moves away from ${\cal T}$.
Moreover,
this splitting does not take place in the
other multiplet-pairs ($\Xi$, $\Upsilon$ and so on)
that mix under renormalization:
this means that they remain paired
and each pair moves to infinity, without leaving any remnant.
Secondly, we claim that
$\cdot$ {\it a closed quantum conformal algebra
determines uniquely the associated closed
conformal field theory\footnotemark
\footnotetext{To our present knowledge, the stronger version of this statement,
i.e. its extension to open algebras, might also hold.
However, this is a more difficult
problem to study.}.}
A similar property holds in two-dimensional conformal field theory
and indeed we are asserting
that closed conformal field theory is the proper
higher-dimensional analogue of two-dimensional
conformal field theory. Thirdly, we show in the next section that
$\cdot$ {\it
a closed algebra is determined uniquely by two central charges: $c$ and $a$.}
The two central charges, called $c$ and $a$ in ref.
\cite{noi} take different
values in the N=2 algebra, precisely\footnotemark
\footnotetext{We use the normalization of \cite{noi}.}:
\begin{equation}
c={1\over 6}(2N_c^2-1),~~~~~~~~~~~~~~~~~~~
a={1\over 24}(7N_c^2-5),
\label{ca}
\end{equation}
and equal values in the
N=4 algebra, $c=a={1\over 4}(N_c^2-1)$ if the gauge group is
$SU(N_c)$.
We recall that the values of $c$ and $a$
are independent of the coupling
constant $g$ \cite{noi} if the theory is finite.
When N=2, the difference between $c$ and $a$
persists in the large-$N_c$ limit
(where $c/a\sim 8/7$),
both strongly and weakly coupled.
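The large-$N_c$ behaviour of the ratio $c/a$ is immediate to check from (\ref{ca}). The snippet below (purely illustrative) evaluates both central charges exactly and confirms $c/a\rightarrow 8/7$:

```python
from fractions import Fraction

def c_N2(Nc):
    # N=2 finite theory: c = (2 Nc^2 - 1)/6
    return Fraction(2 * Nc * Nc - 1, 6)

def a_N2(Nc):
    # N=2 finite theory: a = (7 Nc^2 - 5)/24
    return Fraction(7 * Nc * Nc - 5, 24)

for Nc in (2, 3, 10, 1000):
    print(Nc, c_N2(Nc) / a_N2(Nc))   # tends to 8/7 as Nc grows

# By contrast, N=4 with gauge group SU(Nc) has c = a = (Nc^2 - 1)/4,
# so c/a = 1 exactly at every Nc.
```

The exact ratio is $c/a = 4(2N_c^2-1)/(7N_c^2-5)$, whose deviation from $8/7$ is of order $1/N_c^2$, consistent with the ${\cal O}(1)$ corrections mentioned above.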
The presence of more partners in the current multiplet
of the stress-tensor (precisely
${\cal T}_0$ and ${\cal T}_1$) is
related to the ratio ${c\over a}\neq 1$ in N=2 theories, something which we
will describe better in the next section.
This is a remarkable
difference between closed conformal
field theory in four dimensions
and conformal field theory in two dimensions,
two types of theories that otherwise
have several properties in common and can be studied in parallel.
Let us now consider N=1 (and non-supersymmetric)
theories. The multiplet of the stress-tensor
will not contain spin-0 partners, in general, but just the $R$-current.
The above considerations stop at the spin-2 and 1 levels
of the OPE, but the procedure to
determine the closed algebra is the same. What is more subtle
is to identify the physical situation that the closed limit should describe.
In supersymmetric QCD with $G=SU(N_c)$ and $N_f$ quarks and
antiquarks in the fundamental representation, the conformal
window is the interval
$3/2\, N_c < N_f < 3\, N_c$.
In the limit where both $N_c$ and $N_f$ are large,
but the ratio $N_c/N_f$ is fixed and
arbitrary in this window, the $TT$ OPE does not close \cite{noialtri}
and $r$ is bounded by
\cite{kogan}
\[
r \sim g^2 N_c < 8\pi^2,
\]
a relationship that assures positivity of the denominator
of the NSVZ exact beta-function \cite{nsvz}.
Therefore the closed limit $r\rightarrow \infty$
presumably does not exist in the conformal window
(it is still possible, but improbable,
to have $r=\infty$ for some
finite value of $g^2 N_c$).
The absence of a closed limit
could be related to the non-integer (rational)
values of $24 c$ and $48 a$
\footnotemark\footnotetext{In our normalization
the free-field values of $c$ and $a$ are \cite{noi}
$c={1\over 24}(3N_v+N_\chi)$, $a={1\over 48}(9N_v+N_\chi)$,
where $N_v$ and $N_\chi$ are the numbers of vector and chiral multiplets.},
which indeed depend on $N_c/N_f$.
In the low-energy
limit we have
the formulas \cite{noi}
\begin{equation}
c={1\over 16}\left(7N_c^2-2-9{N_c^4\over N_f^2}
\right),~~~~~~~~~~~~~~~~
a={3\over 16}\left(2N_c^2-1-3{N_c^4\over N_f^2}
\right).
\end{equation}
The ${N_c^4\over N_f^2}$-contributions to $c$ and $a$ are not subleading
in the large-$N_c$ limit.
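To make the dependence on $N_c/N_f$ concrete, the low-energy formulas can be evaluated exactly with rational arithmetic; the sketch below (illustrative, with example values of $N_c$ and $N_f$ chosen by us inside the conformal window) shows that $24c$ and $48a$ come out rational but generically non-integer:

```python
from fractions import Fraction

def c_sqcd(Nc, Nf):
    # Low-energy central charge: c = (1/16)(7 Nc^2 - 2 - 9 Nc^4/Nf^2)
    return Fraction(7 * Nc**2 - 2, 16) - Fraction(9 * Nc**4, 16 * Nf**2)

def a_sqcd(Nc, Nf):
    # Low-energy central charge: a = (3/16)(2 Nc^2 - 1 - 3 Nc^4/Nf^2)
    return Fraction(3, 16) * (2 * Nc**2 - 1 - Fraction(3 * Nc**4, Nf**2))

# Inside the conformal window 3/2 Nc < Nf < 3 Nc, the values of 24c and
# 48a are rational, generically non-integer, and depend on Nc/Nf:
for Nc, Nf in ((2, 4), (3, 6), (4, 7)):
    print(Nc, Nf, 24 * c_sqcd(Nc, Nf), 48 * a_sqcd(Nc, Nf))
```

For instance, $N_c=3$, $N_f=6$ gives $24c=489/8$ and $48a=369/4$.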
Presumably, an open algebra
is necessary to produce non-integer values
of $c$ and $a$. This problem deserves
further study and is currently under investigation.
In conclusion, our picture of the moduli space of
conformal field theory as a ball centred in the origin and with
closed conformal field theory as a boundary
works properly when supersymmetry is
at least N=2 or, more generally, when
the conformal field theory belongs
to a one-parameter family of conformal field theories
with a point at infinity,
the parameter in question being the radius $r$ of the ball or,
equivalently, the coupling constant $g^2 N_c$.
N=1 finite families of this type are for example
those studied in refs. \cite{lucchesi}.
\section{OPE structure of closed conformal field theory.}
The basic rule to determine the
quantum conformal algebra of closed conformal field theory
is as follows.
One first studies the free-field OPE of an open conformal
field theory and organizes the currents into orthogonal
and mixing multiplets.
Secondly, one computes the anomalous
dimensions of the operators to the lowest orders
in the perturbative expansion.
Finally, one
drops all the currents with
a non-vanishing anomalous dimension.
More generically, one can postulate a set
of spin-0, 1 and 2 currents, that we call
${\cal T}_{0,1,2}$, and study the most general
OPE algebra consistent with closure and unitarity.
The closed limit of the N=4 quantum conformal algebra is very simple
and actually the formula of $\tilde {\rm SP}_{\mu\nu,\rho\sigma;\alpha\beta}$
that was given in \cite{ics}, together with the value of
the central charge $c=a$
encode the full closed N=4 algebra\footnotemark\footnotetext{This
situation should be described also by the
formalism of N=4 superfields \cite{howe}. Claimed by
its authors to
hold generically in N=4 supersymmetric Yang-Mills theory,
this formalism can actually be correct only in the closed limit
of theories with $c=a+{\cal O}(1)$.}.
We study here the closed N=2 quantum
conformal algebra for generic $c$ and $a$.
The case with gauge group $SU(N_c)$
and $2N_c$ hypermultiplets in the fundamental
representation has $c/a=8/7$.
We have
\begin{eqnarray}
{\cal T}_0(x)\,{\cal T}_0(y)&=&{3\over 8\pi^4}\,
{c\over |z|^4}-
{1\over 4\pi^2}\left(3-4{a\over c}\right){1\over |z|^2}\,{\cal T}_0(w)
+\,{\rm descendants}+{\rm regular \, terms},
\nonumber\\
{\cal T}_{\mu}(x)\,{\cal T}_{\nu}(y)&=&{c\over \pi^4}\,\pi_{\mu\nu}
\left({1\over |z|^4}\right)-
{4\over 3\pi^2}\left(1-2{a\over c}\right){\cal T}_0(w)\,\pi_{\mu\nu}\left({1\over |z|^2}\right)
\nonumber\\&&
-{1\over \pi^2}\left(1-{a\over c}\right) {\cal T}_\alpha(w)\,
\varepsilon _{\mu \nu \alpha \beta }~\partial _{\beta
}~\left( \frac{1}{|z|^{2}}\right)
\nonumber\\&&
+{1\over 960\pi^2}{\cal T}_{\alpha\beta}(w)
{\prod}^{(1,1;2)}_{\mu,\nu;\alpha\beta}\left(|z|^2 \ln |z|^2M^2\right)
+\,{\rm descendants}+{\rm regular \, terms},\nonumber\\
{\cal T}_{\mu\nu}(x)\,{\cal T}_{\rho\sigma}(y)&=&
{2\, c\over \pi^4}\,{\prod}^{(2)}_{\mu\nu,\rho\sigma}
\left({1\over |z|^4}\right)-{16
\over 3\pi^2}\left(1-{a\over c}\right)
{\cal T}_0(w)\,{\prod}^{(2)}_{\mu\nu,\rho\sigma}
\left({1\over |z|^2}\right)+\nonumber\\&&
-\frac{4}{\pi ^{2}}\left(1-{a\over c}\right)
{\cal T}_{\alpha }(w)~\sum_{{\rm symm}%
}\varepsilon _{\mu \rho \alpha \beta }~\pi _{\nu \sigma }~\partial _{\beta
}~\left( \frac{1}{|z|^{2}}\right)+\nonumber\\&&
-{1\over \pi^2}\,{\cal T}_{\alpha\beta}(w)\, \tilde
{\rm SP}_{\mu\nu,\rho\sigma;\alpha\beta}\left({1\over |z|^2}\right)+\,
{\rm descendants}+{\rm regular \, terms},
\label{opppo}
\end{eqnarray}
where $z_\mu=x_\mu-y_\mu$, $w_\mu={1\over 2}(x_\mu+y_\mu)$ and
\begin{eqnarray}
{\prod}^{(1,1;2)}_{\mu,\nu;\alpha\beta}&=&
\left(4+{a\over c}\right){\prod}^{(2)}_{\mu\nu,\alpha\beta}
-{2\over 3}\left(7-5{a\over c}\right)\pi_{\mu\nu}
\partial_\alpha\partial_\beta,\nonumber\\
\tilde{\rm SP}_{\mu\nu,\rho\sigma;\alpha\beta}
\left({1\over |z|^2}\right)&=&
{\rm SP}_{\mu\nu,\rho\sigma;\alpha\beta}\left({1\over |z|^2}\right)
+{1\over 480}\left(102-59{a\over c}\right){\prod}^{(2)}_{\mu\nu,\rho\sigma}
\partial_\alpha\partial_\beta\left(|z|^2 \ln |z|^2M^2\right)\nonumber\\&&
-{1\over 32}\left(12-7{a\over c}\right)
{\prod}^{(3)}_{\mu\nu\alpha,\rho\sigma\beta}\left(|z|^2 \ln |z|^2M^2\right).
\label{uy}
\end{eqnarray}
The numbers appearing in these two space-time invariants are not
particularly illuminating.
It might be that the decomposition that we
are using is not the most elegant one, but for
the moment we do not have a better one to propose.
The mixed OPE's read:
\begin{eqnarray}
{\cal T}_\mu(x)\,{\cal T}_0(y)&=&{1\over 24\pi^2}\left(1-2{a\over c}\right)
{\cal T}_\nu(w) \, \pi_{\mu\nu}\left(\ln |z|^2M^2\right)+\,
{\rm descendants}+{\rm regular \, terms},\nonumber\\
{\cal T}_{\mu\nu}(x)\, {\cal T}_0(y)&=&{2\over 3\pi^2} {\cal T}_0(w)\,
\pi_{\mu\nu}\left({1\over |z|^2}\right)-{1\over 160\pi^2}
{\cal T}_{\alpha\beta}(w)\left(1-{a\over c}\right)
{\prod}^{(2)}_{\mu\nu,\alpha\beta}(|z|^2
\ln |z|^2 M^2)\nonumber\\&&+\,
{\rm descendants}+{\rm regular \, terms},\nonumber\\
{\cal T}_{\mu\nu}(x)\, {\cal T}_\rho(y)&=&{1\over 2\pi^2}{\cal T}_\alpha(w)
{\prod}^{(2,1;1)}_{\mu\nu,\rho;\alpha}\left(\ln |z|^2M^2\right)
\nonumber\\&&
+{3\over 40 \pi^2}{\cal T}_{\alpha\gamma}(w)\left(1-{a\over c}\right)
\sum_{\rm symm}\varepsilon_{\mu\rho\alpha\beta}
\pi_{\nu\gamma}\partial_\beta(\ln
|z|^2M^2)\nonumber\\&&
+\,{\rm descendants}+{\rm regular \, terms},
\label{mixx}
\end{eqnarray}
where
\begin{eqnarray*}
{\prod}^{(2,1;1)}_{\mu\nu,\rho;\alpha}\left(\ln |z|^2M^2\right)&=&
(\delta_{\alpha\mu}\partial_\nu\partial_\rho+
\delta_{\alpha\nu}\partial_\mu\partial_\rho+2\delta_{\alpha\rho}
\partial_\mu\partial_\nu-\delta_{\mu\rho}\partial_\nu\partial_\alpha
-\delta_{\nu\rho}\partial_\mu\partial_\alpha)\left({1\over |z|^2}\right)\\
&&+{1\over 6}\left(1-2{a\over c}\right){\cal T}_\alpha(w)
{\prod}^{(2)}_{\mu\nu,\rho\alpha}\left(\ln |z|^2 M^2\right).
\end{eqnarray*}
The ${\cal T}_0\,{\cal T}_0$
OPE closes by itself, but this fact does not appear
to be particularly meaningful.
We now make several remarks about the above algebra.
We begin by explaining how
to work out (\ref{opppo})-(\ref{mixx}).
In a generic N=2 finite theory,
we collect the hypermultiplets into a single representation $R$.
The condition for finiteness is the equality of the Casimirs of $R$
and the adjoint representation: $C(G)=C(R)$.
We have
\[
c={1\over 6}\dim G +{1\over 12} \dim R,~~~~~~~~~~~~~
a={5\over 24}\dim G + {1\over 24} \dim R.
\]
The form of the currents belonging to the
${\cal T}$ multiplet does not depend on the theory
(i.e. on $c$ \& $a$). Instead, the form
of the currents ${\cal T}^*$ does. Let us write
${\cal T}_0^*={1\over 2}{\cal J}^S_{0v}+{\beta\over 4}
{\cal J}^S_{0m}$.
The correlator
$\langle {\cal T}_0\,{\cal T}_0^*\rangle$ is proportional to $\dim G
-{\beta\over 2}\dim R$. By definition, in the closed limit
$\langle {\cal T}_0\,{\cal T}_0^*\rangle=0$, which gives
\[
\beta=-2 {c-2a\over 5c-4a}.
\]
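As a quick consistency check (illustrative only; the dimensions used below are arbitrary example values), one can verify that the orthogonality condition $\dim G-{\beta\over 2}\dim R=0$, combined with the expressions of $c$ and $a$ in terms of $\dim G$ and $\dim R$ given above, reproduces the closed-form $\beta=-2(c-2a)/(5c-4a)$:

```python
from fractions import Fraction

def beta_check(dimG, dimR):
    # c and a of an N=2 finite theory in terms of dim G and dim R
    c = Fraction(dimG, 6) + Fraction(dimR, 12)
    a = Fraction(5 * dimG, 24) + Fraction(dimR, 24)
    # beta from <T0 T0*> = 0, i.e. dim G - (beta/2) dim R = 0
    beta_direct = Fraction(2 * dimG, dimR)
    # beta from the closed-form expression in the text
    beta_formula = -2 * (c - 2 * a) / (5 * c - 4 * a)
    return beta_direct, beta_formula

for dimG, dimR in ((8, 16), (24, 48), (3, 12)):
    bd, bf = beta_check(dimG, dimR)
    assert bd == bf          # the two determinations of beta agree
    print(dimG, dimR, bd)
```

Indeed $c-2a=-{1\over 4}\dim G$ and $5c-4a={1\over 4}\dim R$, so both expressions equal $2\dim G/\dim R$.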
The scalar operator appearing in the OPE can be decomposed
as
\[
M^2+N^2+2\bar A_i A_i=2 {\beta-2\over \beta+1}{\cal T}_0+
{6\over \beta+1}{\cal T}_0^*.
\]
Dropping ${\cal T}_0^*$, one fixes the coefficient with which
${\cal T}_0$ appears in the $TT$ OPE. It is proportional
to $c-a$, as we see in (\ref{opppo}). The other terms of (\ref{opppo})
and (\ref{mixx}) can be worked out similarly.
Let us now analyse the ``critical'' case $c=a$.
Examples of theories with $c=a+{\cal O}(1)$
can easily be constructed. It is sufficient
to have $\dim G= \dim R+{\cal O}(1)$. We do not possess the
complete classification of this case.
At $c=a$ the structure simplifies enormously.
The $TT$ OPE closes just with the stress-tensor,
$\tilde{\rm SP}_{\mu\nu,\rho\sigma;\alpha\beta}$ reduces to the N=4
expression \cite{high} and the operators ${\cal T}_{0,1}$ behave as
primary fields with respect to the stress-tensor.
The divergence of the $R$-current (which is simply
${\cal T}_1$ in our notation, up to a numerical factor)
has an anomaly
\begin{equation}
\partial_\mu(\sqrt{g}R^\mu)={1\over 24 \pi^2}(c-a)R_{\mu\nu\rho\sigma}\tilde
R^{\mu\nu\rho\sigma},
\label{ano}
\end{equation}
non-vanishing when $c\neq a$.
The
three-point function $\langle R\,T\,T\rangle$ is not zero, which means that
the $TT$ OPE does contain some operator mixing with $R_\mu$.
This operator is just $R_\mu$ in the large-$N_c$ limit,
as we see in (\ref{opppo}),
but {\it both} $R_\mu$ and ${\cal T}_1^*$ at generic $N_c$.
On the other hand, if the theory has $c=a$ and is supersymmetric
(N=1 suffices), then the above anomaly vanishes. Given that the correlator
$\langle R\,T\,T\rangle$ is unique (because there
is a unique space-time invariant for the
spin-1 level of the OPE, see \cite{high}), we have $\langle R\,T\,T\rangle=0$
in such cases. In conclusion, ${\cal T}_1$ is
kicked out of the $TT$ OPE algebra
when $c=a$, even when the number of supersymmetries is less than four.
The spin-1
current ${\cal A}^F_{1v}+{\cal A}^F_{1m}$ appearing in the $TT$ OPE
has to be a linear combination of $\Lambda_1$ and ${\cal T}_1^*$.
A simple computation shows that
this is possible only for ${\cal T}_1^*=-{\cal A}^F_{1v}-2
{\cal A}^F_{1m}+{\cal A}^S_{1v}$,
which means $\beta=2$. In turn, this implies that
${\cal T}_0$ is also out of the $TT$ OPE, as we have just seen.
Therefore the $TT$ OPE closes just with the stress-tensor and
$\tilde {\rm SP}$ coincides precisely with the one given in \cite{ics}:
$\cdot$ {\it the $c=a$ closed algebra
is unique and coincides with the one of \cite{ics}.}
When $c\neq a$ and supersymmetry is at least N=2
the difference $c-a$ fixes the coefficients of the new terms
in the OPE:
$\cdot$ {\it given $c$ and $a$, there is a unique closed conformal algebra
with N=2 supersymmetry.}
In the large-$N_c$ limit of the N=2 model that
we have studied in section 3,
$c$ and $a$ are ${\cal O}(N_c^2)$ and their ratio is $8/7$.
At finite $N_c$ (i.e. when the algebra is open)
they receive different
${\cal O}(1)$-order corrections (see (\ref{ca})).
Using our algebra, we can give a very simple explanation of both effects.
The two-point function $\langle TT\rangle$ encodes
only the quantity $c$. $c$ and $a$ are encoded into the three-point
function $\langle TTT\rangle$,
or the higher-point functions, as it is clear from the
trace anomaly formula
\[
\Theta={1\over 16\pi^2}\left[
c (W_{\mu\nu\rho\sigma})^2-a (\tilde R_{\mu\nu\rho\sigma})^2\right],
\]
and the algebra (\ref{opppo})-(\ref{mixx}),
in particular
the space-time invariant $\tilde {\rm SP}_{\mu\nu,\rho\sigma;\alpha\beta}$
(\ref{uy}).
We can study the three-point function
$\langle T(x)T(y)T(z)\rangle $ by taking the $x\rightarrow y$
limit and using the OPE
$T(x)T(y)=\sum_n c_n(x-y)\,{\cal O}_n\left({x+y\over 2}\right)$.
We are led to consider the two-point functions
$\langle {\cal O}_n\,T\rangle$. Now, we know that there
are only two operators ${\cal O}_n$
that are not orthogonal to $T$: the stress-tensor itself
{\sl and} ${\cal T}^*$.
${\cal O}_n=T$ produces a contribution
$\langle TT\rangle$, which is again $c$.
This is the contribution to $a$ proportional to $c$.
In the N=2 theories at finite $N_c$
there is a second contribution
from ${\cal O}_n={\cal T}^*$.
Indeed, $\langle {\cal T}^*\, {\cal T}\rangle$ is non-vanishing and
precisely ${\cal O}(1)$, which explains
the ${\cal O}(1)$-difference between $c$ and
${8\over 7}a$.
$\langle {\cal T}^*\, {\cal T}\rangle$ is
not affected by an anomalous
dimension to the second-loop order
(the off-diagonal element of $a^{(2)}_{\cal T}$ vanishes),
which is what we expect, since both $c$ and $a$ are radiatively uncorrected.
The reader might have noted that the operator
${\cal T}_1$ appearing in the ${\cal T}_1{\cal T}_1$ OPE
has a coefficient proportional to $c-a$,
which means that the $\langle {\cal T}_1{\cal T}_1{\cal T}_1\rangle$
three-point function (which is unique by the usual arguments)
is proportional to $c-a$ and not to $5a-3c$ as in \cite{noi}.
The reason is that our $R$-current is not the same as the one
that is used in the N=1 context of ref. \cite{noi}: our present $R$-current
is $SU(2)$-invariant, while the one of \cite{noi} is associated with
a $U(1)$ subgroup of $SU(2)$.
Finally, we discuss the embedding of the N=4
open algebra of \cite{ics} into the N=2 open algebra of section 3.
With the terminology ``current multiplet'' we have always referred
to the subset of components that are generated by the $TT$ OPE.
The quantum conformal algebra
is embeddable
into a larger set of OPE's, containing
all the supersymmetric partners of the currents that
we have considered so far.
For example, in the N=4 theory
the $R$-currents (and in particular the object
called ${\cal T}_1$ in the N=2 frame)
are not $SU(4)$ invariant and so they do not appear in the $TT$ OPE,
but they appear in the superpartners of the $TT$ OPE.
From the N=2 point of view, instead, the current
${\cal T}_1$ is $SU(2)$-invariant and
indeed it does appear in the $TT$ OPE.
Currents that are kicked out of the restricted algebra
are of course always part of the larger web and inside that
web they ``move''.
Yet what is important is to know
the {\it minimal} closed
algebra.
Let us describe how the Konishi multiplet $\Sigma$ of \cite{ics}
emerges in the N=4 case. The spin-0 current ${\cal T}_0^*$ coincides precisely
with the spin-0 operator $\Sigma_0$ of \cite{ics}.
At the spin-1 level we have, from the N=2 point of view,
two operators: ${\cal T}_1^*$ and $\Lambda_1$ (${\cal T}_1$ having
been kicked out).
The operator $\Sigma_1={\cal A}^F_{1v}+{\cal A}^F_{1m}$
is the linear combination of ${\cal T}_1^*$ and $\Lambda_1$
that does not contain the scalar current ${\cal A}_1^S$, forbidden in the
N=4 algebra by $SU(4)$ invariance. We have $\Sigma_1=-{1\over 2}~{\cal T}_1^*+
2~\Lambda_1$.
Now, the N=2 supersymmetric algebra relates $\Sigma_1$ with
$-{1\over 2}~{\cal T}_2^*+
2~\Lambda_2$, which, however, is not $\Sigma_2$ and is not $SU(4)$-invariant.
Actually, one has
\begin{equation}
\Sigma_2=-{7\over 20}~{\cal T}_2^*+{7\over 3}~\Lambda_2-2~\Xi_2^*,
\label{sigma2}
\end{equation}
where $\Xi_2^*=\Xi_{2v}+\Xi_{2m}$ is orthogonal
to the N=4 operator $\Xi_2=5~\Xi_{2v}-{20\over 3}~\Xi_{2m}$.
The N=2 supersymmetric transformation does not directly generate $\Sigma_2$.
$\Sigma_2$ is recovered after a suitable $SU(4)$-invariant
reordering of the currents. The meaning of this fact is that
the quantum conformal algebra admits various different N=2 (and N=1)
``fibrations'', depending on the subgroup of $SU(4)$ that one
preserves, and that all of these fibrations are meaningful at arbitrary $g$.
\section{Conclusions}
In this paper we have analysed the quantum conformal algebras of N=2
supersymmetric theories, focusing in particular on finite theories.
Several novel features arise with respect to the quantum conformal algebra of
\cite{ics} and each of them has a nice interpretation in the
context of our approach.
Known and new properties of supersymmetric theories,
conformal or not, are elegantly grouped by our formal
structure and descend
from a general and unique mathematical notion.
Stimulated by the results of our analysis,
we have introduced a novel class of conformal field theories
in dimension higher than two, which have a closed quantum
conformal algebra.
The definition is completely general, valid in any dimension.
For example, analogous considerations apply to
three-dimensional conformal field theory and
might be relevant for several problems in condensed matter physics.
Closed conformal field theory is nicely tractable from the
formal/axiomatic point of view. On the other hand, it is interesting
to identify the physical situations or limits of ordinary theories
that it describes. Various considerations suggest that
the closed algebra coincides in general with
the strongly coupled large-$N_c$ limit of an open algebra.
In some cases, like N=1
supersymmetric QCD
as well as ordinary QCD, the identification
of the closed limit, if any, is more subtle and still unclear.
In a closed conformal field theory the quantities $c$ and $a$ are always
proportional to each other, but the proportionality
factor depends on the theory, i.e. it feels
the structure of the quantum conformal algebra.
Closed conformal field theory is described uniquely by
these two central charges. We have worked out the
OPE algebra in detail and related it to known anomaly formulas.
The value of $c$ is given by the two-point functions,
while the value of $a$ is given by the three-point functions,
i.e. by the structure
of the algebra itself.
There is a critical case, $c=a$, in which the algebra admits a
closed subalgebra containing only the stress-tensor.
An open conformal field theory presents a richer set of phenomena.
In particular, some operators can mix under renormalization with
the stress-tensor and the main
effect of this mixing is that $c$ and $a$ are not
just proportional.
We think that
closed conformal field theory is now ready for a purely algebraic
study, i.e. mode expansion,
classification of unitary representations and so on.
In view of the applications,
we believe that it is worthwhile to proceed in this direction.
\vskip .5truecm
\section{Introduction}
There is mounting evidence that some interstellar material survives inside
objects in the solar system. Such evidence is found in the compositional
similarity of cometary ices and interplanetary grains to those
found in the interstellar medium (Mumma 1997).
That cometary ices and organics may have supplied the molecular seeds for
prebiotic life on the Earth, or even its oceans, leads to the intriguing possibility
that the molecules essential for life might have their
origin in interstellar space (e.g. Chyba \&
Sagan 1997).
As such it is important to gain an understanding of the
processes that lead to the formation of ices in the interstellar medium (ISM).
By far, the most abundant constituent of ices in the interstellar medium,
and also in comets, is H$_2$O.
Because the low temperature ion--molecule chemistry that is active in dense
regions of the ISM is incapable of reproducing
the observed \HtwoO\ ice observations (Jones \& Williams 1984),
it has been proposed that water--ice
is formed through the hydrogenation of oxygen atoms on the surfaces of
cold ($T_{d} \sim 10$ K) dust grains (c.f. Tielens \& Hagen 1982).
Until recently there have been
few alternatives to grain surface chemistry to
account for the abundance of ices.
However, we have found that water--ice mantles form quite naturally in a layer of
gas rich in water vapor that has been processed by a shock with a velocity in excess of 10
\ts {\rm km\ts s$^{-1}$}\ (Bergin, Melnick, \& Neufeld 1998; hereafter BMN).
The prediction that large quantities of water vapor are produced in shocked gas has recently
gained additional support through the direct observations of strong emission from
gaseous water toward Orion BN--KL (Harwit et al. 1998) using the
{\em Infrared Space Observatory}.
There are several observational constraints that might discriminate
between water-ice mantles created behind shocks and those formed by
chemical processes in grain mantles.
In this Letter we examine whether post--shock chemistry can account for the
observed abundances of deuterated water (HDO) and \COtwo\
in the ISM and in comets.
\section{Model}
We use the 3--stage model described in BMN to
examine the evolution of chemical abundances in pre--shocked (stage 1), shocked
(stage 2), and post--shock gas (stage 3).
The chemical evolution in each stage is treated independently, except that the final
composition of the preceding stage is used as the initial composition of the
next. The first stage is assumed to be initially atomic in composition and has
physical conditions appropriate for
quiescent gas in molecular cores ($T = 10 - 30$ K; \mbox{n$_{H_{2}}$}\ = $10^4 - 10^6$ \mbox{cm$^{-3}$} ).
The stage~1 pre--shock chemistry evolves until $t$ = 10$^6$ yr when the gas
is assumed to be shocked. The stage~2 shock dynamics
are modeled as an increase in the {\em gas} temperature which scales
with the shock velocity --- peak gas temperatures range
between 400 and $\sim$2000 K
for shock velocities, $v_s$, between 10 and 30 \ts {\rm km\ts s$^{-1}$}, respectively
(Kaufman \& Neufeld 1996). The high-temperature chemistry in the shock
is allowed to evolve for a cooling timescale ($\sim 100$ yr for a 30 \ts {\rm km\ts s$^{-1}$}\ shock;
see BMN), whereupon
the third stage commences with a return to quiescent temperature conditions.
We use the UMIST RATE95
database (Millar, Farquhar, \& Willacy 1997) for all computations
of non-deuterated molecules. For the deuterium chemistry we have created a smaller
network containing all the important species and reactions that lead to the
formation and destruction of HDO and OD.
The database was created using reactions taken from the literature
(Croswell \& Dalgarno 1985; Millar, Bennet, \& Herbst 1989;
Pineau des For\^{e}ts, Roueff, \& Flower 1989;
Rodgers \& Millar 1996). The principal reaction in
the deuterium chemistry is
$\rm{H_3^+ + HD \leftrightarrow H_2D^+ + H_2}$, for which
we have used the rate coefficients in Caselli et al. (1998).
We have adopted the \Hthreep\ electron recombination
rate and branching ratios given in Sundstrom et al. (1994) and Datz et al. (1995),
respectively.
The \HtwoDp\ recombination rate and branching ratios are from Larsson et al. (1996).
The deuterium chemical network
was tested against the larger network of Millar, Bennet, \& Herbst (1989) and
the degree of deuterium fractionation in \HtwoDp , OD, and HDO was found to be
in excellent agreement for the temperature ranges considered here.
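The temperature sensitivity of this reaction can be sketched from its exothermicity (the value $\Delta E / k_{B} \simeq 230$ K used here is the commonly adopted literature value, not a parameter of our network):
\[
 \mathrm{H_3^+ + HD \rightarrow H_2D^+ + H_2} + \Delta E,
 \qquad
 \frac{[\mathrm{H_2D^+}]}{[\mathrm{H_3^+}]}
 \sim \frac{[\mathrm{HD}]}{[\mathrm{H_2}]}\, e^{\Delta E / k_{B} T}
\]
if only the back reaction destroyed \HtwoDp ; in practice the enhancement saturates well below this estimate because dissociative recombination with electrons and reactions with CO and O also destroy \HtwoDp .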
As a corollary to the high temperature reactions which form \HtwoO\ in the
shocked gas (see Kaufman \& Neufeld 1996), we have included similar
reactions which form HDO. These reactions and rate coefficients are listed in
Table 1.
We have used the gas-grain adaptation of the UMIST network discussed in
Bergin \& Langer (1997). For the first and second stages we assume that
molecules are depleting onto bare silicate grains, while in the third stage the molecules
deplete onto a grain mantle dominated by solid \HtwoO .
This will increase the binding potential for all species by a factor of 1.47.
We have
used the measured binding energies of CO, CO$_2$, and \HtwoO\ to \HtwoO\ given
in Sandford \& Allamandola (1990).
The dust temperature, which is critically important for the rate of thermal evaporation,
is assumed to be equivalent to the gas temperature, except in
the shock (stage 2) where we artificially raise the temperature to 200 K to account for
the removal of the ice mantle due to sputtering or grain--grain collisions.
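The exponential sensitivity to the binding energy enters through the usual thermal-evaporation rate (a sketch; the characteristic vibrational frequency $\nu_{0}$ is a typical assumed value):
\[
 R_{\rm evap} \simeq \nu_{0}\, e^{- E_{b} / k_{B} T_{d}},
 \qquad \nu_{0} \sim 10^{12} - 10^{13} \ {\rm s^{-1}},
\]
so the factor of 1.47 increase in $E_{b}$ on \HtwoO -coated mantles suppresses $R_{\rm evap}$ by $e^{-0.47 E_{b} / k_{B} T_{d}}$, many orders of magnitude at $T_{d} \sim 10 - 30$ K.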
\section{Results}
Figure~1 presents the time evolution of abundances for the
3--stage gas--grain chemical model.
The peak gas-phase \HtwoO\ abundance is low
([\HtwoO ]/[\Htwo ] $= 3 \times 10^{-7}$) in the pre--shock gas, rises to $\sim 10^{-4}$
following the passage of a shock, and depletes
onto the grain surface at $t \simeq 10^{5}$ yr (for \mbox{n$_{H_{2}}$}\ = 10$^{5}$ \mbox{cm$^{-3}$} )
in the post--shock gas.
In the shock stage we assume $v_s = 20$ \ts {\rm km\ts s$^{-1}$} , which
will heat the gas to $\sim$1000 K; similar results would be found for any shock
velocity between $\sim$10 and 40 \ts {\rm km\ts s$^{-1}$} .
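The depletion time quoted above is consistent with the standard freeze-out estimate (a sketch; the grain cross section per H nucleus and the sticking coefficient $S$ are typical assumed values, not parameters of the BMN model):
\[
 t_{\rm dep} \simeq \left( n_{\rm gr}\, \sigma_{\rm gr}\, v_{\rm th}\, S \right)^{-1}
 \sim \frac{{\rm few} \times 10^{9}}{n_{\rm H} / {\rm cm^{-3}}} \ {\rm yr},
\]
which gives $t_{\rm dep} \sim 10^{4} - 10^{5}$ yr at \mbox{n$_{H_{2}}$}\ $= 10^{5}$ \mbox{cm$^{-3}$} .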
In the pre-shock quiescent stage we find significant levels of
D--fractionation: at $t =$ \mbox{10$^6$}\ yr we find [HDO]/[\HtwoO ] $\sim
10^{-3}$ in both the gas and solid phases.
Thus [HDO]/[\HtwoO ] $>$ [HD]/[\Htwo ] $= 2.8 \times 10^{-5}$,
which is the result of
low temperatures and low electron abundances favoring
production of \HtwoDp\ relative to \Hthreep . These enhancements are mirrored
in the daughter products of \HtwoDp , such as HDO.
In the shock itself HDO is released from the grain surface, leading
to an increase in its gas-phase abundance. Beyond this release, the HDO abundance
shows little change in stage 2 because
fractionation via the ion--molecule chemistry is halted at higher temperatures.
There is some production via the high--T reactions listed in Table~1;
however, these reactions do not increase
the fractionation significantly.
Therefore, the rapid hydrogenation of oxygen is not followed by a similar increase
in the HDO abundance and in the shock [HDO]/[\HtwoO ] $= 5 \times 10^{-5}$.
Pineau des For\^{e}ts et al. (1989) examined the D-chemistry in shocks
and found little change in the water D/H ratio. As noted by the authors,
their work did not examine the high temperatures required to
produce abundant \HtwoO\ and therefore did not probe this change.
In the post--shock gas (stage 3 in Figure~1) the abundance of gaseous HDO is
approximately constant with a small, but increasing, abundance frozen on the
grain surfaces. At later times, $t \sim 4 \times 10^{4}$ yr,
\HtwoO\ begins to deplete onto the grain surfaces, removing
the primary destruction pathway for \Hthreep\ and
\HtwoDp .
As the abundance of \HtwoDp\ ions increases, they
react directly with water forming HDO (along with OD and O), and further increase
the fractionation.
As shown in the top panels of Figure~1,
the abundance of \DCOp\ is quite high with [\DCOp ]/[\HCOp ]
$= 0.002$, but in the shock the abundances of both \DCOp\ and \HCOp\ decline
(due to reactions with \HtwoO ) and fractionation is reversed such that
[\DCOp ]/[\HCOp ] $\sim 2 \times 10^{-5}$ (see also Pineau des For\^{e}ts et al. 1989).
Thus, \DCOp\ should be an excellent tracer of quiescent gas in
star forming regions, as chemical theory predicts it will be destroyed in an outflow.
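The \DCOp\ fractionation is inherited from \HtwoDp\ through the standard deuteron-transfer reaction (a sketch; the statistical factor of $1/3$ assumes equal branching among the two protons and the deuteron):
\[
 \mathrm{H_2D^+ + CO \rightarrow DCO^+ + H_2},
 \qquad
 \frac{[\mathrm{DCO^+}]}{[\mathrm{HCO^+}]} \approx
 \frac{1}{3}\, \frac{[\mathrm{H_2D^+}]}{[\mathrm{H_3^+}]},
\]
so the reversal in the shock simply reflects the suppression of \HtwoDp\ at high temperature, together with the destruction of both ions by the abundant \HtwoO .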
In the first two stages active CO sublimation at $T_d = 30$ K suppresses the net CO depletion
onto bare silicate grains. In stage~3, at $\sim 10^5$
yr, the gaseous CO abundance declines because
we have assumed that in the third stage the grains are {\em a priori} coated
by water--ice, which increases the binding energy and hence exponentially
reduces the sublimation rate
compared to bare silicates (see \S 2). This leads to
a solid CO abundance
of [CO]$_{gr}$/[H$_2$] $= 8 \times 10^{-5}$ at $t = 10^7$ yr.
In addition, on similar timescales
the abundance of OH is enhanced due to the destruction of water molecules.
The reaction of OH with CO will form
CO$_2$, which readily depletes onto grain surfaces.
\section{Discussion}
\subsection{Comparison with Observations}
Observations of ices in the ISM and in comets have yielded limits on the
level of deuterium fractionation in water along with the quantity of \COtwo\
frozen {\em in the water matrix} (see Whittet et al.
1998).
One question is whether our model can simultaneously reproduce
{\em both} the observed [\COtwo ]/[\HtwoO ] and [HDO]/[\HtwoO ] ratios.
Equilibrium models of diffusion-limited
grain surface chemistry (i.e. chemistry that is limited by the depletion
rate onto the surface) can reproduce high D--fractionation
of water (Tielens 1983) and produce [\COtwo ]/[\HtwoO ] $\sim 0.01 - 10$\%
(Tielens \& Hagen 1982; Shalabiea, Caselli, \& Herbst 1998).
In Figure 2 we present the comparison of observations with our model predictions
as a function of time. To illustrate the
sensitivity of these results to our assumed gas and dust temperature of 30~K,
results for higher (40~K) and lower (25~K) gas and dust temperatures are also shown.
The small circles (with error bars) depict
HDO and \COtwo\ measurements for Comets Halley, Hyakutake, and Hale--Bopp,
the only comets for which all of these quantities have been determined.
\footnote{Note that while the three comets have similar [HDO]/[\HtwoO ]
ratios, Comet Hale-Bopp shows a larger [\COtwo ]/[\HtwoO ] ratio than
either Halley or Hyakutake. These differences are significant
relative to the observational uncertainties, but may be
an artifact of the various Sun-comet distances at which the
measurements were made: ISO observations of
\COtwo\ in Hale--Bopp were obtained when the comet was 2.9 AU from the Sun
while the other cometary measurements were made at a
Sun--comet distance of $\sim 1$ AU.
The lower binding energy of \COtwo, compared to
\HtwoO , could therefore lead to differences in the
out-gassing of these species at larger Sun--comet distances.
Interestingly, ISO observations of the short period Comet 103P/Hartley~2 at 1 AU from the Sun
also indicate high \COtwo\ abundances, but this comet does not have a measured D/H ratio
(Crovisier 1998). }
In the ISM there are no similar complementary measurements; HDO and \HtwoO\
have been observed in the gas phase in molecular hot cores, while
\COtwo\ ices have only been observed along single lines of sight
towards stars behind molecular clouds or embedded within them.
We have therefore represented these two independent
measurements for the ISM as a dotted box in Figure 2.
In hot cores, the gas--phase D--fractionation is believed to trace the solid (D/H) ratio
because the levels of deuterium fractionation are larger than expected for
pure gas--phase chemistry
evolving at the observed
temperatures (T $\sim 100 - 200$ K).
Thus, the observed HDO and \HtwoO\ are proposed to be ``fossilized''
remnants from
a previous low temperature phase, remnants that were frozen on the grains and evaporate
in the hot core (e.g.\ Gensheimer et al.\ 1996).
The contours of time-dependent abundance ratios for the range of dust temperatures
provided in Figure~2 nicely encompass the observations. For this model to be
applicable to interstellar ices, the initial gas clouds that collapsed and ultimately
formed the molecular hot cores
must have evolved at $T \sim 25 - 40$ K to account for
the observed fractionation.
To account for both the \COtwo\ and HDO production where \COtwo\ ices have
been observed, our model also suggests that the
quiescent clouds that formed hot cores
underwent one or more shocks within the past $10^6 - 10^7$ yr. This timescale is
consistent with the range of inferred shock timescales in molecular clouds
(Reipurth et al. 1998; BMN).
Over the past twenty years or more, there has been considerable
debate about whether comets are composed of relatively pristine
interstellar material or whether significant
processing has taken place within the proto--solar nebula (or indeed
whether comets represent a mixture of pristine and processed
material). The striking similarity between the composition of comets
and that of interstellar ices, and in particular the very similar
level of deuterium fractionation, might suggest that the
degree of processing in the proto--solar nebula is relatively small.
That the composition of interstellar ices was preserved during their passage into the
outer parts of the proto--solar nebula is perhaps unsurprising given
the relatively benign physical conditions; in particular, the
typical shock velocity during accretion onto the trans-Neptunian region
(where it is believed that the comets originally formed; e.g. Safronov 1969)
was only $\sim 3\,\rm km \, s^{-1}$ (e.g.\ Neufeld \& Hollenbach 1994),
too small to result in significant water production in the gas phase.
\subsection{Sensitivity to Physical Conditions}
Although our model assumes \mbox{n$_{H_{2}}$}\ $= 10^5$ \mbox{cm$^{-3}$} , the results are similar at
both higher and lower densities because
most chemical processes show the same dependence on density.
We adopt $A_V = 10$ mag in our models,
which are therefore only applicable in regions where the ultraviolet field is
heavily attenuated.
Variations in the dust temperature can affect our results, and our model
can only create ice mantles at dust temperatures
greater than 25 K.
Below 25~K, \Otwo\ remains
frozen on the grain surface in the pre--shock quiescent stage,
leaving little free gas--phase O
for shock chemistry to form \HtwoO\ in abundance.
Under these conditions a problem arises in accounting for the high abundance
of water--ice ($\sim 10^{-4}$; Schutte 1998)
observed in the cold foreground gas towards Elias 16.
However, this limitation can be mitigated by adopting
a lower density, \mbox{n$_{H_{2}}$}\ $\leq$ 10$^4$ \mbox{cm$^{-3}$} , as the reduced depletion rate results in
a smaller and less constraining \Otwo\ abundance on grains.
One unresolved issue concerns the abundance of CO ice predicted
by our model,
which is quite high ($\sim 70$\% relative to \HtwoO ),
larger than observed in the ISM
or in comets, where abundances are generally $< 20$\% (e.g.
Mumma 1997).
However, if we consider a single CO molecule approaching a surface with
nearly equal amounts of CO and \HtwoO , it is difficult to imagine that CO will
always form a (stronger) physisorbed bond with \HtwoO\ rather than a (weaker)
bond to another CO molecule. In laboratory experiments
where CO and \HtwoO\ (with CO/\HtwoO\ $= 0.5$) are co-deposited onto a metal substrate
at 10 K, the
evaporation rate of amorphous CO is dominated by the vapor pressure of pure
CO and is not affected by the presence of \HtwoO\ (Kouchi 1990).
Although the sublimation of pure CO is not included in our model (we include
sublimation of CO embedded in a water mantle), in reality some co--deposited
CO will evaporate off the grain mantle, provided that the dust temperature exceeds
17 K, the sublimation temperature of pure CO ice.
The large abundance of frozen CO predicted by our model (under the assumption
that CO sublimation is negligible) may therefore be unrealistic and
should be regarded as an upper limit.
\section{Conclusion}
In this Letter we have demonstrated that in well shielded regions
the gas phase chemistry after the passage
of a shock is capable of reproducing the observed abundance of frozen water,
the HDO/\HtwoO\ ratio, and the abundance of \COtwo\ ice.
Thus, the creation of grain mantles behind shock waves is a viable alternative to the
production of ices through grain surface chemistry. We note that this mechanism does not
preclude grain surface chemistry, and indeed it may act as a supplement.
A future extension of our model will be to determine
whether shock models can account for the enhanced abundances of
methanol observed in shocked regions relative to surrounding gas (e.g.
Bachiller et al.\ 1995). Such an extension of our model might explain the formation of
methanol ices -- an observed constituent of interstellar
grain mantles -- as arising from the
``freeze-out'' of \CHthreeOH\ gas produced by gas phase chemistry
in the warm shocked gas.
Other tests could involve an
examination of other proposed outflow chemistry tracers (e.g. SO) and
an expanded examination of the deuterium chemistry, including species such
as DCN, HDCO, and D$_2$CO.
It has previously been suggested that the high degree of deuterium fractionation
observed in cold cloud cores can explain the observed deuterium fractionation in
cometary ices (e.g. Geiss \& Reeves 1981). However, our results indicate that shock waves
are a critical ingredient in the production of ices from gas phase chemistry.
Ion--molecule chemistry in quiescent gas can reproduce the degree of
deuterium fractionation observed in molecular
hot cores; however, it cannot reproduce the observed
abundance of water--ice (Jones \& Williams 1984; Bergin, Langer, \& Goldsmith 1995).
In our model, the abundance of water--ice in the pre--shock gas at 10$^{6}$ yr
is $< 10^{-5}$, much less than the value estimated
along the line of sight towards Elias 16 ($\sim 10^{-4}$; Schutte 1998).
At \mbox{n$_{H_{2}}$}\ = 10$^{5}$ \mbox{cm$^{-3}$}, the gas--phase chemistry could, with
time, eventually approach the measured abundance. However, the molecular ice
absorption observations are taken along lines of sight that likely
have lower densities,
which makes reproducing the observed abundances more difficult.
Thus, either grain surface chemistry, creation of ice mantles behind shock waves, or
other unknown theories, are necessary to account for the presence of
molecular ices.
We are grateful to A. Dalgarno, N. Balakrishnan, and D. Clary for discussions on
high temperature reactions for deuterium--bearing molecules. E.A.B. is also
grateful to J. Crovisier for discussions of cometary ice abundances.
E.A.B. acknowledges support from NASA's SWAS contract to the
Smithsonian Institution (NAS5-30702); and
D.A.N. acknowledges the support
of the Smithsonian subcontract SV-62005 from the SWAS program.
\section{Introduction and Summary}
\label{intro}
The Bose-Einstein condensation (BEC) has been observed
in various systems \cite{bec93},
including liquid helium \cite{He4},
excitons in photo excited semiconductors \cite{BECexciton},
and atoms trapped by laser beams \cite{BECatom1,BECatom2,BECatom3}.
Although BEC was originally discussed
for free bosons,
a condensate of free bosons does not have
the superfluidity \cite{bec93},
hence many-body interactions are essential to interesting behaviors
of condensates.
The condensed state of interacting bosons in a box of finite volume $V$
is conventionally taken as
the state in the Bogoliubov approximation,
which we denote by $| \alpha_0, {\bf y}^{cl} \rangle^{cl}$
[see Refs.\ \cite{Noz,popov} and Eq.\ (\ref{GScl}) below].
In this state,
the boson number $N$ has finite fluctuation,
whose magnitude is
$\langle \delta N^2 \rangle \gtrsim \langle N \rangle$
[Eq.\ (\ref{dN2cl})].
This fluctuation is non-negligible in small systems,
such as Helium atoms in a micro bubble \cite{He4bubble} and
laser-trapped atoms \cite{BECatom1,BECatom2,BECatom3}, where
$\langle N \rangle$ is typically $10^3 - 10^6$,
and thus
$
\sqrt{\langle \delta N^2 \rangle} / \langle N \rangle
=
3 - 0.1 \%
$.
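The quoted range follows from $\langle \delta N^2 \rangle \sim \langle N \rangle$, since then
\[
 \sqrt{\langle \delta N^2 \rangle} / \langle N \rangle
 \sim 1 / \sqrt{\langle N \rangle},
 \qquad
 1 / \sqrt{10^{3}} \approx 3 \%,
 \quad
 1 / \sqrt{10^{6}} = 0.1 \%.
\]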
This means that
in such systems
if one fixes $N$ with an accuracy better than $3 - 0.1 \%$,
then $\langle \delta N^2 \rangle < \langle N \rangle$, thus
the state $| \alpha_0, {\bf y}^{cl} \rangle^{cl}$ is forbidden,
and another state should be realized.
In most real systems, there is a finite probability of
exchanging bosons between the box and the environment.
Hence, if one fixes $N$ at some time,
$N$ will fluctuate at later times.
Namely, the boson state undergoes a nonequilibrium
time evolution when its number fluctuation is initially suppressed.
The purpose of this paper is to investigate
the time evolution of the interacting bosons in such a case,
and to discuss how an order parameter is developed.
We first review and discuss the case where the box is closed
and the boson number $N$ is exactly fixed (section \ref{sec_pre}).
The ground-state wavefunction of such a case
may be obtained
by the superposition of
Bogoliubov's solution $| \alpha_0, {\bf y}^{cl} \rangle^{cl}$
over various values of the phase of $\alpha_0$
[Eq.\ (\ref{inverse_tr_cl})].
The resulting state
$| N, {\bf y}^{cl} \rangle^{cl}$
has the same energy as
$| \alpha_0, {\bf y}^{cl} \rangle^{cl}$
because of the degeneracy with respect to the phase of $\alpha_0$.
(This degeneracy leads to the symmetry breaking.)
However, such an expression is not convenient for the
analysis of physical properties.
To find the ground state
in the form that is convenient for analyzing
physical properties,
we derive an effective Hamiltonian
$\hat H$ [Eq.\ (\ref{H})], which includes quantum fluctuations of all modes
including ${\bf k} = 0$,
from the full Hamiltonian
$\hat H_B$ of interacting bosons.
Here, we neglect effects due to spatial inhomogeneity
of the boson states in the box,
because we are not interested in such effects here,
and also because we expect that the main physics of the nonequilibrium evolution
of our interest would not be affected by such effects.
A renormalization constant $Z_g$ appears in $\hat H$.
Although $Z_g$ formally diverges [Eq.\ (\ref{Zg})],
the divergence is successfully
renormalized, i.e., the final results are independent of $Z_g$
and finite.
As the ground state of $\hat H$,
we use a
variational form ${| N, {\bf y} \rangle}$,
which is similar to that of Girardeau and Arnowitt \cite{GA}.
This form takes a compact form
[Eq.\ (\ref{GS in terms of a})]: it is obtained
by operating a simple unitary operator $e^{i G({\bf y})}$ on the
$N$-particle state of free bosons,
where $G({\bf y})$ is a simple bi-quadratic function of the bare
operators [Eq.\ (\ref{G})].
This state has the same energy as
$| \alpha_0, {\bf y}^{cl} \rangle^{cl}$, or, equivalently,
$| N, {\bf y}^{cl} \rangle^{cl}$.
(Precisely speaking, they have the same energy
density in the macroscopic limit, i.e., when $V \to \infty$ while
keeping the density $n$ finite.)
Using the unitary operator $e^{i G({\bf y})}$,
we then identify
a ``natural coordinate'' ${\hat b_0}$
[Eq.\ (\ref{bz})] of the interacting bosons,
by which many physical properties
can be simply described (section \ref{sec_natural}).
Unlike the quasi-particle operators obtained by
the Bogoliubov transformation,
${\hat b_0}$ is a nonlinear function of bare operators.
Moreover, the Hamiltonian is {\em not} diagonal with respect to
${\hat b_0}$.
Such a nonlinear operator, however,
describes the physics very simply.
For example,
${| N, {\bf y} \rangle}$ is simply represented as
a number state of $\hat b_0$.
We thus call ${| N, {\bf y} \rangle}$ the ``number state of
interacting bosons'' (NSIB).
We can also define, through ${\hat b_0}$, the cosine and sine operators
for interacting bosons [see below].
Moreover, using ${\hat b_0}$, we decompose the boson field $\hat \psi$
into two parts [Eq.\ (\ref{decomposition of psi})]:
one behaves anomalously as $V \to \infty$
and the other is a normal part.
In the decomposition,
the non-unity ($|Z| < 1$) renormalization constant $Z$
(which should not be confused with $Z_g$ appeared in the Hamiltonian)
is correctly
obtained.
This decomposition formula turns out to be
extremely useful in the following
analyses.
Using these results,
we study the nonequilibrium time evolution of interacting
bosons in a leaky box (section \ref{sec_evolution}).
The time evolution is induced if one fixes $N$ at some time
(thus the boson state at that time is
the NSIB),
because in most real systems there is a finite probability of
exchanging bosons between the box and the environment,
hence $N$ will fluctuate at later times.
We simulate this situation by the following gedanken experiment:
At some time $t<0$
one confines {\em exactly} $N$ bosons in a box of
volume $V$,
and at $t=0$
a small hole is made in the box,
so that a small leakage flux $J$ of the bosons is induced.
We concentrate on the analysis of
the most interesting and nontrivial
time stage:
the {\em early time stage} for
which $Jt \ll N$, because it is clear that at later times
the system approaches the equilibrium state.
We are interested in
the reduced density operator of the bosons in the box;
$
\hat \rho (t)
\equiv
{\rm Tr^E} [ \hat \rho^{total}(t) ]
$,
where
$\hat \rho^{total}(t)$ denotes the density operator of the total
system, and
${\rm Tr^E}$ the trace operation over environment's
degrees of freedom.
We successfully evaluate the time evolution of
$
\hat \rho (t)
$
by a method which is equivalent to solving the master equation.
Our method gives a physical picture more clearly than
the master-equation method.
We obtain
$
\hat \rho (t)
$
in a general form in which all the
details of the box-environment interaction $\hat H^{ES}$ have been
absorbed in the magnitude of the leakage flux $J$.
We show that
the time evolution can be
described very simply in terms of ${\hat b_0}$,
as the evolution from a single NSIB at $t < 0$,
into a classical mixture, with a time dependent distribution, of
NSIBs of various values of $N$ at $t > 0$
[Eq.\ (\ref{rho Poisson})].
We then discuss the phase $\phi$ as a variable approximately conjugate to
the number $N$ (section \ref{sec_number vs phase}).
To treat the quantum phase properly, we consider the sine
and cosine operators, $\hat {\sin \phi}$ and $\hat {\cos \phi}$.
It is generally very difficult to define such operators
for interacting many-particle systems.
Fortunately, however, in terms of the natural coordinate $\hat b_0$
we successfully define
$\hat {\sin \phi}$ and $\hat {\cos \phi}$ for interacting bosons,
using which we can analyze the phase property
in a fully quantum-mechanical manner.
We define a ``coherent state of interacting bosons''
(CSIB) [Eq.\ (\ref{cs})],
which, unlike Bogoliubov's ground state
$| \alpha_0, {\bf y}^{cl} \rangle^{cl}$,
{\em exactly} has the minimum value of the
number-phase uncertainty product [Eq.\ (\ref{NPUP_CSIB})].
We also define a new state
${| \xi, N, {\bf y} \rangle}$ [Eq.\ (\ref{NPIB})],
which we call
the ``number-phase squeezed state of interacting bosons'' (NPIB),
which has a smaller phase fluctuation than the CSIB, while
keeping the number-phase uncertainty product
minimum [Eq.\ (\ref{NPUP_NPIB})].
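For reference, the number--phase uncertainty product referred to here is the standard bound formulated with the sine and cosine operators (a sketch of the Carruthers--Nieto form, written in the notation of the operators defined above):
\[
 \langle \delta N^2 \rangle^{1/2}\,
 \langle \delta ( \hat {\sin \phi} )^2 \rangle^{1/2}
 \geq \frac{1}{2} \left| \langle \hat {\cos \phi} \rangle \right|,
\]
with equality attained by the minimum-uncertainty states, such as the CSIB and the NPIB described above.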
We point out that
$\hat \rho(t)$ for $t > 0$
can be represented as
the phase-randomized mixture (PRM) of NPIBs.
Among many possible representations of $\hat \rho(t)$,
this representation is particularly convenient
for analyzing the phase fluctuations and the order parameter.
We also discuss the action of the measurements (or,
their equivalents) of $N$ and of $\phi$
(section \ref{sec_action of meas.}).
The forms of $\hat \rho(t)$ after such measurements
are discussed.
As an example of the phase measurement,
we discuss an interference experiment of
two condensates which are prepared independently in
two boxes.
It was already established for {\em non-interacting} bosons
that the interference pattern is developed for {\em each}
experimental run (although the interference
disappears in the {\em average} over many runs)
\cite{theory1,theory2,theory3}.
Using our formula for interacting bosons,
we show that the same conclusion is drawn very clearly and naturally
for {\em interacting} bosons.
We finally consider the order parameter according to
a few typical definitions, as well as their time
evolution (section \ref{sec_OP}).
We show that
the off-diagonal long-range order (ODLRO)
does not distinguish
NSIB, NPIB and CSIB.
Hence, the order parameter $\Xi$ defined
from ODLRO [Eq.\ (\ref{Xi})],
does not distinguish them either.
On the other hand,
the other order parameter $\Psi$, defined as the expectation value
of the boson operator $\hat \psi$,
has different values among these states.
In particular, for each element of
the PRM of NPIBs, we show that
$\Psi$ evolves from
zero to a finite value very quickly:
After the leakage of only two or three bosons,
each element acquires a
full, stable and definite (non-fluctuating) value of $\Psi$.
\section{Preliminaries}
\label{sec_pre}
\subsection{Phase transitions in finite systems}
\label{sec_PT}
We consider the phase transition of
interacting bosons confined in a large, but {\em finite} box
of volume $V$.
Phase transitions are usually discussed in systems
with an {\em infinite} volume
(or, the $V \to \infty$ limit is taken at the
end of the calculation), because
infinitely many degrees of freedom
are necessary in the relevant energy scale
for {\em strict} transitions \cite{gold}.
In such a case,
we must select a single physical Hilbert space among
many possibilities,
a selection that corresponds to a strict phase transition.
However, phase transitions do occur
even in systems of finite $V$
in the sense that a single phase lasts longer than the time of observation
if its linear dimension exceeds the correlation length
at the temperature of interest
\cite{gold}.
Hence, it is physically interesting and important to
explore phase transitions in finite systems.
Because of the finiteness of $V$
(and the fact that the interaction potential $U$ is well-behaved),
the von Neumann's uniqueness theorem can be applied.
This allows us to
develop a theory in a unique Hilbert space.
However, since $V$ is large,
some quantities, which become anomalous in the
limit of $V \to \infty$ due to a strict phase transition,
behave quasi-anomalously.
In later sections, we shall identify such a quasi-anomalous
operator, and discuss how an order parameter is
developed.
\subsection{Effective Hamiltonian}
\label{sec_H}
We start from the standard Hamiltonian for interacting bosons
confined in a large, but finite box of volume $V$:
\begin{equation}
\hat H_B
=
\int_V d^3 r
\hat \psi^\dagger({\bf r})
\left(- {\hbar^2 \over 2m} \nabla^2 \right)
\hat \psi({\bf r})
+
{1 \over 2}
\int_V d^3 r
\int_V d^3 r'
\hat \psi^\dagger({\bf r})
\hat \psi^\dagger({\bf r'})
U({\bf r} - {\bf r'})
\hat \psi({\bf r'})
\hat \psi({\bf r}).
\label{starting H}\end{equation}
Here, we neglect a confining potential of the box
because in the present work we are not interested in its effects
such as the spatial inhomogeneity of the boson states in the box,
and also because we expect that the main physics of the nonequilibrium evolution
of our interest would not be affected by such effects.
(Mathematically, our model under the periodic boundary condition
assumes bosons confined in a three-dimensional torus.)
The ${\bf r}$ dependence of the boson field
$\hat \psi({\bf r})$ (in the Schr\"odinger picture)
can be expanded in terms of plane waves as
\begin{equation}
\hat \psi({\bf r})
=
{1 \over \sqrt{V}} \hat a_0
+
\sum_{{\bf k} \neq {\bf 0}} {e^{i {\bf k} \cdot {\bf r}} \over \sqrt{V}}
\hat a_{\bf k},
\label{psi in terms of a}\end{equation}
where
$\hat a_{\bf p}$ and $\hat a_{\bf p}^\dagger$
are called the annihilation and creation operators,
respectively, of bare bosons.
The total number of bosons is given by
\begin{equation}
\hat N
\equiv
\int_V d^3 r\, \hat \psi^\dagger ({\bf r}) \hat \psi({\bf r})
=
\sum_{\bf k} \hat a_{\bf k}^\dagger \hat a_{\bf k}.
\end{equation}
We assume zero temperature, and consider the case
where the interaction is weak and repulsive [Eq.\ (\ref{dilute}) below],
and where the boson density $n$ is finite
(hence, since $V$ is large, $N \gg 1$):
\begin{equation}
n \equiv \langle N \rangle /V > 0.
\label{density}\end{equation}
In such a case, BEC occurs and
typical matrix elements of
$\hat a_0$, $\hat a_0^\dagger$ and $\hat N$
are huge, whereas those of
$\hat a_{\bf k}$ and $\hat a_{\bf k}^\dagger$ (with ${\bf k} \neq {\bf 0}$) are small.
Taking up to the second order terms in these small quantities,
and using the identity,
$
\hat N
=
\hat a_0^\dagger \hat a_0
+
\sum_{{\bf k} \neq {\bf 0}} \hat a_{\bf k}^\dagger \hat a_{\bf k},
$
we obtain the effective Hamiltonian $\hat H$ in the following form.
\begin{equation}
\hat H
=
g (1+Z_g) {\hat N^2 \over 2 V}
- g (1+Z_g){1 \over 2 V} \hat a_0^\dagger \hat a_0
+
\sum_{{\bf k} \neq {\bf 0}}
\left(
\epsilon_k^{(0)}
+ g \frac{\hat N}{V}
\right)
\hat a_{\bf k}^\dagger \hat a_{\bf k}
+
\left[
{g \over 2V} \hat a_0 \hat a_0
\sum_{{\bf k} \neq {\bf 0}} \hat a_{\bf k}^\dagger \hat a_{-{\bf k}}^\dagger
+ {\rm h.c.}
\right].
\label{Horg}\end{equation}
Here, $\epsilon^{(0)}_k$ denotes the free-particle energy,
$
\epsilon^{(0)}_k
\equiv
{\hbar^2 k^2 / 2 m},
$
and $g$ is an effective interaction constant
defined by
\begin{equation}
g \equiv {4 \pi \hbar^2 a \over m}.
\end{equation}
Here, $a$ is the scattering length,
and
$Z_g$ is the first-order ``renormalization constant''
for the scattering amplitude \cite{LP};
\begin{equation}
Z_g
\equiv
{g \over 2 V} \sum_{{\bf k} \neq {\bf 0}} {1 \over \epsilon^{(0)}_k}.
\label{Zg}\end{equation}
The formal divergence of the sum in Eq.\ (\ref{Zg}) does not matter because
the final results are independent of $Z_g$ \cite{LP},
hence the renormalization is successful.
We have assumed that
\begin{equation}
0 < n a^3 \ll 1,
\label{dilute}\end{equation}
under which the approximation $\hat H_B \approx \hat H$ is good.
We have confirmed by explicit calculations
that the term
$
- \{ g (1+Z_g)/2 V \} \hat a_0^\dagger \hat a_0
$
in Eq.\ (\ref{Horg}) gives only negligible contributions
in the following analysis.
We thus drop it henceforth:
\begin{equation}
\hat H
=
g (1+Z_g) {\hat N^2 \over 2 V}
+
\sum_{{\bf k} \neq {\bf 0}}
\left(
\epsilon_k^{(0)}
+ g \frac{\hat N}{V}
\right)
\hat a_{\bf k}^\dagger \hat a_{\bf k}
+
\left[
{g \over 2V} \hat a_0 \hat a_0
\sum_{{\bf k} \neq {\bf 0}} \hat a_{\bf k}^\dagger \hat a_{-{\bf k}}^\dagger
+ {\rm h.c.}
\right].
\label{H}\end{equation}
Since this $\hat H$ commutes with $\hat N$,
we can in principle find its eigenstates for which $N$ is
exactly fixed.
In each subspace of fixed $N$,
$\hat H$ is equivalent to
\begin{equation}
\hat H(N)
\equiv
{1 \over 2} g (1+Z_g) n^2 V
+
\sum_{{\bf k} \neq {\bf 0}} \epsilon'_k \hat a_{\bf k}^\dagger \hat a_{\bf k}
+
\left[
{g \over 2V} \hat a_0 \hat a_0
\sum_{{\bf k} \neq {\bf 0}} \hat a_{\bf k}^\dagger \hat a_{-{\bf k}}^\dagger
+ {\rm h.c.}
\right],
\label{HN}\end{equation}
where
\begin{equation}
\epsilon'_k
\equiv
\epsilon_k^{(0)} + g n.
\label{ek}\end{equation}
Note that if we regard
$\hat a_0$ in $\hat H(N)$
as a classical complex number\cite{root}
\begin{equation}
\hat a_0
\ \to \
e^{i \phi} \sqrt{N_0} \equiv \alpha_0,
\end{equation}
and $\hat a_0^\dagger$ as $\alpha_0^*$,
we would then obtain
the ``semiclassical'' Hamiltonian $\hat H^{cl}$ as
\begin{equation}
\hat H^{cl}
=
{1 \over 2} g (1+Z_g) n^2 V
+
\sum_{{\bf k} \neq {\bf 0}} \epsilon'_k \hat a_{\bf k}^\dagger \hat a_{\bf k}
+
\left(
{1 \over 2} g n e^{2 i \phi}
\sum_{{\bf k} \neq {\bf 0}} \hat a_{\bf k}^\dagger \hat a_{-{\bf k}}^\dagger
+ {\rm h.c.}
\right),
\label{Hcl}\end{equation}
where
we have replaced $N_0$ with $N$ in the last parenthesis
because the replacement only gives corrections
of higher order in $g$.
This Hamiltonian can be diagonalized exactly (see, e.g., Ref.\ \cite{LP},
in which $\phi = 0$).
We shall utilize this fact later to find the ground state of $\hat H$.
\subsection{Known results for non-interacting bosons}
When $g=0$, the ground state of $N$ {\em free} bosons
is simply a number state,
\begin{equation}
| N \rangle
\equiv
{1 \over \sqrt{N!}} (\hat a_0^\dagger)^N | 0 \rangle,
\end{equation}
where $| 0 \rangle$ denotes
the vacuum of the bare operators;
\begin{equation}
\hat a_{\bf k} | 0 \rangle =0 \quad \mbox{for all ${\bf k}$}.
\label{vac of a}\end{equation}
The energy of $| N \rangle$,
\begin{equation}
E_{N}
=
N \epsilon_0^{(0)}
= 0,
\end{equation}
is degenerate with respect to $N$. Hence,
any superposition of $| N \rangle$ is also a ground state.
For example, the coherent state
\begin{equation}
| \alpha \rangle
\equiv
e^{-|\alpha|^2/2}
\sum_{N =0}^\infty
\frac{\alpha^N}{\sqrt{N!}} | N \rangle
=
e^{-|\alpha|^2/2} e^{\alpha \hat a_0^\dagger} | 0 \rangle,
\end{equation}
where
$
\alpha \equiv e^{i \phi} \sqrt{N},
$
is also a ground state that has the same expectation value of
$\hat N$ as $|N \rangle$.
On the other hand,
$| \alpha \rangle$ has a finite fluctuation of $N$,
\begin{equation}
\langle \delta N^2 \rangle
\equiv
\langle (\hat N - \langle \hat N \rangle )^2 \rangle
=
|\alpha|^2
=
\langle N \rangle,
\end{equation}
whereas $|N \rangle$ has a definite $N$.
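The Poissonian number statistics of $|\alpha\rangle$ can be checked directly from $P(N) = e^{-|\alpha|^2}|\alpha|^{2N}/N!$; a minimal numerical sketch (the value of $|\alpha|^2$ is illustrative):

```python
import math

# Number distribution of the coherent state: P(N) = e^{-nbar} nbar^N / N!,
# built iteratively to avoid overflow in N!.  Both the mean and the
# variance of N should equal nbar = |alpha|^2.
nbar = 25.0
P = [math.exp(-nbar)]
for N in range(1, 200):
    P.append(P[-1] * nbar / N)

mean = sum(N * p for N, p in enumerate(P))
var  = sum((N - mean) ** 2 * p for N, p in enumerate(P))
print(mean, var)   # both ~25
```

This is the $\langle \delta N^2 \rangle = \langle N \rangle$ statement above, obtained numerically rather than operator-algebraically.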
The inverse transformation from
$| \alpha \rangle$ to $|N \rangle$ can be accomplished, up to normalization, as
\begin{equation}
| N \rangle
\propto
\int_{-\pi}^{\pi} \frac{d \phi}{2 \pi}\,
e^{-i N \phi}\,
| \alpha \rangle.
\label{inverse_tr_free}\end{equation}
\subsection{Known results for $\hat H^{cl}$}
\label{known1}
Neither
$|N \rangle$ nor $| \alpha \rangle$ is an eigenstate when $g > 0$.
If we can regard
$\hat a_0$ and $\hat a_0^\dagger$ as classical numbers
$\alpha_0$ ($= e^{i \phi} \sqrt{N_0}$)
and $\alpha_0^*$, respectively,
we can use $\hat H^{cl}$ as the Hamiltonian, and
its ground state was given by Bogoliubov as \cite{bec93,Noz,popov}
\begin{equation}
| \alpha_0, {\bf y}^{cl} \rangle^{cl}
\equiv
\exp \left[
\left( \alpha_0 \hat a_0^\dagger
- {1 \over 2}
\sum_{{\bf q} \neq {\bf 0}} y_q^{cl *} \hat a_{\bf q}^\dagger \hat a_{-{\bf q}}^\dagger
\right)- {\rm h.c.}
\right]
| 0 \rangle,
\label{GScl}\end{equation}
where
\begin{eqnarray}
y_q^{cl}
&=&
|y_q^{cl}|e^{-2 i \phi},
\label{ycl}\\
\cosh |y_q^{cl}|
&=&
\sqrt{
\epsilon_q + \epsilon^{(0)}_q + gn
\over
2 \epsilon_q
},
\label{cosh y cl}\\
\sinh |y_q^{cl}|
&=&
{
gn
\over
\sqrt{2 \epsilon_q ( \epsilon_q + \epsilon^{(0)}_q + gn)}
}.
\label{sinh y cl}\end{eqnarray}
Here, $\epsilon_q$ is the quasi-particle energy,
\begin{equation}
\epsilon_q
\equiv
\sqrt{
\epsilon^{(0)}_q (\epsilon^{(0)}_q + 2 gn)
},
\end{equation}
whose dispersion is linear (phonon-like) for
$\epsilon^{(0)}_q \ll gn$;
\begin{equation}
\epsilon_q
\approx
\hbar \sqrt{gn \over m}\, |q|.
\end{equation}
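As a quick consistency check (illustrative, in units $\hbar = m = gn = 1$; not part of the original analysis), the coefficients of Eqs.\ (\ref{cosh y cl}) and (\ref{sinh y cl}) obey the bosonic squeezing normalization $\cosh^2|y_q^{cl}| - \sinh^2|y_q^{cl}| = 1$, and $\epsilon_q/q$ tends to the sound velocity at small $q$:

```python
import math

# Illustrative check in units hbar = m = g n = 1, so eps0(q) = q^2 / 2.
# (i) cosh^2|y| - sinh^2|y| = 1 for the Bogoliubov coefficients;
# (ii) eps_q / q -> sqrt(g n / m) = 1 as q -> 0 (phonon limit).
def eps0(q):
    return 0.5 * q * q

def eps(q):
    return math.sqrt(eps0(q) * (eps0(q) + 2.0))

for q in (0.01, 0.1, 1.0, 10.0):
    c = math.sqrt((eps(q) + eps0(q) + 1.0) / (2.0 * eps(q)))
    s = 1.0 / math.sqrt(2.0 * eps(q) * (eps(q) + eps0(q) + 1.0))
    assert abs(c * c - s * s - 1.0) < 1e-9   # squeezing normalization

sound = eps(1e-3) / 1e-3
print(sound)   # ~1: phonon-like dispersion eps_q ~ hbar q sqrt(g n / m)
```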
The ground state energy is calculated as \cite{LP}
\begin{eqnarray}
E_{\alpha, {\bf y}^{cl}}^{cl}
&=&
{1 \over 2} g n N
\left( 1 + {128 \over 15} \sqrt{n a^3 \over \pi} \right)
\label{Ecl}\\
&=&
{1 \over 2} g n N
+ O(g^{2.5}).
\end{eqnarray}
The absence of the $O(g^2)$ term in
$E_{\alpha, {\bf y}^{cl}}^{cl}$ means that
the large (formally divergent because $Z_g \to \infty$)
positive energy $g Z_g n N/2$ of $\hat H^{cl}$
is canceled by a large negative term arising from
the pair correlations.
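The stated order can also be seen by a one-line scaling argument: since $a \propto g$ at fixed $m$, the correction term in Eq.\ (\ref{Ecl}) scales as $g\,(a^3)^{1/2} \propto g^{5/2}$. A numerical sketch in illustrative units (all values assumed, not physical):

```python
import math

hbar = m = n = 1.0   # illustrative units, not physical values

def correction_per_particle(g):
    # Beyond-mean-field part of Eq. (Ecl): (1/2) g n * (128/15) sqrt(n a^3 / pi)
    a = m * g / (4.0 * math.pi * hbar ** 2)   # invert g = 4 pi hbar^2 a / m
    return 0.5 * g * n * (128.0 / 15.0) * math.sqrt(n * a ** 3 / math.pi)

ratio = correction_per_particle(2e-3) / correction_per_particle(1e-3)
print(ratio, 2.0 ** 2.5)   # equal: the correction is O(g^{5/2})
```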
The expectation value and variance of $\hat N$ for
$| \alpha_0, {\bf y}^{cl} \rangle^{cl}$ are evaluated as
\begin{eqnarray}
\langle N \rangle^{cl}
&=&
N_0 + \sum_{{\bf q} \neq {\bf 0}} (\sinh |y_q^{cl}|)^2,
\label{Ncl}\\
\langle \delta N^2 \rangle^{cl}
&=&
\langle N \rangle^{cl}
+
\sum_{{\bf q} \neq {\bf 0}} (\sinh |y_q^{cl}|)^4
\label{dN2cl}.
\end{eqnarray}
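The depletion sum in Eq.\ (\ref{Ncl}) can be checked against the standard Bogoliubov result in the macroscopic limit. Converting $\sum_{\bf q}$ to $V\!\int d^3q/(2\pi)^3$ and writing $\epsilon^{(0)}_q = t^2\, gn$ reduces the check to a one-dimensional integral; a quadrature sketch (the reduction itself is standard Bogoliubov theory, not derived in the text above):

```python
import math

# With eps0 = t^2 * g n, the depleted density becomes
#   n' = (4 sqrt(2)/sqrt(pi)) n sqrt(n a^3) * I,   I = int_0^inf f(t) dt,
# so the textbook result n' = (8/(3 sqrt(pi))) n sqrt(n a^3) requires
# I = sqrt(2)/3.  We check this by simple trapezoidal quadrature.
def f(t):
    r = math.sqrt(t * t + 2.0)   # eps_q / (g n) = t * r in these variables
    return t / (r * (t * t + 1.0 + t * r))

T, dt = 1000.0, 1e-3
N = int(T / dt)
I = dt * (0.5 * (f(0.0) + f(T)) + sum(f(k * dt) for k in range(1, N)))
I += 1.0 / (2.0 * T)             # analytic tail: f(t) ~ 1/(2 t^2)

print(I, math.sqrt(2.0) / 3.0)   # ~0.4714 each
```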
\subsection{Ground state of a fixed number of bosons}
\label{sec_GS}
In analogy to Eq.\ (\ref{inverse_tr_free}),
it is possible to construct
an approximate ground state of $\hat H(N)$
from $| \alpha_0, {\bf y}^{cl} \rangle^{cl}$, up to normalization, as
\begin{equation}
| N, {\bf y}^{cl} \rangle^{cl}
\propto
\int_{-\pi}^{\pi} \frac{d \phi}{2 \pi}\,
e^{-i N \phi}\,
| \alpha_0, {\bf y}^{cl} \rangle^{cl},
\label{inverse_tr_cl}\end{equation}
where $N \equiv \langle N \rangle^{cl}$, Eq.\ (\ref{Ncl}).
We can obtain an explicit form of
$| N, {\bf y}^{cl} \rangle^{cl}$ in the form of
an infinite series expansion,
by inserting Eq.\ (\ref{GScl}) into the right-hand side (rhs) of
Eq.\ (\ref{inverse_tr_cl}), expanding the exponential function,
and performing the $\phi$ integral.
However, such an expression is not convenient for the
analysis of physical properties.
Note that in many-particle physics,
it is generally difficult to evaluate physical properties
even if the wavefunction is known.
It is therefore essential to find the ground state
in the form that is convenient for analyzing
physical properties.
Several formulations were developed for the condensation of interacting
bosons with a fixed $N$.
Lifshitz and Pitaevskii \cite{LP} developed a {\em formal} discussion
for the case of fixed $N$.
However, they did not treat $\hat a_0$ as an operator, hence
$\hat N$ was {\em not} conserved.
For example, $\hat b_p^\dagger |m,N \rangle$
(in their notation)
did not have exactly $N+1$ bosons.
To treat interacting bosons with fixed $N$ more accurately,
one has to include quantum fluctuations of all modes
(including ${\bf k}={\bf 0}$), by treating
$\hat a_0$ as an operator.
Such treatment was developed, for example, by
Girardeau and Arnowitt \cite{GA},
Gardiner \cite{gardiner}, and Castin and Dum \cite{castin}.
A variational form was proposed in Ref.\ \cite{GA}
for the wavefunction of the ground state.
The variational form takes account of
{\em four}-particle correlations in an elaborate manner,
and is normalized exactly.
On the other hand,
in Refs.\ \cite{gardiner} and \cite{castin} no explicit form
was derived for the ground-state wavefunction.
[Those references are concerned more with excited states and
the spatially inhomogeneous case than with
the ground-state wavefunction.]
We here use a variational form, which is
similar to that of Ref.\ \cite{GA},
as the ground state of $\hat H(N)$;
\begin{equation}
{| N, {\bf y} \rangle}
\equiv
e^{i \hat G({\bf y})} {1 \over \sqrt{N!}} (\hat a_0^\dagger)^N | 0 \rangle.
\label{GS in terms of a}\end{equation}
Here, $\hat G({\bf y})$ is the Hermitian operator defined by
\begin{equation}
\hat G({\bf y})
\equiv
{-i \over 2 nV}
\hat a_0^\dagger \hat a_0^\dagger
\sum_{{\bf q} \neq {\bf 0}} y_q \hat a_{\bf q} \hat a_{-{\bf q}}
+ {\rm h.c.},
\label{G}\end{equation}
where ${\bf y} \equiv \{y_q\}$ is a set of variational
parameters, which we take as
\begin{equation}
y_q = |y_q^{cl}|.
\label{yq}\end{equation}
Using the well-known formula for arbitrary operators $\hat A$ and $\hat B$,
\begin{equation}
e^{\hat A} {\hat B} e^{-\hat A}
=
{\hat B} + [{\hat A}, {\hat B}]
+ {1 \over 2!} [{\hat A},[{\hat A}, {\hat B}]]
+ \cdots,
\label{formula}\end{equation}
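Formula (\ref{formula}) can be verified exactly on a toy example (illustrative, not part of the derivation): for a nilpotent $2\times 2$ matrix $A$ one has $A^2 = 0$, hence $e^{\pm A} = 1 \pm A$, and the commutator series terminates after the double commutator:

```python
# Finite check of the BCH-type formula: for nilpotent A (A^2 = 0),
# e^A B e^{-A} = B + [A,B] + (1/2)[A,[A,B]] holds exactly.
# Plain 2x2 matrices as nested lists; only this toy case is checked.
def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def add(X, Y, s=1):
    return [[X[i][j] + s * Y[i][j] for j in range(2)] for i in range(2)]

def comm(X, Y):
    return add(mul(X, Y), mul(Y, X), s=-1)

I = [[1, 0], [0, 1]]
A = [[0, 1], [0, 0]]   # nilpotent: A A = 0, so e^A = I + A exactly
B = [[3, 1], [2, 5]]

lhs = mul(mul(add(I, A), B), add(I, A, s=-1))        # e^A B e^{-A}

C1 = comm(A, B)
C2 = comm(A, C1)
rhs = add(add(B, C1), [[C2[i][j] / 2 for j in range(2)] for i in range(2)])
print(lhs, rhs)   # identical matrices
```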
we find from Eqs.\ (\ref{GS in terms of a})-(\ref{G}) that
\begin{equation}
E_{N, {\bf y}} \equiv
{\langle N, {\bf y} |} \hat H {| N, {\bf y} \rangle}
=
{1 \over 2} g n N
+ o(g^2),
\label{E 2nd order}\end{equation}
where $o(g^2)$ denotes terms that tend to zero faster than $g^2$ as $g \to 0$.
This demonstrates that
the large (formally divergent because $Z_g \to \infty$)
positive energy $g Z_g n N/2$ in $\hat H(N)$
is canceled by a large negative term arising from
the {\em four-particle} correlations of
the state of Eq.\ (\ref{GS in terms of a}).
Moreover, we can also show that
in the macroscopic limit ($V \to \infty$ while keeping $n$ constant),
$E_{N, {\bf y}}$ becomes
as low as $E_{\alpha, {\bf y}^{cl}}^{cl}$ of Eq.\ (\ref{Ecl}).
Therefore,
the form of Eq.\ (\ref{GS in terms of a})
is a good approximation to the ground state.
Note that ${| N, {\bf y} \rangle}$ is an eigenstate of $\hat N$;
\begin{equation}
\hat N {| N, {\bf y} \rangle}
=
N {| N, {\bf y} \rangle},
\end{equation}
hence ${| N, {\bf y} \rangle}$
is an (approximate) ground state of $\hat H$;
\begin{equation}
\hat H {| N, {\bf y} \rangle}
=
\hat H(N) {| N, {\bf y} \rangle}.
\end{equation}
Note also that
${| N, {\bf y} \rangle}$ is exactly normalized to unity
because
$e^{i \hat G({\bf y})}$ in Eq.\ (\ref{GS in terms of a}) is a
unitary transformation (although it becomes non-unitary in
the limit of $V \to \infty$).
We should make a remark here.
In the case of
$| \alpha, {\bf y}^{cl} \rangle^{cl}$ discussed in subsection
\ref{known1},
$E_{\alpha, {\bf y}^{cl}}^{cl}$ becomes low enough only for the specific
choice of the phase of $y_q^{cl}$ [Eq.\ (\ref{ycl})].
This phase relation is sometimes called
the ``phase locking'' \cite{Noz}.
From this viewpoint,
it is sometimes argued \cite{Noz,Leg} that
``having a definite phase'' and the ``phase locking''
are necessary to achieve a low energy.
However, such a statement is rather misleading:
In our case, the phase locking corresponds to the fact that $y_q$'s are
real and positive.
On the other hand,
our ground state ${| N, {\bf y} \rangle}$ has no fluctuation in $N$,
hence {\it has no definite phase} \cite{phase}
because of the number-phase uncertainty relation,
\begin{equation}
\langle \delta N^2 \rangle
\langle \delta \phi^2 \rangle
\gtrsim
1/4.
\label{NPUR}\end{equation}
Nevertheless, the energy of ${| N, {\bf y} \rangle}$ is as low as
$E_{\alpha, {\bf y}^{cl}}^{cl}$.
That is,
``having a definite phase'' is {\it not} necessary to
achieve the ground-state energy, and
the term ``phase locking'' should therefore be interpreted with care.
\subsection{Ground state of $N - \Delta N$ bosons}
The ground state of $N - \Delta N$ bosons
is given by
Eqs.\ (\ref{GS in terms of a}) and (\ref{G}) with
$N$ replaced everywhere by $N - \Delta N$.
However, we are interested in the case where
[see section \ref{sec_evolution}]
\begin{equation}
|\Delta N| \ll N.
\end{equation}
In this case, $\hat G({\bf y})$ and ${\bf y}$
(which are functions of the density of bosons) of $N - \Delta N$ bosons are
almost identical to those of $N$ bosons
because $(N - \Delta N)/V \approx N/V = n$.
Therefore, we can simplify the calculation by using
the same $\hat G({\bf y})$ and ${\bf y}$ for all $\Delta N$.
That is, we take
\begin{equation}
{| N - \Delta N, {\bf y} \rangle}
=
e^{i G({\bf y})} {1 \over \sqrt{(N - \Delta N) !}}
(\hat a_0^\dagger)^{N - \Delta N}
| 0 \rangle,
\label{GS of N'bosons}\end{equation}
where $\hat G({\bf y})$ and ${\bf y}$ are those of $N$ bosons.
Despite the approximation,
this state is exactly normalized and
has exactly $N - \Delta N$ bosons;
\begin{equation}
\hat N {| N - \Delta N, {\bf y} \rangle}
=
(N - \Delta N) {| N - \Delta N, {\bf y} \rangle}.
\end{equation}
\section{Natural coordinate}
\label{sec_natural}
\subsection{Nonlinear Bogoliubov transformation}
Since we assume that $V$ is finite,
$e^{i \hat G({\bf y})}$ is a
unitary operator (which, however, becomes non-unitary in
the limit of $V \to \infty$).
Utilizing this fact,
we define new boson operators $\hat b_{\bf k}$ by
\begin{equation}
\hat b_{\bf k}
\equiv
e^{i \hat G({\bf y})} \hat a_{\bf k} e^{-i \hat G({\bf y})}.
\label{b}\end{equation}
This operator satisfies the same commutation relations as
$\hat a_{\bf k}$;
\begin{equation}
[\hat b_{\bf p}, \hat b_{\bf q}^\dagger] = \delta_{{\bf p}, {\bf q}},
\quad
[\hat b_{\bf p}, \hat b_{\bf q}] =
[\hat b_{\bf p}^\dagger, \hat b_{\bf q}^\dagger] = 0.
\end{equation}
Note that these relations are exact,
in contrast to the operator
$\hat b_{\bf k}$ (${\bf k} \neq {\bf 0}$) of Ref.\ \cite{gardiner}.
Owing to the exact commutation relations,
we can define
the vacuum of $\hat b_{\bf k}$'s by
\begin{equation}
\hat b_{\bf k} {| 0, {\bf y} \rangle}
=0
\quad \mbox{for all ${\bf k}$}.
\label{vac of b}\end{equation}
From Eqs.\ (\ref{vac of a}), (\ref{G}) and (\ref{b}),
we have \cite{note_vac}
\begin{equation}
{| 0, {\bf y} \rangle}
=
e^{i \hat G({\bf y})} | 0 \rangle.
\end{equation}
The transformation (\ref{b}) somewhat resembles the Bogoliubov
transformation which diagonalizes $\hat H^{cl}$ \cite{LP}.
However, in contrast to Bogoliubov's quasi-particles
(whose total number differs from $\hat N$ as an operator),
the total number operator of $\hat b_{\bf k}$'s is
identical to that of $\hat a_{\bf k}$'s because
$[\hat N, \hat G({\bf y})] = 0$;
\begin{equation}
\sum_{\bf k} \hat b_{\bf k}^\dagger \hat b_{\bf k}
=
e^{i \hat G({\bf y})} \hat N e^{-i \hat G({\bf y})}
=
\hat N.
\end{equation}
This property is very useful in the following analyses.
On the other hand,
the transformation (\ref{b}) is much more complicated than the Bogoliubov
transformation:
The latter is
a {\it linear} transformation connecting the bare operators with
quasi-particle operators, whereas the former
is a {\it nonlinear} transformation between
the bare operators and the new boson operators.
For example,
${\hat b_0}$ defined by
\begin{equation}
{\hat b_0}
\equiv
e^{i \hat G({\bf y})} \hat a_0 e^{-i \hat G({\bf y})}
\label{bz}\end{equation}
is a rather complicated,
nonlinear function of
$\hat a_{\bf k}$'s and $\hat a_{\bf k}^\dagger$'s
(of various ${\bf k}$'s including ${\bf k} = {\bf 0}$),
as can be seen using Eq.\ (\ref{formula}).
Such a nonlinear operator $\hat b_0$, however, describes the physics quite
simply.
Namely, we show that
${\hat b_0}$ (and ${\hat b_0^\dagger}$) is a
``natural coordinate''
\cite{hermitian} of interacting bosons
in the sense that many physical properties can be simply described.
(It is crucial to find such a coordinate for the analysis
of many-particle systems, because generally the knowledge of
the wavefunction is not sufficient to perform
the analysis.)
For example,
from Eqs.\ (\ref{GS in terms of a}), (\ref{b}) and (\ref{vac of b}),
we find that in terms of ${\hat b_0}$ the ground state
${| N, {\bf y} \rangle}$ is simply a number state;
\begin{equation}
{| N, {\bf y} \rangle}
=
{1 \over \sqrt{N !}} ({\hat b_0}^\dagger)^N {| 0, {\bf y} \rangle}.
\label{GS}\end{equation}
In particular,
\begin{equation}
\hat b_0 {| N, {\bf y} \rangle}
=
\sqrt{N} {| N-1, {\bf y} \rangle}.
\label{app_b0}\end{equation}
Since $\hat b_{\bf k}$'s of ${\bf k} \neq {\bf 0}$
commute with $\hat b_0$,
we also find that
\begin{equation}
\hat b_{\bf k} {| N, {\bf y} \rangle}
= 0
\quad \mbox{for all ${\bf k} \neq {\bf 0}$}.
\label{GS is vac of bk}\end{equation}
Therefore, in terms of the new boson operators
the ground state of the interacting bosons
can be simply viewed as a {\em single-mode}
(${\bf k} = {\bf 0}$) number state.
Note, however, that
this does not mean that $\hat H$ is
bilinear and diagonal with respect to
$\hat b_0$.
If that were the case, then
the energy $E_{N, {\bf y}}$ would be linear in $N$, in contradiction
to Eq.\ (\ref{E 2nd order}) which shows $E_{N, {\bf y}} \propto N^2$
(recall that $n = N/V$).
The usefulness of ${\hat b_0}$ is strongly suggested by
Eqs.\ (\ref{GS}) and (\ref{GS is vac of bk}).
We will show in the following
discussions that this is indeed the case.
\subsection{Decomposition of $\hat \psi$}
Some matrix elements of $\hat b_0$ become anomalously large,
among the ground (and excited) states of different $N$.
For example,
\begin{equation}
{\langle N - 1, {\bf y} |} {\hat b_0} {| N, {\bf y} \rangle} = \sqrt{N} = \sqrt{nV}.
\label{anomalous}\end{equation}
This
indicates that in the $V \to \infty$ limit
(while keeping the density $n$ finite)
${\hat b_0}$ does not
remain an annihilation operator of
the physical Hilbert space,
signaling that a strict phase transition should occur as $V \to \infty$.
Since this anomaly should have important effects even for a finite $V$,
it is appropriate to separate $\hat b_0$ from the other terms
of $\hat \psi$.
That is,
we decompose the boson field in a finite system as
({\it cf.} Eq.\ (\ref{psi in terms of a}))
\begin{equation}
\hat \psi
=
{Z^{1/2} \over \sqrt{V}} \hat b_0 + \hat \psi',
\label{decomposition of psi}\end{equation}
where $Z$ is a complex renormalization constant.
Since we have specified nothing about $\hat \psi'$ at this stage,
this decomposition is always possible and $Z$ is arbitrary.
Following Ref.\ \cite{LP}, we define
the ``wavefunction of the condensate'' $\Xi$ by
\begin{equation}
\Xi
\equiv
{\langle N - 1, {\bf y} |} \hat \psi {| N, {\bf y} \rangle}.
\label{wf of condensate}\end{equation}
Since $\Xi$ is independent of ${\bf r}$ (because
both ${| N - 1, {\bf y} \rangle}$ and ${| N, {\bf y} \rangle}$
have the translational symmetry),
we can take $Z$ as
\begin{equation}
\Xi
=
Z^{1/2} {\langle N - 1, {\bf y} |} \hat b_0 {| N, {\bf y} \rangle} /\sqrt{V}.
\label{takeZ}\end{equation}
That is, from Eq.\ (\ref{GS}),
\begin{equation}
Z^{1/2} =
\Xi / \sqrt{n}.
\label{Z}\end{equation}
Then, by taking the matrix element of
Eq.\ (\ref{decomposition of psi}) between ${| N - \Delta N, {\bf y} \rangle}$ and ${| N, {\bf y} \rangle}$,
we find
\begin{equation}
{\langle N - \Delta N, {\bf y} |} \hat \psi' {| N, {\bf y} \rangle}
=
0
\quad \mbox{for all $\Delta N$ such that $|\Delta N| \ll N$}.
\label{property of psi'}\end{equation}
We now define two number operators by
\begin{eqnarray}
\hat N'
&\equiv&
\int d^3 r \hat \psi^{\prime \dagger} (r) \hat \psi'(r),
\\
\hat N_0
&\equiv&
\hat N - \hat N'.
\label{N0}\end{eqnarray}
Then, from Eqs.\ (\ref{GS}), (\ref{decomposition of psi}),
(\ref{Z}), and (\ref{property of psi'}), we find
\begin{equation}
{\langle N, {\bf y} |} \hat N {| N, {\bf y} \rangle}
=
V |\Xi|^2 + {\langle N, {\bf y} |} \hat N' {| N, {\bf y} \rangle}.
\end{equation}
Hence, from Eq.\ (\ref{N0}),
\begin{equation}
{\langle N, {\bf y} |} \hat N_0 {| N, {\bf y} \rangle}
=
V |\Xi|^2,
\label{N0 vs Xi}\end{equation}
which may be interpreted as the
``number of condensate particles'' \cite{LP}.
That is,
in agreement with the standard result \cite{LP},
$|\Xi|^2$ is the density of
the condensate particles;
\begin{equation}
|\Xi|^2 = \langle N_0 \rangle / V \equiv n_0,
\end{equation}
where
we have denoted the expectation value simply by $\langle \cdots \rangle$.
We can therefore write $\Xi$ as
\begin{equation}
\Xi = \sqrt{n_0} e^{i \varphi}.
\label{Xi_n0}\end{equation}
We thus find the formula for the decomposition of $\hat \psi$ as
\begin{equation}
\hat \psi
=
e^{i \varphi} \sqrt{n_0 \over n V} \hat b_0
+
\hat \psi',
\label{formula of decomposition}\end{equation}
which is extremely useful in the following analysis.
Note that we have obtained a finite renormalization constant;
\begin{equation}
|Z| = n_0 / n < 1.
\end{equation}
\subsection{Relation to the previous work}
Lifshitz and Pitaevskii \cite{LP} {\em introduced} an operator $\hat \Xi$
that transforms an eigenstate with $N$ bosons into
the corresponding eigenstate with $N-1$ bosons,
{\em without} giving an explicit form of $\hat \Xi$.
(They defined $\hat \Xi$ through its matrix elements
between eigenstates with different values of $N$.
However, as mentioned in section \ref{sec_GS},
they did not give the forms of the eigenstates of fixed $N$.)
They decomposed $\hat \psi$ as (Eq.\ (26.4) of Ref.\ \cite{LP})
\begin{equation}
\hat \psi
=
\hat \Xi + \hat \psi'.
\label{decomposition of psi by LP}\end{equation}
In the present paper,
from Eqs.\ (\ref{bz}) and (\ref{formula of decomposition}),
we obtain the {\em explicit} expression for $\hat \Xi$ as
\begin{equation}
\hat \Xi
= e^{i \varphi} \sqrt{n_0 \over n V} \hat b_0
= e^{i \varphi} \sqrt{n_0 \over n V}
e^{i \hat G({\bf y})} \hat a_0 e^{-i \hat G({\bf y})}.
\end{equation}
From Eqs.\ (\ref{app_b0}) and (\ref{Xi_n0}), we confirm that
\begin{equation}
\hat \Xi {| N, {\bf y} \rangle}
=
\Xi {| N-1, {\bf y} \rangle},
\label{app_Xi}\end{equation}
which was {\em assumed} in Ref.\ \cite{LP}.
The operator $\hat \Xi$ characterizes the condensation by having
a finite matrix element \cite{LP}.
In the following we will reveal a striking property of
$\hat \Xi$ (or, equivalently, $\hat b_0$); it
is a ``natural coordinate'' of interacting bosons.
On the other hand,
other operators, which also characterize the condensation,
were introduced in Refs.\ \cite{GA,gardiner,castin}.
Girardeau and Arnowitt \cite{GA}
defined $\hat \beta_0 \equiv \hat a_0 \hat N_0^{-1/2}$,
Gardiner \cite{gardiner} introduced
$\hat A = \hat a_0 (\hat N_0/\hat N)^{1/2}$ (which is
an operator form of Eqs.\ (9) and (10) of Ref.\ \cite{gardiner}),
and Castin and Dum \cite{castin} introduced
$\hat a_{\Phi_{ex}}
\equiv
\int dr \Phi_{ex}^* (r,t) \hat \psi(r,t)
$.
These operators
are totally different from $\hat \Xi$ or $\hat b_0$
because complicated many-particle correlations,
which are included in $\hat \Xi$ and $\hat b_0$,
are not included in $\hat \beta_0$, $\hat A$ and $\hat a_{\Phi_{ex}}$.
For example, $\hat a_{\Phi_{ex}}$ is a {\em linear} combination of
{\em annihilation} operators of free bosons,
whereas $\hat b_0$ is a {\em nonlinear} function of
{\em both} the annihilation and creation operators of free bosons.
As a result,
in contrast to Eqs.\ (\ref{app_b0}) and (\ref{app_Xi}),
application of any of $\hat \beta_0$, $\hat A$, or $\hat a_{\Phi_{ex}}$
to the ground state of $N$ bosons does not yield
the ground state of $N-1$ bosons;
it yields an excited state which is not an eigenstate.
Moreover, they are not a natural coordinate of interacting bosons
in the sense explained in the following.
Therefore, we do not use these operators,
although they would be useful in other problems.
\subsection{Low-lying excited states}
Excited states of a fixed number of interacting bosons
were discussed in Refs.\ \cite{GA,gardiner,castin}.
In the present formulation, we may obtain low-lying excited states
by the application
to ${| N, {\bf y} \rangle}$ of
functions of $\hat b_{\bf k}^\dagger$'s with ${\bf k} \neq {\bf 0}$.
However,
since we do not need any
explicit expressions for the excited states in the following analysis,
we do not seek them in the present paper.
\section{Time evolution of bosons in a leaky box}
\label{sec_evolution}
The time evolution of a condensate(s) in an open box(es)
was discussed previously
for the cases of {\em non-interacting} bosons
in Refs.\ \cite{theory1,theory2,theory3} and
for the case of {\em two-mode} interacting bosons in Ref.\ \cite{RW}.
In the present paper,
using $\hat b_0$,
we study the case of infinite-mode interacting bosons.
\subsection{Gedanken experiment}
In most real systems, there is a finite probability of
exchanging bosons between the box and the environment.
Hence, even if one fixes $N$ at some time,
$N$ will fluctuate at later times.
Namely, the boson state undergoes a nonequilibrium
time evolution when its number fluctuation is initially suppressed.
To simulate this situation,
we consider the following gedanken experiment [Fig.\ \ref{gedanken}].
\begin{figure}[h]
\begin{center}
\epsfile{file=PRA-fig1.eps,scale=0.4}
\end{center}
\caption{
Our gedanken experiment.
$N$ bosons are confined in a closed box for $t<0$.
At $t=0$ a small hole is made in the box,
so that a small leakage flux $J$ is induced, and
the expectation value $\langle N(t) \rangle$ of the number
of bosons in the box decreases with time.
}
\label{gedanken}
\end{figure}
Suppose that bosons are confined in a box
which is kept at zero temperature,
and that the wall of the box is not permeable to
the flow of the bosons, {\it i.e.},
the probability of a boson permeating through the wall
within a time scale of our interest is negligible.
If one measures the number of the bosons
at a time $t=t_p$ ($< 0$),
and if the box is kept closed until $t=0$,
then the density operator $\hat \rho(t)$ of the bosons in the box for
$t_p < t < 0$ is
\begin{equation}
\hat \rho(t) = {| N, {\bf y} \rangle} {\langle N, {\bf y} |}
\quad \mbox{for $t_p < t < 0$}.
\label{past rho}\end{equation}
Assume that this box is placed in a large room, which has
no bosons initially.
Suppose now that at $t=0$ one makes
a small hole (or holes), or slightly lowers the potential of the wall
of the box,
so that a small but finite flow $J$ of the bosons from
the inside to the outside of the box
becomes possible for $t \geq 0$.
We study the time evolution for $t \geq 0$ of the density operator
$\hat \rho(t)$ of the bosons in the box.
The expectation value of $N$ will be a decreasing function
of $t$, which we denote ${\langle N(t) \rangle}$
(hence $\langle N(0) \rangle =N$).
It is obvious that
as $t \to \infty$ the system approaches the equilibrium state.
Therefore, we are most interested in
{\em the early stage of the time evolution},
for which
\begin{equation}
|N - {\langle N(t) \rangle}| \ll N.
\label{early}\end{equation}
Note that if $J$ were not small enough,
then the state in the box would evolve into
a nonequilibrium excited state.
The property of such a nonequilibrium state would
depend strongly on details of the structures of the box
and the hole or wall.
In the present paper, we are not interested in such structure-sensitive
states.
Therefore, we assume that $J$ is small enough
that only transitions between the ground states
for different values of $N$ are possible.
\subsection{Total Hamiltonian}
Let ${\cal V}$ denote the volume of the room,
which is much larger than the volume $V$ of the box;
\begin{equation}
{\cal V} \gg V.
\end{equation}
The total boson field $\hat \psi^{total}({\bf r})$
is defined on ${\cal V}$,
\begin{equation}
\hat \psi^{total}({\bf r})
=
\hat \psi({\bf r}) + \hat \psi^E({\bf r}),
\end{equation}
where $\hat \psi({\bf r})$ is localized in the box, and
$\hat \psi^E \equiv \hat \psi^{total} - \hat \psi$
is the boson field of ``environment.''
Then, the total Hamiltonian may take the following form:
\begin{equation}
\hat H^{total}
=
\hat H
+
\hat H^E
+
\hat H^{ES},
\label{Htotal}\end{equation}
where $\hat H$ and
$\hat H^E$ are the Hamiltonians
of the $\hat \psi$ and $\hat \psi^E$ fields,
respectively, when they are closed.
Small but finite amplitudes of scattering between
the $\hat \psi$ and $\hat \psi^E$ fields
are caused by the residual Hamiltonian
$\hat H^{ES}$.
Under our assumption (\ref{dilute}),
the probability of multi-particle collisions during
the escape from the box is negligible.
Therefore, $\hat H^{ES}$ should take the following form:
\begin{equation}
\hat H^{ES}
=
\lambda
\int d^3 r
\hat \psi^{E \dagger}({\bf r})
f({\bf r})
\hat \psi({\bf r})
+ {\rm h.c.},
\label{HES}\end{equation}
where $\lambda$ is a constant which has the dimension of energy,
and $f({\bf r})$ is a dimensionless
function which takes values of order unity
when ${\bf r}$ is located in
the boundary region between the box and the environment,
and $f({\bf r})=0$ otherwise.
Although the value of $\lambda$ and the form of $f$
depend on the structures of the box and the hole or walls,
our final results (e.g., Eq.\ (\ref{rho Poisson})) are independent of such details.
\subsection{Low-lying states of the total system}
We here list states of the total system which are
relevant to the following analysis.
Since $\hat H^{ES}$ is weak,
quasi-eigenstates of the total system are well approximated by
the products of eigenstates of the box and of the environment.
Recall that
$| N - \langle N(t) \rangle| \ll N$
for the time interval of our interest,
and that
$J$ is small enough so
that only transitions between the ground states
for different values of $N$ are possible.
Therefore, among many possible states of the box
the relevant states are
${| N - \Delta N, {\bf y} \rangle}$'s with $|\Delta N| \ll N$.
On the other hand, there are no
bosons in the environment at $t<0$.
That is, the environment is initially in
the vacuum, which we denote $| 0^{E} \rangle$.
Hence, from Eq.\ (\ref{past rho}),
the initial density operator of the total system is
\begin{equation}
\hat \rho^{total}(t)
=
{| N, {\bf y} \rangle} | 0^{E} \rangle \langle 0^{E} | {\langle N, {\bf y} |}
\quad (t<0) .
\label{initial rho total}\end{equation}
Bosons escape from the box into the environment for $t \geq 0$.
Since ${\cal V} \gg V$, the boson density of the environment is kept
essentially zero,
and BEC does not occur in the environment,
for the time period of Eq.\ (\ref{early}).
We can therefore take
the simple number states
$| n^{E}_{\bf k}, n^{E}_{\bf k'}, \cdots \rangle$'s
of free bosons
as eigenstates of the environment, where
$n^{E}_{\bf k}$ denotes the number of bosons in mode ${\bf k}$.
For example,
we shall write
$| 1_{\bf k}^{E} \rangle$
to denote the environment state
in which mode ${\bf k}$ is occupied by a single boson whereas
the other modes are empty.
Therefore, the relevant states of the total system,
{\it i.e.}, low-lying quasi-eigenstates of $\hat H^{total}$,
can be written as
\begin{equation}
{| N - \Delta N, {\bf y} \rangle} | n^{E}_{\bf k}, n^{E}_{\bf k'}, \cdots \rangle
\quad (|\Delta N| \ll N),
\label{relevant states}\end{equation}
where, since
$\hat H^{total}$ conserves the total number of bosons,
\begin{equation}
\Delta N
=
\sum_{\bf k} n^{E}_{\bf k}.
\label{conservation of total N}\end{equation}
\subsection{Time evolution in a short time interval $\Delta t$}
We are interested in the reduced density operator of
bosons in the box for $t \ge 0$;
\begin{equation}
\hat \rho (t)
\equiv
{\rm Tr^E} [ \hat \rho^{total}(t) ],
\label{reduced rho}\end{equation}
where ${\rm Tr^E}$ is the trace operation over the environment degrees of
freedom.
The expectation value of any observable $\hat Q$ in the box
can be evaluated from $\hat \rho (t)$ as
\begin{eqnarray}
\langle Q(t) \rangle
&\equiv&
{\rm Tr^{total}} [ \hat \rho^{total}(t) \hat Q ]
\nonumber\\
&=&
{\rm Tr} [ \hat \rho(t) \hat Q ],
\end{eqnarray}
where
${\rm Tr}$ denotes the trace operation over the degrees of
freedom in the box.
Equation (\ref{initial rho total}) yields
\begin{eqnarray}
\hat \rho(0)
&=&
{| N, {\bf y} \rangle} {\langle N, {\bf y} |}
\label{rho at 0}\\
\langle Q(0) \rangle
&=&
{\langle N, {\bf y} |} \hat Q {| N, {\bf y} \rangle}.
\end{eqnarray}
Although we may evaluate $\hat \rho (t)$
by solving a master equation,
we here present a different
(but equivalent) method,
by which the physical meaning can be seen clearly.
We begin by noting that a single action of
$\hat H^{ES}$ of Eq.\ (\ref{HES}) can only change $(N, N^{E})$
by either $(-1,+1)$ or $(+1,-1)$, and that
the latter change is impossible for the initial density operator
(\ref{initial rho total}).
Therefore,
after a short time interval $\Delta t$ which satisfies $J \Delta t \ll 1$,
the state vector
${| N, {\bf y} \rangle} | 0^{E} \rangle$ evolves
into a state of the following form:
\begin{equation}
e^{-i E_{N, {\bf y}} \Delta t / \hbar}
{| N, {\bf y} \rangle} | 0^{E} \rangle
+
\sum_{\bf k}
c_{\bf k}^{(1)} (\Delta t)
e^{-i (E_{N-1, {\bf y}} + \epsilon^{(0)}_k) \Delta t / \hbar}
{| N - 1, {\bf y} \rangle} | 1^{E}_{\bf k} \rangle
+
O(\lambda^2),
\label{state after Dt}
\end{equation}
where
\begin{eqnarray}
c_{\bf k}^{(1)} (\Delta t)
&\equiv&
{1 \over i \hbar}
\int_0^{\Delta t}
M_{\bf k} e^{i (\epsilon^{(0)}_k - \mu) \tau / \hbar} d \tau,
\\
M_{\bf k}
&\equiv&
\langle 1_{\bf k}^{E}| {\langle N - 1, {\bf y} |}
\hat H^{ES}
{| N, {\bf y} \rangle} | 0^{E} \rangle,
\\
\mu
&\equiv&
E_{N, {\bf y}} - E_{N-1, {\bf y}}.
\end{eqnarray}
Therefore,
the reduced density operator
is evaluated as
\begin{equation}
\hat \rho (\Delta t)
=
w(0;\Delta t) {| N, {\bf y} \rangle} {\langle N, {\bf y} |}
+
w(1;\Delta t) {| N - 1, {\bf y} \rangle} {\langle N - 1, {\bf y} |}
+
O(\lambda^3),
\label{rho at Delta t}\end{equation}
where
\begin{eqnarray}
w(0;\Delta t)
&\equiv&
1 - w(1;\Delta t),
\label{w0}\\
w(1;\Delta t)
&\equiv&
\sum_{\bf k} |c_{\bf k}^{(1)}(\Delta t)|^2.
\end{eqnarray}
Here, we have normalized $\hat \rho (\Delta t)$ to order $\lambda^2$ by
Eq.\ (\ref{w0}).
We now take $\Delta t$ in such a way that
\begin{equation}
\hbar / E_c < \Delta t \ll 1/J,
\label{range of Delta t}\end{equation}
where $E_c$ is the energy range of $\epsilon^{(0)}_{\bf k}$ in which
$|M_{\bf k}|^2$ is finite and approximately constant.
Then,
since ${\bf k}$ of the environment takes quasi-continuous values,
$w(1;\Delta t)$ becomes proportional to $\Delta t$:
\begin{equation}
w(1;\Delta t)
=
J \Delta t.
\end{equation}
To evaluate $J$, we calculate $|M_{\bf k}|^2$ using
Eqs.\ (\ref{property of psi'}) and (\ref{HES}) as
\begin{eqnarray}
|M_{\bf k}|^2
&=&
\left|
\lambda
\int d^3 r
f({\bf r})
\langle 1_{\bf k}^{E} | {\langle N - 1, {\bf y} |}
\hat \psi^{E \dagger}({\bf r})
\hat \Xi
{| N, {\bf y} \rangle} | 0^{E} \rangle
\right|^2
\nonumber\\
&=&
N {n_0 \over n}
\left|
{\lambda \over \sqrt{V}}
\int d^3 r
f({\bf r}) \varphi_{\bf k}^{E *}({\bf r})
\right|^2,
\end{eqnarray}
where $\varphi_{\bf k}^{E}({\bf r})$ is the mode function of mode
${\bf k}$ of the environment.
Regarding the volume dependence,
$\varphi_{\bf k}^{E}$ behaves as $\sim 1/\sqrt{\cal V}$,
whereas $f$ is localized in the boundary region,
whose volume is denoted by $v$,
of the box and the environment.
Therefore,
\begin{equation}
|M_{\bf k}|^2
\approx
{n_0 \over n}
|\lambda|^2
{v^2 \over V {\cal V}}.
\end{equation}
On the other hand, from Eq.\ (\ref{E 2nd order}),
an escaping boson has an energy of $g n$.
The density of states of the environment at this energy is
\begin{equation}
{
{\cal V}
\over
2 \pi^2 \hbar^3
}
\sqrt{m^3 g n \over 2}.
\end{equation}
Therefore, the leakage flux $J$ is estimated as
\begin{equation}
J
\approx
N
{n_0 \over n}
{v \over V}
{|\lambda|^2 v \over \hbar^4}
\sqrt{m^3 g n}
=
\frac{
n_0 |\lambda|^2 v^2
}{
\hbar^4
}
\sqrt{m^3 g n},
\label{J}
\end{equation}
where numerical factors of order unity have been absorbed in $\lambda$.
We observe that
$J$ is reduced by the factor $v/V$ ($\ll 1$), which means
that the escape process is a ``surface effect'',
i.e., it occurs only in the boundary region.
On the other hand,
$J$ is enhanced by the factor $N$ ($\gg 1$).
This enhancement is typical of boson condensation.
From our assumption of small $J$,
the rhs of Eq.\ (\ref{J}) should be smaller than
the critical value $J_{cr}$
of the flux above which bosons in the box get excited \cite{Jcr}.
It is seen that this condition is satisfied when
$v$ and/or $|\lambda|$ is small enough.
\subsection{Time evolution for $0 \leq Jt \ll N$}
\label{sec_later}
We have found in the previous subsection that
the reduced density operator $\hat \rho$ evolves
from the pure state (\ref{rho at 0})
to the mixed state (\ref{rho at Delta t}) after
a small time interval $\Delta t$ ($\ll 1/J$).
Since the latter
is a classical mixture of two different states,
${| N, {\bf y} \rangle} {\langle N, {\bf y} |}$ and ${| N - 1, {\bf y} \rangle} {\langle N - 1, {\bf y} |}$, we can
separately solve the time evolution
for each state.
For each state, further transitions occur in the
subsequent time intervals,
$(\Delta t, 2 \Delta t]$, $(2 \Delta t, 3 \Delta t]$, $\cdots$.
Since ${\cal V} \gg V$,
the recursion time of an escaped boson to return to
the original position is extremely long
(except for rare events whose probability $\to 0$
as ${\cal V} \to \infty$).
Therefore, as long as $J \ll J_{cr}$,
we can neglect any quantum as well as classical correlations
between transitions of different time intervals.
This allows us to take
the no-boson state, $| 0^{E} \rangle$,
as the initial state of the environment for {\em each} time interval
$(\ell \Delta t, (\ell+1) \Delta t]$, where $\ell = 0, 1, 2, \cdots$.
Hence, for every time interval,
we may use the same formula
(\ref{rho at Delta t}).
Furthermore, we can neglect the
$N$ dependencies of $w$ and $J$ under
our assumption of Eq.\ (\ref{early}).
Therefore,
\begin{eqnarray}
&& \hat \rho (2 \Delta t)
\nonumber\\
&&
=
w(0;\Delta t)
\left\{ w(0;\Delta t) {| N, {\bf y} \rangle} {\langle N, {\bf y} |}
+
w(1;\Delta t) {| N - 1, {\bf y} \rangle} {\langle N - 1, {\bf y} |}
\right\}
\nonumber\\
&&
\quad +
w(1;\Delta t)
\left\{
w(0;\Delta t) {| N - 1, {\bf y} \rangle} {\langle N - 1, {\bf y} |}
+
w(1;\Delta t) {| N - 2, {\bf y} \rangle} {\langle N - 2, {\bf y} |}
\right\}
\nonumber\\
&&
=
w(0;2 \Delta t) {| N, {\bf y} \rangle} {\langle N, {\bf y} |}
\nonumber\\
&&
\quad +
w(1;2 \Delta t) {| N - 1, {\bf y} \rangle} {\langle N - 1, {\bf y} |}
\nonumber\\
&&
\quad +
w(2;2 \Delta t) {| N - 2, {\bf y} \rangle} {\langle N - 2, {\bf y} |},
\label{rho at 2 Delta t}\end{eqnarray}
where
\begin{eqnarray}
w(0;2 \Delta t)
&\equiv&
w(0;\Delta t)^2
\nonumber\\
&=&
(1-J \Delta t)^2
\label{w02}\\
w(1;2 \Delta t)
&\equiv&
w(0;\Delta t) w(1;\Delta t)
+
w(1;\Delta t) w(0;\Delta t)
\nonumber\\
&=&
2 (1-J \Delta t)J \Delta t
\label{w12}\\
w(2;2 \Delta t)
&\equiv&
w(1;\Delta t) w(1;\Delta t)
\nonumber\\
&=&
J^2 (\Delta t)^2.
\label{w22}\end{eqnarray}
The time evolution
in the subsequent times can be calculated in a similar manner.
Let
\begin{equation}
t = M \Delta t,
\end{equation}
where $M$ ($<N$) is a positive integer.
We find
\begin{equation}
\hat \rho(t)
=
\sum_{m = 0}^{M} w(m;t) {| N - m, {\bf y} \rangle} {\langle N - m, {\bf y} |},
\label{rho at t}\end{equation}
where $w(m; t)$ is the binomial distribution;
\begin{equation}
w(m; t)
=
{M \choose m}
(1-J \Delta t)^{M - m}
(J \Delta t)^{m}.
\label{w binomial}\end{equation}
We find from Eq.\ (\ref{rho at t}) that
$w(m;t)$ is the probability of finding
$N - m$ bosons in the box at $t$.
From the conservation of the total number of bosons,
Eq.\ (\ref{conservation of total N}),
this probability equals the probability
that $m$ bosons have escaped from the box by the time $t$.
Using Eq.\ (\ref{w binomial}), we find
\begin{eqnarray}
\langle N(t) \rangle
&=&
{\rm Tr} [ \hat \rho(t) \hat N ]
=
\sum_{m = 0}^{M} w(m;t) (N - m)
=
N - Jt
\\
\langle N^{E}(t) \rangle
&=&
{\rm Tr} [ \hat \rho(t) \hat N^{E} ]
=
\sum_{m = 0}^{M} w(m;t) m
=
J t.
\end{eqnarray}
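These counting statistics are easily checked numerically. The following minimal sketch (with illustrative values of $M$ and $J \Delta t$, not tied to any physical system) builds the binomial distribution of Eq.\ (\ref{w binomial}) recursively and confirms $\langle N^{E}(t) \rangle = Jt$ and $\langle \delta N^{E}(t)^2 \rangle \approx Jt$ for $J \Delta t \ll 1$:

```python
# Numerical check of the binomial escape statistics, Eq. (w binomial).
# M and p = J*dt are illustrative values only.
M = 10000
p = 0.0005          # p = J*dt, so Jt = M*p = 5
Jt = M * p

# build w(m;t) recursively to avoid huge binomial coefficients
w = [0.0] * (M + 1)
w[0] = (1.0 - p) ** M
for m in range(M):
    w[m + 1] = w[m] * (M - m) / (m + 1) * p / (1.0 - p)

mean_escaped = sum(m * wm for m, wm in enumerate(w))
var_escaped = sum((m - mean_escaped) ** 2 * wm for m, wm in enumerate(w))
print(mean_escaped)   # ~ Jt = 5 (mean number of escaped bosons)
print(var_escaped)    # ~ Jt(1 - J*dt), i.e. ~ Jt for J*dt << 1
```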
Since $E_c$ in Eq.\ (\ref{range of Delta t})
is of the order of the atomic energy, we can take
$\Delta t$ extremely small such that
\begin{equation}
M \gg 1
\ \mbox{and} \
M \gg Jt,
\label{M is large}\end{equation}
for a finite $t$ that satisfies Eq.\ (\ref{early}).
In this case,
Eq.\ (\ref{w binomial}) can be approximated by the Poisson
distribution;
\begin{equation}
w(m; t)
\approx
K(M, Jt)
e^{- J t}
{
(J t)^{m}
\over
{m}!
}.
\label{w Poisson}\end{equation}
Here, $K$ is the normalization factor,
\begin{equation}
{1 \over K(M,x)}
\equiv
e^{- x}
\sum_{m = 0}^{M}
{
x^{m}
\over
{m}!
},
\end{equation}
which approaches unity for all $x$ as $M \to \infty$.
For large but finite $M$, we can easily show that
\begin{equation}
\left|
{1 \over K(M,x)}
-
1
\right|
\sim
{
e^{-x}
\over
\sqrt{2 \pi (M+1)}
}
\left(
{
e x
\over
M + 1
}
\right)^{M+1}.
\label{K is unity}\end{equation}
Therefore, under the condition (\ref{M is large}),
we can take $K(M, Jt) = 1$
to a very good approximation, and
we henceforth drop $K$ from Eq.\ (\ref{w Poisson}).
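The estimate (\ref{K is unity}) is asymptotic (Stirling-type), so at moderate $M$ only order-of-magnitude agreement should be expected. A short sketch with illustrative values of $x$ and $M$ confirms this:

```python
import math

# Compare the exact deviation |1/K(M,x) - 1| with the asymptotic
# estimate of Eq. (K is unity).  x and M are illustrative values.
x, M = 5.0, 10

inv_K = math.exp(-x) * sum(x ** m / math.factorial(m) for m in range(M + 1))
exact = abs(inv_K - 1.0)
estimate = (math.exp(-x) / math.sqrt(2 * math.pi * (M + 1))
            * (math.e * x / (M + 1)) ** (M + 1))
print(exact, estimate)   # agree in order of magnitude
```

For the large values of $M$ allowed by Eq.\ (\ref{M is large}), both numbers become vanishingly small, justifying $K(M, Jt) = 1$.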
Furthermore,
since $w(m;t) \approx 0$ for $m \gg Jt$,
we may extend the summation of Eq.\ (\ref{rho at t}) to
$N$.
We thus obtain
\begin{eqnarray}
\hat \rho(t)
&\approx&
e^{- J t}
\sum_{m = 0}^N
{
(J t)^{m}
\over
{m}!
}
{| N - m, {\bf y} \rangle} {\langle N - m, {\bf y} |}
\nonumber\\
&=&
e^{- J t}
\sum_{m = 0}^N
{
(J t)^{N - m}
\over
(N - m)!
}
{| m, {\bf y} \rangle} {\langle m, {\bf y} |}.
\label{rho Poisson}\end{eqnarray}
Since this final result is valid even at $t=0$ (despite
our use of the assumption $M \gg 1$),
it is valid for all $t$ as long as
\begin{equation}
0 \leq Jt \ll N
\quad \mbox{and} \quad
N \gg 1.
\label{assumption}\end{equation}
Note that the final result (\ref{rho Poisson}) is quite general
because all the
details of the box-environment interaction $\hat H^{ES}$ have been
absorbed in $J$.
The probability $P(m,t)$ of finding $m$ bosons in the box at $t$ is
evaluated as
\begin{eqnarray}
P(m,t)
&=&
w(N-m;t)
\nonumber\\
&=&
e^{- J t}
{
(J t)^{N - m}
\over
(N - m)!
}.
\label{sP}\end{eqnarray}
We call this distribution
the ``shifted Poisson distribution'',
because it is obtained by shifting the center of
the Poisson distribution.
The expectation values and variances are evaluated as
\begin{eqnarray}
\langle N(t) \rangle
&=&
N - J t
\label{mean N}\\
\langle N^{E}(t) \rangle
&=&
J t
\label{mean NE}\\
\langle \delta N(t)^2 \rangle
&=&
\langle \delta N^{E}(t)^2 \rangle
=
J t.
\label{variance}\end{eqnarray}
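As a consistency check, the moments (\ref{mean N})--(\ref{variance}) can be reproduced directly from the shifted Poisson distribution (\ref{sP}); the values of $N$ and $Jt$ below are illustrative, chosen to satisfy $1 \ll Jt \ll N$:

```python
import math

# Moments of the shifted Poisson distribution P(m,t), Eq. (sP).
# N and Jt are illustrative values.
N, Jt = 50, 4.0

P = [math.exp(-Jt) * Jt ** (N - m) / math.factorial(N - m)
     for m in range(N + 1)]
mean_N = sum(m * p for m, p in enumerate(P))
var_N = sum((m - mean_N) ** 2 * p for m, p in enumerate(P))
print(sum(P))    # ~ 1 (normalization)
print(mean_N)    # ~ N - Jt
print(var_N)     # ~ Jt
```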
\section{Number versus phase}
\label{sec_number vs phase}
\subsection{Cosine and sine operators of interacting many bosons}
Roughly speaking, the conjugate
observable of the number is the phase.
More precisely, however,
physical observables are not the phase itself, but the
cosine and sine of the phase.
Namely,
any physical measurement of a phase actually measures the cosine or sine
of the phase \cite{phase}.
In the case of a single-mode boson
(i.e., a harmonic oscillator with a single degree of freedom),
it has been discussed that
various definitions are possible for the cosine and sine operators
\cite{mandel}.
This ambiguity does not matter in our case, because
we are treating the case where
the number of bosons is extremely large,
whereas differences among different definitions
appear only when the number of bosons is small.
On the other hand, the crucial point
in our case is how to select
a single ``coordinate'' (dynamical variable)
with which the phase is defined,
among the huge number of degrees of freedom.
To find such a ``proper coordinate'' is generally very
difficult in many-body interacting systems.
Fortunately, we find that
${\hat b_0}$ is the proper coordinate of interacting bosons,
with which we can successfully define
the {\em cosine and sine operators of interacting many bosons} by
\begin{eqnarray}
\hat{\cos \phi}
&\equiv&
{1 \over 2 \sqrt{{\hat b_0^\dagger} {\hat b_0} + 1}} {\hat b_0}
+
{\hat b_0^\dagger} {1 \over 2 \sqrt{{\hat b_0^\dagger} {\hat b_0} + 1}}
\label{cos}\\
\hat{\sin \phi}
&\equiv&
{1 \over 2 i \sqrt{{\hat b_0^\dagger} {\hat b_0} + 1}} {\hat b_0}
-
{\hat b_0^\dagger} {1 \over 2 i \sqrt{{\hat b_0^\dagger} {\hat b_0} + 1}}.
\label{sin}\end{eqnarray}
These are the same forms as those of
a single harmonic oscillator \cite{mandel}.
In our case, however, there is a huge number of degrees of freedom
with mutual interactions.
As a result, the Hamiltonian does {\em not} take
the simple bilinear form with respect to ${\hat b_0}$,
hence the motion of ${\hat b_0} + {\hat b_0^\dagger}$ is not that of
a harmonic-oscillator coordinate.
Nevertheless,
many formulas for the single mode case are applicable if
they are based only on the commutation relations
of a boson operator.
In particular,
owing to Eq.\ (\ref{GS is vac of bk}),
we can treat any states that have the form of
$\sum_m C_m | N-m, {\bf y} \rangle$
as if we were treating a single-mode problem.
It will turn out in the following discussions that
the above operators give reasonable results for
the quantum phase of interacting bosons.
\subsection{Number and phase fluctuations of ${| N, {\bf y} \rangle}$}
As we have shown in section \ref{sec_natural},
the ground state of a fixed number of bosons
${| N, {\bf y} \rangle}$ can be represented simply as a number state
if we use a ``natural coordinate'' ${\hat b_0}$.
Note that this state has a finite fluctuation of
$\hat a_0^\dagger \hat a_0$ due to many-body interactions.
Nevertheless, the total number of bosons $\hat N$
has a definite value;
\begin{eqnarray}
\langle N \rangle_{N, {\bf y}}
&\equiv&
{\langle N, {\bf y} |} \hat N {| N, {\bf y} \rangle}
=
N,
\\
\langle \delta N^2 \rangle_{N, {\bf y}}
&\equiv&
{\langle N, {\bf y} |} \delta \hat N^2 {| N, {\bf y} \rangle}
=
0.
\label{N and dN of GS}\end{eqnarray}
On the other hand, using the simple representation (\ref{GS}),
we can easily show that
\begin{eqnarray}
\langle \cos \phi \rangle_{N, {\bf y}}
&\equiv&
{\langle N, {\bf y} |} \hat{\cos \phi} {| N, {\bf y} \rangle}
= 0
\label{cos of Ny}\\
\langle \sin \phi \rangle_{N, {\bf y}}
&\equiv&
{\langle N, {\bf y} |} \hat{\sin \phi} {| N, {\bf y} \rangle}
= 0.
\label{sin of Ny}\end{eqnarray}
Therefore, the ground state of a fixed number of bosons
does not have a definite phase \cite{phase},
as expected from the number-phase uncertainty relation,
Eq.\ (\ref{NPUR}).
It was sometimes argued that
although $\hat N$ is definite
the fluctuation of $\hat a_0^\dagger \hat a_0$
might allow for a definite phase \cite{forster}.
However, our results (\ref{cos of Ny}) and (\ref{sin of Ny})
show explicitly that the fluctuation of
$\hat a_0^\dagger \hat a_0$ does not help to
develop a definite phase.
Note that this is {\em not} due to our special choice of
the cosine and sine operators, because
the same conclusion is obtained also
when the cosine and sine operators of $\hat a_0$
are used instead of Eqs.\ (\ref{cos}) and (\ref{sin}).
We will touch on this point again in section \ref{sec_OP}.
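In the effective single-mode picture afforded by ${\hat b_0}$, Eqs.\ (\ref{cos of Ny}) and (\ref{sin of Ny}) can be verified with a small numerical sketch (truncated Fock basis; the dimension $D$ and the number $N$ are illustrative):

```python
import numpy as np

# Cosine and sine operators, Eqs. (cos) and (sin), in a truncated
# Fock basis of the natural coordinate b0 (single-mode picture).
D, N = 60, 30
n = np.arange(D)
b = np.diag(np.sqrt(n[1:].astype(float)), k=1)      # b0
inv2sqrt = np.diag(1.0 / (2.0 * np.sqrt(n + 1.0)))  # 1/(2 sqrt(b0^dag b0 + 1))
cos_phi = inv2sqrt @ b + b.conj().T @ inv2sqrt
sin_phi = (inv2sqrt @ b - b.conj().T @ inv2sqrt) / 1j

num_state = np.zeros(D)
num_state[N] = 1.0                                  # number state |N, y>
print(num_state @ cos_phi @ num_state)              # 0: no definite phase
print(abs(num_state @ sin_phi @ num_state))         # 0
```

Both expectation values vanish because the cosine and sine operators are purely off-diagonal in the number basis.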
\subsection{Coherent state of interacting bosons}
We define a coherent state of interacting bosons (CSIB) by
\begin{equation}
| \alpha, {\bf y} \rangle
\equiv
e^{-|\alpha|^2/2}
\sum_{m = 0}^\infty
{
\alpha^m
\over
\sqrt{m!}
}
{| m, {\bf y} \rangle},
\label{cs}\end{equation}
which is labeled by ${\bf y}$ and a complex number,
\begin{equation}
\alpha \equiv e^{i \phi} \sqrt{N}.
\end{equation}
The inverse transformation, up to a normalization constant, is
\begin{equation}
{| N, {\bf y} \rangle}
\propto
\int_{-\pi}^{\pi} \frac{d \phi}{2 \pi}
| \alpha, {\bf y} \rangle.
\label{inverse_tr}\end{equation}
Regarding the number and phase fluctuations,
we can easily show that
$| \alpha, {\bf y} \rangle$ has the same properties as
a coherent state of a single-mode harmonic
oscillator \cite{mandel}. Namely,
\begin{eqnarray}
\langle N \rangle_{\alpha, {\bf y}}
&\equiv&
\langle \alpha, {\bf y} | \hat N | \alpha, {\bf y} \rangle
=
|\alpha|^2
=
N,
\\
\langle \delta N^2 \rangle_{\alpha, {\bf y}}
&\equiv&
\langle \alpha, {\bf y} | \delta \hat N^2 | \alpha, {\bf y} \rangle
=
|\alpha|^2
=
N,
\label{N and dN of CS}\end{eqnarray}
and, for $|\alpha|^2 = N \gg 1$,
\begin{eqnarray}
\langle \sin \phi \rangle_{\alpha, {\bf y}}
&\equiv&
\langle \alpha, {\bf y} | \hat {\sin \phi} | \alpha, {\bf y} \rangle
\nonumber\\
&=&
[1 - 1 / (8|\alpha|^2) + \cdots ]
\sin \phi,
\\
\langle \delta \sin^2 \phi \rangle_{\alpha, {\bf y}}
&\equiv&
\langle \alpha, {\bf y} | (\delta \hat{\sin \phi})^2 | \alpha, {\bf y} \rangle
\nonumber\\
&=&
(1 / 4|\alpha|^2)
(1 - \sin^2 \phi)
+ \cdots,
\end{eqnarray}
and similar results for the cosine operator.
It is customary to express the results for the sine and
cosine operators symbolically as
\begin{eqnarray}
\langle \phi \rangle_{\alpha, {\bf y}}
&\approx&
\phi,
\\
\langle \delta \phi^2 \rangle_{\alpha, {\bf y}}
&\approx&
1 / (4|\alpha|^2)
= 1/ (4N).
\end{eqnarray}
Therefore, $| \alpha, {\bf y} \rangle$ is the minimum-uncertainty state
in the sense that it
has the {\em minimum} allowable value of the number-phase uncertainty
product (NPUP) [Eq.\ (\ref{NPUR})];
\begin{equation}
\langle \delta N^2 \rangle_{\alpha, {\bf y}}
\langle \delta \phi^2 \rangle_{\alpha, {\bf y}}
\approx
1/4.
\label{NPUP_CSIB}\end{equation}
The magnitude of the number fluctuation is conveniently measured with
the ``Fano factor'' $F$,
which is defined by
\begin{equation}
F
\equiv
\langle \delta N^2 \rangle / \langle N \rangle.
\label{Fano}\end{equation}
For $| \alpha, {\bf y} \rangle$, we find
\begin{equation}
F_{\alpha, {\bf y}}
\equiv
{
\langle \delta N^2 \rangle_{\alpha, {\bf y}}
\over
\langle N \rangle_{\alpha, {\bf y}}
}
=
1.
\label{Fano_CSIB}\end{equation}
Therefore, using ${\hat b_0}$, we have successfully constructed a very
special state of interacting bosons, $| \alpha, {\bf y} \rangle$,
whose Fano factor is {\em exactly} unity, and which
has the {\em minimum} allowable value of the NPUP.
This should be contrasted with
Bogoliubov's ground state
$| \alpha_0, {\bf y}^{cl} \rangle^{cl}$,
for which
\begin{equation}
F^{cl}
\equiv
{
\langle \delta N^2 \rangle^{cl}
\over
\langle N \rangle^{cl}
}
=
1
+
\frac{
\sum_{{\bf q} \neq {\bf 0}} (\sinh |y_q^{cl}|)^4
}{
|\alpha_0|^2 + \sum_{{\bf q} \neq {\bf 0}} (\sinh |y_q^{cl}|)^2
}
>
1,
\label{Fano cl}\end{equation}
and the NPUP is larger than 1/4.
The CSIB should not be confused with
$| \alpha_0, {\bf y}^{cl} \rangle^{cl}$.
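The values $F_{\alpha, {\bf y}} = 1$ and ${\rm NPUP} \approx 1/4$ can be confirmed numerically in the effective single-mode picture. The sketch below (illustrative values of $D$, $\bar N$, $\phi$, with $\bar N \gg 1$) builds the coherent-state amplitudes and the sine operator of Eq.\ (\ref{sin}):

```python
import numpy as np
from math import lgamma

# Coherent state of the effective single mode: Fano factor -> 1 and
# number-phase uncertainty product -> 1/4, cf. Eqs. (Fano_CSIB),
# (NPUP_CSIB).  D, Nbar, phi are illustrative values.
D, Nbar, phi = 250, 40.0, 0.3
n = np.arange(D)
log_mag = (-Nbar / 2 + n * np.log(Nbar) / 2
           - 0.5 * np.array([lgamma(k + 1.0) for k in n]))
c = np.exp(log_mag) * np.exp(1j * n * phi)   # amplitudes of |alpha, y>

mean_N = np.sum(n * np.abs(c) ** 2)
var_N = np.sum((n - mean_N) ** 2 * np.abs(c) ** 2)
fano = var_N / mean_N                        # -> 1 (Poissonian)

b = np.diag(np.sqrt(n[1:].astype(float)), k=1)
inv2sqrt = np.diag(1.0 / (2.0 * np.sqrt(n + 1.0)))
sin_phi = (inv2sqrt @ b - b.conj().T @ inv2sqrt) / 1j
mean_sin = (c.conj() @ sin_phi @ c).real
var_sin = (c.conj() @ sin_phi @ sin_phi @ c).real - mean_sin ** 2
dphi2 = var_sin / (1.0 - mean_sin ** 2)      # symbolic <delta phi^2>
print(fano)                                  # ~ 1
print(var_N * dphi2)                         # NPUP ~ 1/4
```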
\subsection{Number-phase squeezed state of interacting bosons}
\label{sec_NPIB}
We define a new state ${| \xi, N, {\bf y} \rangle}$ by
\begin{eqnarray}
{| \xi, N, {\bf y} \rangle}
&\equiv&
\sqrt{K(N, |\xi|^2)} \ e^{-|\xi|^2/2}
\sum_{n = 0}^N
{
\xi^{*(N - n)}
\over
\sqrt{(N - n)!}
}
{| n, {\bf y} \rangle}
\\
&=&
\sqrt{K(N, |\xi|^2)} \ e^{-|\xi|^2/2}
\sum_{n = 0}^N
{
\xi^{*(N - n)}
\over
\sqrt{(N - n)! n!}
}
({\hat b_0}^\dagger)^n
{| 0, {\bf y} \rangle},
\label{NPIB}\end{eqnarray}
which is labeled by ${\bf y}$ and a complex number,
\begin{equation}
\xi
\equiv
e^{i \phi} |\xi|.
\end{equation}
We henceforth assume that
\begin{equation}
|\xi|^2 \ll N
\quad \mbox{and} \quad
N \gg 1,
\label{assumption_xi}\end{equation}
which allows us to set $K(N, |\xi|^2)=1$ to a very good approximation.
The probability $P(m)$ of finding $m$ bosons for the state ${| \xi, N, {\bf y} \rangle}$ obeys
the shifted Poisson distribution [{\it cf.} Eq.\ (\ref{sP})],
\begin{equation}
P(m)
=
e^{- |\xi|^2}
{
|\xi|^{2(N - m)}
\over
(N - m)!
}.
\label{sPm}\end{equation}
The number fluctuation and the Fano factor are evaluated as
\begin{eqnarray}
\langle N \rangle_{\xi, N, {\bf y}}
&\equiv&
{\langle \xi, N, {\bf y} |} \hat N {| \xi, N, {\bf y} \rangle}
=
N - |\xi|^2,
\\
\langle \delta N^2 \rangle_{\xi, N, {\bf y}}
&\equiv&
{\langle \xi, N, {\bf y} |} \delta \hat N^2 {| \xi, N, {\bf y} \rangle}
=
|\xi|^2,
\\
F_{\xi, N, {\bf y}}
&\equiv&
{
\langle \delta N^2 \rangle_{\xi, N, {\bf y}}
\over
\langle N \rangle_{\xi, N, {\bf y}}
}
=
{
|\xi|^2
\over
N - |\xi|^2
}
\approx
{
|\xi|^2
\over
N
}
\ll 1.
\end{eqnarray}
As compared with Eqs.\ (\ref{N and dN of CS}) and
(\ref{Fano_CSIB}), we observe that
the state ${| \xi, N, {\bf y} \rangle}$ has a very narrow distribution of the boson number.
On the other hand,
${| \xi, N, {\bf y} \rangle}$ has a well-defined phase \cite{phase} when
\begin{equation}
1 \ll |\xi|^2 \ll N.
\label{xi N}\end{equation}
In fact, under this condition we can easily show that
\begin{eqnarray}
\langle \sin \phi \rangle_{\xi, N, {\bf y}}
&\equiv&
{\langle \xi, N, {\bf y} |} \hat {\sin \phi} {| \xi, N, {\bf y} \rangle}
\nonumber\\
&=&
\left[1 - 1 /(8|\xi|^2) + \cdots \right]
\sin \phi
\\
\langle \delta \sin^2 \phi \rangle_{\xi, N, {\bf y}}
&\equiv&
{\langle \xi, N, {\bf y} |} (\delta \hat{\sin \phi})^2 {| \xi, N, {\bf y} \rangle}
\nonumber\\
&=&
(1 / 4|\xi|^2)
(1 - \sin^2 \phi)
+ \cdots,
\end{eqnarray}
and similar results for the cosine operator.
As in the case of the CSIB,
we may express these results symbolically as
\begin{eqnarray}
\langle \phi \rangle_{\xi, N, {\bf y}}
&\approx&
\phi
\\
\langle \delta \phi^2 \rangle_{\xi, N, {\bf y}}
&\approx&
1 / (4|\xi|^2).
\end{eqnarray}
Therefore,
just as $| \alpha, {\bf y} \rangle$ does,
${| \xi, N, {\bf y} \rangle}$ has the minimum value of
the NPUP;
\begin{equation}
\langle \delta N^2 \rangle_{\xi, N, {\bf y}}
\langle \delta \phi^2 \rangle_{\xi, N, {\bf y}}
\approx 1/4
\quad \mbox{(for $1 \ll |\xi|^2 \ll N$)}.
\label{NPUP_NPIB}\end{equation}
Since each component of the product satisfies
$
\langle \delta \hat N^2 \rangle_{\xi, N, {\bf y}}
\ll
\langle \delta \hat N^2 \rangle_{\alpha,{\bf y}}
$
and
$
\langle \delta \phi^2 \rangle_{\xi, N, {\bf y}}
\gg
\langle \delta \phi^2 \rangle_{\alpha,{\bf y}}
$,
${| \xi, N, {\bf y} \rangle}$
is obtained by
``squeezing'' $| \alpha, {\bf y} \rangle$
in the direction of $\hat N$,
while keeping the NPUP minimum.
({\it cf.} The conventional squeezed state
has a larger NPUP \cite{mandel}.)
We thus call
${| \xi, N, {\bf y} \rangle}$ the
``number-phase squeezed state of interacting bosons'' (NPIB).
\subsection{Phase-randomized mixture of
number-phase squeezed states of interacting bosons}
\label{sec_PRM}
We now take
\begin{equation}
\xi
=
e^{i \phi} \sqrt{Jt}
\equiv
\xi(t).
\end{equation}
That is,
$
|\xi|^2 = J t
$.
Then, inequalities (\ref{assumption_xi}) are satisfied because of
our assumption (\ref{assumption}).
We can show by explicit calculation that
Eq.\ (\ref{rho Poisson}) can be rewritten as
\begin{equation}
\hat \rho(t)
=
\int_{- \pi}^{\pi} {d \phi \over 2 \pi}
{| e^{i \phi} \sqrt{Jt}, N, {\bf y} \rangle} {\langle e^{i \phi} \sqrt{Jt}, N, {\bf y} |}.
\label{rho random phase}\end{equation}
Therefore,
the boson state in the box can be viewed
either as the shifted Poissonian mixture, Eq.\ (\ref{rho Poisson}),
of NSIBs,
or as the phase-randomized mixture (PRM), Eq.\ (\ref{rho random phase}),
of NPIBs.
Both representations are simply described in terms of $\hat b_0$.
In contrast, the same $\hat \rho(t)$ would be described in a
very complicated manner in terms of the bare operators.
We have thus obtained {\em double pictures} (or {\em representations}),
Eqs.\ (\ref{rho Poisson}) and (\ref{rho random phase}),
for the {\em same} physical state \cite{double,double_pic}.
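The equality of the double pictures can also be checked by direct computation. In the sketch below (single-mode picture spanned by $| m, {\bf y} \rangle$; $N$ and $Jt$ are illustrative), the phase integral of Eq.\ (\ref{rho random phase}) is discretized on a uniform grid, which reproduces the shifted Poissonian mixture (\ref{rho Poisson}) to machine precision:

```python
import numpy as np
from math import factorial, exp, sqrt

# Equivalence of Eq. (rho Poisson) and Eq. (rho random phase) in the
# single-mode picture.  N and Jt are illustrative values.
N, Jt = 20, 3.0

# shifted Poissonian mixture of number states (diagonal)
rho_num = np.diag([exp(-Jt) * Jt ** (N - m) / factorial(N - m)
                   for m in range(N + 1)])

# phase-randomized mixture of NPIB states (uniform grid over phi)
n_phi = 400
rho_phase = np.zeros((N + 1, N + 1), dtype=complex)
for phi in np.linspace(-np.pi, np.pi, n_phi, endpoint=False):
    xi = sqrt(Jt) * np.exp(1j * phi)
    c = np.array([np.conj(xi) ** (N - m) / sqrt(float(factorial(N - m)))
                  for m in range(N + 1)]) * exp(-Jt / 2)
    rho_phase += np.outer(c, c.conj()) / n_phi

print(np.max(np.abs(rho_num - rho_phase)))   # ~ 0: same density operator
```

The off-diagonal terms $e^{-i\phi(m'-m)}$ average to zero exactly on the uniform grid, leaving the shifted Poissonian diagonal.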
According to the former picture,
the state of the box is one of NSIBs,
for which the number of bosons is definite (but unknown), whereas
the phase is completely indefinite.
According to the latter picture, on the other hand,
the state is one of NPIBs,
for which the number of bosons has a finite fluctuation
$\langle \delta N^2 \rangle \approx Jt$, whereas
the phase is almost definite \cite{phase} (but unknown),
$\langle \delta \phi^2 \rangle \approx 1/(4 Jt)$.
What allows these double pictures is the superposition principle
\cite{double_pic}.
In addition to Eqs.\ (\ref{rho Poisson}) and (\ref{rho random phase}),
there are many other ways to express $\hat \rho (t)$ as
different mixtures.
Among them, Eq.\ (\ref{rho Poisson}) is the form in which
{\em each} element of the mixture has the smallest value of
the number fluctuation,
whereas in Eq.\ (\ref{rho random phase})
{\em each} element of the mixture has the smallest value of
the phase fluctuation.
Therefore, the latter representation is particularly convenient for
discussing physical properties that are related to the phase,
as will be shown in sections \ref{sec_pm} and \ref{sec_OP}.
\subsection{Origin of the direction of the time evolution}
As time evolves,
the number of bosons decreases as
\begin{equation}
\langle N(t) \rangle
=
N - Jt.
\label{Nt}\end{equation}
As a result, the energy of the bosons in the box decreases with $t$.
For example, for each element of Eq.\ (\ref{rho random phase}),
\begin{equation}
{\langle e^{i \phi} \sqrt{Jt}, N, {\bf y} |} \hat H {| e^{i \phi} \sqrt{Jt}, N, {\bf y} \rangle}
<
{\langle N, {\bf y} |} \hat H {| N, {\bf y} \rangle}
\quad \mbox{for $t>0$}.
\end{equation}
However, we note that this energy difference is just
a consequence of the difference in $\langle N \rangle$.
Namely,
${| N, {\bf y} \rangle}$ and the NPIB have the same energy
if they have the same value of $\langle N \rangle$;
\begin{equation}
\langle e^{i \phi} \sqrt{Jt}, N+Jt, {\bf y} |
\hat H
| e^{i \phi} \sqrt{Jt}, N+Jt, {\bf y} \rangle
\approx
{\langle N, {\bf y} |} \hat H {| N, {\bf y} \rangle}.
\label{same_energy}\end{equation}
We can therefore conclude that
the direction of the time evolution,
from ${| N, {\bf y} \rangle}$ to the PRM of ${| \xi, N, {\bf y} \rangle}$, is
{\em not} determined by an energy difference.
Hence, it must be due to difference in the
nature of the wavefunctions.
The study of such a nature, however, needs many pages of analysis,
which is beyond the scope
of this paper,
thus will be described elsewhere \cite{unpublished}.
\section{Action of measurement or its equivalence}
\label{sec_action of meas.}
In the previous section,
we have obtained the double pictures,
Eqs.\ (\ref{rho Poisson}) and (\ref{rho random phase}).
Depending on the physical situation,
either picture is convenient \cite{double_pic}.
To explain this point, we discuss two examples in this section.
\subsection{Number measurement}
Suppose that one measures $N$
of the boson system whose density operator is given by
Eq.\ (\ref{rho Poisson}), or,
equivalently, by Eq.\ (\ref{rho random phase}).
In this case, the former expression is convenient.
In fact,
if the measurement error
\begin{equation}
\delta N_{err}
<
\sqrt{\langle \delta N(t)^2 \rangle}
=
\sqrt{Jt},
\end{equation}
and if the measurement is of the first kind \cite{SF},
then the action of the measurement is
to narrow the number distribution on
the rhs of Eq.\ (\ref{rho Poisson}) down to a width of order
$\delta N_{err}$.
That is,
the density operator immediately after the measurement
is generally given by \cite{mtheory}
\begin{equation}
\hat \rho_{\bar{N}}(t)
\equiv
\sum_{m = 0}^N
W(m - \bar{N})
{| m, {\bf y} \rangle} {\langle m, {\bf y} |}.
\label{rho number meas}\end{equation}
Here,
$\bar{N}$ is the value of $N$ obtained by the measurement, and
$W$ is a smooth function which has
the following properties:
\begin{eqnarray}
&&
W(m - \bar{N}) \geq 0,
\\
&&
W(m - \bar{N}) \approx 0
\quad \mbox{for} \quad
|m - \bar{N}| \gtrsim \delta N_{err},
\\
&&
\sum_{m = 0}^N
W(m - \bar{N})
= 1.
\end{eqnarray}
The detailed form of $W$ depends on the detailed structures of the
measuring apparatus, and thus is of no interest here.
In an ideal case where $\delta N_{err} \to 0$,
$W$ becomes Kronecker's delta, and
\begin{equation}
\hat \rho_{\bar{N}}(t)
\to
{| \bar{N}, {\bf y} \rangle} {\langle \bar{N}, {\bf y} |}.
\end{equation}
On the other hand, if the measurement is rather inaccurate
in such a way that
\begin{equation}
\delta N_{err}
>
\sqrt{\langle \delta N(t)^2 \rangle}
=
\sqrt{Jt},
\end{equation}
then almost no change of $\hat \rho$ is induced by the measurement,
if it is of the first kind \cite{mtheory}.
\subsection{Phase measurement}
\label{sec_pm}
Suppose that one measures $\phi$
of the boson system whose density operator is
Eq.\ (\ref{rho Poisson}), or,
equivalently, Eq.\ (\ref{rho random phase}).
In this case, the latter representation is convenient.
In fact,
if the measurement error
\begin{equation}
\delta \phi_{err}
>
\sqrt{\langle \delta \phi^2 \rangle_{\xi, N, {\bf y}}}
=
{1
\over
2 |\xi(t)|
}
=
{1
\over
2 \sqrt{Jt}
},
\end{equation}
and if
the measurement is performed in such a way that
the backaction of the measurement is minimum,
then the action of the measurement is
just to find (or, get to know) the ``true'' value of $\phi$,
to the accuracy of $\delta \phi_{err}$,
among many possibilities on the rhs of
Eq.\ (\ref{rho random phase}).
Therefore,
the density operator just after the measurement is generally given
by \cite{mtheory}
\begin{equation}
\hat \rho_{\bar{\phi}}(t)
\equiv
\int_{- \pi}^{\pi} {d \phi \over 2 \pi}
D(\phi - \bar{\phi}) {| e^{i \phi} \sqrt{Jt}, N, {\bf y} \rangle} {\langle e^{i \phi} \sqrt{Jt}, N, {\bf y} |}.
\label{rho phase meas}\end{equation}
Here,
$\bar{\phi}$ is the value of $\phi$ obtained by the measurement, and
$D(\phi - \bar{\phi})$ is a smooth function which has
the following properties:
\begin{eqnarray}
&&
D(\phi - \bar{\phi}) \geq 0,
\\
&&
D(\phi - \bar{\phi}) \approx 0
\quad \mbox{for} \quad
|\phi - \bar{\phi}| \gtrsim \delta \phi_{err},
\\
&&
\int_{- \pi}^{\pi} {d \phi \over 2 \pi}
D(\phi - \bar{\phi})
= 1.
\end{eqnarray}
The detailed form of $D$ depends on the detailed structures of the
measuring apparatus, and thus is of no interest here.
On the other hand, if the measurement is very accurate
in such a way that
\begin{equation}
\delta \phi_{err}
<
\sqrt{\langle \delta \phi^2 \rangle_{\xi, N, {\bf y}}}
=
{1
\over
2 |\xi(t)|
}
=
{1
\over
2 \sqrt{Jt}
},
\end{equation}
then $\hat \rho$ will ``collapse'' into
another state whose phase fluctuation is less than
$\langle \delta \phi^2 \rangle_{\xi, N, {\bf y}}$.
However, such an accurate measurement is practically difficult when
$Jt \gg 1$.
Therefore, in most experiments
we may take
Eq.\ (\ref{rho phase meas}) for
the density operator after the measurement,
if the measurement is performed in such a way that
the backaction of the measurement is minimum.
\begin{figure}[h]
\begin{center}
\epsfile{file=PRA-fig2.eps,scale=0.4}
\end{center}
\caption{Bosons are confined independently in two boxes.
The number of bosons in each box is fixed for $t<0$.
If holes are made in the boxes at $t>0$, then the leakage fluxes
are induced, which exhibit interference.
}
\label{two boxes}
\end{figure}
For example, suppose that
one prepares bosons {\em independently}
in two boxes [Fig.\ \ref{two boxes}],
where the number of bosons in each box is fixed for $t<0$.
At $t=0$ a small hole is made in each box, and
small fluxes of bosons escape from the boxes.
Since $\hat \rho$ of each box evolves as
Eq.\ (\ref{rho random phase}),
it is clear that
these fluxes can interfere
{\em at each experimental run}
(as in the cases of non-interacting bosons \cite{theory1,theory2,theory3}
and two-mode bosons \cite{RW})
as the interference between
two NPIBs of two boxes.
From the interference pattern,
one can measure the relative phase $\bar \phi$ of two condensates.
After the measurement,
$\hat \rho$ of each box would take
the form of Eq.\ (\ref{rho phase meas}).
Such an experiment would be possible
by modifying the experiment of Ref.\ \cite{andrews}
in such a way that $J$ becomes small enough.
\section{Order parameter}
\label{sec_OP}
The order parameter of BEC
can be defined in various ways.
For the ground state
$| \alpha, {\bf y}^{cl} \rangle^{cl}$
of the semiclassical Hamiltonian $\hat H^{cl}$,
different definitions give the same result.
However, this is not the case for
${| N, {\bf y} \rangle}$
and ${| \xi, N, {\bf y} \rangle}$.
We explore these points in this section.
\subsection{Off-diagonal long-range order}
We first consider the two-point
correlation function defined by
\begin{equation}
\Upsilon({\bf r}_1, {\bf r}_2)
\equiv
{\rm Tr} [ \hat \rho
\hat \psi^\dagger({\bf r}_1) \hat \psi({\bf r}_2)].
\label{Upsilon}\end{equation}
The system is said to possess the
off-diagonal long-range order (ODLRO) if \cite{penrose,yang,odlro}
\begin{equation}
\lim_{|{\bf r}_1 - {\bf r}_2| \to \infty}
\Upsilon({\bf r}_1, {\bf r}_2)
\neq
0.
\label{ODLRO}\end{equation}
This limiting value cannot be finite
without the condensation of a macroscopic number of bosons.
(Without the condensation, we simply have
$
\lim_{|{\bf r}_1 - {\bf r}_2| \to \infty}
\Upsilon({\bf r}_1, {\bf r}_2)
=0
$
for the ground state and for any finite excitations.)
In this sense,
Eq.\ (\ref{ODLRO}) is a criterion of the condensation.
If the system possesses the ODLRO,
it is customary to define
the order parameter $\Xi$ by
the asymptotic form of $\Upsilon$ as
\begin{equation}
\Upsilon({\bf r}_1, {\bf r}_2)
\sim
\Xi^*({\bf r}_1) \Xi({\bf r}_2).
\label{Xi}\end{equation}
According to this definition,
we obtain the same results
for {\em all} of
${| N, {\bf y} \rangle}$, ${| \xi, N, {\bf y} \rangle}$, and
$| \alpha, {\bf y} \rangle$,
where $\xi = e^{i \phi} \sqrt{Jt}$,
$Jt \ll N$ and $\alpha = e^{i \phi} \sqrt{N}$.
Namely, using Eqs.\ (\ref{property of psi'}) and
(\ref{formula of decomposition}), we find
\begin{equation}
\lim_{|{\bf r}_1 - {\bf r}_2| \to \infty}
\Upsilon({\bf r}_1, {\bf r}_2)
=
n_0,
\ \mbox{hence} \ \Xi = \sqrt{n_0} e^{i \varphi},
\label{ODLRO of all states}\end{equation}
for {\em all} of these states.
Therefore, neither the ODLRO nor $\Xi$
is able to distinguish between these states.
\subsection{Definition as a matrix element}
As an order parameter of
the state for which $N$ is exactly fixed,
Ref.\ \cite{LP} uses the ``wavefunction of
the condensate'' $\Xi$, as defined by Eq.\ (\ref{wf of condensate}).
It is clear that this definition is just a special case of
that given in the previous subsection.
In fact, for ${| N, {\bf y} \rangle}$ Eq.\ (\ref{wf of condensate}) yields
\begin{equation}
\Xi
=
{\langle N - 1, {\bf y} |} \hat \psi {| N, {\bf y} \rangle}
= \sqrt{n_0} \ e^{i \varphi},
\end{equation}
in agreement with Eq.\ (\ref{ODLRO of all states}).
\subsection{Definition as the expectation value of $\hat \psi$}
Another definition
of the order parameter is
the expectation value of $\hat \psi$, which we denote by $\Psi$;
\begin{equation}
\Psi ({\bf r}) \equiv \langle \hat \psi({\bf r}) \rangle.
\end{equation}
According to this definition,
the ground state ${| N, {\bf y} \rangle}$
of a fixed number of bosons
does not have a finite order parameter;
\begin{equation}
\Psi
=
{\langle N, {\bf y} |} \hat \psi({\bf r}) {| N, {\bf y} \rangle}
=
0.
\label{Psi_is_zero}\end{equation}
This result is rather trivial because $\hat \psi$
alters $N$ exactly by one.
On the other hand, it was sometimes conjectured
in the literature \cite{forster} that
the expectation value of the bare operator $\hat a_0$
might be finite
$
{\langle N, {\bf y} |} \hat a_0 {| N, {\bf y} \rangle} \neq 0
$
in the presence of many-body interactions,
because the number of bosons in the bare ${\bf k} = 0$ state
fluctuates due to many-body scattering.
However, this conjecture is wrong because
by integrating Eq.\ (\ref{Psi_is_zero}) over ${\bf r}$
we obtain
\begin{equation}
{\langle N, {\bf y} |} \hat a_0 {| N, {\bf y} \rangle} = 0.
\end{equation}
That is, although $\hat a_0^\dagger \hat a_0$ fluctuates
in the state ${| N, {\bf y} \rangle}$ it does not lead to a finite
${\langle N, {\bf y} |} \hat a_0 {| N, {\bf y} \rangle}$.
In our gedanken experiment,
$\hat \rho(t)$ evolves as
Eq.\ (\ref{rho Poisson}), or, equivalently,
as Eq.\ (\ref{rho random phase}).
For this mixed ensemble,
\begin{equation}
\Psi = {\rm Tr}[\hat \rho(t) \hat \psi] = 0.
\end{equation}
This is the {\em average over all elements}
in the mixed ensemble,
and corresponds to the {\em average over many experimental runs}.
On the other hand,
$\Psi$ of {\em each element},
which corresponds to {\em a possible
result for a single experimental run},
is different between
the two expressions, Eqs.\ (\ref{rho Poisson}) and (\ref{rho random phase}).
That is, for each element of the mixtures,
$\Psi = 0$ for Eq.\ (\ref{rho Poisson})
because of Eq.\ (\ref{Psi_is_zero}),
whereas for Eq.\ (\ref{rho random phase})
\begin{equation}
\Psi
=
{\langle e^{i \phi} \sqrt{Jt}, N, {\bf y} |} \hat \psi {| e^{i \phi} \sqrt{Jt}, N, {\bf y} \rangle}
\label{maxPsi}\end{equation}
can be finite.
As discussed in sections \ref{sec_NPIB} and \ref{sec_PRM},
Eq.\ (\ref{rho random phase})
is the form in which
{\em each} element of the mixture has the smallest value of
the phase fluctuation.
This indicates that
each element of the mixture possesses
the most definite (non-fluctuating) value
of $\Psi$
when we take the representation (\ref{rho random phase}),
among many representations of the same $\hat \rho(t)$.
We are most interested in this case, because
$\Psi$ is usually taken as a macroscopic order parameter,
which has a definite value obeying the Ginzburg-Landau equation.
\begin{figure}[h]
\begin{center}
\epsfile{file=PRA-fig3.eps,scale=0.5}
\end{center}
\caption{
Left scale: $\langle (\hat{\sin} \phi)^2 \rangle$
of the $\phi=0$ element of Eq.\ (\protect\ref{rho random phase}).
The dashed line represents $1/(4Jt)$.
Right scale: $|\Psi|$ defined by Eq.\ (\protect\ref{maxPsi}).
The dotted line denotes $|\Psi|=\protect\sqrt{n_0}$.
Both $\langle (\hat{\sin} \phi)^2 \rangle$
and $|\Psi|$
are plotted against $Jt$, the number of escaped bosons.
}
\label{OP}
\end{figure}
Figure \ref{OP} plots
$|\Psi|$ of Eq.\ (\ref{maxPsi})
as a function of $Jt$ ($=$ the average number of escaped bosons).
We find that $|\Psi|$ grows very rapidly,
until it attains a constant value\cite{decrease},
\begin{equation}
\Psi
=
{\langle e^{i \phi} \sqrt{Jt}, N, {\bf y} |} \hat \psi {| e^{i \phi} \sqrt{Jt}, N, {\bf y} \rangle}
\to
e^{i (\phi + \varphi)} \sqrt{n_0}
\quad \mbox{for $Jt \gtrsim 2$}.
\end{equation}
This value equals
$\Psi$ of
$| \alpha, {\bf y} \rangle$ with $\alpha = e^{i \phi} \sqrt{N}$;
\begin{equation}
\Psi
=
\langle \alpha, {\bf y} | \hat \psi | \alpha, {\bf y} \rangle
=
e^{i \varphi} \sqrt{n_0 \over n V} \alpha
=
e^{i (\phi + \varphi)} \sqrt{n_0}.
\end{equation}
Note here that $|\Psi|$ of $| \alpha, {\bf y} \rangle$ is
renormalized by the factor $\sqrt{|Z|} = \sqrt{n_0/n}$ because of the
many-body interactions.
We have also plotted $\langle (\hat{\sin} \phi)^2 \rangle$
of the $\phi=0$ element of Eq.\ (\ref{rho random phase})
in Fig.\ \ref{OP}.
This is a measure of
the phase fluctuation of the $\phi=0$ element.
Because of the rotational symmetry with respect to $\phi$,
we can regard it as a measure of
the phase fluctuation of every element.
We find that
$\langle (\hat{\sin} \phi)^2 \rangle$ decreases rapidly
as $Jt$ is increased, until
it behaves as
\begin{equation}
\langle (\hat{\sin} \phi)^2 \rangle
\approx
1/(4Jt),
\end{equation}
for $Jt \gtrsim$ 3.
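The $1/(4Jt)$ behavior can be checked with a minimal numerical sketch. Here we model a single element of the mixture by a simple coherent state of mean boson number $Jt$, and evaluate $\langle (\hat{\sin} \phi)^2 \rangle$ with the Susskind--Glogower phase operators; the states and operators of the present paper differ in detail, so this is only an order-of-magnitude illustration, not a reproduction of Fig.\ \ref{OP}.

```python
import math

def coherent_amplitudes(nbar, nmax):
    # c_n = e^{-nbar/2} xi^n / sqrt(n!), with real xi = sqrt(nbar) (phase = 0);
    # built iteratively to avoid factorial overflow
    xi = math.sqrt(nbar)
    c = [math.exp(-nbar / 2.0)]
    for n in range(nmax):
        c.append(c[-1] * xi / math.sqrt(n + 1))
    return c

def sin2_phase(nbar, nmax=400):
    # <sin^2 phi> with sin(phi) = (E - E†)/(2i), E = sum_n |n><n+1|
    # (Susskind-Glogower); uses E E† = 1 and E†E = 1 - |0><0|
    c = coherent_amplitudes(nbar, nmax)
    e2 = sum(c[n] * c[n + 2] for n in range(nmax - 1))  # <E^2>, real amplitudes
    return 0.25 * (2.0 - c[0] ** 2 - 2.0 * e2)

for jt in (2, 5, 25):
    # the phase variance shrinks as the mean number Jt of leaked bosons grows,
    # approaching 1/(4 Jt) for large Jt
    print(jt, sin2_phase(jt), 1.0 / (4 * jt))
```

For large $Jt$ the computed value approaches $1/(4Jt)$, in line with the asymptotic form above: a few leaked bosons already suppress the phase fluctuation dramatically.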
Therefore,
{\em after the leakage of only two or three bosons,
${| e^{i \phi} \sqrt{Jt}, N, {\bf y} \rangle}$ acquires the
full, stable and definite (non-fluctuating) values of $\Psi$ and $\phi$},
and the {\em gauge symmetry is broken} in this sense.
One might expect that
$\langle \delta N^2 \rangle$ of the order of $\langle N \rangle$
would be necessary to achieve such stable $\Psi$ and $\phi$
because $\langle \delta N^2 \rangle = \langle N \rangle$ for
a CSIB.
Our result shows that this expectation is wrong, because
$\Psi$ and $\phi$ already become stable when $Jt \sim 2$, for which
$\langle \delta N^2 \rangle = Jt \ll \langle N \rangle$.
In practice, it seems rather difficult to
fix $N$ to such high accuracy that $\delta N \lesssim 2$.
In such a case,
$\delta N$ would be larger than 2 from the beginning,
and each element of the mixture
has the full and stable values of $\Psi$ and $\phi$ from the beginning.
Finally, we remark on the evolution at later times;
so far we have only
considered the early stage, for which $Jt \ll N$.
It is clear that
the system eventually approaches the equilibrium state.
However, a question remains: what is the state
after the early stage, but before the
system reaches equilibrium?
It is expected that the state would be
some coherent state.
We can show that this is indeed the case \cite{unpublished}:
as $t \to \infty$,
$\hat \rho$ eventually approaches
the PRM of $| \alpha, {\bf y} \rangle$,
in which
$|\alpha|^2 = \langle N(t) \rangle$ ($<N$) \cite{decrease}
and ${\bf y}$ is given by Eqs.\
(\ref{cosh y cl}), (\ref{sinh y cl}) and (\ref{yq})
with $n = \langle N(t) \rangle/V$ \cite{decrease}.
To show this, we must extend the theory of section \ref{sec_evolution}.
This is beyond the scope
of this paper, and thus will be described elsewhere \cite{unpublished}.
The summary of the present paper has been given in section \ref{intro}.
\acknowledgments{
Helpful discussions with
M.\ Ueda, K.\ Fujikawa, H.\ Fukuyama,
T.\ Kimura and T.\ Minoguchi
are acknowledged.
The authors also thank M.\ D.\ Girardeau for informing them of
Ref.\ \cite{GA}.
}
\section{INTRODUCTION}
\label{s-intro}
This biennial series of workshops is an outstanding opportunity
to focus on the enormous breadth and depth of fundamental physics
accessible from the study of the production and decay
of the tau lepton and the tau neutrino.
At each meeting, we have seen tremendous progress in the precision
with which Standard Model physics is measured,
and increasing sensitivity to new physics; and
this meeting, fifth in the series, is no exception.
Tau physics continues to be wonderfully rich and deep!
The study of the tau contributes to the field of particle physics
in many ways. I think of the ``sub-fields'' as follows:
\begin{itemize}
\item Precision electroweak physics:
neutral current ($Z^0$) and charged current ($W^\pm$) couplings,
Michel parameters, leptonic branching fractions
and tests of universality. These measurements typically
have world-average errors better than 1\%.
They give indirect information on new physics at high mass scales
($Z^\prime$, $W_R$, $H^\pm$, {\it etc.}).
The higher the precision, the better chance of seeing
the effects of new physics, so continued improvement is necessary.
\item Very rare (``upper limit'') physics:
Direct searches for new physics, such as
processes forbidden in the Standard Model:
lepton-flavor-violating neutrinoless decays,
lepton-number-violating decays such as $\tau^-\to\mu^+ X$,
and CP-violating effects that can result from
anomalous weak and electromagnetic dipole moments.
\item Non-perturbative hadronic physics:
Our inability to reliably predict the properties
and dynamics of light mesons and baryons
at intermediate energies is the greatest failing
of particle physics. Tau decays provide a clean beam
of intermediate energy light mesons,
including vectors, axial-vectors, scalars and tensors.
One can measure mesonic couplings, and tune models
of resonances and Lorentz structure.
Topics of current interest are the presence
and properties of radial excitations such as
the $a_1^\prime$, and $K^{*\prime}$,
mixing, SU(3)$_f$ violation, and isospin decomposition
of multihadronic final states.
\item Inclusive QCD physics: the total and differential
inclusive rate for tau decays to hadrons (spectral functions),
can be used to measure $\alpha_S(s)$,
non-perturbative quark and gluon condensates,
and quark masses; and can be used to test QCD sum rules.
Here, in particular, there is a
rich interaction between theory and experiment.
\item Neutrino mass and mixing physics:
Aside from its fundamental significance, the presence
of tau neutrino mass and mixing has important implications
for cosmology and astrophysics $-$
it is our window on the universe.
\end{itemize}
The study of taus is thus an important tool in many fields,
and continual progress is being made in all of them,
as evidenced in the many contributions to this workshop
which I review in the following sections.
Some of the more exciting future goals in tau physics
were discussed in the first talk of the workshop,
by Martin Perl \cite{ref:perl}.
This was followed by a very comprehensive
overview of the theory of tau physics,
including future prospects for testing the theory,
by J.~Kuhn \cite{ref:kuhn}.
These talks are reviews in and of themselves;
I will focus, in this review, on the subsequent presentations.
\section{$\mathbf{Z^0}$ COUPLINGS}
\label{s-Z0couplings}
SLD and the four LEP experiments study the reaction
$e^+e^- \to Z^0 \to \tau^+\tau^-$
to extract a wealth of information
on rates and asymmetries,
and ultimately, on the neutral current
vector and axial couplings $v_f$ and $a_f$
for each fermion species $f$,
and from these, values for the effective weak mixing angle
$\sin^2\theta_W^f$.
Lepton universality is the statement that
these couplings (and the charged current couplings
to be discussed later) are the same for the electron,
muon and tau leptons.
The simplest such observable is
the partial width of the $Z^0$ into fermion pairs,
which to lowest order is:
$$ \Gamma_f \equiv \Gamma (Z^0 \to f\bar{f}) =
\frac{G_F M_Z^3}{6\sqrt{2}\pi}
\left( v_f^2 + a_f^2 \right). $$
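As a rough numerical check of the scale set by this formula, the lowest-order leptonic partial width can be evaluated directly (a sketch with approximate, illustrative input values; radiative corrections shift the result at the percent level):

```python
import math

# Lowest-order Z -> l+ l- partial width from the formula above.
# Input values are approximate and for illustration only.
GF  = 1.16637e-5   # Fermi constant, GeV^-2
MZ  = 91.187       # Z mass, GeV
sw2 = 0.2315       # effective weak mixing angle sin^2(theta_W)

# neutral-current couplings of a charged lepton (T3 = -1/2, Q = -1)
a_l = -0.5
v_l = -0.5 + 2.0 * sw2

gamma_l = GF * MZ**3 / (6.0 * math.sqrt(2.0) * math.pi) * (v_l**2 + a_l**2)
print("Gamma(Z -> l+l-) ~ %.1f MeV" % (1e3 * gamma_l))  # ~83 MeV at lowest order
```

The lowest-order value of about 83 MeV is already close to the measured leptonic width, which is why precision tests of the couplings require the full radiative corrections.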
Equivalently, the ratio of the total
hadronic to leptonic width $R_\ell \equiv \Gamma_{had}/\Gamma_\ell$,
with $\ell=e, \mu, \tau$, can be measured with high precision.
The angular distribution of the outgoing fermions
exhibits a parity-violating forward-backward asymmetry
$A_f^{FB} = \frac{3}{4}{\cal A}_e{\cal A}_f$
which permits the measurement of the asymmetry parameters
$${\cal A}_f \equiv
\frac{2 v_f a_f}{v_f^2 + a_f^2} .$$
From these one can extract
the neutral current couplings $v_f$ and $a_f$.
The Standard Model predicts that the
outgoing fermions from $Z^0$ decay are polarized,
with a polarization that depends on the scattering angle
$\cos\theta_f$ with respect to the incoming electron or positron:
$${\cal P}_f(\cos\theta_f) =
- \frac{{\cal A}_f(1+\cos^2\theta_f) + 2{\cal A}_e\cos\theta_f}
{(1+\cos^2\theta_f) + 2{\cal A}_f{\cal A}_e\cos\theta_f} .$$
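This angular dependence is simple to evaluate; a minimal sketch of the formula above (the asymmetry values used are round, illustrative numbers, not measured ones):

```python
def pol_tau(cos_theta, A_tau, A_e):
    """Tau polarization vs. scattering angle, from the formula above."""
    num = -(A_tau * (1.0 + cos_theta**2) + 2.0 * A_e * cos_theta)
    den = (1.0 + cos_theta**2) + 2.0 * A_tau * A_e * cos_theta
    return num / den

# At cos(theta) = 0 the beam term drops out and P_tau = -A_tau:
print(pol_tau(0.0, 0.15, 0.15))   # -> -0.15
```

Measuring the polarization as a function of $\cos\theta_\tau$ therefore disentangles ${\cal A}_\tau$ (the overall level) from ${\cal A}_e$ (the angular modulation).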
If the incoming electron beam is also polarized,
as at the SLC, it further modifies the polarization
of the outgoing fermions.
In the case of the tau ($f = \tau$), this polarization can be
measured at the statistical level by analyzing the
decay products of the tau, providing an independent way
to determine ${\cal A}_\tau$ and thus $v_\tau$ and $a_\tau$.
LEP and SLD have used almost all of the major tau decay modes
($e$, $\mu$, $\pi$, $\rho$, $a_1$)
to analyze the tau spin polarization as a function of $\cos\theta_\tau$.
SLD has measured the dependence of these polarizations on beam ($e^-$)
polarization.
SLD and all four LEP experiments measure
all these quantities.
At TAU 98, the results for the ratios of partial widths $R_\ell$
and forward-backward asymmetries $A_{FB}^\ell$ for $\ell=e, \mu, \tau$
are reviewed in \cite{ref:sobie} and shown in Fig.~\ref{fig:rl_afb}.
The tau polarization measurements at LEP are presented
in \cite{ref:alemany}.
The beam polarization dependent asymmetries at SLD
are described in \cite{ref:reinertsen}, and shown in Fig.~\ref{fig:sld}.
The procedure for combining the LEP results on ${\cal P}_\tau$
is reviewed in \cite{ref:roney},
in which it is concluded that the results from the four
LEP experiments are consistent but not too consistent.
These are nearly the final results from LEP on this subject.
\begin{figure}[ht]
\psfig{figure=figs/rl_afb.ps,width=2.6in}
\caption[]{LEP-I averages for
$A_{FB}$ versus $R_\ell$ for the three
lepton species and for the combined result.
The Standard Model prediction is given by the lines \cite{ref:sobie}.}
\label{fig:rl_afb}
\end{figure}
\begin{figure}[ht]
\psfig{figure=figs/sld_costh.ps,width=2.6in}
\caption[]{Polar angle distributions for leptonic final states,
from SLD with polarized beams \cite{ref:reinertsen}.}
\label{fig:sld}
\end{figure}
Note that LEP and SLD have completely consistent results
on the leptonic neutral current couplings;
it is the addition of the
heavy quark coupling measurements to the
Standard Model averages for the Weinberg angle
$\sin^2\theta_W$ that pulls the LEP average away from that of SLD
(and that discrepancy is shrinking as LEP and SLD results are updated).
The partial widths for the three charged leptons
agree with one another (and therefore with lepton universality
in the neutral current) to 3 per mille.
These results, along with those from the $e^+e^-$ and $\mu^+\mu^-$
final states, fit well to the Standard Model predictions
with a rather light Higgs mass:
$m_H < 262$ GeV/c$^2$ at 95\%\ C.L.
The results for the vector and axial couplings
of the three leptons are shown in Fig.~\ref{fig:gvga}.
Note that the tau pair contour is smaller than that for
mu pairs, because of the added information
from tau polarization.
\begin{figure}[ht]
\psfig{figure=figs/gvga.ps,width=2.6in}
\caption[]{Results on the vector and axial couplings of the leptons
to the $Z^0$, from LEP-I \cite{ref:sobie}.}
\label{fig:gvga}
\end{figure}
\section{$\mathbf{W\to \tau\nu}$}
\label{s-Wtotau}
This TAU workshop is the first to see results from LEP-II,
including new results on
the production of taus from real $W$ decays.
All four LEP experiments identify the leptonic decays of
$W$ bosons from $e^+e^-\to W^+W^-$ at center-of-mass energies
from 160 to 189 GeV.
The results are consistent between experiments (see Fig.~\ref{fig:wbrlep})
and the averaged branching fractions they obtain
are summarized in \cite{ref:moulik}:
\begin{eqnarray*}
{\cal B}(W\to e\nu) &=& 10.92\pm0.49\% \\
{\cal B}(W\to \mu\nu) &=& 10.29\pm0.47\% \\
{\cal B}(W\to \tau\nu) &=& 9.95\pm0.60\% \\
{\cal B}(W\to \ell\nu) &=& 10.40\pm0.26\%
\end{eqnarray*}
where the last result assumes universality of the
charged current couplings.
These results, and results on the measured cross-sections
for $W^+W^-$ as a function of center-of-mass energy,
are in good agreement with Standard Model predictions.
Lepton universality from real $W$ decays
is tested at the 4\%\ level at LEP:
$g_\mu/g_e = 0.971\pm0.031$,
$g_\tau/g_e = 0.954\pm0.040$.
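Since ${\cal B}(W\to\ell\nu) \propto g_\ell^2$ up to tiny phase-space corrections, the quoted coupling ratios follow directly from the branching fractions; a one-line numerical check:

```python
import math

# Leptonic W branching fractions quoted above (LEP averages, in %)
Bw_e, Bw_mu, Bw_tau = 10.92, 10.29, 9.95

# With identical couplings B(W -> l nu) is proportional to g_l^2,
# so the coupling ratios are square roots of branching-ratio ratios.
g_mu_over_e  = math.sqrt(Bw_mu / Bw_e)
g_tau_over_e = math.sqrt(Bw_tau / Bw_e)
print(g_mu_over_e, g_tau_over_e)   # ~0.971, ~0.955
```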
\begin{figure}[ht]
\psfig{figure=figs/wbrlep.ps,width=2.6in}
\caption[]{Branching fractions for $W\to \ell\nu$ from the
4 LEP experiments \cite{ref:moulik}.}
\label{fig:wbrlep}
\end{figure}
The CDF and D0 experiments at the Tevatron
can also identify $W\to \tau\nu$ decays,
and separate them from a very large background
of QCD jets.
The methods and results are summarized in \cite{ref:protop}.
The average of results from UA1, UA2, CDF and D0,
shown in Fig.~\ref{fig:gtauge},
confirm lepton universality in real $W$ decays
at the 2.5\%\ level:
$g_\tau/g_e = 1.003\pm0.025$.
The LEP and Tevatron
results are to be compared with the ratios of couplings
to virtual $W$ bosons in $\tau$ decays,
summarized in the next section.
\begin{figure}[ht]
\psfig{figure=figs/gtauge.ps,width=2.6in}
\caption[]{Measurements of $g_\tau^W/g_e^W$
at hadron colliders \cite{ref:protop}.}
\label{fig:gtauge}
\end{figure}
CDF has also looked for taus produced from
decays of top quarks,
charged Higgs bosons, leptoquarks, and techni-rhos.
Limits on these processes are reviewed in \cite{ref:Gallinaro}.
For the charged Higgs searches, they are shown
in Fig.~\ref{fig:chiggs}.
\begin{figure}[ht]
\psfig{figure=figs/chiggs.ps,width=2.6in}
\caption[]{Excluded values for the charged Higgs boson mass,
as a function of $\tan\beta$, are shaded \cite{ref:Gallinaro}.}
\label{fig:chiggs}
\end{figure}
\section{TAU LIFETIME AND
LEPTONIC BRANCHING FRACTIONS}
\label{s-leptonic}
The primary properties of the tau lepton are its mass, spin,
and lifetime. That its spin is 1/2 is well established,
and its mass is well measured \cite{ref:pdg98}:
$m_\tau = 1777.05^{+0.29}_{-0.26}$ MeV/c$^2$.
The world average tau lifetime has changed considerably
in the last 10 years,
but recent results have been in good agreement,
converging to a value that is stable and of high precision.
At TAU 98, new measurements were presented from L3 \cite{ref:Colijn}
and DELPHI.
The new world average tau lifetime represents the work of
6 experiments (shown in Fig.~\ref{fig:taulife}),
each utilizing multiple techniques,
and each with $\stackrel{<}{\scriptstyle\sim} 1\%$ precision.
The result is presented in \cite{ref:wasserbach}:
$\tau_\tau = (290.5\pm 1.0)$ fs.
\begin{figure}[ht]
\psfig{figure=figs/taulife.ps,width=2.6in}
\caption[]{Recent measurements of the $\tau$ lifetime \cite{ref:wasserbach}.}
\label{fig:taulife}
\end{figure}
We now turn to the decays of the tau.
The leptonic decays of the tau comprise 35\%\ of the total,
and can be both measured and predicted with high accuracy.
In the Standard Model,
$$\Gamma(\tau\to \ell\nu_\tau\antibar{\nu}_\ell) =
\frac{G_{\ell\tau}^2 m_\tau^5}{192\pi^3}
f\left(\frac{m_\ell}{m_\tau}\right) (1+\delta).$$
Here, $f$ is a known function of the masses,
$f = 1$ for $e\nu\nu$ and 0.9726 for $\mu\nu\nu$;
$\delta$ is a small correction of $-0.4\%$
due to electromagnetic and weak effects,
and $G_{\ell\tau}$ defines the couplings:
$$G_{\ell\tau} = \frac{g_\ell g_\tau}{4\sqrt{2} m_W^2} = G_F?$$
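As a numerical cross-check, this width formula together with the world-average lifetime quoted in the previous section predicts the electronic branching fraction (a sketch; input values are approximate and for illustration only):

```python
import math

# Sketch: predict B(tau -> e nu nubar) from the width formula above,
# using the world-average tau lifetime quoted in the previous section.
hbar    = 6.58212e-25   # GeV s
G_F     = 1.16637e-5    # GeV^-2
m_tau   = 1.77705       # GeV
tau_life = 290.5e-15    # s
f_e     = 1.0           # phase-space factor for the electron
delta   = -0.004        # small electroweak/radiative correction

gamma_e  = G_F**2 * m_tau**5 / (192.0 * math.pi**3) * f_e * (1.0 + delta)
B_e_pred = gamma_e * tau_life / hbar
print("predicted B_e = %.4f" % B_e_pred)   # ~0.178, cf. measured 17.81%
```

The agreement with the measured value at the few-per-mille level is itself a test of charged-current universality between the muon and the tau.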
By comparing the measured leptonic branching fractions
to each other and to the decay rate of the muon to
$e\nu\nu$, we can compare the
couplings of the leptons
to the weak charged current, $g_e$, $g_\mu$, and $g_\tau$.
At TAU 98, new measurements on leptonic branching fractions
were presented by DELPHI \cite{ref:stugu}, L3 and OPAL \cite{ref:robertson}.
The results are summarized in \cite{ref:stugu}:
\begin{eqnarray*}
{\cal B}(\tau\to e\nu\antibar{\nu}) &=& {\cal B}_e = 17.81\pm0.06\% \\
{\cal B}(\tau\to \mu\nu\antibar{\nu}) &=& {\cal B}_\mu = 17.36\pm0.06\%.
\end{eqnarray*}
These new world average branching fractions
have an accuracy of 3 per mille,
and are thus beginning to probe the corrections
contained in $\delta$, above.
The resulting ratios of couplings are consistent
with unity (lepton universality in the charged current couplings)
to 2.5 per mille:
\begin{eqnarray*}
g_\mu/g_e &=& 1.0014\pm0.0024 \\
g_\tau/g_\mu &=& 1.0002\pm0.0025 \\
g_\tau/g_e &=& 1.0013\pm0.0025.
\end{eqnarray*}
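These ratios follow almost directly from the branching fractions; a sketch of the $\mu$--$e$ comparison (ignoring small differences in radiative corrections, so the result differs slightly from the quoted average):

```python
import math

Be, Bmu = 0.1781, 0.1736   # world-average branching fractions quoted above
f_mu = 0.9726              # phase-space factor f(m_mu / m_tau)

# B_mu / B_e = (g_mu/g_e)^2 * f_mu, since all other factors cancel in the ratio
g_mu_over_g_e = math.sqrt(Bmu / (Be * f_mu))
print(g_mu_over_g_e)   # ~1.001
```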
Some of the consequences of these precision measurements
are reviewed in \cite{ref:swain}.
The Michel parameter $\eta$ (see next section)
is constrained to be near zero to within 2.2\%.
One can extract a limit on the tau neutrino mass
(which, if non-zero, would cause the function $f$
defined above to depart from its Standard Model value)
of 38 MeV, at 95\%\ C.L.
One can also extract limits on mixing of the tau neutrino
to a 4$^{th}$ generation (very massive) neutrino,
and on anomalous couplings of the tau.
One can even measure fundamental properties of the strong
interaction through precision measurement of these
purely leptonic decays. The total branching fraction
of the tau to hadrons is assumed to be $1-{\cal B}_e -{\cal B}_\mu$.
This inclusive rate for semi-hadronic decays
can be formulated within QCD:
\begin{eqnarray*}
R_\tau &\equiv& \frac{{\cal B}_h}{{\cal B}_e} =
\frac{1-{\cal B}_e-{\cal B}_\mu}{{\cal B}_e} = 3.642 \pm 0.019 \\
&=& 3 (V_{ud}^2 + V_{us}^2) S_{EW} (1+\delta_{pert}+\delta_{NP}),
\end{eqnarray*}
where $V_{ud}$ and $V_{us}$ are Cabibbo-Kobayashi-Maskawa (CKM)
matrix elements,
and $S_{EW}$ is a small and calculable electroweak correction.
The strong interaction between the final-state quarks
is described by
$\delta_{pert}$, the prediction from perturbative QCD
due to radiation of hard gluons, expressible as a
perturbation expansion in the strong coupling constant
$\alpha_S(m_\tau^2)$,
and $\delta_{NP}$, which describes non-perturbative
effects in terms of incalculable expectation values
of quark and gluon operators (condensates).
The value of $\delta_{NP}$ is estimated to be small,
and values for these expectation values can be extracted
from experimental measurements of the spectral functions
in semi-hadronic decays (see section~\ref{s-spectral}).
The extraction of $\alpha_S(m_\tau^2)$ from $R_\tau$
depends on an accurate estimate of $\delta_{NP}$
and on a convergent series for $\delta_{pert}(\alpha_S)$.
Recent improvements in the techniques for calculating
this series (``Contour-improved Perturbation Theory'', CIPT)
and extrapolating to the $Z^0$ mass scale,
are reviewed in ~\cite{ref:Maxwell}.
The resulting values of $\alpha_S$ evaluated
at the tau mass scale, and then run up to the $Z^0$ mass scale, are:
\begin{eqnarray*}
\alpha_S(m_\tau^2) &=& 0.334\pm 0.010 \\
\alpha_S(m_Z^2 ) &=& 0.120\pm 0.001.
\end{eqnarray*}
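The bookkeeping above can be sketched numerically by inverting the $R_\tau$ formula for the total QCD correction (the CKM and $S_{EW}$ inputs are illustrative values, and $\delta_{NP}$ is an assumed small number, not taken from the text):

```python
# Invert the R_tau formula for the perturbative QCD correction.
R_tau = 3.642
Vud, Vus = 0.9740, 0.2196   # illustrative CKM values
S_EW = 1.0194               # electroweak correction (assumed value)
delta_NP = -0.005           # small non-perturbative term (assumed)

delta_pert = R_tau / (3.0 * (Vud**2 + Vus**2) * S_EW) - 1.0 - delta_NP
print(delta_pert)   # ~0.20
```

A perturbative correction of about 20\%\ is just the size that corresponds to $\alpha_S(m_\tau^2)$ in the range quoted above, which is why the convergence of the series for $\delta_{pert}$ matters so much.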
We will return to this subject in section \ref{s-spectral}.
\section{LORENTZ STRUCTURE}
\label{s-lorentz}
The dynamics of the leptonic decays
$\tau^-\to \ell^-\nu_\ell\antibar{\nu}_\tau$
are fully determined in the Standard Model,
where the decay is mediated by the $V-A$ charged current
left-handed $W_L^-$ boson.
In many extensions to the Standard Model,
additional interactions can modify the
Lorentz structure of the couplings, and thus the dynamics.
In particular, there can be
weak couplings to scalar currents
(such as those mediated by the charged Higgs of the
Minimal Supersymmetric extensions
to the Standard Model, MSSM),
or small deviations from maximal parity violation
such as those mediated by a right-handed $W_R$ of
left-right symmetric extensions.
The effective lagrangian for the 4-fermion interaction
between $\tau-\nu_\tau-\ell-\nu_\ell$
can be generalized to include such interactions.
Michel and others in the 1950's
assumed the most general, Lorentz invariant,
local, derivative free, lepton number conserving,
4 fermion point interaction.
Integrating over the two unobserved neutrinos, they
described the differential distribution for the
daughter charged lepton ($\ell^-$) momentum
relative to the parent lepton ($\mu$ or $\tau$)
spin direction, in terms of the so-called Michel parameters:
\begin{eqnarray*}
\lefteqn{\frac{1}{\Gamma} \frac{d\Gamma}{dxd\cos\theta}
= \frac{x^2}{2} \times } \\
&& \left[ \left( 12(1-x) + \frac{4{ \rho}}{3}(8x-6)
+ 24{ \eta}{ \frac{m_\ell}{m_\tau}}
\frac{(1-x)}{x}\right) \right. \\
& & \left. \pm { P_\tau }
{ \xi} { \cos\theta} \left( 4(1-x)+
\frac{4}{3}{ \delta}(8x-6) \right)\right] \\
&& \propto x^2\left[ I(x\vert {\rho , \eta} )
\pm { P_\tau} A( x,{ \theta} \vert
{\xi ,\delta}) \right],
\end{eqnarray*}
where $\rho$ and $\eta$ are the spectral shape Michel
parameters and $\xi$ and $\delta$ are the spin-dependent
Michel parameters~\cite{ref:michel};
$x=E_{\ell}/E_{max}$ is the daughter charged lepton energy
scaled to the maximum energy $E_{max} = (m_{\tau}^2 +
m_{\ell}^2)/2m_{\tau}$ in the $\tau$ rest frame;
$\theta$ is the angle between the tau spin direction and the
daughter charged lepton momentum in the $\tau$ rest frame;
and $P_\tau$ is the polarization of the $\tau$.
In the Standard Model
(SM), the Michel Parameters have the values
$\rho=3/4$, $\eta=0$, $\xi = 1$ and $\delta = 3/4$.
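The Michel distribution is straightforward to code; a minimal sketch, which also verifies that for $\eta=0$ the distribution above is normalized over $x$ and $\cos\theta$:

```python
def michel(x, cos_t, rho=0.75, eta=0.0, xi=1.0, delta=0.75,
           P_tau=0.0, m_ratio=0.0):
    """Differential decay distribution from the formula above; x in (0, 1]."""
    iso = (12.0 * (1.0 - x) + (4.0 * rho / 3.0) * (8.0 * x - 6.0)
           + 24.0 * eta * m_ratio * (1.0 - x) / x)
    asym = P_tau * xi * cos_t * (4.0 * (1.0 - x)
                                 + (4.0 * delta / 3.0) * (8.0 * x - 6.0))
    return 0.5 * x * x * (iso + asym)

# Normalization check (eta = 0): integrate over x and cos(theta) in [-1, 1]
n, norm = 2000, 0.0
for i in range(n):
    x = (i + 0.5) / n                 # midpoint rule in x
    for ct in (-0.5, 0.5):            # 2-point midpoint rule in cos(theta)
        norm += michel(x, ct) * (1.0 / n)
print(norm)   # ~1
```

Fitting this shape to the measured lepton spectrum is, in essence, how the experiments extract $\rho$ and $\eta$; the $\pm P_\tau$ term requires knowledge of the tau spin direction.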
There are non-trivial extensions to this approach.
In SUSY models, taus can decay into scalar neutralinos
instead of fermionic neutrinos.
These presumably are massive,
affecting the phase space for the decay
as well as the Lorentz structure of the dynamics.
In addition, there exists a non-trivial extension
to the Michel formalism that admits anomalous
interactions with a tensor leptonic current that includes
derivatives; see \cite{ref:seager} for details.
Such interactions will produce distortions
of the daughter charged lepton spectrum
which cannot be described with the Michel parameters.
DELPHI has used both leptonic and semihadronic decays
to measure the tensor coupling $\kappa_\tau^W$,
with the result
$\kappa_\tau^W = -0.029\pm 0.036\pm 0.018$
(consistent with zero).
There are new or updated results on Michel parameter measurements
for TAU 98 from ALEPH, OPAL, DELPHI, and CLEO
\cite{ref:michel}.
The world averages, summarized in \cite{ref:stahl},
also include results from ARGUS, L3, and SLD.
All results are consistent with the Standard Model,
revealing no evidence for departures from the $V-A$ theory.
These measurements have now reached rather high precision,
but they are still not competitive with the precision
on Michel parameters obtained from muon decay,
$\mu \to e \nu \antibar{\nu}$.
Results from the two decays $\tau\to e\nu\nu$ and $\tau\to \mu\nu\nu$
can be combined under the assumption of $e-\mu$ universality.
Such an assumption is clearly not called for
when one is searching for new physics that explicitly
violates lepton universality, such as charged Higgs
interactions, which couple to the fermions
according to their mass.
However, such couplings mainly affect the Michel parameter $\eta$,
and it is clear from the Michel formula above
that $\eta$ is very difficult to measure
in $\tau\to e\nu\nu$, since it involves a chirality flip
of the daughter lepton, which is suppressed for the light electron.
Thus, lepton universality is usually invoked to constrain
the $\rho$, $\xi$, and $\xi\delta$ parameters to be the same
for the two decays. Since the measurements of $\rho$ and $\eta$
in $\tau\to \mu\nu\nu$ decays are strongly correlated,
the constraint that $\rho_e = \rho_\mu \equiv \rho_{e\mu}$
significantly improves the errors on $\eta_\mu$.
Invoking universality in this sense,
the world averages for the four Michel parameters \cite{ref:stahl},
shown in Fig.~\ref{fig:michwa}, are:
\begin{eqnarray*}
\rho_{e\mu} &=& 0.7490\pm 0.0082 \q (SM = 3/4) \\
\eta_{e\mu} &=& 0.052 \pm 0.036 \q (SM = 0) \\
\xi_{e\mu} &=& 0.988 \pm 0.029 \q (SM = 1) \\
(\xi\delta)_{e\mu} &=& 0.734 \pm 0.020 \q (SM = 3/4).
\end{eqnarray*}
\begin{figure}[ht]
\psfig{figure=figs/michwa.ps,width=2.6in}
\caption[]{New world averages for the Michel parameters in leptonic $
\tau$ decays, assuming $e-\mu$ universality in the couplings \cite{ref:stahl}.}
\label{fig:michwa}
\end{figure}
A measurement of the spin-dependent Michel parameters
allows one to distinguish the Standard Model $V-A$ interaction
(left-handed $\nu_\tau$) from $V+A$ (right-handed $\nu_\tau$).
The probability that a right-handed (massless) tau neutrino
participates in the decay can be expressed as
$$P^\tau_R = 1/2 \left[1+ 1/9 \left(3 {\xi}
-16{\xi\delta} \right)\right], $$
and $P^\tau_R = 0$ for the SM $V-A$ interaction.
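A two-line sketch of this formula, evaluated at the Standard Model point and at the world-average Michel parameters quoted above:

```python
def P_R(xi, xi_delta):
    """Probability of a right-handed nu_tau, from the formula above."""
    return 0.5 * (1.0 + (3.0 * xi - 16.0 * xi_delta) / 9.0)

print(P_R(1.0, 0.75))       # 0.0  (Standard Model V-A)
print(P_R(0.988, 0.734))    # ~0.012 with the world averages above
```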
The Michel parameters measured in all experiments provide
strong constraints on right-handed $(\tau - W - \nu)_R$ couplings,
as shown in Fig.~\ref{fig:michcoup}.
However, they are unable to distinguish left-handed
$(\tau - W - \nu)_L$ couplings,
for example, between scalar, vector, and tensor currents,
without some additional information, such as
a measurement of the cross-section
$\sigma(\nu_\tau e^-\to \tau^- \nu_e)$.
\begin{figure}[ht]
\psfig{figure=figs/michcoup.ps,width=2.6in}
\caption[]{Limits on the coupling constants $g^\kappa_{\epsilon\rho}$
in $\tau$ decays, assuming $e-\mu$ universality.
Here, $\kappa = S,V,T$ for scalar, vector, and tensor couplings,
and $\epsilon$ and $\rho$ are the helicities ($L$ or $R$)
of the $\nu_\tau$ and $\nu_\ell$, respectively.
The black circles are the corresponding
limits from $\mu$ decays \cite{ref:stahl}.}
\label{fig:michcoup}
\end{figure}
In the minimal supersymmetric extension to the
Standard Model (MSSM), a charged Higgs boson
will contribute to the decay of the tau
(especially for large mixing angle $\tan\beta$),
interfering with the left-handed $W^-$ diagram,
and producing a non-zero value for $\eta$.
Since the world average value of $\eta$
(from spectral shapes and the indirect limit from
${\cal B}(\tau\to \mu\nu\antibar{\nu})$)
is consistent
with zero, one can limit the mass of a charged Higgs
boson to be \cite{ref:stahl}:
$M(H^\pm) > 2.1 \tan\beta$ (in GeV/c$^2$)
at 95\%\ C.L., which is competitive with direct searches
only for $\tan\beta > 200$.
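A sketch of how a bound of this form arises: the charged-Higgs interference contributes to $\eta$ roughly as $\eta \approx -(m_\tau m_\mu/2)(\tan\beta/M_H)^2$, so a lower bound on $\eta$ translates into a mass limit linear in $\tan\beta$. Both the interference formula and the numerical $\eta$ bound below are assumptions chosen for illustration, not taken from the text.

```python
import math

m_tau, m_mu = 1.77705, 0.105658   # GeV
eta_bound = 0.021                  # assumed 95% C.L. bound on -eta

# eta > -eta_bound  =>  M_H > coef * tan(beta)
coef = math.sqrt(m_tau * m_mu / (2.0 * eta_bound))
print("M(H+/-) > %.1f tan(beta) GeV" % coef)   # ~2.1 tan(beta)
```

The $m_\mu$ factor is why the $\mu\nu\bar{\nu}$ mode, not $e\nu\bar{\nu}$, drives this limit: the Higgs coupling is proportional to the daughter lepton mass.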
In left-right symmetric models,
there are two sets of weak charged bosons $W^\pm_1$ and $W^\pm_2$,
which mix to form the observed ``light''
left-handed $W^\pm_L$ and a heavier (hypothetical)
right-handed $W^\pm_R$.
The parameters in these models are
the mass ratio $\alpha = M(W_1)/M(W_2)$ and
the mixing angle $\zeta$, both of which vanish in the SM.
The heavy right-handed $W^\pm_R$
will contribute to the decay of the tau,
interfering with the left-handed $W^-$ diagram,
and producing deviations from the Standard Model values
for the Michel parameters $\rho$ and $\xi$.
The limit on $M(W_R)$ is obtained from a
likelihood analysis \cite{ref:stahl}
which reveals a very weak minimum (less than 1$\sigma$)
at around 250 GeV, so that the 95\%\ C.L.~limit on the mass
of 214 GeV (for a wide range of mixing angles $\zeta$)
is actually slightly worse than it was at TAU 96.
The limit from muon decay Michel parameters is 549 GeV.
It is worth continuing to improve the precision
on the tau Michel parameters, to push the
limits on charged Higgs and right-handed $W$'s,
and perhaps open a window on new physics
at very high mass scales.
\section{SEARCHES FOR NEW PHYSICS}
\label{s-searches}
One can look directly for physics beyond the Standard Model
in tau decays by searching for decays which violate
lepton flavor (LF) conservation or lepton number (LN) conservation.
These two conservation laws are put into the Standard Model
by hand, and are not known to be the result of some symmetry.
The purported existence of neutrino mixing implies that
LF is violated at some level,
in the same sense as in the quark sector.
Four family theories,
SUSY, superstrings,
and many other classes of models also predict
LF violation (LFV).
If LF is violated, decays such as $\tau\to\mu\gamma$
become possible; more generally, neutrinoless decays
(those containing no neutrino daughters) become possible.
In theories such as GUTs, leptoquarks, {\it etc.},
a lepton can couple directly to a quark,
producing final states where LN is violated (LNV)
but $B-L$ (baryon number minus lepton number) is conserved.
In most cases, LFV and LNV are accompanied by
violation of lepton universality.
Examples of lepton flavor violating decays
which have been searched for at CLEO, ARGUS, SLC, and LEP include: \\
$\tau^- \to \ell^-\gamma$, $\ell^-\ell^+\ell^-$ \\
$Z^0 \to \tau^- e^+$, $\tau^-\mu^+$ \\
$\tau^- \to \ell^- M^0$, $\ell^- P_1^+ P_2^-$ \\
where the $P$'s are pseudoscalar mesons.
Decays which violate lepton number but conserve $B-L$ include
$\tau^- \to \bar{p} X^0$,
where $X^0$ is some neutral, bosonic hadronic system.
Decays which violate lepton number and $B-L$ include:
$\tau^- \to \ell^+ P_1^- P_2^-$.
A broad class of R-parity violating
SUSY models which predict LFV or LNV
through the exchange of lepton superpartners
were reviewed at this conference, in \cite{ref:kong}.
A different class of models containing heavy
singlet neutrinos, which produce LFV,
is discussed in \cite{ref:ilakovac}.
In many cases, branching fractions for
neutrinoless tau decay can be as high as $10^{-7}$
while maintaining consistency with existing data from
muon and tau decays.
CLEO has searched for
40 different neutrinoless decay modes \cite{ref:stroynowski},
and has set upper limits on the branching fractions
of $\stackrel{<}{\scriptstyle\sim}$ a few $\times 10^{-6}$.
A handful of modes that have not yet been studied by CLEO,
including those containing anti-protons,
have been searched for by the Mark II and ARGUS experiments,
with branching fraction upper limits
in the $10^{-4} - 10^{-3}$ range.
Thus, the present limits are approaching levels
where some model parameter space can be excluded.
B-Factories will push below $10^{-7}$;
it may be that the most important results coming from
this new generation of ``rare $\tau$ decay experiments''
will be the observation of lepton flavor or lepton number violation.
\section{CP VIOLATION IN TAU DECAYS}
\label{s-cpv}
The minimal Standard Model contains no mechanism for
CP violation in the lepton sector.
Three-family neutrino mixing can produce
(presumably extremely small) violations of CP
in analogy with the CKM quark sector.
CP violation in tau production can occur
if the tau has a non-zero electric dipole moment or
weak electric dipole moment,
implying that the tau is not a fundamental
(point-like) object.
Although other studies of taus (such as production cross section,
Michel parameters, {\it etc.}) are sensitive to tau substructure,
``null'' experiments such as the search for CP violating
effects of such substructure can be exquisitely sensitive.
I discuss searches for dipole moments in the next section.
CP violation in tau decays can occur, for example,
if a charged Higgs with complex couplings
(which change sign under CP)
interferes with the dominant $W$-emission process:
$$|A(\tau^-\to W^-\nu_\tau) + g e^{i\theta} A (\tau^-\to H^- \nu_\tau)|^2 .$$
If the dominant process produces a phase shift
(for example, due to the $W\to \rho$, $a_1$, or $K^*$ resonance),
the interference will be of opposite sign
for the $\tau^+$ and $\tau^-$,
producing a measurable CP violation.
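The interplay of a CP-odd weak phase and a CP-even strong phase can be made concrete in a small numerical sketch (toy amplitudes and phases, chosen only for illustration, not fitted values):

```python
import cmath

# Toy sketch of the interference argument: the weak phase theta_CP changes
# sign under CP, the strong phase delta does not, so the tau- and tau+
# rates differ only when both phases are non-zero.
def rate(cp_sign, g=0.3, theta_cp=0.5, delta=1.0):
    A_W = 1.0                                               # dominant W amplitude (toy)
    A_H = g * cmath.exp(1j * (cp_sign * theta_cp + delta))  # charged-Higgs amplitude (toy)
    return abs(A_W + A_H) ** 2

asymmetry = rate(+1) - rate(-1)                  # non-zero: CP violation
no_weak_phase = rate(+1, theta_cp=0.0) - rate(-1, theta_cp=0.0)  # vanishes
```

With $\theta_{CP}=0$ the two rates coincide exactly, illustrating why a non-zero strong phase alone produces no asymmetry.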
The effect is proportional to isospin violation
for decays such as $\tau\to \pi\pi\nu_\tau$, $3\pi\nu_\tau$;
and SU(3)$_f$ violation for decays such as
$\tau\to K\pi\nu_\tau$, $K\pi\pi\nu_\tau$.
The various signals for CP violation in tau decays
are reviewed in \cite{ref:tsai}.
CLEO has performed the first direct search for CP violation
in tau decays \cite{ref:kass},
using the decay mode $\tau\to K\pi\nu_\tau$.
The decay is mediated by the usual p-wave vector exchange,
with a strong interaction phase shift provided by the $K^*$ resonance.
CP violation occurs if there is interference with an
s-wave scalar exchange, with a complex weak phase $\theta_{CP}$
and a different strong interaction phase.
The interference term is CP odd, so CLEO searches for
an asymmetry in a CP-odd angular observable
between $\tau^+$ and $\tau^-$.
They see no evidence for CP violation,
and set a limit on the imaginary part of the complex coupling
of the tau to the charged Higgs ($g$, in units of $G_F/2\sqrt{2}$):
$g\sin\theta_{CP} < 1.7$.
Tests of CP violation in tau decays can also be
made using $2\pi\nu_\tau$ and $3\pi\nu_\tau$,
and we can look forward to results from such analyses in the future.
\section{DIPOLE MOMENTS}
\label{s-dipole}
In the Standard Model, the tau couples to the photon and to the $Z^0$
via a minimal prescription, with a single coupling constant
and a purely vector coupling to the photon
or purely $v_f V + a_f A$ coupling to the $Z^0$.
More generally, the tau can couple to the neutral currents
with $q^2$ dependent vector or tensor couplings.
The most general Lorentz-invariant form of the coupling
of the tau to a photon of 4-momentum $q_\mu$
is obtained by replacing the usual $\gamma^\mu$ with
\begin{eqnarray*}
\Gamma^\mu &=& F_1(q^2)\gamma^\mu + \\
&& F_2(q^2)\sigma^{\mu\nu}q_\nu
- F_3(q^2)\sigma^{\mu\nu}\gamma_5 q_\nu .
\end{eqnarray*}
At $q^2 = 0$, we interpret $F_1(0) = q_\tau$ as the electric charge
of the tau, $F_2(0) = a_\tau = (g_\tau-2)/2$ as the
anomalous magnetic moment, and
$F_3(0) = d_\tau/q_\tau$, where $d_\tau$ is the
electric dipole moment of the tau.
In the Standard Model, $a_\tau$ is non-zero due to
radiative corrections; the lowest-order contribution is
$\alpha/2\pi \approx 0.00116$, and the full prediction is
$a_\tau \approx 0.001177$.
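The lowest-order (Schwinger) contribution is a one-line numerical check (using the standard value of the fine-structure constant; a sketch, not the full multi-loop calculation):

```python
import math

# Lowest-order QED anomalous magnetic moment, a = alpha / (2 pi),
# the same for any pointlike charged lepton.
alpha = 1 / 137.035999          # fine-structure constant
a_lowest_order = alpha / (2 * math.pi)   # ~ 1.16e-3
```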
The electric dipole moment $d_\tau$ is zero
for pointlike fermions; a non-zero value
would violate $P$, $T$, and $CP$.
Analogous definitions can be made for the
{\it weak} coupling form factors
$F^w_1$, $F^w_2$, and $F^w_3$,
and for the weak static dipole moments $a^w_\tau$ and $d^w_\tau$.
In the Standard Model, the weak dipole moments
are expected to be small (the weak electric dipole moment
is tiny but non-zero due to CP violation in the CKM matrix):
$a_\tau^W = -(2.1 + 0.6i)\times 10^{-6}$, and
$d_\tau^W \approx 3\times 10^{-37}$ e$\cdot$cm.
Some extensions to the Standard Model predict
vastly enhanced values for these moments:
in the MSSM, $a_\tau^W$ can be as large as $10^{-5}$
and $d_\tau^W$ as large as a few $\times 10^{-20}$ e$\cdot$cm.
In composite models, these dipole moments
can be larger still.
The smallness of the Standard Model expectations
leaves a large window for discovery of
non-Standard Model couplings.
At the peak of the $Z^0$, the reactions
$Z^0\to \tau^+\tau^-$ are sensitive to the
weak dipole moments, while the electromagnetic dipole moments
can be measured at center of mass energies far below the $Z^0$
(as at CLEO or BES), and/or through the study of
final state photon radiation in
$e^+e^-\to(\gamma^*, Z^0)\to \tau^+\tau^-\gamma$.
Extensive new or updated
results on searches for weak and electromagnetic
dipole moments were presented at TAU 98.
In all cases, however, no evidence for non-zero
dipole moments or CP-violating couplings was seen;
the upper limits on the dipole moments remain
several orders of magnitude above Standard Model expectations,
and are still far from placing meaningful constraints
on extensions to the Standard Model.
There is much room for improvement in both technique
and statistical power, and, as in any search for
very rare or forbidden phenomena,
the potential payoffs justify the effort.
An anomalously large weak magnetic dipole moment
will produce a transverse spin polarization
of taus from $Z^0$ decay, leading to (CP-conserving)
azimuthal asymmetries in the subsequent tau decays.
The L3 experiment searched for these asymmetries
using the decay modes $\tau\to\pi\nu_\tau$ and
$\tau\to\rho\nu_\tau$, and observed none \cite{ref:vidal}.
They measure values for the real and imaginary parts of
the weak magnetic dipole moment
which are consistent with zero, and set upper limits
(at 95\%\ C.L.):
\begin{eqnarray*}
|Re(a_\tau^w)| &<& 4.5 \times 10^{-3} \\
|Im(a_\tau^w)| &<& 9.9 \times 10^{-3}.
\end{eqnarray*}
They measure, for the real part of the weak electric
dipole moment, a value which is consistent with zero
within a few $\times 10^{-17}$ e$\cdot$cm.
The ALEPH, DELPHI, and OPAL experiments search for
CP violating processes induced by a
non-zero weak electric dipole moment,
by forming CP-odd observables from the 4-vectors
of the incoming beam and outgoing taus, and the outgoing tau spin vectors.
These ``optimal observables'' \cite{ref:zalite}
pick out the CP-odd terms in the cross section for production
and decay of the $\tau^+\tau^-$ system;
schematically, the optimal observables are defined by:
$$d\sigma \propto |M_{SM} + d^w_\tau M_{CP}|^2,$$
$$O^{Re} = Re(M_{CP})/M_{SM}, \, O^{Im} = Im(M_{CP})/M_{SM}.$$
These observables are essentially CP-odd triple products,
and they require that the spin vectors of the taus be determined
(at least, on a statistical level).
Most or all of the decay modes of the tau
$(\ell,\pi,\rho,a_1)$ are used to spin-analyze the taus.
ALEPH, DELPHI, and OPAL measure the expectation values
$\langle O^{Re}\rangle$ and $\langle O^{Im}\rangle$ of these observables,
separately for each tau-pair decay topology.
From these, they extract measurements of the
real and imaginary parts of $d_\tau^w$.
The combined limits (at 95\%\ C.L.) \cite{ref:zalite} are:
\begin{eqnarray*}
|Re(d_\tau^w)| &<& 3.0 \times 10^{-18} \q\mbox{e$\cdot$cm} \\
|Im(d_\tau^w)| &<& 9.2 \times 10^{-18} \q\mbox{e$\cdot$cm} .
\end{eqnarray*}
They are consistent with the Standard Model,
and there is no evidence for CP violation.
SLD makes use of the electron beam polarization
to enhance its sensitivity to Im($d_\tau^W$).
Rather than measure angular asymmetries or expectation
values of CP-odd observables,
they perform a full unbinned likelihood fit to the observed events
(integrating over unseen neutrinos)
using tau decays to leptons, $\pi$, and $\rho$,
in order to extract limits on the real and imaginary parts
of both $a_\tau^w$ and $d_\tau^w$.
They obtain preliminary results \cite{ref:barklow}
which are again consistent with zero, but which have
the best sensitivity to $Im(a_\tau^w)$ and $Im(d_\tau^w)$:
\begin{eqnarray*}
Re(d_\tau^w) &=& (18.3\pm7.8) \times 10^{-18} \q\mbox{e$\cdot$cm}; \\
Im(d_\tau^w) &=& (-6.6\pm4.0) \times 10^{-18} \q\mbox{e$\cdot$cm}; \\
Re(a_\tau^w) &=& (0.7\pm1.2) \times 10^{-3}; \\
Im(a_\tau^w) &=& (-0.5\pm0.6) \times 10^{-3}.
\end{eqnarray*}
\subsection{EM dipole moments}
\label{ss-emdipole}
The anomalous couplings to photons can be probed,
even on the peak of the $Z^0$, by searching for
anomalous final state photon radiation in
$e^+e^- \to \tau^+\tau^- \gamma$.
The L3 experiment
studies the distribution of photons
in such events, as a function of the photon energy,
its angle with respect to the nearest reconstructed tau,
and its angle with respect to the beam.
In this way, they can distinguish anomalous
final state radiation from initial state radiation,
photons from $\pi^0$ decays, and other backgrounds.
The effect on the photon distribution
due to anomalous electric couplings is very similar
to that of anomalous magnetic couplings,
so they make no attempt to extract values for
$a_\tau$ and $d_\tau$ simultaneously, but instead
measure one while assuming the other
takes on its Standard Model value.
They see no anomalous photon production \cite{ref:taylor},
and set the limits (at 95\%\ C.L.)
$-0.052 < a_\tau < 0.058$ and
$|d_\tau| < 3.1\times 10^{-16}$ e$\cdot$cm.
These results should be compared with those for the muon:
\begin{eqnarray*}
a_\mu^{theory} &=& 0.00116591596(67) \\
a_\mu^{expt} &=& 0.00116592350(780) \\
d_\mu^{expt} &=& (3.7\pm3.4) \times 10^{-19} \q\mbox{e$\cdot$cm}.
\end{eqnarray*}
Theoretical progress on the evaluation of $a_\mu$ in the
Standard Model is reviewed in \cite{ref:Czarnecki},
including the important contribution
from tau decays (see section \ref{ss-gminus2} below).
Progress in the experimental measurement at Brookhaven
is described in \cite{ref:Grosse}.
Clearly, there is much room for improvement of the
measurements of the anomalous moments of the tau.
\section{SPECTRAL FUNCTIONS}
\label{s-spectral}
The semi-hadronic decays of the tau are dominated by
low-mass, low-multiplicity hadronic systems:
$n\pi$, $n\le 6$; $Kn\pi$, $K\bar{K}$, $K\bar{K}\pi$, $\eta\pi\pi$.
These final states are dominated by resonances
($\rho$, $a_1$, $\rho^\prime$, $K^*$, $K_1$, {\it etc.}).
The rates for these decays, taken individually,
cannot be calculated from fundamental theory (QCD),
so one has to rely on models, and on
extrapolations from the chiral limit using
chiral perturbation theory.
However, appropriate sums of final states
with the same quantum numbers can be made,
and these semi-inclusive measures of the
semi-hadronic decay width of the tau
can be analyzed using perturbative QCD.
In particular, one can define spectral functions
$v_J$, $a_J$, $v_J^S$ and $a_J^S$, as follows:
\begin{eqnarray*}
\lefteqn{
\frac{d\Gamma}{dq^2} (\tau \to \hbox{hadrons} + {\nu_\tau}) =
{{G_F^2}\over{32 \pi^2 m_\tau^3}} (m_\tau^2-q^2)^2} \\
& \times & \left\{ |V_{ud}|^2 \left[
(m_\tau^2+2 q^2) \left(v_1(q^2)+a_1(q^2)\right) \right. \right. \\
& & \left. + m_\tau^2 \left(v_0(q^2)+a_0(q^2)\right) \right] \\
& & + |V_{us}|^2 \left[
(m_\tau^2+2 q^2) \left(v_1^S(q^2)+a_1^S(q^2)\right) \right. \\
& & \left.\left. + m_\tau^2 \left(v_0^S(q^2)+a_0^S(q^2)\right) \right]
\right\} .
\end{eqnarray*}
The spectral functions $v$ and $a$ represent the
contributions of the vector and axial-vector hadronic currents
coupling to the $W$.
The subscripts on these functions denote the spin $J$ of the hadronic system,
and the superscript $S$ denotes states with net strangeness.
The hadronization information contained in the spectral functions
falls in the low-energy domain of strong interaction
dynamics, and cannot be calculated in perturbative QCD.
Nonetheless, many useful relations between the spectral functions
can be derived.
For example, in the limit of exact $SU(3)_L\times SU(3)_R$ symmetry,
we have $v_1(q^2) = a_1(q^2) = v_1^S(q^2) = a_1^S(q^2)$
and $v_0(q^2) = a_0(q^2) = v_0^S(q^2) = a_0^S(q^2) = 0$.
Relations amongst the spectral functions
depend on assumptions about how the $SU(3)$ symmetry is broken.
The Conserved Vector Current (CVC) hypothesis
requires that $v_0(q^2) = 0$, and that
$v_1(q^2)$ can be related to the total cross-section for
$e^+e^-$ annihilations into hadrons.
Several sum rules relate integrals of these spectral functions,
as described below.
From arguments of parity and isospin,
the final states containing an even number of pions
arise from the vector spectral function,
and those with an odd number of pions
arise from the axial-vector spectral functions $a_1$,
or in the case of a single pion, $a_0$.
Final states containing one or more kaons
can contribute to both types of spectral functions,
since $SU(3)_f$ is violated
(and because of the chiral anomaly).
Experimentally, one can determine
whether a final state contributes to $v$ or $a$
through a careful analysis of its dynamics.
The spectral functions can be measured experimentally
by adding up the differential distributions
from all the exclusive final states that
contribute to $v$ or $a$,
in a ``quasi''-inclusive analysis:
$$ v_1 = \frac{{\cal B}_{v}}{{\cal B}_e}
\frac{1}{N_{v}}
\frac{dN_{v}}{dq^2}
\frac{m_\tau^8}{\left(m_\tau^2-q^2\right)^2
\left(m_\tau^2+2q^2\right)} ,
$$
and similarly for $a$.
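A minimal sketch of this unfolding, applied to a toy $\rho$-peaked spectrum with assumed branching fractions (all input numbers here are illustrative, not the measured distributions):

```python
import numpy as np

m_tau = 1.777                 # GeV, tau mass
B_v, B_e = 0.32, 0.178        # assumed vector and electronic branching fractions (toy)

q2 = np.linspace(0.1, m_tau**2 - 0.05, 50)      # GeV^2, bin centers
dN_dq2 = np.exp(-(q2 - 0.59)**2 / 0.05)         # toy rho-dominated mass spectrum
N_v = np.trapz(dN_dq2, q2)                      # total event count (normalization)

# Kinematic factor from the formula in the text:
kin = m_tau**8 / ((m_tau**2 - q2)**2 * (m_tau**2 + 2 * q2))
v1 = (B_v / B_e) * (dN_dq2 / N_v) * kin
```

The kinematic factor blows up as $q^2 \to m_\tau^2$, which is why the measured spectral functions become noisy near the endpoint.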
For $v(q^2)$, the result is dominated by the $2\pi$ and $4\pi$
final states, with small contributions from $6\pi$, $K\bar{K}$, and others.
For $a(q^2)$, the result is dominated by the $3\pi$ and $5\pi$
final states, with small contributions from $K\bar{K}\pi$ and others.
The $\pi\nu$ and $K\nu$ final states are delta functions
and must be handled separately.
OPAL and ALEPH have presented new or updated measurements
of these non-strange spectral functions \cite{ref:Menke,ref:Hoecker}.
In both cases,
the small contributions mentioned above were obtained
from Monte Carlo estimates, not from the data.
The ALEPH results are shown in Fig.~\ref{fig:spec_aleph}.
\begin{figure}[!ht]
\psfig{figure=figs/specv_aleph.ps,width=2.6in}
\psfig{figure=figs/speca_aleph.ps,width=2.6in}
\caption[]{Total vector and axial-vector spectral functions from ALEPH.
The contributions from the exclusive channels, from data and MC,
are indicated \cite{ref:Hoecker}.}
\label{fig:spec_aleph}
\end{figure}
These spectral functions can be used to study many
aspects of QCD, as described in the following subsections.
\subsection{Moments of the Spectral functions}
\label{ss-moments}
Although the spectral functions themselves cannot be predicted
in QCD, the moments $R_{kl}$ of those functions:
$$ R_{kl}^{v/a} = \int^{m_\tau^2}_0 ds \left(1 - \frac{s}{m_\tau^2}\right)^k
\left(\frac{s}{m_\tau^2}\right)^l
\frac{1}{N_{v/a}}
\frac{dN_{v/a}}{ds},
$$
with $k = 1$, $l = 0 \cdots 3$ {\it are} calculable.
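The moment definition can be sketched numerically, evaluated on a toy normalized spectral distribution (the shape is invented purely for illustration):

```python
import numpy as np

m_tau2 = 1.777**2                              # GeV^2
s = np.linspace(0.0, m_tau2, 400)
dN_ds = np.exp(-(s - 0.6)**2 / 0.1)            # toy spectral shape
dN_ds /= np.trapz(dN_ds, s)                    # normalize: (1/N) dN/ds

def R_kl(k, l):
    """Moment with weight (1 - s/m_tau^2)^k (s/m_tau^2)^l."""
    w = (1 - s / m_tau2)**k * (s / m_tau2)**l
    return np.trapz(w * dN_ds, s)

moments = {(1, l): R_kl(1, l) for l in range(4)}   # k = 1, l = 0..3
```

Since both weight factors lie between 0 and 1, higher-$l$ moments are progressively suppressed, which is what makes them differently sensitive to the perturbative and non-perturbative terms.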
In direct analogy with $R_\tau$ (section \ref{s-leptonic}),
the moments (for non-strange final states) can be expressed as:
$$R_{kl}^{v/a} = \frac{3}{2} |V_{ud}|^2 S_{EW} (1+\delta_{pert}
+\delta_{mass}^{v/a}+\delta_{NP}^{v/a}),$$
where $V_{ud}$ is the CKM matrix element,
and $S_{EW}$ is a small and calculable electroweak correction.
$\delta_{pert}$ is a calculable polynomial
in the strong coupling constant $\alpha_S(m_\tau^2)$,
$\delta_{mass}^{v/a}$ is a quark mass correction:
$$\delta_{mass}^{v/a} \simeq - 16 \frac{\bar{m}_q^2}{m_\tau^2},$$
and $\delta_{NP}$ describes non-perturbative
effects in terms of incalculable expectation values
of quark and gluon operators in the operator product expansion (OPE):
$$\delta_{NP}^{v/a} \simeq
C_4^{v/a} \frac{\vev{\cal{O}}^4}{m_\tau^4}
+ C_6^{v/a} \frac{\vev{\cal{O}}^6}{m_\tau^6}
+ C_8^{v/a} \frac{\vev{\cal{O}}^8}{m_\tau^8}.$$
The $C_n$ coefficients describe short-distance effects,
calculable in QCD; the expectation values of the operators
are the non-perturbative condensates. For example,
$$\vev{\cal{O}}^4 \sim \vev{\frac{\alpha_S}{\pi} GG}
+ \vev{m \bar{\psi}_q \psi_q} .$$
The important point is that one can calculate
distinct forms for $\delta_{pert}$ and $\delta_{NP}$
for each of the moments (values of $k$ and $l$),
separately for $V$ and $A$.
One can measure several different moments,
and from these, extract values for $\alpha_S(m_\tau^2)$
and for each of the non-perturbative condensates.
The result depends only on the method used to obtain
the QCD perturbation expansion; several methods are available,
including the CIPT mentioned in section \ref{s-leptonic}.
Both OPAL and ALEPH measure the moments of their
quasi-inclusive spectral functions, and fit to extract
values for $\alpha_S(m_\tau^2)$
and for the non-perturbative condensates.
The results are presented in \cite{ref:Menke,ref:Hoecker}.
The value of $\alpha_S(m_\tau^2)$ is in good agreement
with the one determined solely from the electronic branching fraction
(section \ref{s-leptonic}), but without the assumption that
$\delta_{NP}$ is small. It extrapolates to a value
at the $Z^0$ pole, $\alpha_S(m_Z^2)$, which agrees well
with measurements made there from hadronic event shapes
and other methods.
More importantly, the non-perturbative condensates
are indeed measured to be small ($\sim 10^{-2}$).
\subsection{QCD Chiral Sum Rules}
One can use the structure of QCD, and/or chiral perturbation theory,
to predict the moments of the difference
$v(s)-a(s)$ of the spectral functions (with $s = q^2$).
The physics of these sum rules is reviewed in \cite{ref:Rafael}.
Four sum rules have been studied with tau decay data:
\begin{itemize}
\item First Weinberg sum rule:
$$\frac{1}{4\pi^2}
\int_0^\infty ds \left(v_1(s) - a_1(s)\right) = f_\pi^2 $$
\item Second Weinberg sum rule:
$$\frac{1}{4\pi^2}
\int_0^\infty ds \cdot s \left(v_1(s) - a_1(s)\right) = 0$$
\item Das-Mathur-Okubo sum rule:
$$\frac{1}{4\pi^2}
\int_0^\infty \frac{ds}{s}
\left(v_1(s) - a_1(s)\right) = f_\pi^2 \frac{\vev{r_\pi^2}}{3} - F_A$$
\item Isospin-violating sum rule:
\begin{eqnarray*}
\frac{1}{4\pi^2}
\int_0^\infty ds && s \ln\frac{s}{\Lambda^2}
\left(v_1(s) - a_1(s)\right) = \\
&& -\frac{16\pi^2 f_\pi^2}{3\alpha}
\left( m_{\pi^\pm}^2 - m_{\pi^0}^2 \right).
\end{eqnarray*}
\end{itemize}
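The way such sum rules are confronted with tau data can be sketched numerically: integrate a toy $v_1 - a_1$ difference up to a variable cutoff $s_0 \le m_\tau^2$ and watch whether the truncated integral settles (the resonance shapes below are toys, not the measured spectral functions):

```python
import numpy as np

m_tau2 = 1.777**2
s = np.linspace(1e-3, m_tau2, 500)
v1 = 1.2 * np.exp(-(s - 0.60)**2 / 0.02)   # toy rho-dominated vector part
a1 = 0.9 * np.exp(-(s - 1.51)**2 / 0.08)   # toy a1-dominated axial part

def weinberg1(s0):
    """Truncated first Weinberg sum-rule integral, (1/4pi^2) int_0^s0 (v1 - a1) ds."""
    mask = s <= s0
    return np.trapz(v1[mask] - a1[mask], s[mask]) / (4 * np.pi**2)

# The running of the truncated integral versus the cutoff s0 probes
# whether m_tau^2 is already "asymptotia":
running = [weinberg1(s0) for s0 in (0.8, 1.2, 2.0, m_tau2)]
```

Because $v_1 - a_1$ changes sign between the $\rho$ and $a_1$ regions, the truncated integral oscillates before (one hopes) converging, which is exactly the behavior studied by ALEPH below.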
The first, second, and fourth sum rule listed above
have definite predictions on their right-hand side,
and the data can be used to test those predictions.
However, the spectral functions measured in tau decay
extend up to $s = m_\tau^2$, not infinity.
So in practice, the tests only allow one to address the
question of whether $m_\tau^2$ is close enough to infinity,
{\it i.e.}, whether it is ``asymptotia''.
So far, the data are {\it consistent} with the
sum rule predictions and with the assumption
that $m_\tau^2$ is sufficiently close to infinity
(see Fig.~\ref{fig:qcdsum} for the OPAL results);
however, the data are not yet
sufficiently precise to
provide a quantitative test of these predictions.
Nonetheless, ALEPH has studied the evolution of
the integral of the spectral functions and their moments
as a function of the cutoff $s \le m_\tau^2$,
and compared them with the theoretical prediction
for the perturbative and non-perturbative terms
as a function of their renormalization scale $s$
(fixing them at $s=m_\tau^2$ to the values they
obtain from their fits).
In all cases, they find \cite{ref:Hoecker}
that the experimental
distributions and the theoretical predictions
overlap and track each other well
before $s=m_\tau^2$. It appears that $s=m_\tau^2$
{\it is} asymptotia.
One can use the third (DMO) sum rule to extract
a value for the pion electric polarizability
$$\alpha_E = \frac{\alpha F_A}{m_\pi f_\pi^2}.$$
This can be compared with predictions from
the measured value of the axial-vector form factor $F_A$,
which give $\alpha_E = (2.86\pm 0.33)\times 10^{-4}$ fm$^3$.
OPAL \cite{ref:Menke}
obtains $\alpha_E = (2.71\pm 0.88)\times 10^{-4}$ fm$^3$,
in good agreement with the prediction.
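The conversion behind these numbers can be checked directly, assuming $F_A \approx 0.0119$ (from radiative pion decay) and the $f_\pi \approx 130.7$ MeV normalization, with $\hbar c$ used to convert GeV$^{-3}$ to fm$^3$ (the input values are assumptions chosen for illustration):

```python
# Pion electric polarizability, alpha_E = alpha * F_A / (m_pi * f_pi^2).
alpha = 1 / 137.036       # fine-structure constant
F_A = 0.0119              # axial form factor from pi -> e nu gamma (assumed value)
m_pi = 0.13957            # GeV, charged pion mass
f_pi = 0.1307             # GeV, pion decay constant (130.7 MeV convention)
hbarc = 0.197327          # GeV * fm

alpha_E_gev = alpha * F_A / (m_pi * f_pi**2)   # in GeV^-3
alpha_E = alpha_E_gev * hbarc**3               # in fm^3, ~ 2.8e-4
```

With these inputs one recovers a value close to the quoted prediction of $(2.86\pm 0.33)\times 10^{-4}$ fm$^3$.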
\begin{figure}[ht]
\centerline{
\psfig{figure=figs/qcdsum,width=1.2in}
\psfig{figure=figs/qcdsum2,width=1.2in}}
\centerline{
\psfig{figure=figs/qcdsum3,width=1.2in}
\psfig{figure=figs/qcdsum4,width=1.2in}}
\caption[]{QCD sum rule integrals versus the upper integration limit
from OPAL data, for the four sum rules given in the text.
The chiral prediction is given by the lines \cite{ref:Menke}.}
\label{fig:qcdsum}
\end{figure}
\subsection{$\mathbf{(g-2)_\mu}$ from $\mathbf{v(s)}$ and CVC}
\label{ss-gminus2}
As noted in section \ref{ss-emdipole},
the muon's anomalous magnetic moment $a_\mu$ is
measured with far higher precision than that of the tau,
and is in excellent agreement with the precise
theoretical prediction.
The experimental precision will soon improve considerably
\cite{ref:Grosse},
and threatens to exceed the precision with which
the theoretical prediction is determined.
Until recently, the contribution to $a_\mu$,
$$\left(\frac{g-2}{2}\right)_\mu \equiv a^\gamma_\mu
= a_\mu^{QED} + a_\mu^{W} + a_\mu^{had} ,$$
from virtual hadronic effects $a_\mu^{had}$ had large uncertainties.
The contribution from weak effects
($a_\mu^{W}$, from $W$-exchange vertex correction)
is small:
$a_\mu^W = (151\pm 40)\times 10^{-11}$.
Observing this contribution is one of the goals of the
current round of measurements.
(A more important goal is to look for effects of the same scale
due to physics beyond the Standard Model).
The contribution from hadronic effects
(quark loops in the photon propagator in the EM vertex correction,
and, to a lesser extent, ``light-by-light'' scattering \cite{ref:Czarnecki})
is much larger, and its uncertainty was larger than the
entire contribution from $a_\mu^W$:
$a_\mu^{had} = (7024\pm 153)\times 10^{-11}$.
This value for $a_\mu^{had}$ was obtained by relating
the quark loops in the photon propagator
to the total rate for $\gamma^*\to q\bar{q}$
as measured in $e^+e^-$ annihilation experiments at low
$s = q^2 <$ (2 GeV)$^2$.
Unfortunately, these experiments had significant
overall errors in their measured values for the
total hadronic cross section $\sigma(s)$.
These results are currently being improved,
as reported in \cite{ref:Eidelman}.
In the meantime, one can use the
vector part of the total decay rate
of the tau to non-strange final states, $v(s)$,
to determine $a_\mu^{had}$.
One must assume CVC, which relates the
vector part of the weak charged current
to the isovector part of the electromagnetic current;
and one must correct for the isoscalar part of the current
which cannot be measured in tau decay.
In addition, one must use other data
to estimate the contribution to $a_\mu^{had}$
from $s > m_\tau^2$; however, the contribution
from $s < m_\tau^2$ dominates the value and the error.
Using the vector spectral function $v(s)$
measured by ALEPH, one obtains \cite{ref:Davier}
a value for $a_\mu^{had}$ with improved errors:
$a_\mu^{had} = (6924\pm 62)\times 10^{-11}$.
Now the error is smaller than the contribution
from $a_\mu^W$, and the sensitivity of the
forthcoming precision experimental result
to new physics is greatly improved.
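The improvement can be seen by comparing the quoted uncertainties directly (numbers as given in the text, in units of $10^{-11}$):

```python
# Contributions to a_mu and their errors, in units of 1e-11 (from the text).
a_W, a_W_err = 151, 40                 # weak contribution
a_had_ee, a_had_ee_err = 7024, 153     # hadronic, from e+e- annihilation data
a_had_tau, a_had_tau_err = 6924, 62    # hadronic, from tau spectral function + CVC

error_exceeded_weak = a_had_ee_err > a_W   # old hadronic error exceeded all of a_W
error_below_weak = a_had_tau_err < a_W     # tau-based error is now below a_W
```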
However, the use of tau data to determine
$a_\mu^{had}$ with high precision
relies on CVC to 1\%.
Is this a valid assumption?
\subsection{Testing CVC}
\label{ss-cvc}
To test the validity of CVC at the per cent level,
one can compare the new
VEPP-II data on $e^+e^-\to 2n\pi$ \cite{ref:Eidelman}
to data from tau decays (from ALEPH, DELPHI, and CLEO).
When this is done, small discrepancies appear,
both in individual channels ($2\pi$ and $4\pi$)
and in the total rate via the vector current.
Discrepancies are expected, at some level,
because of isospin violation.
These comparisons are made in \cite{ref:Eidelman},
where the data from VEPP-II are converted (using CVC)
into predictions for the branching fractions of the tau
into the analogous (isospin-rotated)
final states:
$$ \frac{{\cal B}(\tau\to\pi\pi\nu) - {\cal B}_{CVC}}{{\cal B}(\tau\to\pi\pi\nu)}
= (3.2\pm 1.4)\%, $$
$$\frac{\Delta{\cal B}}{{\cal B}} (2\pi+4\pi+6\pi+\eta\pi\pi+K\bar{K})
= (3.6\pm 1.5)\%. $$
In addition, a comparison of the spectral function {\it shape}
extracted from $\tau\to 2\pi\nu_\tau$
and that extracted from $e^+e^-\to 2\pi$
shows discrepancies at the few per cent level.
It is not clear whether these comparisons mean that
CVC is only good to $\sim 3\%$,
or whether the precision in the data used for the comparison
needs improvement.
To be conservative, however, results that rely on
CVC should be quoted with an error
that reflects these discrepancies.
\section{EXCLUSIVE FINAL STATES}
\label{s-structure}
The semi-hadronic decays of the tau to
exclusive final states lie in the realm of low-energy meson dynamics,
and as such cannot be described with perturbative QCD.
In the limit of small energy transfers,
chiral perturbation theory can be used to predict rates;
but models and symmetry considerations must be used to
extrapolate to the full phase space of the decay.
In general, the Lorentz structure of the decay
(in terms of the 4-vectors of the final state pions, kaons,
and $\eta$ mesons) can be specified; models are then required
to parameterize the a priori unknown form factors
in the problem.
Information from independent measurements in low energy
meson dynamics can be used to reduce the number of free
parameters in such models; an example is given in \cite{ref:bingan}.
Measurements of the total branching fractions
to different exclusive final states
({\it e.g.}, $h n\pi^0{\nu_\tau}$, $n = 0 \ldots 4$,
$3h n\pi^0{\nu_\tau}$, $n = 0 \ldots 3$,
$5h n\pi^0{\nu_\tau}$, $n = 0 \ldots 1$, with $h = \pi^\pm$ or $K^\pm$;
or $\eta n\pi^0{\nu_\tau}$, $n = 2 \ldots 3$)
have been refined for many years,
and remain important work;
recent results from DELPHI are presented in \cite{ref:Lopez}.
Branching fractions for final states containing kaons
($K^\pm$, $K^0_S$, and $K^0_L$)
are presented in \cite{ref:Kravchenko,ref:Andreazza,ref:Chen}
and are discussed in more detail in section \ref{s-kaons}.
The world average summaries of all the
semi-hadronic exclusive branching fractions
are reviewed in \cite{ref:Heltsley}.
Since the world average branching fractions for all
exclusive tau decays now sum to one with small errors,
emphasis has shifted to the detailed study of
the structure of exclusive final states.
At previous tau workshops, the focus was on the simplest
final states with structure: $\pi\pi^0{\nu_\tau}$ and $K\pi{\nu_\tau}$
\cite{ref:rhostructure}.
At this workshop, the attention has shifted to the
$3\pi$, $K\pi\pi$, and $K\bar{K}\pi$ final states,
which proceed dominantly through the axial-vector current.
The results for the $K\pi\pi$ and $K\bar{K}\pi$ final states
are discussed in section \ref{s-kaons};
here we focus on $3\pi$.
The $4\pi$ final state remains to be studied in detail.
Final states with 5 or 6 pions
contain so much resonant substructure,
and are so rare in tau decays, that detailed
fits to models have not yet been attempted.
However, isospin can be used to characterize
the pattern of decays; this is discussed,
using data from CLEO, in \cite{ref:Gan}.
Recent results on $\tau\to 3\pi{\nu_\tau}$ from
OPAL, DELPHI, and CLEO are discussed in \cite{ref:Schmidtler}.
There are two complementary approaches that can be taken:
describing the decay in a Lorentz-invariant way,
parameterized by form-factors which model
intermediate resonances; or via model-independent
structure functions, defined in a specified angular basis.
The model-dependent approach gives a simple picture
in terms of well-defined decay chains, such as
$a_1 \to \rho\pi\to 3\pi$ or $K_1\to (K^*\pi, K\rho) \to K\pi\pi$;
but the description is only as good as the model,
and any model is bound to be incomplete.
The structure function approach results in large tables of numbers
which are harder to interpret (without a model);
but it has the advantage that some of the functions,
if non-zero, provide model-independent evidence for sub-dominant processes
such as pseudoscalar currents ({\it e.g.}, $\pi^\prime(1300)\to 3\pi$)
or vector currents ({\it e.g.}, $K^{*\prime} \to K\pi\pi$).
OPAL and DELPHI present fits of their $3\pi$ data
to two simple models for $a_1\to \rho\pi$,
neither of which describes the data in detail \cite{ref:Schmidtler}.
DELPHI finds that in order to fit the high $m_{3\pi}$ region
with either model, a radially-excited $a_1^\prime(1700)$ meson
is required, with a branching fraction
${\cal B}(\tau\to a_1^\prime{\nu_\tau}\to 3\pi{\nu_\tau})$
of a few $\times 10^{-3}$, depending upon the model.
Even then, the fits to the Dalitz plot variables are poor.
The presence of an enhancement at high mass
(over the simple models) has important consequences
for the extraction of the tau neutrino mass using
$3\pi{\nu_\tau}$ events.
OPAL also analyzes their data in terms of structure functions,
and from these, they set limits on scalar currents:
$$\Gamma^{scalar}/\Gamma^{tot} (3\pi{\nu_\tau}) < 0.84\%,$$
and make a model-independent determination of the {\it signed}
tau neutrino helicity:
$$h_{\nu_\tau} = -1.29\pm 0.26\pm 0.11$$
(in the Standard Model, $h_{\nu_\tau} = -1$).
CLEO does a model-dependent fit to their
$\tau^-\to\pi^-\pi^0\pi^0{\nu_\tau}$ data.
They have roughly 5 times the statistics
of OPAL or DELPHI.
This allows them to consider contributions
from many sub-dominant processes, including:
$a_1\to \rho^\prime\pi$, both S-wave and D-wave;
$a_1\to f_2(1275)\pi$, $\sigma\pi$, and $f_0(1370)\pi$;
and $\pi^\prime(1300)\to 3\pi$.
Here, the $\sigma$ is a broad scalar resonance
which is intended to ``mock up'' the complex structure
in the S-wave $\pi\pi$ scattering amplitude above threshold,
according to the Unitarized Quark Model.
CLEO also considers the process $a_1\to K^*K$,
as a contribution to the total width and therefore
the Breit-Wigner propagator for the $a_1$
(of course, the final state that is studied
does not receive contributions from $K^*K$).
CLEO finds significant contributions from all of these processes,
with the exception of $\pi^\prime(1300)\to 3\pi$.
All measures of goodness-of-fit are excellent,
throughout the phase space for the decay.
There is also excellent agreement with the
data in the $\pi^-\pi^+\pi^-{\nu_\tau}$ final state,
which, because of the presence of isoscalars
in the substructure, is non-trivial.
There is strong evidence for a $K^*K$ threshold.
There is only very weak evidence for an $a_1^\prime(1700)$.
They measure the radius of the $a_1$ meson to be $\approx 0.7$ fm.
They set the 90\%\ C.L. limit
$$\Gamma(\pi^\prime(1300)\to\rho\pi)/\Gamma(3\pi) < 1.0\times 10^{-4},$$
and make a model-dependent determination of the signed
tau neutrino helicity:
$$h_{\nu_\tau} = -1.02\pm 0.13\pm 0.03 \;\mbox{(model)}.$$
{\it All} of these results are model-dependent;
but the model fits the data quite well.
\section{KAONS IN TAU DECAY}
\label{s-kaons}
Kaons are relatively rare in tau decay,
and modes beyond $K{\nu_\tau}$ and $K^*{\nu_\tau}$
are only being measured with some precision
in recent years.
At TAU 98, ALEPH presented \cite{ref:Chen}
branching fractions for 27 distinct modes with $K^\pm$, $K^0_S$,
and/or $K^0_L$ mesons, including $K3\pi$;
DELPHI presented \cite{ref:Andreazza}
12 new (preliminary) branching fractions,
and CLEO presented \cite{ref:Kravchenko}
an analysis of
four modes of the form $K^- h^+\pi^-(\pi^0)\nu_\tau$.
In the $K\pi$ system, ALEPH
sees a hint of $K^{*\prime}(1410)$,
with an amplitude (relative to $K^*(892)$)
which is in good agreement with
the analogous quantity from $\tau\to (\rho,\rho^\prime){\nu_\tau}$.
CLEO sees no evidence for anything beyond the $K^*(892)$.
ALEPH and CLEO both study the $K\pi\pi$ system.
Here, one expects contributions from:
the axial-vector $K_1(1270)$, which decays to $K^*\pi$, $K\rho$,
and other final states;
the axial-vector $K_1(1400)$, which decays predominantly to $K^*\pi$;
and, to a much lesser extent, the vector $K^{*\prime}$,
via the Wess-Zumino parity-flip mechanism.
Both ALEPH and CLEO see more $K_1 (1270)$ than $K_1(1400)$,
with significant signals for $K\rho$ as well as $K^*\pi$
in the Dalitz plot projections.
The two $K_1$ resonances are quantum mechanical mixtures
of the $K_{1a}$ (from the $J^{PC} = 1^{++}$ nonet, the strange
analog of the $a_1$),
and the $K_{1b}$ (from the $J^{PC} = 1^{+-}$ nonet, the strange
analog of the $b_1$).
The coupling of the $b_1$ to the $W$ is a second-class current,
permitted in the Standard Model only via isospin violation.
The coupling of the $K_{1b}$ to the $W$ is permitted
only via $SU(3)_f$ violation.
CLEO extracts the $K_{1a} - K_{1b}$ mixing angle
(with a two-fold ambiguity)
and $SU(3)_f$-violation parameter $\delta$ in $\tau \to K_{1b}\nu_\tau$,
giving results consistent with
previous determinations from hadroproduction experiments
\cite{ref:Kravchenko}.
ALEPH studies the $K\bar{K}\pi$ structure, and finds \cite{ref:Chen}
that $K^*K$ is dominant, with little contribution
from $\rho\pi$, $\rho\to K\bar{K}$.
The $K\bar{K}\pi$ mass spectrum is consistent
with coming entirely from $a_1\to K\bar{K}\pi$,
although there may be a large vector component.
ALEPH also analyzes the isospin content of the
$K\pi$, $K\pi\pi$, and $K\bar{K}\pi$
systems. Finally, they classify the net-strange final states
as arising from the vector or axialvector current,
and construct the strange spectral function
$(v+a)^S_1(s)$ (using the data for the $K\pi$ and $K\pi\pi$
components, and the Monte Carlo for the small contributions
from $K3\pi$, $K4\pi$, {\it etc.}) \cite{ref:Chen}.
This function, shown in Fig.~\ref{fig:specs_aleph},
can then be used for QCD studies, as discussed
in the next section.
\begin{figure}[ht]
\psfig{figure=figs/specs_aleph.ps,width=2.6in}
\caption[]{Total $V+A$ spectral function from $\tau$ decays into
strange final states, from ALEPH.
The contributions from the exclusive channels, from data and MC,
are indicated \cite{ref:Chen}.}
\label{fig:specs_aleph}
\end{figure}
\subsection{$\mathbf{m_s}$ from $\mathbf{R_{\tau,s}}$}
\label{ss-ms}
The total strange spectral function can be used
to extract QCD parameters, in direct analogy
with the total and non-strange rates and moments
as described in sections \ref{s-leptonic} and \ref{ss-moments}.
In the strange case, we have
\begin{eqnarray*}
R_\tau^s &\equiv& \frac{{\cal B}_{Kn\pi}}{{\cal B}_e} \\
&=& 3 V_{us}^2 S_{EW} (1+\delta_{pert}+\delta_{mass}^s+\delta_{NP}),
\end{eqnarray*}
and we focus on the quark mass term:
$$\delta_{mass}^s \simeq
- 8 \frac{\bar{m}_s^2}{m_\tau^2} \left[
1 +\frac{16}{3}\frac{\alpha_S}{\pi} +
{\cal O}\left(\frac{\alpha_S}{\pi}\right)^2 \right],$$
where $\bar{m}_s = m_s(m_\tau^2)$ is the $\overline{\mbox{MS}}$
running strange quark mass, evaluated at the tau mass scale.
For $m_s(m_\tau^2) \approx 150$ MeV/c$^2$, we expect
$\delta_{mass}^s \approx -10\%$, and $R_\tau^s \approx 0.16$
(but with a large uncertainty due to poor convergence of
the QCD expansion).
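These estimates are easy to reproduce numerically. In the sketch below, the values of $\alpha_S(m_\tau)$, $|V_{us}|$, $S_{EW}$, and $\delta_{pert}$ are typical inputs assumed for illustration; they are not quoted in the text:

```python
import math

# Assumed inputs (illustrative, not taken from the text):
alpha_s = 0.334          # strong coupling at the tau mass scale
m_tau = 1.777            # GeV/c^2
V_us = 0.2196            # CKM matrix element
S_EW = 1.0194            # electroweak correction factor
delta_pert = 0.20        # massless perturbative correction, approximate

def delta_mass_s(m_s):
    """Leading strange-quark-mass correction, through O(alpha_s)."""
    return -8.0 * (m_s / m_tau) ** 2 * (1.0 + 16.0 / 3.0 * alpha_s / math.pi)

d = delta_mass_s(0.150)                       # m_s ~ 150 MeV/c^2
R_tau_s = 3.0 * V_us**2 * S_EW * (1.0 + delta_pert + d)
print(f"delta_mass^s = {d:+.3f}, R_tau^s = {R_tau_s:.3f}")
```

With these inputs one recovers $\delta_{mass}^s \approx -9\%$ and $R_\tau^s \approx 0.16$, in line with the expectations above.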
The history of the calculation of the
${\cal O}\left(\frac{\alpha_S}{\pi}\right)^2$ term,
and the apparent convergence of the series, has been rocky.
But considerable progress has been made in the last year,
and the theoretical progress is reviewed in \cite{ref:Prades}.
The value of the strange quark mass appears in many
predictions of kaon properties;
most importantly, it appears in the theoretical expression
for the parameter governing direct CP violation
in the kaon system, $\epsilon^\prime/\epsilon$.
Thus we need to know its value in order to extract information
on the CKM matrix from measurements of direct CP violation.
ALEPH has constructed the strange spectral function
as described in the previous section
and shown in Fig.~\ref{fig:specs_aleph},
and has calculated its integral:
$$R_\tau^s = 0.1607\pm0.0066$$
and its moments $R_{\tau,s}^{kl}$.
By comparing with the non-strange moments,
they cancel, to lowest order,
the mass-independent non-perturbative terms.
They fit for $m_s(m_\tau^2)$
and the residual non-perturbative condensates.
They obtain \cite{ref:Hoecker}
the strange quark running mass
$m_s(m_\tau^2) = (163^{+34}_{-43})$ MeV/c$^2$.
At the (1 GeV)$^2$ scale,
$m_s(1\,\mbox{GeV}^2) = (217^{+45}_{-57})$ MeV/c$^2$,
which compares well with estimates from
sum rules and lattice calculations.
The uncertainty is rather large, especially
the component that comes from the uncertainty
in the convergence in the QCD series;
but improvements are expected.
\section{TAU NEUTRINO MASS}
\label{s-mnutau}
In the Standard Model, the neutrinos are assumed to be massless,
but nothing prevents them from having a mass.
If they do, they can mix (in analogy with the down-type quarks),
exhibit CP violation, and potentially decay.
In the so-called ``see-saw'' mechanism, the tau neutrino
is expected to be the most massive neutrino.
Indirect bounds from cosmology and big-bang nucleosynthesis
imply that
if the $\nu_\tau$ has a lifetime long enough to have not decayed
before the period of decoupling,
then a mass region between 65 eV/c$^2$ and 4.2 GeV/$c^2$
can be excluded.
For unstable neutrinos, these bounds can be evaded.
A massive Dirac neutrino will have a right-handed component,
which will interact very weakly with matter via the
standard charged weak current.
Produced in supernova explosions, these right-handed neutrinos
will efficiently cool the newly-forming neutron star,
distorting the time-dependent neutrino flux.
Analyses of the detected neutrino flux
from supernova 1987a
result in allowed ranges $m_{\nu_\tau} < 15-30$ keV/$c^2$
or $m_{\nu_\tau} > 10-30$ MeV/$c^2$,
depending on assumptions.
This leaves open a window for an MeV-range mass for $\nu_\tau$
of 10-30 MeV/$c^2$, with lifetimes on the order of
$10^5 - 10^9$ seconds.
The results from Super-K
(\cite{ref:Nakahata}, see section \ref{s-neuosc} below)
suggest neutrino mixing, and therefore, mass.
If they are observing $\nu_\mu \leftrightarrow \nu_\tau$
oscillation, then \cite{ref:McNulty} $m_{\nu_\tau} < 170$ keV/c$^2$,
too low to be seen in collider experiments.
If instead they are observing oscillations
of $\nu_\mu$ to some sterile neutrino,
then there is no information from Super-K
on the tau neutrino mass.
If neutrino signals are observed from a galactic supernova,
it is estimated that neutrino masses as low as 25 eV/$c^2$
could be probed by studying the dispersion in arrival time
of the neutrino events in a large underground detector
capable of recording neutral current interactions.
Very energetic neutrinos from a distant active galactic nucleus (AGN)
could be detected at large underground detectors (existing or planned).
If neutrinos have mass and therefore a magnetic moment,
their spin can flip in the strong magnetic field of the AGN,
leading to enhanced spin-flip and flavor oscillation effects
\cite{ref:Husain}.
The detectors must have the ability to measure the
direction of the source, the energy of the neutrino,
and its flavor.
At TAU 98, CLEO presented \cite{ref:Duboscq} two new limits,
one using $\tau\to 5\pi^\pm{\nu_\tau}$ and $3\pi^\pm2\pi^0{\nu_\tau}$,
and a preliminary result using $3\pi^\pm\pi^0{\nu_\tau}$.
Both results are based on the distribution of events in the
2-dimensional space of $m_{n\pi}$ {\it vs.}\ $E_{n\pi}^{lab}$,
looking for a kinematical suppression in the
high-mass, high energy corner
due to a 10 MeV-scale neutrino mass (see Fig.~\ref{fig:duboscq1}).
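The origin of the endpoint suppression is simple two-body kinematics: a nonzero $m_{\nu_\tau}$ lowers the maximum hadronic energy at fixed hadronic mass. The sketch below is illustrative only; it assumes each tau carries the beam energy ($E_{beam}=5.29$ GeV, as at CESR) and ignores initial-state radiation and detector resolution:

```python
import math

m_tau = 1.777      # GeV/c^2
E_beam = 5.29      # GeV; taus are pair-produced with roughly the beam energy

def e_had_max_lab(m_had, m_nu):
    """Maximum lab-frame hadronic energy in tau -> (n pi) nu for given masses."""
    # Hadronic energy and momentum in the tau rest frame
    e_star = (m_tau**2 + m_had**2 - m_nu**2) / (2.0 * m_tau)
    p_star = math.sqrt(max(e_star**2 - m_had**2, 0.0))
    # Boost to the lab along the tau flight direction (best case)
    gamma = E_beam / m_tau
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    return gamma * (e_star + beta * p_star)

# A nonzero neutrino mass pulls the endpoint down, most visibly at high m_had
m_had = 1.70  # GeV/c^2, near the kinematic limit
shift = e_had_max_lab(m_had, 0.0) - e_had_max_lab(m_had, 0.030)
print(f"endpoint shift for m_nu = 30 MeV: {1000*shift:.1f} MeV")
```

The shift is at the tens-of-MeV level near the edge of phase space, which is why the limits hinge on the few events in the high-mass, high-energy corner.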
This technique has been used previously by ALEPH, DELPHI, and OPAL.
\begin{figure}[ht]
\psfig{figure=figs/duboscq1,width=2.6in}
\caption[]{The scaled hadronic energy $E_{n\pi}^{lab}/E_{beam}$
{\it vs.}\ hadronic mass for (a) the $5\pi$ and
(b) the $3\pi^\pm2\pi^0$ event candidates from CLEO.
Ellipses represent the resolution contours \cite{ref:Duboscq}.}
\label{fig:duboscq1}
\end{figure}
A summary of the best direct tau neutrino mass limits
at 95\%\ C.L.~is given \cite{ref:McNulty} in Table~\ref{tab:numass}.
\begin{table}[!ht]
\centering
\caption[]{Limits on the $\tau$ neutrino mass.}
\label{tab:numass}
\begin{tabular}{llr} \hline
Indirect & from ${\cal B}_e$ & 38 MeV \\
ALEPH & $5\pi(\pi^0)$ & 23 MeV \\
ALEPH & $3\pi$ & 30 MeV \\
ALEPH & both & 18.2 MeV \\
OPAL & $5\pi$ & 43 MeV \\
DELPHI & $3\pi$ & 28 MeV \\
OPAL & $3\pi$ {\it vs.}\ $3\pi$ & 35 MeV \\
CLEO (98) & $5\pi$, $3\pi2\pi^0$ & 30 MeV \\
CLEO (98p) & $4\pi$ & 31 MeV \\
\hline
\end{tabular}
\end{table}
The limits are usually dominated by a few ``lucky'' events
near the endpoint, which necessarily have a low probability;
they are likely to be upward fluctuations
in the detector's mass {\it vs.}\ energy resolution response.
Therefore, it is essential to understand and model
that response, especially the tails.
Extracting meaningful limits on the neutrino mass
using such kinematical methods is made difficult by
many subtle issues regarding resolution,
event migration, modeling of the spectral functions,
and certainly also {\it luck}.
Pushing the discovery potential to or below 10 MeV
will require careful attention to these issues,
but most importantly, much higher statistics.
We look forward to the high-statistics event samples
obtainable at the B Factories soon to come on line.
\section{NEUTRINO OSCILLATIONS}
\label{s-neuosc}
If neutrinos have mass,
then they can mix with one another,
thereby violating lepton family number conservation.
In a two-flavor oscillation situation,
a beam of neutrinos that are initially of one pure flavor,
{\it e.g.}, $\nu_\mu$
will oscillate to another flavor, {\it e.g.}, $\nu_\tau$
with a probability given by
$$P(\nu_\mu\to\nu_\tau) =
\sin^2(2\theta_{\mu\tau})\sin^2(\pi L/L_0) ,$$
where the strength of the oscillation is governed by
the mixing parameter $\sin^2 2\theta_{\mu\tau}$.
$L$ is the distance from the source of initially pure $\nu_\mu$
in kilometers, and
the oscillation length $L_0$ is given by
$$ L_0 = \frac{2.48 E_\nu\,[\mbox{GeV}]}{
\Delta m^2\,[\mbox{eV}^2] }. $$
$\Delta m^2$ is the difference of squared masses
of the two flavors measured in eV$^2$, and
$E_\nu$ is the neutrino energy measured in GeV.
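As an illustration, the two-flavor vacuum formula can be evaluated directly; the sketch below uses atmospheric-scale parameters ($\Delta m^2 = 2.2\times 10^{-3}$ eV$^2$, maximal mixing) for orientation:

```python
import math

def osc_length_km(E_nu_GeV, dm2_eV2):
    """Vacuum oscillation length L0 = 2.48 E/dm^2, in km for E in GeV, dm^2 in eV^2."""
    return 2.48 * E_nu_GeV / dm2_eV2

def prob_mu_to_tau(L_km, E_nu_GeV, dm2_eV2, sin2_2theta):
    """Two-flavor vacuum oscillation probability P(nu_mu -> nu_tau)."""
    L0 = osc_length_km(E_nu_GeV, dm2_eV2)
    return sin2_2theta * math.sin(math.pi * L_km / L0) ** 2

# Atmospheric-style numbers: dm^2 = 2.2e-3 eV^2, maximal mixing, E = 1 GeV
L0 = osc_length_km(1.0, 2.2e-3)                       # roughly 1100 km
p_half = prob_mu_to_tau(L0 / 2.0, 1.0, 2.2e-3, 1.0)   # first oscillation maximum
print(f"L0 = {L0:.0f} km, P at L0/2 = {p_half:.3f}")
```

An oscillation length of order $10^3$ km for GeV neutrinos is exactly why up-going atmospheric neutrinos (path lengths of thousands of km) and long-baseline beams are the right probes of this $\Delta m^2$ scale.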
This formula is only correct in vacuum.
The oscillations are enhanced if the neutrinos are travelling
through dense matter, as is the case for neutrinos
from the core of the Sun.
This enhancement, known as the MSW effect, is invoked
as an explanation
for the deficit of $\nu_e$ from the Sun's
core as observed on Earth.
In such a scenario, all three neutrino flavors mix with one another.
Evidence for neutrino oscillations
has been seen in the solar neutrino deficit
($\nu_e$ disappearance),
atmospheric neutrinos (apparently, $\nu_\mu$ disappearance),
and neutrinos from $\mu$ decay ($\nu_\mu\to \nu_e$
and $\bar{\nu}_\mu\to \bar{\nu}_e$ appearance, at LSND).
At this workshop, upper limits were presented
for neutrino oscillations from the $\nu_\mu$ beam at CERN, from
NOMAD \cite{ref:Paul} and CHORUS \cite{ref:Cussans}.
It appears that, if the evidence for neutrino oscillations
from solar, atmospheric, and LSND experiments are {\it all} correct,
the pattern cannot be explained with only three
Standard Model Dirac neutrinos.
There are apparently three distinct $\Delta m^2$ regions:
\begin{eqnarray*}
\Delta m^2_{solar} &=& 10^{-5} \ \mbox{or}\ 10^{-10}\ \mbox{eV}^2 \\
\Delta m^2_{atmos} &=& 10^{-2} \ \mbox{to}\ 10^{-4} \mbox{eV}^2 \\
\Delta m^2_{LSND} &=& 0.2 \ \mbox{to}\ 2 \ \mbox{eV}^2 .
\end{eqnarray*}
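The difficulty is simple arithmetic: three mass eigenstates give only two independent squared-mass splittings, since $\Delta m^2_{31} = \Delta m^2_{21} + \Delta m^2_{32}$. A toy illustration, with representative values chosen from the quoted ranges:

```python
# With only three neutrino mass eigenstates there are just two independent
# squared-mass splittings: dm2_31 = dm2_21 + dm2_32.
# Representative values from the quoted ranges (an illustration, not a fit):
dm2_solar = 1e-5    # eV^2, taken as dm2_21
dm2_atmos = 1e-2    # eV^2, taken as dm2_32
dm2_third = dm2_solar + dm2_atmos   # the forced value of dm2_31

lsnd_lo, lsnd_hi = 0.2, 2.0  # eV^2
print(f"dm2_31 = {dm2_third} eV^2; inside LSND range: "
      f"{lsnd_lo <= dm2_third <= lsnd_hi}")
```

The forced third splitting falls orders of magnitude below the LSND range, which is the quantitative content of the statement above.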
This mass hierarchy is difficult (but not impossible)
to accommodate in a 3 generation model.
The addition of a
$4^{th}$ (sterile? very massive?) neutrino
can be used to describe either the solar neutrino
or atmospheric neutrino data \cite{ref:Gonzalez}
(the LSND result requires $\nu_\mu\to \nu_e$).
The introduction of such a $4^{th}$ neutrino makes it
relatively easy to describe all the data.
In addition, a light sterile neutrino is a candidate
for hot dark matter, and a heavy neutrino, for cold dark matter.
\subsection{Results from Super-K}
\label{ss-superk}
We turn now to the results on neutrino oscillations
from Super-Kamiokande, certainly
the highlight of {\it any} physics conference in 1998.
Neutrinos produced in atmospheric cosmic ray showers
should arrive at or below the surface of the earth
in the ratio $(\nu_\mu+\bar{\nu}_\mu)/(\nu_e+\bar{\nu}_e) \simeq 2$
for neutrino energies $E_\nu < 1$ GeV, and somewhat higher
at higher energies. There is some uncertainty in the
flux of neutrinos of each flavor from the atmosphere,
so Super-K measures \cite{ref:Nakahata}
the double ratio:
$$ R =
\left(\frac{\nu_\mu+\bar{\nu}_\mu}{\nu_e+\bar{\nu}_e}\right)_{observed}
\left/
\left(\frac{\nu_\mu+\bar{\nu}_\mu}{\nu_e+\bar{\nu}_e}\right)_{calculated}.
\right.
$$
They also measure the
zenith-angle dependence of the flavor ratio;
upward-going
neutrinos have traveled much longer since they were produced,
and therefore have more time to oscillate.
Super-K analyzes several classes of events
(fully contained and partially contained,
sub-GeV and multi-GeV),
classifying them as ``e-like'' and ``$\mu$-like''.
Based on the flavor ratio, the double-ratio,
the lepton energy,
and the zenith-angle dependence,
they conclude that they are observing effects
consistent with $\nu_\mu$ disappearance and thus neutrino oscillations
(see Fig.~\ref{fig:superkz}).
Assuming $\nu_\mu\to\nu_\tau$, their best fit gives
$\Delta m^2 = 2.2\times 10^{-3}\,\mbox{eV}^2$, $\sin^2 2\theta_{\mu\tau} = 1$
(see Fig.~\ref{fig:icarustau}).
They also measure the
upward through-going and stopping muon event rates
as a function of zenith angle, and see consistent results.
Whether these observations imply that muon neutrinos are oscillating
into tau neutrinos or into some other (presumably sterile) neutrino
is uncertain.
\begin{figure}[ht]
\psfig{figure=figs/superkz.ps,width=2.6in}
\caption[]{Zenith angle distribution of atmospheric
neutrino events:
(a) sub-GeV $e$-like;
(b) sub-GeV $\mu$-like;
(c) multi-GeV $e$-like;
(d) multi-GeV $\mu$-like and partially-contained.
Shaded histograms give the MC expectations
without oscillations, dotted histograms
show the best fit assuming neutrino oscillations \cite{ref:Nakahata}.}
\label{fig:superkz}
\end{figure}
Super-K also observes \cite{ref:Nakahata}
some 7000 low-energy $\nu_e$ events
which point back to the sun, presumably produced
in $^8$B decay.
They measure a flux which is significantly smaller
than the predictions from standard solar models
with no neutrino mixing
($\approx 40\%$, depending on the model),
with a hint of energy dependence in the (data/model) ratio.
They see no significant difference
between solar neutrinos which pass through the earth
(detected at night) and those detected during the day.
\subsection{NOMAD and CHORUS}
\label{ss-nomad}
Two new accelerator-based
``short-baseline'' neutrino oscillation experiments
are reporting null results at this workshop.
The NOMAD and CHORUS detectors are situated in the
$\nu_\mu$ beam at CERN, searching for $\nu_\mu\to \nu_\tau$ oscillations.
They have an irreducible background from $\nu_\tau$ in the beam
(from $D_s$ production and decay),
but it is suppressed relative to the $\nu_\mu$ flux by $5\times 10^{-6}$.
NOMAD is an electronic detector which searches for
tau decays to $e\nu\nu$, $\mu\nu\nu$,
$h n\pi^0\nu$, and $3\pi n\pi^0\nu$.
They identify them as decay products of a tau
from kinematical measurements,
primarily the requirement of
missing $p_t$ due to the neutrino(s).
They see no evidence for oscillations \cite{ref:Paul},
and set the following limits at 90\%\ C.L.:
$$P(\nu_\mu \to \nu_\tau) < 0.6\times 10^{-3}$$
$$\sin^22\theta_{\mu\tau} < 1.2\times 10^{-3}$$
for $\Delta m^2 > 10^2$ eV$^2$.
The exclusion plot is shown in Fig.~\ref{fig:nomad}.
The CHORUS experiment triggers on
oscillation candidates with an electronic detector,
then aims for the direct observation of production and decay
of a tau in a massive nuclear emulsion stack target.
They look for tau decays to $\mu\nu\nu$ or $h n\pi^0\nu$,
then look for a characteristic pattern in their emulsion
corresponding to: nothing, then a
tau track, a kink, and a decay particle track.
They also see no evidence for oscillations \cite{ref:Cussans},
and set limits that are virtually identical to those of NOMAD;
the exclusion plot is shown in Fig.~\ref{fig:nomad}.
\begin{figure}[ht]
\psfig{figure=figs/nomad.ps,width=2.6in}
\caption[]{Exclusion plot at 90\%\ C.L.~for $\nu_\mu\leftrightarrow\nu_\tau$
oscillations, from CHORUS and NOMAD \cite{ref:Paul}.}
\label{fig:nomad}
\end{figure}
CHORUS has also reconstructed a beautiful event,
shown in Fig.~\ref{fig:chorusevt},
in which a $\tau^-$ lepton is tracked
in their emulsion \cite{ref:Cussans}.
They infer the decay chain:
$\nu_\mu N \to \mu^- D^{*+}_s N$,
$D^{*+}_s \to D^{+}_s \gamma$,
$D^{+}_s \to \tau^+\nu_\tau$,
$\tau^+\to \mu^+\nu_\mu\bar{\nu}_\tau$.
In their emulsion, they track the $D^{+}_s$, the $\mu^-$,
the $\tau^+$, and the decay $\mu^+$.
\begin{figure}[ht]
\psfig{figure=figs/chorusevt.ps,width=2.6in}
\caption[]{Tracks in the CHORUS emulsion
from a $D_s\to \tau\nu$ candidate \cite{ref:Cussans}.}
\label{fig:chorusevt}
\end{figure}
\subsection{Observation of $\mathbf{\nu_\tau}$}
The appearance of a $\tau$ and its subsequent decay
in a detector exposed to a neutrino beam
would constitute direct observation
of the tau neutrino.
Fermilab experiment 872 ({\bf D}irect {\bf O}bservation
of {\bf NU\_T}au, DONUT) is designed to see directly,
for the first time, such events.
The experiment relies on the production of $D^+_s$
mesons, which decay to $\tau^+\nu_\tau$ with
branching fraction $\simeq 4\%$.
A detector downstream from a beam dump
searches for $\tau^-$ production and subsequent
decay (with a kink) in an emulsion target.
They have collected, and are presently analyzing, their data;
they expect to find $40\pm12$ $\nu_\tau$ interactions \cite{ref:Thomas}.
At TAU 98, they showed \cite{ref:Thomas} an event
consistent with such an interaction; see Fig.~\ref{fig:donutevt}.
If/when a sufficient number of such events are
observed and studied, there will finally be
direct observational evidence for the tau neutrino.
\begin{figure}[ht]
\psfig{figure=figs/donutevt.ps,width=2.6in}
\caption[]{Candidate event for $\nu_\tau\to \tau X$, $\tau\to \mu\nu\nu$,
in the DONUT emulsion.
There is a 100 mrad kink 4.5 mm from the interaction vertex.
The scale units are microns \cite{ref:Thomas}.}
\label{fig:donutevt}
\end{figure}
\section{WHAT NEXT?}
\label{s-whatnext}
The frontiers of tau physics center on ever-higher precision,
farther reach for rare and forbidden phenomena,
and a deeper study of neutrino physics.
The next generation of accelerator-based
experiments will search for
neutrino oscillations with the small $\Delta m^2$
suggested by the Super-K results,
and do precision studies of $\tau$ decays
with samples exceeding $10^8$ events.
\subsection{Long-baseline neutrino oscillations}
\label{ss-longbase}
Several ``long-baseline'' experiments are planned,
in which $\nu_\mu$ beams are produced at accelerators,
allowed to drift (and hopefully, oscillate
into $\nu_e$, $\nu_\tau$, or sterile neutrinos)
for many km, and are then detected.
The motivation comes from the Super-K results,
which suggest $\nu_\mu \leftrightarrow \nu_\tau$ oscillations
with $\Delta m^2$ in the $10^{-2} - 10^{-3}$ eV$^2$ range.
For such small $\Delta m^2$ values,
accelerator-produced $\nu_\mu$ beams must travel many kilometers
in order to get appreciable $\nu_\mu\to \nu_\tau$ conversion.
At Fermilab, an intense, broad-band $\nu_\mu$ beam
is under construction (NuMI).
The MINOS experiment consists of
two nearly identical detectors which will search for
neutrino oscillations with this beam \cite{ref:Thomas}.
The near detector will be on the Fermilab site.
The neutrino beam will pass through Wisconsin,
and neutrino events will be detected
at a far detector
in the Soudan mine in Minnesota, 720 km away.
The experiment is approved, and is scheduled
to turn on in 2002.
From the disappearance of $\nu_\mu$'s between the
near and far detectors,
MINOS will be sensitive \cite{ref:Thomas}
to oscillations with mixing angle
$\sin^2 2\theta \stackrel{>}{\scriptstyle\sim} 0.1$
for $\Delta m^2 \stackrel{>}{\scriptstyle\sim} 2\times 10^{-3}$ eV$^2$.
From the appearance of an excess of $\nu_e$ events
in the far detector, they are sensitive to
$\nu_\mu\to\nu_e$ mixing down to
$\sin^2 2\theta \stackrel{>}{\scriptstyle\sim} 0.002$
for $\Delta m^2 \stackrel{>}{\scriptstyle\sim} 2\times 10^{-2}$ eV$^2$.
MINOS may be able to detect the appearance
of $\nu_\tau$'s from
$\nu_\mu\to\nu_\tau$ oscillations
using the decay mode
$\tau\to\pi\nu_\tau$.
They expect a sensitivity for such oscillations of
$\sin^2 2\theta \stackrel{>}{\scriptstyle\sim} 0.21$
for $\Delta m^2 \stackrel{>}{\scriptstyle\sim} 2\times 10^{-2}$ eV$^2$.
At CERN, the {\bf N}eutrino Beam to {\bf G}ran {\bf S}asso
(NGS) project plans on building several neutrino detectors
in the Gran Sasso Laboratory in central Italy,
732 km from the CERN wide-band $\nu_\mu$ beam.
The energy and flux of the CERN $\nu_\mu$ beam
are both somewhat higher than is planned for NuMI at Fermilab.
The ICARUS experiment \cite{ref:Bueno},
using liquid argon TPC as a target and detector,
is optimized for $\nu_\tau$ and $\nu_e$ appearance.
It is approved and expects to be taking data
as soon as the beam is available (2003?).
They can search for $\nu_\mu\to \nu_e$ appearance down to
$\sin^2 2\theta \stackrel{>}{\scriptstyle\sim} 1\times 10^{-3}$,
and $\nu_\mu\to \nu_\tau$ appearance down to
$\sin^2 2\theta \stackrel{>}{\scriptstyle\sim} 5\times 10^{-3}$,
for $\Delta m^2 \stackrel{>}{\scriptstyle\sim} 2\times 10^{-2}$ eV$^2$.
A projected exclusion plot is shown in Fig.~\ref{fig:icarustau}.
\begin{figure}[ht]
\psfig{figure=figs/icarustau.ps,width=2.6in}
\caption[]{ICARUS excluded region
for $\nu_\mu \to \nu_\tau$ oscillations
if no signal is observed \cite{ref:Bueno}.
The region favored by Super-K
is shown near $\sin^2 2\theta = 1$,
with $\Delta m^2$ between $10^{-1} - 10^{-3}$ eV$^2$.}
\label{fig:icarustau}
\end{figure}
Several other detectors are being proposed: OPERA \cite{ref:Bueno}
(a lead-emulsion stack),
NICE (iron and scintillator),
AQUA-RICH (water $\check{\mbox{C}}$erenkov), and NOE.
The NOE detector \cite{ref:Scapparone}
will consist of TRD and calorimeter modules,
optimized for the detection of electrons from
$\nu_\mu\to \nu_e\to e X$,
and also from $\nu_\mu\to\nu_\tau\to \tau X$,
$\tau\to e\nu\nu$ (with missing $p_t$).
They also hope to measure the rate for neutral current events
relative to charged current events.
If the interpretation of the Super-K data is correct,
we will soon have a wealth of data to pin down
the oscillation parameters for
$\nu_\mu\leftrightarrow\nu_e$, $\nu_\tau$, and $\nu_{sterile}$.
\subsection{High luminosity $e^+e^-$ Colliders}
\label{ss-collider}
The IHEP Laboratory in Beijing has been operating
the BEPC collider and the BES detector,
with $e^+e^-$ collisions at or above tau pair threshold,
for many years. They are pursuing the physics of taus
(their measurement of the tau mass totally dominates
the world average), $\psi$ spectroscopy and decay,
and charm.
They have proposed the construction
of a much higher luminosity tau-charm factory (BTCF).
However,
because of the limitation of funds, the BTCF will
not be started for at least 5 years \cite{ref:Qi}.
The plan for the near term is to continue to improve
the luminosity of the existing BEPC collider.
In tau physics, they
hope to produce results from the BES experiment at BEPC
on $m_{\nu_\tau}$, where they favor \cite{ref:Qi}
the use of the decay mode $\tau\to K\bar{K}\pi\nu_\tau$.
Within the next two years, three new ``B Factories'',
high luminosity $e^+e^-$ colliders with
center of mass energies around
10.58 GeV (the $\Upsilon(4S) \to B\bar{B}$ resonance)
will come on line:
the upgraded CLEO III detector at the CESR collider;
BaBar at SLAC's PEP-II collider \cite{ref:Seiden};
and BELLE at KEK's KEKB collider \cite{ref:Oshima}.
All these colliders and detectors expect to begin operation
in 1999. The BaBar and BELLE experiments operate with asymmetric beam
energies, to optimize the observation of CP violation in the $B$
mixing and decay; CLEO will operate with symmetric beams.
At design luminosities, these experiments expect to collect
between $10^7$ and $10^8$ tau pair events per year.
In a few years, one can expect more than $\sim 10^{8}$ events
from BaBar, BELLE, and CLEO-III.
The asymmetric beams at BaBar and BELLE should not present
too much of a problem (or advantage) for tau physics;
it may help for tau lifetime measurements.
The excellent $\pi$-$K$ separation that the detectors
require for their $B$ physics goals will also
be of tremendous benefit in the study of
tau decays to kaons.
BaBar and BELLE will also have improved ability
to identify low-momentum muons,
which is important for Michel parameter measurements.
The high luminosities will make it possible
to improve the precision of almost all the
measurements made to date in tau decays,
including the branching fractions, Michel parameters,
and resonant substructure in multi-hadronic decays.
In particular, they will be able to
study rare decays (such as $\eta X\nu_\tau$ or $7\pi\nu_\tau$)
with high statistics.
They will search with higher sensitivity for
forbidden processes, such as LFV neutrinoless decays
and CP violation due to an
electric dipole moment \cite{ref:Seiden,ref:Oshima}
in tau production or a charged Higgs in tau decay.
And they will be able to search for a tau neutrino mass
below the current 18 MeV/c$^2$ best limit.
At the next Tau Workshop, we look forward to a raft of new results
from DONUT, BES, the B Factories, the Tevatron, and the long-baseline
neutrino oscillation experiments.
I expect that that meeting will be full of
beautiful results, and maybe some surprises.
\section{Acknowledgements}
\label{s-ack}
I thank the conference organizers for a thoroughly stimulating
and pleasant conference.
This work is supported by the U.S.~Department of Energy.
\section{I. Introduction}
The quantization of a gauge field in the continuum requires gauge fixing.
The standard gauge for perturbative calculations is the covariant gauge,
in which the theory can be regularized and renormalized systematically.
On the other hand, noncovariant gauges, though computationally more involved,
possess a number of advantages of their own.
Take Coulomb gauge as an example: all degrees of freedom there are
physical and unitarity is manifest. The Coulomb propagator alone gives
rise to the renormalization of the coupling constant [1] [2].
In addition, the explicit solubility of the Gauss law constraint in
Coulomb gauge might make it easier to construct a variational wave functional
to explore the nonperturbative physics of a nonabelian gauge field.
The difficulties with noncovariant gauges of a nonabelian gauge theory
include 1) the complication of the curvilinear coordinates; 2) the lack of
manifest BRST and Lorentz invariances; 3) the additional divergence, the
energy divergence, because of the instantaneous nature of the bare Coulomb
propagator and the ghost propagator;
4) the lack of a unitary regularization scheme which can be applied to
all orders and therefore the uncertainty of the renormalizability.
The operator ordering associated with the curvilinear coordinates
has been investigated by Christ and Lee [3]. They
derived the path integral in a noncovariant
gauge from the Weyl ordering of the corresponding Hamiltonian
and found that the effective Lagrangian contains
two nonlocal terms, referred to as Christ-Lee terms, in addition to the
classical Lagrangian and the ghost determinant. These terms start to show
up at the two loop order and their diagrammatic implications have been
discussed to that order [4][5].
Here, I approach the noncovariant gauge strictly {\it{within}} the
path integral formulation, along the line of Refs. 6 and 7. Starting with
the discrete time path integral in the time
axial gauge, the change of the integration variables to other gauges
and the continuous time
limit are examined carefully and the Christ-Lee terms are
reproduced. In the presence of quark fields, a new nonlocal term
of the effective Lagrangian involving fermion bilinears, which
starts to show up at the {\it{one loop}} order is discovered.
According to Feynman [8], a path integral of a quantum mechanical system
is the $\epsilon\to 0$ limit of a multiple integral over canonical coordinates
on a one dimensional lattice of time slices separated by $\epsilon$.
The exponent of the weighting factor on each time slice is equal to
$i\epsilon$ times the classical Lagrangian only if the canonical coordinates
are cartesian [2]. This is the case of the time-axial gauge of a gauge theory.
Even there, the velocities in the Lagrangian are the {\it mean} velocities between
the neighboring time slices instead of instantaneous ones. When transforming
the integration variables to curvilinear ones, or to other gauges, e.g.
Coulomb gauge, one has to keep track of all the contributions to the limit
$\epsilon\to 0$, which introduces the Christ-Lee operator ordering terms in
addition to the classical Lagrangian in terms of the new coordinates and the
corresponding jacobian. For the same reason,
the classical Lagrangian with the mean velocities is not exactly invariant
under a gauge transformation which depends on canonical coordinates and
the variation contributes to the limit $\epsilon\to 0$.
The standard form of the BRST identity is only recovered after
including the Christ-Lee terms. Alternatively, one may retain a nonvanishing
$\epsilon$ and this leads us to a gauge theory with discrete time
coordinates, which is manifestly BRST invariant.
For a field theory, this discrete time formulation serves as a unitary
regularization scheme, which regularizes the energy divergence and the
ordinary ultraviolet divergence in one shot. It also possesses several
technical advantages which may be helpful in higher orders.
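Feynman's construction can be made concrete in ordinary quantum mechanics: each time slice contributes one factor of a transfer matrix, and physical quantities emerge in the $\epsilon\to 0$ limit. The sketch below (a harmonic oscillator in Euclidean time, units $\hbar=m=\omega=1$) is purely illustrative and is independent of the gauge-theory construction discussed here:

```python
import numpy as np

eps = 0.1                        # time-slice spacing
x = np.linspace(-6.0, 6.0, 241)  # spatial grid
h = x[1] - x[0]

def V(x):
    return 0.5 * x**2            # harmonic potential

# Discrete-time kernel <x|exp(-eps*H)|x'> with symmetric potential splitting
X, Xp = np.meshgrid(x, x, indexing="ij")
K = np.exp(-(X - Xp)**2 / (2.0 * eps) - eps * (V(X) + V(Xp)) / 2.0)
T = h * K / np.sqrt(2.0 * np.pi * eps)   # include free-particle normalization

# The largest eigenvalue of the transfer matrix is exp(-eps*E0)
lam = np.linalg.eigvalsh(T).max()
E0 = -np.log(lam) / eps
print(f"ground-state energy from the sliced path integral: {E0:.4f}")
```

The result approaches the exact value $1/2$ as $\epsilon\to 0$, with discretization errors of ${\cal O}(\epsilon^2)$; for a gauge field the same slicing is what generates the operator ordering terms tracked in the text.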
This paper is organized as follows: I will illustrate the technique of
the gauge fixing within the path integral formulation and the derivation
of the BRST identity of the soluble model of Ref. [5] in the next two
sections. The application to a nonabelian gauge field is discussed in
Sections IV and V. There I will also test the discrete time regularization
scheme by evaluating the one loop correction to the Coulomb propagator.
The comparison of my regularization scheme with others and
some comments on the renormalizability will be discussed in the
final section. Apart from the fermionic
operator ordering term, the interplay between the BRST identity and
the operator ordering terms, and the discrete time regularization scheme,
the results presented here are not new. But it is instructive to see how the
operator ordering terms come about without referring to the operator
formulations. It is also remarkable how similar in formulation the
soluble model and the nonabelian gauge field are.
\section{II. The Path Integral of a Soluble Model}
The soluble model proposed by Friedberg, Lee, Pang and the author
[5] provides a playground for the investigation of gauge fixing
and the BRST invariance in some nonstandard gauges within the
path integral formulation.
The Lagrangian of the soluble model is
$$L={1\over 2}[(\dot x+g\xi y)^2+(\dot y-g\xi x)^2+(\dot z-\xi)^2]
-U(x^2+y^2).\eqno(2.1)$$ It is invariant under the following
gauge transformation
$$x\to x^\prime=x\cos\alpha-y\sin\alpha,\eqno(2.2)$$
$$y\to y^\prime=x\sin\alpha+y\cos\alpha,\eqno(2.3)$$
$$z\to z^\prime=z+{1\over g}\alpha\eqno(2.4)$$
and $$\xi\to \xi^\prime=\xi+{1\over g}\dot\alpha\eqno(2.5)$$
with $\alpha$ an arbitrary function of time. The Lagrangian (2.1)
does not contain the time derivative of $\xi$ and the corresponding
equation of motion reads
$${\partial L\over \partial\xi}=g[y(\dot x+g\xi y)-x(\dot y-g\xi x)]
-\dot z+\xi=0,\eqno(2.6)$$ which is the analog of the Gauss law of
a gauge field. In the following,
we shall review the canonical quantization in the time-axial gauge,
i.e., $\xi=0$, convert it into a path integral and carefully transform
the path integral into the $\lambda$-gauge, i.e., $z=\lambda x$,
an analog of the Coulomb gauge.
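The gauge invariance of (2.1) under (2.2)-(2.5) can be verified symbolically; the following sympy sketch (a check of mine, not part of the original derivation) tests the kinetic terms and the argument of the arbitrary potential $U$ separately:

```python
# Symbolic check (a sketch of mine, using sympy) that the Lagrangian (2.1)
# is invariant under the gauge transformation (2.2)-(2.5).  The kinetic part
# and the argument of the arbitrary potential U are checked separately.
import sympy as sp

t = sp.symbols('t')
g = sp.symbols('g', positive=True)
x, y, z, xi, alpha = (sp.Function(n)(t) for n in ('x', 'y', 'z', 'xi', 'alpha'))

def kinetic(X, Y, Z, XI):
    d = lambda f: sp.diff(f, t)
    return sp.Rational(1, 2)*((d(X) + g*XI*Y)**2
                              + (d(Y) - g*XI*X)**2 + (d(Z) - XI)**2)

# transformed variables, eqs. (2.2)-(2.5)
xp = x*sp.cos(alpha) - y*sp.sin(alpha)
yp = x*sp.sin(alpha) + y*sp.cos(alpha)
zp = z + alpha/g
xip = xi + sp.diff(alpha, t)/g

d_kin = sp.simplify(sp.expand(kinetic(xp, yp, zp, xip) - kinetic(x, y, z, xi)))
d_arg = sp.simplify(sp.expand(xp**2 + yp**2 - x**2 - y**2))
```

Both residuals vanish identically, so $U(x^2+y^2)$ and the kinetic energy are separately invariant for arbitrary $\alpha(t)$.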
In the time-axial gauge, the Lagrangian (2.1) becomes
$$L={1\over 2}(\dot X^2+\dot Y^2+\dot Z^2)-U(X^2+Y^2).\eqno(2.7)$$
The canonical momenta corresponding to $X$, $Y$, and $Z$ are
$$P_X=\dot X=-i{\partial\over\partial X},\eqno(2.8)$$
$$P_Y=\dot Y=-i{\partial\over\partial Y}\eqno(2.9)$$ and
$$P_Z=\dot Z=-i{\partial\over\partial Z},\eqno(2.10)$$
and the Hamiltonian operator reads
$$H={1\over 2}(P_X^2+P_Y^2+P_Z^2)+U(X^2+Y^2).\eqno(2.11)$$
The physical states in the Hilbert space are subject to the
Gauss law constraint, i.e.
$$[P_Z+g(XP_Y-YP_X)]\vert>=0,\eqno(2.12)$$ as follows from (2.6), and
the operator $P_Z+g(XP_Y-YP_X)$ commutes with $H$.
In terms of polar coordinates, $X=\rho\cos\Phi$ and $Y=\rho\sin\Phi$,
the wave function of a physical state takes the form
$$<X,Y,Z\vert>=\Psi(\rho, \Phi-gZ).\eqno(2.13)$$ For a harmonic oscillator
potential, $U={1\over 2}\omega^2(X^2+Y^2)$, the energy spectrum is given by
$$E=\omega(n_++n_-+1)+{1\over 2}g^2(n_+-n_-)^2\eqno(2.14)$$ with
$n_+$, $n_-$ non-negative integers, and the corresponding eigenfunction
can be expressed in terms of Laguerre polynomials.
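That any wave function of the form (2.13) satisfies the constraint (2.12) can be checked directly; a sympy sketch (mine, with $\Psi$ a generic smooth function):

```python
# A sympy sketch (mine) that any wave function of the form (2.13),
# Psi(rho, Phi - g*Z), satisfies the Gauss law constraint (2.12) with
# P = -i d/d(coordinate).
import sympy as sp

X, Y, Z, g = sp.symbols('X Y Z g', real=True)
Psi = sp.Function('Psi')

# polar coordinates: rho = sqrt(X**2+Y**2), Phi = atan2(Y, X)
wf = Psi(sp.sqrt(X**2 + Y**2), sp.atan2(Y, X) - g*Z)
P = lambda q: -sp.I*sp.diff(wf, q)

constraint = sp.simplify(P(Z) + g*(X*P(Y) - Y*P(X)))
```

The constraint reduces to zero because $XP_Y-YP_X$ generates $-i\partial/\partial\Phi$, which cancels the $Z$-derivative acting on the second argument of $\Psi$.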
Following Feynman, the transition matrix element
$<X,Y,Z\vert e^{-iHt}\vert >$ can be cast
into a path integral $$<X,Y,Z\vert e^{-iHt}\vert >=\lim_{\epsilon\to 0}
\Big({1\over 2i\pi\epsilon}\Big)^{{3\over 2}N}\int\prod_{n=0}^{N-1}
dX_ndY_ndZ_n$$ $$\times e^{i\epsilon\sum_{n=0}^{N-1}L(n)}<X_0,Y_0,Z_0\vert>,
\eqno(2.15)$$ where $\epsilon=t/N$ and
$$L(n)={1\over 2}(\dot X_n^2+\dot Y_n^2+\dot Z_n^2)-U(X_n^2+Y_n^2)
\eqno(2.16)$$ with $\dot X_n=(X_{n+1}-X_n)/\epsilon$, etc. In the remainder
of the paper, the limit sign and normalization factors like
$(2i\pi\epsilon)^{-{3\over 2}N}$ will not be displayed explicitly.
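The time-sliced construction (2.15) can also be illustrated numerically. The sketch below is my own illustration, rotated to Euclidean signature so that the sliced integrals converge absolutely; the short-time kernel is the symmetric (Trotter) factorization for a harmonic oscillator potential, and the exact ground state energy is recovered as $\epsilon\to 0$:

```python
# Numerical illustration (mine, not from the text) of the time-sliced path
# integral.  The time is rotated to Euclidean signature so the weight
# exp(i*eps*L) becomes exp(-eps*L_E); the short-time kernel below is the
# symmetric (Trotter) factorization for a harmonic oscillator U = x**2/2
# (omega = 1), whose exact ground state energy is 1/2.
import numpy as np

def ground_energy(eps, L=10.0, M=400):
    """Ground state energy extracted from the transfer matrix of one slice."""
    x = np.linspace(-L/2, L/2, M)
    dx = x[1] - x[0]
    U = 0.5*x**2
    K = np.exp(-(x[:, None] - x[None, :])**2/(2*eps)
               - eps*(U[:, None] + U[None, :])/2)/np.sqrt(2*np.pi*eps)
    lam = np.linalg.eigvalsh(K*dx).max()  # largest transfer-matrix eigenvalue
    return -np.log(lam)/eps               # E_0(eps) -> 1/2 as eps -> 0

for eps in (0.4, 0.2, 0.1):
    print(eps, ground_energy(eps))
```

The deviation of $E_0(\epsilon)$ from $1/2$ shrinks roughly as $\epsilon^2$, a discrete-time artifact of the same kind as the terms tracked in the $\epsilon$ expansions below.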
As was pointed out in [3] and [5], the dominant contributions to the path
integral (2.15) come from velocities as large as
$$(\dot X_n, \dot Y_n, \dot Z_n)=O\Big(\epsilon^{-{1\over 2}}\Big).\eqno(2.17)$$
In other words, the contribution to the path
integral comes from paths which can be more zigzag than classical
ones. This has to be taken into account
in variable transformations. The magnitudes of $X_n$, $Y_n$ and $Z_n$,
on the other hand, remain of order one for a well defined
initial wave function $<X_0,Y_0,Z_0\vert>$. To transform the path
integral (2.15) to $\lambda$-gauge, i.e., $z=\lambda x$, we insert the
identity $$1={\rm{const.}}\int\prod_{n=0}^{N-1}d\theta_n{\cal J}_n\delta
(z_n-\lambda x_n),\eqno(2.18)$$ with
$$x_n=X_n\cos\theta_n-Y_n\sin\theta_n,\eqno(2.19)$$
$$y_n=X_n\sin\theta_n+Y_n\cos\theta_n,\eqno(2.20)$$
$$z_n=Z_n+{1\over g}\theta_n\eqno(2.21)$$ and $${\cal J}_n=
{1\over g}+\lambda y_n.\eqno(2.22)$$ We then have
$$<X,Y,Z\vert e^{-iHt}\vert >={\rm{const.}}\int\prod_{n=0}^{N-1}
dX_ndY_ndZ_nd\theta_n{\cal J}_n\delta(z_n-\lambda x_n)\times$$ $$\times
e^{i\epsilon\sum_{n=0}^{N-1}L(n)}<X_0,Y_0,Z_0\vert>.\eqno(2.23)$$
Introducing back $\xi_n$ via $$\xi_n={1\over g}\dot\theta_n
={\theta_{n+1}-\theta_n\over g\epsilon}\eqno(2.24)$$ and changing the
integration variables from $X_n$, $Y_n$, $Z_n$ and $\theta_n$ to $x_n$,
$y_n$, $z_n$ and $\xi_n$, we obtain that
$$<X,Y,Z\vert e^{-iHt}\vert >={\rm{const.}}\int\prod_{n=0}^{N-1}
dx_ndy_ndz_nd\xi_n\delta(z_n-\lambda x_n)\times$$ $$\times
e^{i\epsilon\sum_{n=0}^{N-1}L^\prime(n)}<x_0,y_0,z_0\vert>,\eqno(2.25)$$
where $$L^\prime(n)=L(n)-{i\over\epsilon}\ln{\cal J}_n\eqno(2.26)$$
with $L(n)$ the same Lagrangian (2.16). Written in terms of the new
variables, $L(n)$ becomes
$$L(n)={1\over 2\epsilon^2}(\tilde r_{n+1}e^{-i\epsilon g\xi_n\sigma_2}
-\tilde r_n)(e^{i\epsilon g\xi_n\sigma_2}r_{n+1}-r_n)
+{1\over 2}(\dot z_n-\xi_n)^2-U(\tilde r_n r_n),\eqno(2.27)$$
where we have grouped $x_n$ and $y_n$ into a $2\times 1$ matrix
$$r_n=\left(\matrix{x_n\cr y_n}\right)\eqno(2.28)$$
and $\sigma_2$ is the second Pauli matrix.
For finite $\epsilon$, (2.25) with (2.26)-(2.27) defines a one-dimensional
lattice gauge theory and the limit $\epsilon\to 0$ corresponds to its
continuum limit. As $\epsilon\to 0$, it follows from (2.17),
(2.19)-(2.21) and (2.24) that
$$(\dot x_n, \dot y_n, \dot z_n, \xi_n)=O(\epsilon^{-{1\over 2}}).
\eqno(2.29)$$ Several terms beyond the naive continuum limit
have to be kept when expanding the exponential $e^{i\epsilon g\xi_n\sigma_2}$
in (2.27) in powers of $\xi_n$ [3]. In addition, the commutativity between time derivative and path
integral contractions requires the Lagrangian to be written in terms of
$\dot x_n$, $\dot y_n$, $\dot z_n$, $\xi_n$, $\bar x_n$, $\bar y_n$ and
$\bar z_n$, with $\bar x_n=(x_n+x_{n+1})/2$, etc. In other words, we write
$$L^\prime(n)\equiv L^\prime(\dot x_n,\dot y_n,\dot z_n,\xi_n,x_n,
y_n,z_n)=L^\prime(\dot x_n,\dot y_n,\dot z_n,\xi_n,\bar x_n,
\bar y_n,\bar z_n)+\delta L^\prime(n)\eqno(2.30)$$ with
$$\delta L^\prime(n)=L^\prime(\dot x_n,\dot y_n,\dot z_n,\xi_n,x_n,
y_n,z_n)-L^\prime(\dot x_n,\dot y_n,\dot z_n,\xi_n,\bar x_n,\bar y_n,
\bar z_n).\eqno(2.31)$$ According to the estimate (2.29), the
contribution from the potential energy $U$ to the difference (2.31)
vanishes in the limit $\epsilon\to 0$, but those from the kinetic
energy and from the jacobian (2.22) {\it{do not}}. With this precaution,
we rewrite $L^\prime(n)$ as
$$L^\prime(n)=L_{\rm{eff.}}(n)+{i\over 2\epsilon}(\ln{\cal J}_{n+1}
-\ln{\cal J}_n),\eqno(2.32)$$ where
$$L_{\rm{eff.}}(n)=L(n)-{i\over\epsilon}\ln\bar{\cal J}_n+{i\over\epsilon}
\Big[\ln\bar{\cal J}_n-{1\over 2}
(\ln{\cal J}_{n+1}+\ln{\cal J}_n)\Big]\eqno(2.33)$$ with
$$\bar{\cal J}_n={1\over g}+\lambda\bar y_n.\eqno(2.34)$$
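The bracketed difference in (2.33) is a genuine $O(\epsilon)$ effect because $y_{n+1}-y_n=\epsilon\dot y_n=O(\epsilon^{1\over 2})$; the following sympy sketch (my own check) confirms that $i/\epsilon$ times this difference reproduces the jacobian term of $\Delta L(n)$ in (2.38) below:

```python
# sympy sketch (mine): the bracketed midpoint difference in (2.33), expanded
# for small d = y_{n+1} - y_n = eps*ydot = O(eps**(1/2)), times i/eps,
# reproduces the jacobian term of Delta L in (2.38).
import sympy as sp

g, lam, ybar, eps, ydot = sp.symbols('g lambda ybar epsilon ydot', positive=True)
d = sp.symbols('d')
J = lambda y: 1/g + lam*y   # the jacobian (2.22)

diff_mid = (sp.log(J(ybar))
            - sp.Rational(1, 2)*(sp.log(J(ybar + d/2)) + sp.log(J(ybar - d/2))))
lead = sp.series(diff_mid, d, 0, 4).removeO()        # only the d**2 term survives
term = sp.simplify(sp.I/eps*lead.subs(d, eps*ydot))
target = sp.I*eps*lam**2*g**2*ydot**2/(8*(1 + lam*g*ybar)**2)
```

The residual `term - target` simplifies to zero, i.e. the midpoint mismatch supplies exactly the term ${i\over 8}\epsilon\lambda^2g^2\dot y_n^2/(1+\lambda g\bar y_n)^2$.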
The path integral (2.25) becomes then
$$<X,Y,Z\vert e^{-iHt}\vert >={\rm{const.}}{\cal J}_N^{-{1\over 2}}
\int\prod_{n=0}^{N-1}dx_ndy_ndz_nd\xi_n\delta(z_n-\lambda x_n)\times$$
$$\times e^{i\epsilon\sum_{n=0}^{N-1}L_{\rm{eff.}}(n)}
{\cal J}_0^{1\over 2}<x_0,y_0,z_0\vert>.\eqno(2.35)$$
Now it is time to take the limit $\epsilon\to 0$.
The small $\epsilon$ expansion of $L_{\rm{eff.}}(n)$ reads
$$L_{\rm{eff.}}(n)=L_{\rm{cl.}}(n)-{i\over\epsilon}\ln
\bar{\cal J}_n+\Delta L(n)+O(\epsilon^{1\over 2}),\eqno(2.36)$$ where
$$L_{\rm{cl.}}(n)={1\over 2}[(\dot x_n+g\xi_n\bar x_n)^2+
(\dot y_n-g\xi_n\bar y_n)^2+(\dot z_n-\xi_n)^2]-U(\bar x_n^2+\bar y_n^2)
\eqno(2.37)$$ is the classical Lagrangian but with {\it mean}
velocities and $$\Delta L(n)=-{1\over 8}\epsilon^2g^2\xi_n^2(
\tilde{\dot r_n}-ig\xi_n\tilde{\bar r_n}\sigma_2)\Big(\dot r_n
+{i\over 3}g\sigma_2\xi_n\bar r_n\Big)+{i\over 8}\epsilon
{\lambda^2g^2\dot y_n^2\over (1+\lambda g\bar y_n)^2}.\eqno(2.38)$$
The first term of $\Delta L(n)$ comes from the kinetic energy, the
second term from the jacobian. It follows from (2.29) that $\Delta L(n)
=O(1)$ and the terms not displayed all vanish as $\epsilon\to 0$.
It remains to convert $\Delta L(n)$ into an equivalent potential
(to eliminate the explicit $\epsilon$ dependence). The recipe was given
by Gervais and Jevicki [6], which we shall outline here.
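The elementary moment underlying this recipe is the Fresnel average $<\zeta^2>=i/\epsilon$ of (2.46); it follows from the real Gaussian moment by the continuation $s\to-i\epsilon$, as the following sympy sketch (my own check) shows:

```python
# sympy sketch (mine): the basic moment of the Gauss average.  The real
# Gaussian moment <z**2> = 1/s, continued to s = -i*eps, gives the Fresnel
# moment i/eps used in the Wick contractions, cf. (2.46).
import sympy as sp

z = sp.symbols('z', real=True)
s, eps = sp.symbols('s epsilon', positive=True)

num = sp.integrate(z**2*sp.exp(-s*z**2/2), (z, -sp.oo, sp.oo))
den = sp.integrate(sp.exp(-s*z**2/2), (z, -sp.oo, sp.oo))
moment = sp.simplify(num/den)               # equals 1/s
continued = sp.simplify(moment.subs(s, -sp.I*eps))  # equals i/eps
```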
We assume that the integrations on the time slices $t=0$, $\epsilon$,
$2\epsilon$, ..., $(n-1)\epsilon$ have been carried out and we are left
with $$<X,Y,Z\vert e^{-iHt}\vert >={\rm{const.}}{\cal J}_N^{-{1\over 2}}
\int\prod_{m=n+1}^{N-1}dx_mdy_mdz_md\xi_m$$ $$\times\delta(z_m-\lambda x_m)
e^{i\epsilon\sum_{m=n+1}^{N-1}L_{\rm{eff.}}(m)}$$
$$\times\int dx_ndy_ndz_nd\xi_n{\cal J}_n^{1\over 2}\delta(z_n-\lambda x_n)
e^{i\epsilon L_{\rm{cl.}}(n)}[1+i\epsilon\Delta L(n)]<x_n,y_n,z_n\vert e^
{-in\epsilon H}\vert>,\eqno(2.39)$$ where the corresponding transition matrix
element, $<x_n,y_n,z_n|e^{-in\epsilon H}|>$ is a smooth function
of $x_n$, $y_n$ and $z_n$. The structure of $\Delta L(n)$ is
$$\Delta L(n)=\sum_lC_l(\bar x_n,\bar y_n)P_l(n)\epsilon^{{n_l\over 2}}
\eqno(2.40)$$ with $P_l(n)$ a product of $\dot x_n$, $\dot y_n$ and $\xi_n$
and $n_l$ the number of factors. Changing the
integration variables from $x_n$, $y_n$ and $\xi_n$ to $\dot x_n$,
$\dot y_n$ and $\xi_n$ while replacing ($x_n$, $y_n$) by
($x_{n+1}-\epsilon \dot x_n$, $y_{n+1}-\epsilon \dot y_n$) and
($\bar x_n$, $\bar y_n$) by ($x_{n+1}-\epsilon \dot x_n/2$,
$y_{n+1}-\epsilon \dot y_n/2$), we have,
upon a Taylor expansion in terms of $\epsilon \dot x_n$ and
$\epsilon\dot y_n$, that
$$(2.39)={\rm{const.}}{\cal J}_N^{-{1\over 2}}\int\prod_{m=n+1}^{N-1}
dx_mdy_mdz_md\xi_m\delta(z_m-\lambda x_m)
e^{i\epsilon\sum_{m=n+1}^{N-1}L_{\rm{eff.}}(m)}$$
$$\int dx_ndy_ndz_nd\xi_n{\cal J}_n^{1\over 2}\delta(z_n-\lambda x_n)
e^{i\epsilon L_{\rm{cl.}}(n)}[1-i\epsilon{\cal V}(\bar x_n,\bar y_n)
+O(\epsilon^{3\over 2})]$$ $$\times<x_n,y_n,z_n\vert e^{-in\epsilon H}\vert>,
\eqno(2.41)$$ where $${\cal V}(\bar x_n,\bar y_n)=
-<\Delta L(n)>_{\rm{Gauss}}=-\sum_lC_l(\bar x_n,\bar y_n)<P_l(n)>_
{\rm{Gauss.}}\eqno(2.42)$$ and the Gauss average $<...>_{\rm{Gauss}}$ is
defined to be $$<F(n)>_{\rm{Gauss}}={\int d\dot x_nd\dot y_nd\xi_ne^{i\epsilon
L_{\rm{cl.}}(n)}F(n)\over \int d\dot x_nd\dot y_nd\xi_ne^{i\epsilon
L_{\rm{cl.}}(n)}}\eqno(2.43)$$ while regarding $\bar x_n$ and $\bar y_n$
constants. Such a procedure is valid even if a term linear in
$\dot x_n$, $\dot y_n$ and $\xi_n$ with coefficients of order one
is added to $L_{\rm{cl.}}(n)$, as will be the case when external sources are
introduced to generate various Green's functions. Introducing a $3\times 1$
matrix $$\left(\matrix{\zeta_{1n} \cr\zeta_{2n} \cr\zeta_{3n}}\right)
=\left(\matrix{\dot x_n+g\bar y_n\xi_n \cr\dot y_n-g\bar x_n\xi_n
\cr\xi_n-\lambda\dot x_n}\right),\eqno(2.44)$$ we have
$$\left(\matrix{\dot x_n \cr\dot y_n \cr\xi_n}\right)=
{1\over 1+\lambda g\bar y_n}\left(\matrix{1&0&-g\bar y_n \cr\lambda
g\bar x_n&1+\lambda g\bar y_n&g\bar x_n \cr\lambda&0&1 \cr}\right)
\left(\matrix{\zeta_{1n} \cr\zeta_{2n} \cr\zeta_{3n}}\right).
\eqno(2.45)$$ It follows from (2.43) that
$$<\zeta_{in}\zeta_{jm}>_{\rm{Gauss}}={i\over\epsilon}\delta_{nm}\delta_{ij}
.\eqno(2.46)$$ Working out the Wick contractions in (2.42) according to
(2.45) and (2.46), we end up with
$${\cal V}(x,y)=-{g^2(2+3\lambda^2)+\lambda^3g^3y\over
8(1+\lambda gy)^3}+{\lambda^2g^4x^2(1+\lambda^2)\over 8(1+\lambda gy)^4}
,\eqno(2.47)$$ which agrees with the result obtained via Weyl ordering.
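As a consistency check (mine, not part of the original text), the inversion (2.45) of the linear relation (2.44) can be verified symbolically:

```python
# sympy sketch (mine) verifying that (2.45) is the inverse of the linear
# relation (2.44) between (zeta_1, zeta_2, zeta_3) and (xdot, ydot, xi).
import sympy as sp

g, lam, xbar, ybar = sp.symbols('g lambda xbar ybar')
A = sp.Matrix([[1, 0, g*ybar],      # zeta = A*(xdot, ydot, xi)^T, eq. (2.44)
               [0, 1, -g*xbar],
               [-lam, 0, 1]])
D = 1 + lam*g*ybar                  # det A, the factor in front of (2.45)
B = sp.Matrix([[1, 0, -g*ybar],     # claimed inverse, eq. (2.45)
               [lam*g*xbar, D, g*xbar],
               [lam, 0, 1]])/D
residual = sp.simplify(A*B - sp.eye(3))
```

The residual is the zero matrix, and $\det A=1+\lambda g\bar y_n$ is the jacobian factor encountered above.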
The effective Lagrangian $L_{\rm{eff.}}(n)$ in (2.35) is then replaced by
that of Christ-Lee type, i.e. $${\cal L}(n)=L_{\rm{cl.}}(n)-{i\over\epsilon}
\ln\bar{\cal J}_n-{\cal V}(n).\eqno(2.48)$$
Before closing this section, I would like to remark that the subtleties
of the path integral depend strongly on the way in which the gauge
condition is introduced. Consider a general linear gauge fixing with the
insertion (2.18) replaced by
$$1={\rm{const.}}\int\prod_{n=0}^{N-1}d\theta_n{\cal J}_n
e^{-i\epsilon{1\over 2a}(z_n-\lambda x_n-\kappa\dot\xi_n
)^2}\eqno(2.49)$$ with $a$ and $\kappa$ gauge parameters like $\lambda$.
This is the discrete version of the gauge fixing used in [9] and the
$\lambda$-gauge, (2.18), corresponds to $a=0$ and $\kappa=0$.
The analysis in Appendix A gives rise to the estimates in the table I for
the typical magnitude of $\xi$ in the path integral with different choices
of the gauge parameters.
\topinsert
\begintable{Table I.}{The typical magnitude of $\xi$}
\halign to \hsize{\tabskip 10pt plus 2in\relax
\hglue 10pt#\hfil&\hfil#\hfil&\hfil#\hfil\cr
& $\kappa=0$ & $\kappa\neq 0$\cr
\tablerule
$a=0$ & $O(\epsilon^{-{1\over2}})$ & $O(1)$\cr
$a\neq 0$ & $O(\epsilon^{-{3\over2}})$ & $O(1)$\cr
}
\endtable
\endinsert
For $\kappa\neq 0$, the limit $\epsilon\to 0$ is trivial and the same
estimates apply to the gauge fixing with $x_n$ and $z_n$ in (2.49) replaced
by $\bar x_n$ and $\bar z_n$, an analog of the covariant gauge in a
relativistic field theory. But the limit $\kappa\to 0$ with a continuous
time will entail higher degrees of energy divergence for individual
diagrams.
\section{III. The BRST Identity of the Soluble Model}
There are two approaches to the BRST identity of the soluble
model (2.1). One can prove the BRST invariance by introducing ghost
variables and establishing the identity with external sources.
One may also start with the Slavnov-Taylor identity [10] and construct the
BRST identity afterwards. It turns out that the former is
more straightforward for the path integral (2.25) with the lattice
Lagrangian (2.26) and (2.27), while
the latter is more convenient with the Christ-Lee type of path
integral. We shall illustrate both approaches in the following.
\subsection{III.1 Prior to the $\epsilon$-expansion}
Introducing the ghost variables $c_n$ and $\bar c_n$ and an auxiliary
field $b_n$, the path integral (2.25) can be cast into
$$<X,Y,Z\vert e^{-iHT}\vert>={\rm{const.}}\int\prod_{n=0}^{N-1}
dx_ndy_ndz_nd\xi_ndb_ndc_nd\bar c_n\times$$ $$\times
e^{i\epsilon\sum_{n=0}^{N-1}L_{\rm{BRST}}(n)}<x_0,y_0,z_0\vert>
\eqno(3.1)$$ where
$$L_{\rm{BRST}}(n)={1\over 2\epsilon^2}(\tilde r_{n+1}
e^{-i\epsilon g\xi_n\sigma_2}-\tilde r_n)(e^{i\epsilon g\xi_n\sigma_2}
r_{n+1}-r_n)+{1\over 2}(\dot z_n-\xi_n)^2-U(\tilde r_nr_n)$$
$$+b_n(z_n-\lambda x_n)-\bar c_n(1+\lambda gy_n)c_n.\eqno(3.2)$$
The integration measure and the Lagrangian (3.2) are invariant under
the following transformation
$$\delta r_n=-ig\theta_n\sigma_2 r_n,\eqno(3.3)$$
$$\delta z_n=\theta_n,\eqno(3.4)$$
$$\delta\xi_n=\dot\theta_n,\eqno(3.5)$$
$$\delta b_n=0,\eqno(3.6)$$
$$\delta c_n=0\eqno(3.7)$$ and $$\delta\bar c_n=s_nb_n\eqno(3.8)$$
with $\theta_n=s_nc_n$ and $s_n$ a Grassmann number.
For $n$-independent $s_n\equiv s$, an operator $Q$ such that $\delta=sQ$
can be extracted. It is straightforward to show that $Q^2=0$ and the
transformation (3.3)-(3.8) is of BRST type.
To establish the BRST identity, we introduce the generating functional
of connected Green's functions,
$$e^{iW(J,\zeta,u,\eta,\bar\eta)}=\lim_{T\to\infty}
<\vert e^{-iHT}\vert>={\rm{const.}}\int\prod_{n=0}^N
dx_ndy_ndz_nd\xi_ndb_ndc_nd\bar c_n\times$$ $$\times <\vert x_N,y_N,z_N>
e^{i\epsilon\sum_n[L_{\rm{BRST}}(n)+L_{\rm{ext.}}(n)]}
<x_0,y_0,z_0\vert>,\eqno(3.9)$$ where $L_{\rm{ext.}}(n)$ stands for the
source term, i.e.
$$L_{\rm{ext.}}(n)=\tilde J_nr_n+\zeta_nz_n+u_n\xi_n+\bar
\eta_nc_n+\bar c_n\eta_n,\eqno(3.10)$$ $|>$ denotes the ground state of the
system, and the limits $\epsilon\to 0$
and $N\epsilon=T\to\infty$ are understood for the right hand side.
It follows from the transformations (3.3)-(3.8) that
$$<\delta L_{\rm{ext.}}(n)>_\eta=-ig\tilde J_n\sigma_2<c_nr_n>_\eta
+(\zeta_n-\dot u_n)<c_n>_\eta+<b_n>_\eta\eta_n=0.\eqno(3.11)$$
This is the prototype of the BRST identity and can be
converted into various useful forms.
\subsection{III.2 After the $\epsilon$-expansion}
After integrating out the ghost variables and carrying out the
$\epsilon$-expansion, the path integral (3.9) becomes
$$e^{iW(J,\zeta,u,\eta,\bar\eta)}=\lim_{T\to\infty}
<\vert e^{-iHT}\vert>={\rm{const.}}\int\prod_{n=0}^N
dx_ndy_ndz_nd\xi_ndb_n\times$$ $$\times <\vert x_N,y_N,z_N>
e^{i\epsilon\sum_n[{\cal L}(n)+L_{\rm{ext.}}(n)]}
<x_0,y_0,z_0\vert>,\eqno(3.12)$$ where
$${\cal L}(n)=L_{\rm{cl.}}(n)+b_n(z_n-\lambda x_n)
-{i\over\epsilon}\ln\bar{\cal J}_n-{\cal V}(n)\eqno(3.13)$$ is the
Lagrangian of Christ-Lee type with $L_{\rm{cl.}}(n)$ given by
(2.37), ${\cal V}(n)$ by (2.47) and $L_{\rm{ext}}(n)$
by (3.10) at $\eta_n=\bar\eta_n=0$.
One may introduce the ghost variables for the path integral (3.12), but
they will be different from the ones in (3.9),
since the argument of the jacobian ${\cal J}_n$ has been shifted from $y_n$
to $\bar y_n$. On the other hand, the BRST identity can be
constructed from the Slavnov-Taylor identity and we shall adopt this
strategy. We consider a field dependent gauge transformation
$$\delta r_n=-i\chi_n\sigma_2r_n,\eqno(3.14)$$
$$\delta z_n={1\over g}\chi_n,\eqno(3.15)$$
$$\delta\xi_n={1\over g}\dot\chi_n,\eqno(3.16)$$
where $$\chi_n={\varepsilon_n\over {1\over g}+\lambda y_n}\eqno(3.17)$$
with $\varepsilon_n$ an infinitesimal ordinary number.
Keeping in mind that the velocities $\dot x_n$, $\dot y_n$ and
$\dot z_n$, and the coordinates $\bar x_n$, $\bar y_n$ and $\bar z_n$
follow strictly the discrete time definition and the variable
transformation (3.14)-(3.16) is {\it{nonlinear}}, the variation of the
Lagrangian $L_{\rm{cl.}}(n)$ contributes to the path integral
in the limit $\epsilon\to 0$. We find that
$$\delta L_{\rm{cl.}}(n)={1\over 4}\epsilon^2g^2\xi_n\dot\chi_n
\widetilde{(Dr)_n}\dot r_n=-{1\over4}{\epsilon^2\lambda g^3
\xi_n\dot y_n\widetilde{(Dr)_n}\dot r_n\over (1+\lambda g\bar y_n)^2}
\varepsilon_n,\eqno(3.18)$$ where the terms containing $\dot\varepsilon_n$
have been dropped since $\varepsilon_n$ is assumed smooth with respect
to $n$, i.e., $\dot\varepsilon_n=O(1)$. Furthermore, the combination
$dx_ndy_ndz_nd\xi_ndb_n\bar{\cal J}_n$ ceases to be invariant, unlike
the combination $dx_ndy_ndz_nd\xi_ndb_n{\cal J}_n$. We have, instead,
$$\delta\Big(dx_ndy_ndz_nd\xi_n\Big)=dx_ndy_ndz_nd\xi_n\Delta_n
$$ with $$\Delta_n=\biggl\{{1\over 2}\Big[{x_n\over(1+\lambda gy_n)^2}
-{x_{n+1}\over(1+\lambda gy_{n+1})^2}\Big]$$ $$+{1\over 4}
{\epsilon\lambda^2g^3\dot x_n\dot y_n\over (1+\lambda g\bar y_n)^3}
-{1\over 2}{\epsilon\lambda^3g^4\bar x_n\dot y_n^2\over(1+\lambda g
\bar y_n)^4}\biggr\}\varepsilon_n.\eqno(3.19)$$ With these additional
terms, the identity (3.11) is replaced by $$\sum_n\varepsilon_n\Big[
\Big[-ig\tilde J_n\sigma_2<{1\over (1+\lambda gy_n)}r_n>+
\Big(\zeta_n-\dot u_n\Big)<{1\over 1+\lambda gy_n}>+<b_n>\Big]$$
$$+<\delta L_{\rm{cl.}}(n)-i\Delta_n-\delta{\cal V}(n)>\Big]
=0,\eqno(3.20)$$ where the term $<b_n>$ comes from the variation
of the gauge fixing term of (3.13). Upon utilizing (2.45) and (2.46) for
the Gauss average in the last section, we obtain that
$$\sum_n\varepsilon_n<\delta L_{\rm{cl.}}(n)-i\Delta_n>=
\sum_n\varepsilon_n<\Big[-{1\over 8}\lambda^3g^4{\bar x_n\over
(1+\lambda g\bar y_n)^4}$$ $$+{3\over 8}{\lambda g^4(2+3\lambda^2)\bar x_n
-\lambda^2g^5(2+\lambda^2)\bar x_n\bar y_n\over (1+\lambda g\bar y_n)^5}
-{1\over 2}\lambda^3g^6(1+\lambda^2){\bar x_n^3\over (1+\lambda g\bar y_n)^6}
+O(\epsilon^{1\over 2})\Big]>$$ $$=\sum_n\varepsilon_n[<\delta{\cal V}
(n)>+O(\epsilon^{1\over 2})],\eqno(3.21)$$ and the Slavnov-Taylor
identity follows in the limit $\epsilon\to 0$
$$-ig\tilde J_n\sigma_2<{1\over (1+\lambda gy_n)}r_n>+
(\zeta_n-\dot u_n)<{1\over 1+\lambda gy_n}>+<b_n>=0.\eqno(3.22)$$
This can also be obtained from (3.11) after integrating over $c$
and $\bar c$ at $\eta=\bar\eta=0$.
To construct the BRST identity, we introduce the ghost variables
by rewriting (3.12) as
$$e^{iW(J,\zeta,u,\eta,\bar\eta)}=\lim_{T\to\infty}
<\vert e^{-iHT}\vert>={\rm{const.}}\int\prod_{n=0}^N
dx_ndy_ndz_nd\xi_ndb_ndc_n^\prime d\bar c_n^\prime\times$$
$$\times <\vert x_N,y_N,z_N>e^{i\epsilon\sum_{n=0}^N[{\cal L}^\prime(n)
+L_{\rm{ext.}}^\prime(n)]}<x_0,y_0,z_0\vert>,\eqno(3.23)$$ where
$${\cal L}^\prime(n)=L_{\rm{cl.}}(n)+b_n(z_n-\lambda x_n)
-\bar c_n^\prime(1+\lambda g\bar y_n)
c_n^\prime-{\cal V}(n)\eqno(3.24)$$ and
$$L_{\rm{ext.}}(n)=\tilde J_n r_n+\zeta_nz_n+u_n\xi_n+\bar
\eta_nc_n^\prime+\bar c_n^\prime\eta_n.\eqno(3.25)$$
Note that the primed ghosts are different from the original ones.
Denoting the average with respect to the path
integral (3.23) by $<...>_\eta$, we have, for a function of the
integration variables, $F$,
$$<F>_\eta={<Fe^{i\epsilon\sum_n(\bar\eta_nc_n^\prime+\bar c_n^\prime
\eta_n)}>\over<e^{i\epsilon\sum_n(\bar\eta_nc_n^\prime+\bar c_n^\prime
\eta_n)}>}.\eqno(3.26)$$ Then it follows that
$$<c_n^\prime r_n>_\eta=<{1\over 1+\lambda g\bar y_n}r_n>\eta_n,
\eqno(3.27)$$ $$<c_n^\prime>_\eta=<{1\over 1+\lambda g\bar y_n}>
\eta_n\eqno(3.28)$$ and $$<b_n>_\eta=<b_n>.\eqno(3.29)$$
In the limit $\epsilon\to 0$, the difference of $\bar y_n$ in (3.27)
from $y_n$ may be neglected. The Slavnov-Taylor identity (3.22) implies the
following BRST identity
$$-ig\tilde J_n\sigma_2<c_n^\prime r_n>_\eta+(\zeta_n
-\dot u_n)<c_n^\prime>_\eta+<b_n>_\eta\eta_n=0,\eqno(3.30)$$ which
is equivalent to (3.11). As will be shown in
Appendix B, the invariance of the
Lagrangian under the field dependent transformation (3.14)-(3.17)
is related to a symmetry of the corresponding Hamiltonian in the
$\lambda$-gauge after factoring out the Gauss law constraint. There
we shall present a derivation of the Slavnov-Taylor identity (3.22)
from canonical formulations.
\section{IV. The Path Integral of a Nonabelian Gauge Field in
Coulomb Gauge}
\subsection{IV.1 Quantization in the time axial gauge}
The Lagrangian of a nonabelian gauge theory is
$$L=-\int d^3\vec r\Big[{1\over 4}V_{\mu\nu}^lV_{\mu\nu}^l+\psi^\dagger
\gamma_4(\gamma_\mu D_\mu+m)\psi\Big],\eqno(4.1)$$ where
$$V_{\mu\nu}^l={\partial V_\nu^l\over\partial x_\mu}
-{\partial V_\mu^l\over\partial x_\nu}+gf^{lmn}V_\mu^m V_\nu^n\eqno(4.2)$$
with $V_\mu^l$ the gauge potential and $f^{lmn}$ the structure constant
of the Lie algebra of the gauge group. The fermion field $\psi$ carries
both color and flavor indices and the mass matrix $m$ is diagonal with
respect to the color indices. The gauge covariant derivative is
$D_\mu={\partial\over\partial x_\mu}-igT^lV_\mu^l$ with $T^l$
the generator of the gauge group in the representation to which $\psi$
belongs. The normalizations of $f^{lmn}$ and $T^l$ are given by
${\rm{tr}}T^lT^{l^\prime}={1\over 2}\delta^{ll^\prime}$ and
$f^{lmn}f^{l^\prime mn}=C_2\delta^{ll^\prime}$ with $C_2$ the second
Casimir of the gauge group.
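For concreteness, these normalizations can be checked numerically for SU(2), where $T^l=\sigma^l/2$ in the fundamental representation, $f^{lmn}=\epsilon^{lmn}$ and $C_2=2$ (a numpy sketch of mine):

```python
# numpy sketch (mine): the stated normalizations for SU(2), where
# T^l = sigma^l/2 in the fundamental representation, f^{lmn} = epsilon^{lmn}
# and the second Casimir is C_2 = 2.
import numpy as np

sigma = [np.array([[0, 1], [1, 0]], complex),
         np.array([[0, -1j], [1j, 0]], complex),
         np.array([[1, 0], [0, -1]], complex)]
T = [s/2 for s in sigma]

f = np.zeros((3, 3, 3))                     # epsilon tensor
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    f[i, j, k], f[j, i, k] = 1.0, -1.0

tr = np.array([[np.trace(T[l] @ T[m]).real for m in range(3)]
               for l in range(3)])          # tr(T^l T^l') = delta^{ll'}/2
ff = np.einsum('lmn,kmn->lk', f, f)         # f^{lmn} f^{l'mn} = C_2 delta^{ll'}
```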
The Lagrangian (4.1) is invariant under the following
gauge transformation
$$V_\mu\to V_\mu^\prime=uV_\mu u^\dagger+{i\over g}u{\partial u^\dagger
\over\partial x_\mu}\eqno(4.3)$$ and
$$\psi\to\psi^\prime=u\psi\eqno(4.4)$$ with $V_\mu=V_\mu^lT^l$ and
$u$ the transformation matrix in the representation of $\psi$.
The quantization of the gauge field is specified in the time axial
gauge where $V_0=0$. The Lagrangian (4.1) becomes
$$L=\int d^3\vec r\Big[{1\over 2}\dot V_j^l\dot V_j^l-{1\over 2}B_j^lB_j^l
+i\Psi^\dagger\dot\Psi-\Psi^\dagger\gamma_4(\gamma_j D_j+m)\Psi\Big],
\eqno(4.5)$$ and the corresponding Hamiltonian reads
$$H=\int d^3\vec r\Big[{1\over 2}\Pi_j^l\Pi_j^l+{1\over 2}
B_j^lB_j^l+\Psi^\dagger\gamma_4(\gamma_jD_j+m)\Psi\Big],\eqno(4.6)$$
where the canonical momentum
$$\Pi_j^l(\vec r)=\dot V_j^l(\vec r)=-i{\delta\over\delta V_j^l(\vec r)}
\eqno(4.7)$$ and $B_j^l(\vec r)={1\over 2}\epsilon_{jki}V_{ki}^l(\vec r)$ is
the color magnetic field. The Hamiltonian (4.6) commutes with the
generator of time-independent gauge transformations, ${\cal G}^l$
with $${\cal G}^l={1\over g}(\delta^{lm}\nabla_j-gf^{lmn}V_j^n)\Pi_j^m
+\Psi^\dagger T^l\Psi.\eqno(4.8)$$ A physical state in the Hilbert
space is subject to the Gauss law constraint, i.e.
$${\cal G}^l\vert>=0.\eqno(4.9)$$ The path integral in the time axial
gauge can be readily written down
$$<V\vert e^{-iHt}\vert>={\rm{const.}}\int\prod_n[dVd\Psi d\bar\Psi]_n
e^{i\epsilon\sum_n L(n)}<V_j(0,\vec r)\vert>,\eqno(4.10)$$ where
$$[dVd\Psi d\bar\Psi]_n\equiv\prod_{\vec r,j,l}
dV_j^l(n,\vec r)d\Psi(n,\vec r)d\bar\Psi(n,\vec r),\eqno(4.11)$$
$$L(n)=\int d^3\vec r\Big[{1\over 2}\dot V_j^l(n)\dot V_j^l(n)
-{1\over 2}B_j^l(n)B_j^l(n)$$ $$+i\bar\Psi(n)\gamma_4\dot\Psi(n)-\bar\Psi(n)
(\gamma_jD_j(n)+m)\Psi(n)\Big]\eqno(4.12)$$ with
$$\dot V_j^l(n)={V_j^l(n+1)-V_j^l(n)\over\epsilon}=O(\epsilon^{-{1\over 2}}),
\eqno(4.13)$$ and the initial
wave functional satisfies the Gauss law (4.9). The dependence of the
field amplitudes on $\vec r$ has been suppressed in (4.12).
\subsection{IV.2 Transformation to Coulomb Gauge}
Inserting the following identity into the path integral (4.10),
$$1={\rm{const.}}\int\prod_{n,\vec r}du(n,\vec r){\cal J}(n)
\delta(\nabla_jA_j^l(n,\vec r)),\eqno(4.14)$$ where
$$A_j(n,\vec r)=u^\dagger(n,\vec r)V_j(n,\vec r)u(n,\vec r)
+{i\over g}u^\dagger(n,\vec r)\nabla_ju(n,\vec r),\eqno(4.15)$$
$u(n,\vec r)$ is a representation matrix of the gauge group and
$${\cal J}(n)={\rm{det}}(-\nabla_j{\cal D}_j(n))\eqno(4.16)$$
with ${\cal D}_j^{lm}=\delta^{lm}\nabla_j-gf^{lmn}A_j^n$. Introducing
$$u^\dagger(n,\vec r)u(n+1,\vec r)=e^{i\epsilon gA_0(n,\vec r)}
,\eqno(4.17)$$ $$\psi(n,\vec r)=u^\dagger(n,\vec r)\Psi(n,\vec r)
\eqno(4.18)$$ and $$\bar\psi(n,\vec r)=\bar\Psi(n,\vec r)
u(n+1,\vec r),\eqno(4.19)$$ and transforming the integration variables from
$V_j^l(n,\vec r)$ and $u(n,\vec r)$ into $A_j^l(n,\vec r)$ and
$A_0^l(n,\vec r)$, we obtain that
$$<V\vert e^{-iHt}\vert>={\rm{const.}}\int\prod_n
[dAd\psi d\bar\psi^\prime]_n\delta(\nabla_jA_j^l(n,\vec r))
e^{i\epsilon\sum_n L^\prime(n)}<A_j(0,\vec r)\vert>,\eqno(4.20)$$ where
$$L^\prime(n)=L(n)-{i\over\epsilon}\ln{\cal J}(n)-{i\over\epsilon}
\delta^3(0)\int d^3\vec r\ln h(n,\vec r)\eqno(4.21)$$ with $L(n)$
given by (4.12) and $h(n,\vec r)$ the Haar measure of the
integration of the group element $e^{i\epsilon gA_0(n,\vec r)}$ with respect
to $A_0(n,\vec r)$
$$h(n,\vec r)=1-{\epsilon^2g^2\over 24}A_0^l(n,\vec r)A_0^l(n,\vec r)
+O(\epsilon^3g^3),\eqno(4.22)$$ which does not have an analog in the
soluble model. In terms of the new variables, we have
$$L(n)=\int d^3\vec r\Big[{\rm{tr}}[{\cal E}_j(n){\cal E}_j
(n)-{\cal B}_j(n){\cal B}_j(n)]$$ $$+{i\over\epsilon}
\bar\psi(n)[\psi(n+1)-e^{-i\epsilon gA_0(n)}
\psi(n)]-\bar\psi(n)e^{-i\epsilon gA_0(n)}
[\gamma_jD_j(n)+m]\psi(n)\Big]\eqno(4.23)$$ with
$${\cal E}_j(n)=-{1\over\epsilon}\Big[e^{i\epsilon gA_0(n)}A_j(n+1)
e^{-i\epsilon gA_0(n)}-A_j(n)+{i\over g}e^{i\epsilon gA_0(n)}\nabla_j
e^{-i\epsilon gA_0(n)}\Big]\eqno(4.24)$$ and
$${\cal B}_j(n)={1\over 2}\epsilon_{jki}\Big[\nabla_kA_i(n)
-\nabla_iA_k(n)-ig[A_k(n), A_i(n)]\Big].\eqno(4.25)$$
The Lagrangian (4.23) with (4.24) and (4.25) defines a gauge theory
in a spatial continuum and on a temporal lattice. The action $\epsilon\sum_n
L(n)$ coincides with the naive continuum limit of the spatial links
of Wilson's lattice action. But it comes naturally from the
definition of a path integral and the procedure of gauge fixing.
It follows from (4.13), (4.15) and (4.17) that [3]
$$(\dot A_j^l(n,\vec r), A_0^l(n,\vec r))=O(\epsilon^{-{1\over 2}}).
\eqno(4.26)$$ The path integral (4.20) can be rewritten as
$$<V\vert e^{-iHt}\vert>={\rm{const.}}{\cal J}^{-{1\over 2}}(N)
\int\prod_n[dAd\psi d\bar\psi^\prime]_n$$ $$e^{i\epsilon\sum_n
L_{\rm{eff.}}(n)}{\cal J}^{{1\over 2}}(0)<A(0)\vert>\eqno(4.27)$$ with
$$L_{\rm{eff.}}(n)=L(n)-{i\over\epsilon}[\ln\bar{\cal J}(n)+\ln h(n)]
+{i\over\epsilon}\Big[\ln\bar{\cal J}(n)-{1\over 2}(\ln{\cal J}(n+1)
+\ln{\cal J}(n))\Big],\eqno(4.28)$$ where
$$\bar{\cal J}(n)={\rm{det}}(-\nabla_j\bar{\cal D}_j(n))\eqno(4.29)$$
with $\bar{\cal D}_j^{ab}=\delta^{ab}\nabla_j-gf^{abc}\bar A_j^c(n)$
and $\bar A_j^l(n)={1\over 2}[A_j^l(n+1)+A_j^l(n)]$.
The small $\epsilon$ expansion of $L_{\rm{eff.}}(n)$ reads
$$L_{\rm{eff.}}(n)=L_{\rm{cl.}}(n)-{i\over\epsilon}\ln\bar{\cal J}(n)
+\Delta L(n),\eqno(4.30)$$ where
$$L_{\rm{cl.}}(n)=\int d^3\vec r\Big[{\rm{tr}}[\bar{\cal E}_j(n)\bar{\cal E}_j
(n)-{\cal B}_j(n){\cal B}_j(n)]$$
$$+i\bar\psi(n)[\gamma_4(\dot\psi(n)+igA_0(n)\psi(n))-(\gamma_jD_j(n)+m)
\psi(n)]\Big]\eqno(4.31)$$ with
$$\bar{\cal E}_j(n)=-\dot A_j(n)-\nabla_jA_0(n)-ig[A_0(n), \bar A_j(n)]
,\eqno(4.32)$$ and $$\Delta L(n)=\int d^3\vec r\Big[{1\over 8}g^2
\epsilon^2f^{lml^\prime}f^{akl^\prime}\bar{\cal E}_j^l(n)A_0^m(n)
A_0^k(n)\Big[\dot A_j^a(n)+{1\over 3}\bar{\cal D}_j^{ab}(n)A_0^b(n)\Big]
$$ $$-{i\over 8}\epsilon g^2(\vec r,l|[\nabla_j\bar{\cal D}_j(n)]^{-1}
t^m\dot A_{j^\prime}^m(n)\nabla_{j^\prime}[\nabla_i\bar{\cal D}_i(n)]^{-1}
t^{m^\prime}\dot A_{i^\prime}^{m^\prime}(n)\nabla_{i^\prime}|\vec r,l)$$ $$
+{i\over24}\epsilon\delta^3(0)C_2g^2A_0^l(n)A_0^l(n)+{i\over 2}\epsilon
g^2\bar\psi(n)\gamma_4T^lT^m\psi(n)A_0^l(n)A_0^m(n)\Big]
+O(\epsilon^{1\over 2})\eqno(4.33)$$
with $t^l$ the generator in the adjoint representation, $(t^l)^{ab}=if^{alb}$.
The first term of the integrand of $\Delta L(n)$ comes from the
$\epsilon$ expansion of the color electric field (4.24), the second term
from the shift of the jacobian ${\cal J}(n)$ (the last term of (4.28)),
the third term from the Haar measure $h$ and the last term from the
$\epsilon$ expansion of the fermionic part of (4.23). We may notice the close
resemblance of the first two terms of (4.33) with (2.38).
\subsection{IV.3 Converting $\Delta L$ into an equivalent potential}
Following the recipe of Section III, the potential energy which is
equivalent to $\Delta L(n)$ in the limit $\epsilon\to 0$ is
$${\cal V}=-<\Delta L(n)>_{\rm{Gauss}}$$ $$\equiv-{\int\prod_{\vec r,j,l}
d\dot A_j^l(n,\vec r)dA_0^l(n,\vec r)\delta(\nabla_jA_j^l(n,\vec r))
e^{i\epsilon\int d^3\vec r{\rm{tr}}\bar{\cal E}_j(n,\vec r)
\bar{\cal E}_j(n,\vec r)}\Delta L(n)\over\int\prod_{\vec r,j,l}
d\dot A_j^l(n,\vec r)dA_0^l(n,\vec r)\delta(\nabla_jA_j^l(n,\vec r))
e^{i\epsilon\int d^3\vec r{\rm{tr}}\bar{\cal E}_j(n,\vec r)
\bar{\cal E}_j(n,\vec r)}}\eqno(4.34)$$ while regarding $\bar A_j(n,\vec r)$
constant. The Gauss average of a product of $\dot A_j$ and $A_0$ can be
decomposed by Wick's theorem. We have
$$<\bar{\cal E}_i^a(\vec r)\bar{\cal E}_j^b(\vec r^\prime)>_{\rm{Gauss}}={i\over\epsilon}
\delta_{ij}(\vec r,a|\vec r^\prime,b)={i\over\epsilon}\delta_{ij}
\delta^{ab}\delta^3(\vec r-\vec r^\prime),\eqno(4.35)$$
$$<A_0^a(\vec r)A_0^b(\vec r^\prime)>_{\rm{Gauss}}=-{i\over\epsilon}
(\vec r,a|G\nabla^2G|\vec r^\prime,b),\eqno(4.36)$$
$$<A_0^a(\vec r)\dot A_j^b(\vec r^\prime)>_{\rm{Gauss}}=-{i\over\epsilon}
\Big[(\vec r,a|G\nabla_j|\vec r^\prime,b)+(\vec r,a|G\nabla^2G
{\cal D}_j|\vec r^\prime,b)\Big],\eqno(4.37)$$
$$<\dot A_i^a(\vec r)\dot A_j^b(\vec r^\prime)>_{\rm{Gauss}}=
{i\over\epsilon}\Big[\delta_{ij}\delta^{ab}\delta^3(\vec r-\vec r^\prime)
+(\vec r,a|\nabla_iG{\cal D}_j|\vec r^\prime,b)$$
$$+(\vec r,a|{\cal D}_iG\nabla_j|\vec r^\prime,b)
+(\vec r,a|{\cal D}_iG\nabla^2G{\cal D}_j|\vec r^\prime,b)\Big],\eqno(4.38)$$
where we have suppressed the $n$-dependence and
$G=(-\nabla_j{\cal D}_j)^{-1}$ with ${\cal D}$ from here on to the end of the
section defined at
$\bar A_j(n,\vec r)$. Substituting (4.35)-(4.38) into (4.33), we obtain
$${\cal V}=-{1\over 24}C_2g^2\delta^3(0)\int d^3\vec r(\vec r,l|
G\nabla^2G|\vec r,l)$$ $$+{1\over 8}g^2f^{kam}f^{nal}\int d^3\vec r
(\vec r,l|G\nabla_j|\vec r,k)(\vec r,m|G\nabla_j|\vec r,n)$$
$$-{1\over 4}g^2f^{kam}f^{nbl}\int d^3\vec r\int d^3\vec r^\prime
(\vec r,l|G\nabla_i|\vec r^\prime,k)(\vec r,n|\nabla_jG|\vec r^\prime,m)
(\vec r,b|{\cal D}_jG\nabla_i|\vec r^\prime,a)$$
$$+{1\over 8}g^2f^{kam}f^{nbl}\int d^3\vec r\int d^3\vec r^\prime
(\vec r,l|G\nabla_i|\vec r^\prime,k)(\vec r^\prime,m|G\nabla_j|\vec r,n)
(\vec r^\prime,a|{\cal D}_iG\nabla^2G{\cal D}_j|\vec r,b)$$ $$
+{3\over 8}C_2g^2\delta^3(0)\int d^3\vec r(\vec r,m|G\nabla^2G|\vec r,m)
$$ $$+{1\over 8}g^2f^{lka}f^{mna}\int d^3\vec r(\vec r,k|G\nabla_j|\vec r,l)
(\vec r,m|G\nabla_j|\vec r,n)$$
$$-{1\over 8}g^2f^{nka}f^{lma}\int d^3\vec r(\vec r,k|G\nabla_j|\vec r,l)
(\vec r,m|G\nabla_j|\vec r,n)$$
$$+{1\over 12}g^2f^{lmk}f^{ank}\int d^3\vec r\Big[
(\vec r,m|G\nabla_j|\vec r,l)(\vec r,a|{\cal D}_jG\nabla^2G|\vec r,n)$$
$$+(\vec r,a|{\cal D}_jG\nabla_j|\vec r,l)(\vec r,m|G\nabla^2G|\vec r,n)
+(\vec r,n|G\nabla_j|\vec r,l)(\vec r,a|{\cal D}_jG\nabla^2G|\vec r,m)\Big]
$$ $$-{1\over 2}g^2\int d^3\vec r(\vec r,l|G\nabla^2G|\vec r,m)\bar\psi
(\vec r)\gamma_4T^lT^m\psi(\vec r).\eqno(4.39)$$
The first term is the Wick contraction of the Haar measure term of (4.33);
the second to the fourth terms come from the Jacobian of the gauge fixing,
i.e., the second term of (4.33); the fifth to the eighth terms come
from the color electric field
energy, i.e., the first term of (4.33); and the last term comes from the
fermion part. This lengthy expression can be simplified with the aid
of the following two identities, namely
$$f^{abc}\int d^3\vec r[(\vec r,a|{\cal D}_i|X)(\vec r,b|Y)(\vec r,c|Z)
+(\vec r,a|X)(\vec r,b|{\cal D}_j|Y)(\vec r,c|Z)$$
$$+(\vec r,a|X)(\vec r,b|Y)(\vec r,c|{\cal D}_j|Z)]=0,\eqno(4.40)$$
(see Ref. [3]) and the Jacobi identity
$$f^{abl}f^{ckl}+f^{bcl}f^{akl}+f^{cal}f^{bkl}=0.\eqno(4.41)$$
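The Jacobi identity (4.41) can be checked numerically for a concrete choice of gauge group; a minimal sketch for SU(2), whose structure constants are $f^{abc}=\epsilon^{abc}$ (the choice of SU(2) and the use of numpy are illustration assumptions):

```python
import itertools
import numpy as np

# Structure constants of SU(2): f^{abc} = epsilon^{abc} (Levi-Civita symbol).
f = np.zeros((3, 3, 3))
for a, b, c in itertools.permutations(range(3)):
    # Sign of the permutation (a, b, c) of (0, 1, 2).
    f[a, b, c] = np.sign(np.linalg.det(np.eye(3)[[a, b, c]]))

# Check the Jacobi identity (4.41):
# f^{abl} f^{ckl} + f^{bcl} f^{akl} + f^{cal} f^{bkl} = 0 for all a, b, c, k.
jacobi = (np.einsum('abl,ckl->abck', f, f)
          + np.einsum('bcl,akl->abck', f, f)
          + np.einsum('cal,bkl->abck', f, f))
assert np.allclose(jacobi, 0.0)
```

For SU(2) the identity also follows directly from the contraction $\epsilon^{abl}\epsilon^{ckl}=\delta_{ac}\delta_{bk}-\delta_{ak}\delta_{bc}$; the numerical check confirms the index placement used in (4.41).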
First of all, the seventh term of (4.39) is already of the form of
Christ-Lee's ${\cal V}_1$. The covariant derivative ${\cal D}_j$ of the
third term may be moved into the middle factor of the integrand according
to (4.40), and the result will cancel with the second and the sixth terms
through (4.41). Upon repeated applications of (4.40) and (4.41), the first,
fourth, fifth and eighth terms combine into Christ-Lee's ${\cal V}_2$.
We have finally
$${\cal V}={\cal V}_1+{\cal V}_2+{\cal V}_3,\eqno(4.42)$$
where $${\cal V}_1={1\over 8}g^2\int d^3\vec r(\vec r,l^\prime|
G\nabla_j|\vec r,l)(\vec r,m|G\nabla_jt^{l^\prime}t^l|\vec r,m),
\eqno(4.43)$$ $${\cal V}_2=-{1\over 8}g^2\int d^3\vec r
\int d^3\vec r^\prime(\vec r^\prime,l^\prime|(\delta_{i^\prime i}
+{\cal D}_{i^\prime}G\nabla_i)|\vec r,n)
(\vec r,l|(\delta_{ii^\prime}+{\cal D}_iG\nabla_{i^\prime})|\vec r^\prime,
n^\prime)$$ $$\times (\vec r,n|t^lG\nabla^2Gt^{l^\prime}|\vec r^\prime,
n^\prime)\eqno(4.44)$$ and $${\cal V}_3=-{1\over 2}g^2\int d^3\vec r
\bar\psi(\vec r)\gamma_4T^lT^m\psi(\vec r)
(\vec r,l|G\nabla^2G|\vec r,m).\eqno(4.45)$$
The terms ${\cal V}_1$ and ${\cal V}_2$ are the Christ-Lee operator
ordering terms for a pure gauge theory. The term ${\cal V}_3$ is new and its
expansion in $g$ reads
$${\cal V}_3=-{1\over 2}g^2\int d^3\vec r\Big[\delta^{lm}
(\vec r|\nabla^{-2}|\vec r)+3g^2f^{ll^\prime k}f^{knm}
(\vec r|\nabla^{-2}A_i^{l^\prime}\nabla_i\nabla^{-2}A_j^n\nabla_j\nabla^{-2}
|\vec r)$$ $$+O(g^3A^3)\Big]\bar\psi(\vec r)\gamma_4T^lT^m
\psi(\vec r),\eqno(4.46)$$ where the term linear in $A_j^l$ vanishes because
of transversality. In operator language, this term stems from the normal
ordering of the four-fermion coupling in the color Coulomb potential,
which is necessary for the passage from the canonical formulation to the
path integral. The details will be explained in Appendix C.
The effective Lagrangian in the path
integral (4.27) is replaced by the following Lagrangian of Christ-Lee
type in the limit $\epsilon\to 0$
$${\cal L}(n)=L_{\rm{cl.}}(n)-{i\over\epsilon}\ln\bar{\cal J}(n)
-{\cal V}_1(n)-{\cal V}_2(n)-{\cal V}_3(n).\eqno(4.47)$$
The formulation of this section for the Coulomb gauge can be easily
generalized to an arbitrary noncovariant gauge introduced in [3]
$$\int d^3\vec r^\prime(\vec r,l|\Gamma_j|\vec r^\prime,l^\prime)
A_j^{l^\prime}(\vec r^\prime)=0.\eqno(4.48)$$
Since $\epsilon$ is the only dimensionful parameter in the formal manipulations
of this section, one would expect on dimensional grounds that
$A_0(n,\vec r)=O(\epsilon^{-1})$, in contrast with the estimate of $\xi$ for the
soluble model and the estimate (4.26). On the other hand, the field theory
in $D=4$ suffers from ultraviolet divergences which have to be
regularized in order for the path integral to make sense. The validity of
the estimate $A_0(n,\vec r)=O(\epsilon^{-{1\over 2}})$, as well as of the
Christ-Lee path integral, rests on the implicit assumption that there is a
fixed ultraviolet length, which keeps the summation over all physical degrees
of freedom finite as $\epsilon\to 0$. If $\epsilon$ is
identified with the ultraviolet length, as in the discrete time regularization
scheme of the next section, the $\epsilon$-expansion can no longer be truncated.
\section{V. The BRST Identity and the Discrete Time Regularization}
Neglecting fermion couplings, the Lagrangian of a nonabelian gauge field
with discrete times reads
$$L(n)=\int d^3\vec r{\rm{tr}}[{\cal E}_j(n){\cal E}_j
(n)-{\cal B}_j(n){\cal B}_j(n)]\eqno(5.1)$$ with ${\cal E}_j$ and
${\cal B}_j$ given by (4.24) and (4.25). The corresponding path
integral is $$<\vert e^{-iHt}\vert>={\rm{const.}}\int\prod_n
[dAdbdcd\bar c]_ne^{i\epsilon\sum_n L_{\rm{BRST}}(n)}
<A_j(0)\vert>,\eqno(5.2)$$ where $$[dAdbdcd\bar c]_n
=\prod_{\vec r,\mu,l}dA_\mu^l(n,\vec r)db^l(n,\vec r)
dc^l(n,\vec r)d\bar c^l(n,\vec r)h(n,\vec r)\eqno(5.3)$$ and
$$L_{\rm{BRST}}(n)=L(n)+\int d^3\vec rb^l(n,\vec r)\nabla_jA_j^l(n,\vec r)
-\int d^3\vec r\bar c^l(n,\vec r)[\nabla_j{\cal D}_j
(n,\vec r)]^{ll^\prime}c^{l^\prime}(n,\vec r).\eqno(5.4)$$
The Lagrangian (5.4) and the integration measure of (5.3) are invariant
under the following transformation:
$$\delta A_j^l(n,\vec r)=s_n{\cal D}_j^{ll^\prime}(n,\vec r)c^{l^\prime}
(n,\vec r),\eqno(5.5)$$ $$\delta e^{i\epsilon gA_0(n,\vec r)}
=igs_nc^l(n,\vec r)T^le^{i\epsilon gA_0(n,\vec r)}-igs_{n+1}c^l(n+1,\vec r)
e^{i\epsilon gA_0(n,\vec r)}T^l,\eqno(5.6)$$
$$\delta c^l(n,\vec r)=-{1\over 2}s_ngf^{lab}c^a(n,\vec r)c^b(n,\vec r),
\eqno(5.7)$$ $$\delta\bar c^l(n,\vec r)=s_nb^l(n,\vec r)\eqno(5.8)$$ and
$$\delta b^l(n,\vec r)=0,\eqno(5.9)$$ where $s_n$ is a Grassmann
number. For an $n$-independent $s_n$, a nilpotent charge operator can be
extracted and the transformation (5.5)-(5.9) is
therefore of BRST type. Introducing the generating
functional of the connected Green's functions via a source term, i.e.
$$e^{iW(J,\eta,\bar\eta)}=\lim_{T\to\infty}
<\vert e^{-iHT}\vert>$$ $$={\rm{const.}}\int\prod_n[dAdbdcd\bar c]_n
<\vert A(N)>e^{i\epsilon\sum_n[L_{\rm{BRST}}(n)+L_{\rm{ext.}}(n)]}
<A(0)\vert>\eqno(5.10)$$ with
$$L_{\rm{ext.}}(n)=2\int d^3\vec r{\rm{tr}}[J_\mu(n,\vec r)A_\mu(n,\vec r)
+\bar\eta(n,\vec r)c(n,\vec r)+\bar c(n,\vec r)\eta(n,\vec r)].
\eqno(5.11)$$ The invariance of (5.3) and (5.4) under (5.5)-(5.9)
implies the following BRST identity [11]
$$\int d^3\vec r{\rm{tr}}[\vec J(n,\vec r)\cdot<\vec {\cal D}c(n,\vec r)>
-<{\cal D}_0J_0(n,\vec r)>+ig\bar\eta(n,\vec r)<c^2(n,\vec r)>$$
$$+<b(n,\vec r)>\eta(n,\vec r)]=0,\eqno(5.12)$$ where
$$\vec {\cal D}c(n,\vec r)=\vec\nabla c(n,\vec r)
-ig[\vec A(n,\vec r),c(n,\vec r)]
\eqno(5.13)$$ and $${\cal D}_0J_0(n,\vec r)=\dot J_0(n,\vec r)
+ig[A_0(n,\vec r),J_0(n,\vec r)].\eqno(5.14)$$ The transformation law
of $A_0(n,\vec r)$, deduced from (5.6),
$$\delta A_0(n,\vec r)=-{\cal D}_0\theta(n,\vec r)+{1\over 12}g^2\epsilon^2
[A_0(n,\vec r),[A_0(n,\vec r),\dot\theta(n,\vec r)]]+...\eqno(5.15)$$
has been utilized; only its first term contributes in the limit
$\epsilon\to 0$. The identity (5.12) can be cast into
various useful forms [11].
Similar to the case of the soluble model, the BRST identity can also be
constructed from the Slavnov-Taylor identity of the Christ-Lee type of
path integral (4.27) with $L_{\rm{eff.}}(n)$ replaced by ${\cal L}(n)$
of (4.47) in the limit $\epsilon\to 0$.
Unlike the soluble model, the field theory case suffers from an ultraviolet
divergence which needs to be regularized and subtracted. Owing to its
manifest BRST invariance, the discrete time Lagrangian (4.23) with
(4.24) and (4.25) serves also as a gauge invariant regularization scheme
with $\epsilon$ an ultraviolet cutoff. This regularization has several
additional technical advantages. 1) The energy integration of the continuum
time theory is regularized by the summation over the Bloch momentum on the
temporal lattice. This is particularly important for resolving the
ambiguities associated with the energy divergence. 2) With fixed Bloch
momenta on the temporal lattice, the integration over spatial momenta is
less divergent: there is only a finite number of divergent skeletons and
these can be handled by dimensional regularization. 3) For fixed lattice
momenta, the integrand of each Feynman diagram is a rational function of
the spatial momenta and can be simplified with the aid of Feynman
parametrization. 4) Manifest unitarity is maintained throughout the
calculation. In what follows, we shall test this regularization by an
evaluation of the one loop Coulomb propagator in the absence of the quark
fields.
The expansion of the Lagrangian (5.4) in powers of $g$ reads
$$L_{\rm{BRST}}(n)=L_{\rm{cl.}}(n)+\int d^3\vec rb^l(n,\vec r)
\nabla_jA_j^l(n,\vec r)$$ $$-\int d^3\vec r\bar c^l(n,\vec r)
[\nabla_j{\cal D}_j(n,\vec r)]^{ll^\prime}c^{l^\prime}(n,\vec r)
+R_n,\eqno(5.16)$$ where
$$R_n=\int d^3\vec r\Big[-{1\over 8}g^2
\epsilon^2f^{lml^\prime}f^{akl^\prime}\Big(\dot A_j^l(n)A_0^m(n)
A_0^k(n)\dot A_j^a(n)$$ $$+{1\over 3}\nabla_jA_0^l(n)A_0^m(n)A_0^k(n)
\nabla_jA_0^a(n)\Big)+{i\over24}\epsilon\delta^3(0)C_2g^2A_0^l(n)A_0^l(n)
\Big]\eqno(5.17)$$ where, at order $g^2$, only terms with an even number
of $A_0$ factors are kept.
The first term of (5.17) comes from the expansion of $e^{i\epsilon g
A_0}$ in the color electric field and the second from the Haar measure.
Both of them have been included in $\Delta L(n)$ of (4.33). For a
reason we shall explain later, the perturbative expansion ought to
be performed in Euclidean space, which amounts to
the substitutions $\epsilon\to -i\epsilon$,
$A_0\to -iA_4$ and $\dot A_j\to i{\partial A_j\over \partial x_4}$.
The dressed Coulomb propagator reads
$$d_0^\prime(k_0,\vec k)={1\over \vec k^2+\sigma(k_0,\vec k)},$$
where the one loop contribution to $\sigma(k_0,\vec k)$ is given by the
amputated Feynman diagrams of Fig. 1 plus the contribution of (5.17), i.e.
$$\sigma(k_0,\vec k)=-\Big(\hbox{ Fig. 1a }+\hbox{ Fig. 1b }+\hbox{ Fig. 1c }
\Big)+\hbox{ contribution from $R_n$}\eqno(5.18)$$ with the relevant
Feynman rules given in Fig. 2. A wavy line stands for a
transverse gluon propagator and contributes a factor
\topinsert
\hbox to\hsize{\hss
\epsfxsize=4.0truein\epsffile{figure2.eps}
\hss}
\begincaption{Figure 1}
The one loop diagrams of the inverse Coulomb propagator.
\endgroup
\endinsert
\topinsert
\hbox to\hsize{\hss
\epsfxsize=4.0truein\epsffile{figure1.eps}
\hss}
\begincaption{Figure 2}
The relevant ingredients of the diagrams.
\endgroup
\endinsert
$$\delta^{ll^\prime}d_{ij}(\theta|\vec k)={\delta^{ll^\prime}
\over k_0^2+\vec k^2}\Big(\delta_{ij}-
{k_ik_j\over \vec k^2}\Big)\eqno(5.19)$$ with $k_0={2\over\epsilon}
\sin{\theta\over 2}$ and $\theta\in (-\pi,\pi)$ a Bloch momentum; a dashed
line stands for a bare Coulomb propagator and contributes a factor
$$\delta^{ll^\prime}d_0(\vec k)={\delta^{ll^\prime}\over \vec k^2}.
\eqno(5.20)$$ A three point vertex of one Coulomb line and two transverse
gluons with incoming momenta $(\theta_1,\vec k_1)$, $(\theta_2,\vec k_2)$ and
$(\theta_3,\vec k_3)$ is associated with the factor
$$-i{2\over\epsilon}gf^{lmn}\delta_{ij}\sin{\theta_3-\theta_2\over 2};
\eqno(5.21)$$ a three point vertex of two Coulomb lines and one
transverse gluon with incoming momenta $(\theta_1,\vec k_1)$,
$(\theta_2,\vec k_2)$ and $(\theta_3,\vec k_3)$ is associated with the
factor $$-igf^{lmn}(k_{2j}-k_{1j})\cos{\theta_3\over 2}.\eqno(5.22)$$
A four point vertex of two transverse gluons and two Coulomb lines
is associated with the factor
$$-g^2(f^{la^\prime a}f^{lb^\prime b}+f^{la^\prime b}f^{lb^\prime a})
\cos{\theta_1\over 2}\cos{\theta_2\over 2}\delta_{ij}.\eqno(5.23)$$
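Two kinematic facts embedded in these rules are easy to verify numerically: the projector in the transverse propagator (5.19) annihilates $\vec k$ and is idempotent, and the Bloch momentum $k_0={2\over\epsilon}\sin{\theta\over 2}$ is the modulus of the discrete time derivative eigenvalue $p=i(e^{-i\theta}-1)/\epsilon$. A minimal sketch (the sample momentum and the values of $\epsilon$, $\theta$ are arbitrary illustration choices):

```python
import numpy as np

rng = np.random.default_rng(0)
k = rng.normal(size=3)            # a sample spatial momentum
k2 = k @ k

# Transverse projector appearing in the gluon propagator (5.19).
P = np.eye(3) - np.outer(k, k) / k2

assert np.allclose(P @ k, 0.0)    # transversality: k_i d_ij = 0
assert np.allclose(P @ P, P)      # idempotence of the projector

# Bloch momentum: the discrete derivative (f(n+1)-f(n))/eps acting on
# e^{-i n theta} has modulus |e^{-i theta}-1|/eps = (2/eps)|sin(theta/2)|.
eps, theta = 0.1, 0.7
p = 1j * (np.exp(-1j * theta) - 1.0) / eps
assert np.isclose(abs(p), (2.0 / eps) * abs(np.sin(theta / 2.0)))
```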
With these rules, we have
$$\hbox{ Fig.1a }={1\over 2}C_2\delta^{ll^\prime}g^2{4\over\epsilon^2}
\int_{-\pi}^\pi{d\theta\over 2\pi\epsilon}
\sin^2{\theta+\theta^\prime\over 2}I,\eqno(5.24)$$ where
$$I=\int{d^3\vec p\over (2\pi)^3}d_{ij}(\theta|\vec p)
d_{ij}(\theta^\prime|\vec p^\prime)$$
$$=3!\int{d^3\vec p\over (2\pi)^3}\int_0^1dx\int_0^1dy\int_0^1dz
z(1-z){\vec p^2(\vec p+\vec k)^2+[\vec p\cdot(\vec p+\vec k)]^2
\over[(\vec p+\vec kz)^2+p_0^2x(1-z)+p_0^{\prime2}yz+\vec k^2z(1-z)]^4}
\eqno(5.25)$$ with $p_0={2\over\epsilon}\sin{\theta\over 2}$,
$p_0^\prime={2\over\epsilon}\sin{\theta^\prime\over 2}$ and
$\phi=\theta^\prime-\theta$, $\vec k=\vec p^\prime-\vec p$ the
external energy and momentum. Similarly,
$$\hbox{ Fig. 1b }=C_2\delta^{ll^\prime}g^2\int_{-\pi}^\pi
{d\theta\over 2\pi\epsilon}\cos^2{\theta\over 2}II,\eqno(5.26)$$ where
$$II=\int{d^3\vec p\over (2\pi)^3}(p+2k)_i(p+2k)_j
d_{ij}(\theta|\vec p)d_0(\vec p^\prime)$$
$$=8\int{d^3\vec p\over (2\pi)^3}\int_0^1dx\int_0^1dz(1-z)
{\vec p^2\vec k^2-(\vec p\cdot\vec k)^2\over [(\vec p+\vec kz)^2
+p_0^2x(1-z)+\vec k^2z(1-z)]^3}\eqno(5.27)$$ and
$$\hbox{ Fig. 1c }=-C_2\delta^{ll^\prime}g^2\int_{-\pi}^\pi
{d\theta\over 2\pi\epsilon}\cos^2{\theta\over 2}
\int{d^3\vec p\over (2\pi)^3}d_{jj}(\theta|\vec p)$$
$$=-2C_2\delta^{ll^\prime}g^2\int_{-\pi}^\pi
{d\theta\over 2\pi\epsilon}\cos^2{\theta\over 2}
\int{d^3\vec p\over (2\pi)^3}{1\over p_0^2+\vec p^2}\eqno(5.28)$$
and $$\hbox{Contribution of $R_n$}=-C_2\delta^{ll^\prime}{g^2\over 12}
\int_{-\pi}^\pi{d\theta\over 2\pi\epsilon}\int{d^3\vec p\over(2\pi)^3}
\int_0^1dx{(24\sin^2{\theta\over 2}+\vec k^2\epsilon^2)\vec p^2+p_0^2
\vec k^2\epsilon^2\over (\vec p^2+p_0^2x)^2}.\eqno(5.29)$$
We shall not expose the details of the evaluation of (5.25)-(5.29), but
only remark on a few key points which lead to the final answer.
First of all, the $\vec p$-integrations in (5.27)-(5.29)
are all linearly divergent, which upon the replacement
$$\int {d^3\vec p\over (2\pi)^3}\to \int {d^D\vec p\over (2\pi)^D}
,\eqno(5.30)$$ give rise to Gamma functions with arguments of
the form ${D\over 2}+\hbox{integer}$, and therefore yield finite limits as
$D\to 3$. After the $\vec p$-integration, the integrand of the $\theta$-
integration has the dimension of a momentum. Because of the $\epsilon$
in the denominator of the Bloch momentum $p_0$, the leading divergence
as $\epsilon\to 0$ is of the order of $\epsilon^{-2}$, which reflects the
usual quadratic divergence. At $\vec k=0$, we obtain that
$$\hbox{(5.24)}={2\over 3\pi^2\epsilon^2}C_2g^2\delta^{ll^\prime},
\eqno(5.31)$$ $$\hbox{(5.26)}=0,\eqno(5.32)$$
$$\hbox{(5.28)}={2\over 3\pi^2\epsilon^2}C_2g^2\delta^{ll^\prime},
\eqno(5.33)$$ and $$\hbox{(5.29)}={4\over 3\pi^2\epsilon^2}C_2g^2
\delta^{ll^\prime}.\eqno(5.34)$$ It follows from (5.18) that
$$\sigma(k_0,0)=0,\eqno(5.35)$$ which renders the net divergence
logarithmic. After some manipulations, we obtain that
$$\sigma(0,\vec k)=-{11\over 24\pi^2}C_2g^2\vec k^2\Big(\ln{1\over
k\epsilon}-{74\over 33}-{91\over 22}\ln2\Big)\eqno(5.36)$$ and the
one loop renormalized Coulomb propagator reads
$$d_0^\prime(\vec k)={Z\over \vec k^2}\eqno(5.37)$$ with
$$Z=1+{11\over 24\pi^2}C_2g^2\Big(\ln{1\over
k\epsilon}-{74\over 33}-{91\over 22}\ln2\Big),\eqno(5.38)$$
the divergent part of which coincides with the charge renormalization
[1] [2].
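The continuation (5.30) is used together with the standard Euclidean master formula $\int{d^D\vec p\over(2\pi)^D}(\vec p^2+\Delta)^{-n}={\Gamma(n-D/2)\over(4\pi)^{D/2}\Gamma(n)}\Delta^{D/2-n}$, whose Gamma function structure is the source of the finite $D\to 3$ limits quoted above. A quick numerical cross-check of the formula at $D=3$, $n=2$, $\Delta=1$ (the specific values are for illustration only):

```python
import math
import numpy as np

# Standard Euclidean master formula underlying the continuation (5.30):
# int d^D p/(2 pi)^D (p^2+Delta)^(-n)
#     = Gamma(n-D/2) / ((4 pi)^(D/2) Gamma(n)) * Delta^(D/2-n)
def master(D, n, Delta):
    return (math.gamma(n - D / 2.0)
            / ((4.0 * math.pi) ** (D / 2.0) * math.gamma(n))
            * Delta ** (D / 2.0 - n))

# Direct check at D = 3, n = 2, Delta = 1: the radial integral
# (4 pi/(2 pi)^3) int_0^inf p^2/(p^2+1)^2 dp, mapped to [0, pi/2] by
# p = tan(u), where the integrand becomes sin^2(u).
N = 200000
du = (math.pi / 2.0) / N
u = (np.arange(N) + 0.5) * du          # midpoint rule
radial = float(np.sum(np.sin(u) ** 2) * du)
direct = 4.0 * math.pi * radial / (2.0 * math.pi) ** 3

assert math.isclose(master(3, 2, 1.0), 1.0 / (8.0 * math.pi), rel_tol=1e-12)
assert math.isclose(direct, master(3, 2, 1.0), rel_tol=1e-6)
```

Both sides give $1/(8\pi)$; the same master formula evaluated near $D=3$ reproduces the finite limits claimed for (5.27)-(5.29).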
We end this section with two technical remarks:
1). Euclidean time is adopted for the above one loop calculation.
This turns out to be necessary for the logarithmically divergent
diagrams with the integration order we followed. Consider a simple integral
with a Minkowski momentum $p=(p_0,\vec p)$
$$I=\int {d^4p\over (2\pi)^4}{i\over (p^2+m^2)^2}\eqno(5.39)$$ with
$p^2=\vec p^2-p_0^2$. If the Wick rotation is performed before the
spatial integration, the infinite arc, $p_0=Re^{i\phi}$ with $R\to\infty$,
$0<\phi<{\pi\over 2}$ and $\pi<\phi<{3\pi\over 2}$ will not contribute.
But if the spatial momentum is integrated first as we did, the Wick rotation
then will pick up a term from the infinite arc. As a result, the
renormalization constant will be complex unless we start with the Euclidean
definition of the diagram.
2). It may look puzzling that the very terms of $R_n$ which help to cancel
the quadratic divergence of the one loop diagrams of the Coulomb propagator
are the same terms which contribute to the Christ-Lee anomalous
vertices expected at two loop level. This paradox is tied to the
identification of $\epsilon$ with the ultraviolet cutoff.
If an independent ultraviolet cutoff is introduced
for the integration over $\vec p$ and the limit $\epsilon\to 0$ is taken
before sending the cutoff to infinity, the contribution of $R_n$ to the
one loop Coulomb propagator, (5.29), will vanish, as can be seen easily.
\section{VI. Concluding Remarks}
In this work, we have carefully traced all the subtleties of gauge
fixing and variable transformation in a path integral of a gauge
model, without resorting to the operator
formalism. For a soluble quantum mechanical model in $\lambda$-
gauge and for a nonabelian gauge field in Coulomb gauge, the well
known operator ordering terms are reproduced exactly. In the presence
of fermionic degrees of freedom, an additional operator ordering term
is discovered. Because of the intrinsic nonlinearity of a BRST
transformation, the operator ordering terms are found essential in restoring
the simple form of the identity associated with this transformation. In the
field theory case, a manifest BRST invariant and unitary regularization
scheme is proposed and it does give rise to the correct $\beta$-function
at one loop order.
Though this work does not attempt to prove the renormalizability
of a nonabelian gauge theory in Coulomb gauge, I do not see any problems
in applying the discrete time regularization scheme to higher orders.
The only drawback is that the $\epsilon$-expansion of the temporal
lattice Lagrangian can no longer be truncated since the ultraviolet
cutoff is identified with $\epsilon$.
Alternatively, one may try to renormalize the theory with a Christ-Lee
type of path integral. Then one has to face the energy divergence and the
ambiguities associated with it. The coupling with the ultraviolet
divergence makes it difficult to organize the cancellation in higher orders.
Several scenarios have been proposed but none of them [12] goes smoothly
beyond two loops. On the other hand, the energy divergence is an
artifact of the path integral, since it is not there with canonical
perturbation methods. In principle, one should be able to organize the
energy integral before integrating over spatial momenta and to reproduce
the canonical perturbation series. But then the advantages of Feynman
diagrams are lost and the path integral seems unnecessary.
What we need for the renormalization with a Christ-Lee type of path
integral is an unambiguous scheme which regularizes the spatial loop
integrals. The only feasible BRST invariant scheme is a spatial lattice.
At this point, it is instructive to draw some connections of the path
integral in the continuum with Wilson's lattice formulation [13]. In the
absence of quarks, the partition function of Wilson's formulation on
a four dimensional rectangular lattice reads
$$Z=\int\prod_{<ij>}dU_{ij}e^{-{1\over g^2}S_W[U]},\eqno(6.1)$$
where $U_{ij}$ is a gauge group matrix on a nearest neighbor link.
The simplest choice of the action is
$$S_W[U]={a_s\over a_t}\sum_{P_t}{\rm{tr}}\Big(1-{1\over d}
{\rm{Re}}U_{P_t}\Big)+{a_t\over a_s}\sum_{P_s}{\rm{tr}}\Big(1-{1\over d}
{\rm{Re}}U_{P_s}\Big),\eqno(6.2)$$ where $U_P=U_{ij}U_{jk}
U_{kl}U_{li}$ for a plaquette $P(ijkl)$, with the subscript $s$ labeling
the space-like plaquettes and $t$ the time-like ones, $a_s$ and $a_t$
denoting the spatial and temporal lattice spacings and $d$ the dimension
of the $U$'s. The lattice Coulomb gauge
condition can be imposed as in Ref. [14]. The discrete time regularization
scheme presented in Section V corresponds to the limit
$a_t\to 0$ taken after $a_s\to 0$, while any regularization corresponding
to the Christ-Lee path integral follows from the limit $a_s\to 0$ taken after
$a_t\to 0$.
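The plaquette variable entering (6.2) can be illustrated with random SU(2) links; the quaternion parametrization below and the conventional plaquette density $1-{1\over d}{\rm Re\,tr}\,U_P$ (which differs from the form written in (6.2) only by an additive constant per plaquette) are illustration choices:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_su2():
    # SU(2) matrix from a unit quaternion: U = a0*I + i*(a1,a2,a3).sigma.
    a = rng.normal(size=4)
    a /= np.linalg.norm(a)
    return np.array([[a[0] + 1j * a[3],  a[2] + 1j * a[1]],
                     [-a[2] + 1j * a[1], a[0] - 1j * a[3]]])

# Four links around a plaquette P(ijkl); the plaquette variable of (6.2).
U_ij, U_jk, U_kl, U_li = (random_su2() for _ in range(4))
U_P = U_ij @ U_jk @ U_kl @ U_li

d = 2
assert np.allclose(U_P @ U_P.conj().T, np.eye(d))   # unitarity of U_P
assert np.isclose(np.linalg.det(U_P), 1.0)          # det U_P = 1

# Conventional Wilson plaquette density 1 - (1/d) Re tr U_P: real,
# bounded in [0, 2] for SU(2), and vanishing at U_P = 1.
s = 1.0 - np.trace(U_P).real / d
assert 0.0 <= s <= 2.0
```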
\section{Acknowledgments}
The author would like to thank Professor D. Zwanziger for illuminating
discussions. He is also obliged to Professor N. N. Khuri and Dr. James Liu
for their reading of the manuscript. This work is supported in part by U. S.
Department of Energy under Grant DE-FG02-91ER40651, Task B and by National
Science Council of ROC under Grant NSC-CTS-981003.
The author would like to dedicate this work to the late Ms. Irene Tramm. Her
selfless help to his career is highly appreciated.
\section{Appendix A}
To estimate the typical contributions of $\dot x$, $\dot y$, $\dot z$
and $\xi$ to the path integral in the limit $\epsilon\to 0$,
we may neglect the interaction term and consider the path integral with
the free Lagrangian only,
$$L_0(n)={1\over 2}[\dot x_n^2+\dot y_n^2+(\dot z_n-\xi_n)^2
-(\omega^2-i0^+)(x_n^2+y_n^2)-{1\over a}
(z_n-\lambda x_n-\kappa\dot\xi_n)^2],
\eqno(A.1)$$ where the total time interval $T=N\epsilon\to\infty$ with
$N$ the number of time slices within the interval $T$, and the
infinitesimal imaginary part of $\omega$ provides a convergence factor for the
integral. The last term of (A.1) is the gauge fixing term (2.49) with the
gauge parameter $a$. Defining the path integral average of an arbitrary
function of $x_n$, $y_n$, $z_n$ and $\xi_n$ by
$$<F>={\int\prod_ndx_ndy_ndz_nd\xi_ne^{i\epsilon\sum_nL_0(n)}F\over
\int\prod_ndx_ndy_ndz_nd\xi_ne^{i\epsilon\sum_nL_0(n)}},\eqno(A.2)$$
we obtain the following expressions for various propagators:
$$<x_nx_m>=<y_ny_m>={1\over\epsilon}\int_{-\pi}^\pi {d\theta\over 2\pi}
{ie^{-i(n-m)\theta}\over p^*p-\omega^2+i0^+},\eqno(A.3)$$
$$<z_nz_m>={1\over\epsilon}\int_{-\pi}^\pi{d\theta\over 2\pi}
i{\lambda^2-(a-\kappa^2p^*p)(p^*p-\omega^2)\over (1+\kappa p^2)
(1+\kappa p^{*2})(p^*p-\omega^2+i0^+)}e^{-i(n-m)\theta}\eqno(A.4)$$
$$<\xi_n\xi_m>={1\over\epsilon}\int_{-\pi}^\pi{d\theta\over 2\pi}
i{\lambda^2p^*p+(1-ap^*p)(p^*p-\omega^2)\over (1+\kappa p^2)
(1+\kappa p^{*2})(p^*p-\omega^2+i0^+)}e^{-i(n-m)\theta}\eqno(A.5)$$
$$<x_n\xi_m>=-{1\over\epsilon}\int_{-\pi}^\pi{d\theta\over 2\pi}
{\lambda p^*e^{-i(n-m)\theta}\over (1+\kappa p^{*2})(p^*p-\omega^2+i0^+)},
\eqno(A.6)$$ $$<x_nz_m>={1\over\epsilon}\int_{-\pi}^\pi{d\theta\over 2\pi}
{\lambda e^{-i(n-m)\theta}\over (1+\kappa p^{*2})(p^*p-\omega^2+i0^+)}
\eqno(A.7)$$ and $$<z_n\xi_m>=-{1\over\epsilon}\int_{-\pi}^\pi
{d\theta\over 2\pi}i{\lambda^2p^*
-(\kappa p+ap^*)(p^*p-\omega^2)\over (1+\kappa p^2)
(1+\kappa p^{*2})(p^*p-\omega^2+i0^+)}e^{-i(n-m)\theta},\eqno(A.8)$$
where $p=i{e^{-i\theta}-1\over\epsilon}$. According to the definition
of $\dot x_n$ and $\dot y_n$, we have
$$<\dot x_n\dot x_m>=<\dot y_n\dot y_m>={1\over\epsilon}
\int_{-\pi}^\pi {d\theta\over 2\pi}{ip^*p
e^{-i(n-m)\theta}\over p^*p-\omega^2+i0^+}.\eqno(A.9)$$
The squares of the typical magnitude of $x_n$, $y_n$, $z_n$, $\dot x_n$,
$\dot y_n$, $\dot z_n$ and $\xi_n$ inside the path integral in the limit
$\epsilon\to 0$ are of the same order as the expectation value of
their squares, i.e. the propagators (A.3)-(A.9) at $n=m$. It follows
from (A.3) and (A.9) that $$<x_n^2>=<y_n^2>={1\over 2\omega}
+O(\epsilon^2),\eqno(A.10)
$$ but $$<\dot x_n^2>=<\dot y_n^2>={i\over\epsilon}+{\omega\over 2}
+O(\epsilon^2)\eqno(A.11)$$ for arbitrary $\lambda$, $\kappa$ and
$a$. On the other hand, the $\epsilon\to 0$ limit of $<z_n^2>$,
$<\dot z_n^2>$ and $<\xi_n^2>$ are very delicate and we consider the
following situations.
1) Neither $\kappa$ nor $a$ vanishes. It
follows from (A.4) and (A.5) that
$$<z_n^2>={\lambda^2\over 2\omega(1+\kappa\omega^2)^2}
+{i\over\sqrt{\kappa}}\Big[\kappa-a-{\lambda^2\kappa\over
1+\kappa\omega^2}-{2\lambda^2\kappa\over(1+\kappa\omega^2)^2}\Big]
+O(\epsilon^2),\eqno(A.12)$$
$$<\xi_n^2>={\lambda^2\omega\over2(1+\kappa\omega^2)^2}
+{i\over4\kappa\sqrt{\kappa}}\Big[\kappa-a-{\lambda^2\kappa\over
1+\kappa\omega^2}+{2\lambda^2\kappa\over(1+\kappa\omega^2)^2}
\Big]+O(\epsilon^2)\eqno(A.13)$$ and
$$<\dot z_n^2>={i\over\epsilon}+\hbox{finite terms}.\eqno(A.14)$$
The small $\epsilon$ expansion of the lattice Lagrangian (2.27) is trivial
with such a gauge fixing. So is the case when $x_n$ and $z_n$ in the last term
of (A.1) are replaced by $\bar x_n$ and $\bar z_n$.
2) $\kappa=0$ and $a\to 0$ before $\epsilon\to 0$.
This is the $\lambda$-gauge in the text. It is easy to show, using
(A.3), (A.4) and (A.5) that
$$<z_n^2>=\lambda^2<x_n^2>={\lambda^2\over 2\omega},\eqno(A.15)$$
$$<\dot z_n^2>=\lambda^2<\dot x_n^2>={i\lambda^2\over\epsilon}+
\hbox{finite terms}\eqno(A.16)$$ and
$$<\xi_n^2>={i(1+\lambda^2)\over\epsilon}+\hbox{finite terms}.\eqno(A.17)$$
3) $\kappa=0$ but $a\neq 0$. This corresponds to the
``smeared $\lambda$-gauge''. We obtain from (A.4) and (A.5) that
$$<z_n^2>=-i{a\over\epsilon}+\hbox{finite terms},\eqno(A.18)$$
$$<\dot z_n^2>=-2i{a\over\epsilon^2}+i{\lambda^2\over\epsilon}
+\hbox{finite terms}\eqno(A.19)$$ and
$$<\xi_n^2>=-2i{a\over\epsilon^3}+i{1+\lambda^2\over\epsilon}
+\hbox{finite terms}.\eqno(A.20)$$ The equations (A.12)-(A.20)
give the announced estimates in Section II.
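The continuum limit (A.10) can be cross-checked numerically from the Euclidean form of the equal-time propagator (A.3), for which the lattice integral evaluates to $1/(\omega\sqrt{4+\epsilon^2\omega^2})\to 1/(2\omega)$; a minimal sketch (the Euclidean continuation and the sample values of $\epsilon$ and $\omega$ are illustration choices):

```python
import numpy as np

def x2_lattice(eps, omega, N=200000):
    # Euclidean form of the equal-time propagator (A.3):
    # <x_n^2> = (1/eps) int_{-pi}^{pi} dtheta/(2 pi) 1/(p* p + omega^2),
    # with p* p = (4/eps^2) sin^2(theta/2) from p = i(e^{-i theta}-1)/eps.
    dtheta = 2.0 * np.pi / N
    theta = -np.pi + (np.arange(N) + 0.5) * dtheta      # midpoint rule
    integrand = 1.0 / ((4.0 / eps**2) * np.sin(theta / 2.0) ** 2 + omega**2)
    return float(np.sum(integrand)) * dtheta / (2.0 * np.pi * eps)

omega = 1.3
for eps in (0.1, 0.01):
    # Exact lattice value 1/(omega sqrt(4 + eps^2 omega^2)) -> 1/(2 omega).
    exact = 1.0 / (omega * np.sqrt(4.0 + (eps * omega) ** 2))
    assert np.isclose(x2_lattice(eps, omega), exact, rtol=1e-6)
    assert np.isclose(exact, 1.0 / (2.0 * omega), rtol=eps)
```

The same quadrature applied to the velocity propagator (A.9) exhibits the $i/\epsilon$ growth of (A.11) instead of a finite limit.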
\section{Appendix B}
The Hamiltonian of the soluble model in the $\lambda$-gauge is given by [5]
$$H={1\over 2{\cal J}}\left(\matrix{p_x&p_y}\right){\cal J}\left(
\matrix{{\cal M}_{xx}^{-1}&{\cal M}_{xy}^{-1}\cr{\cal M}_{yx}^{-1}
&{\cal M}_{yy}^{-1}}\right)\left(\matrix{p_x\cr p_y}\right)
+U(x^2+y^2)\eqno(B.1)$$ after solving the Gauss law constraint, where
$${\cal M}_{xx}^{-1}={\cal J}^{-2}(y^2+{1\over g^2}),\eqno(B.2)$$
$${\cal M}_{xy}^{-1}={\cal M}_{yx}^{-1}={\cal J}^{-2}x({\lambda\over g}
-y),\eqno(B.3)$$
$${\cal M}_{yy}^{-1}={\cal J}^{-2}\Big[\Big(\lambda y+{1\over g}\Big)^2
+x^2(\lambda^2+1)\Big]\eqno(B.4)$$ and
$${\cal J}={1\over g}+\lambda y.\eqno(B.5)$$
It was pointed out that the Hamiltonian (B.1) commutes with the operator
$$K={\cal J}^{-1}(xp_y-yp_x),\eqno(B.6)$$ i.e. $[H,K]=0$ [5]. With ${\cal U}
=e^{i\varepsilon K}$, we have
$$x_\varepsilon\equiv{\cal U}x{\cal U}^{-1}=x-\varepsilon{\cal J}^{-1}y
\eqno(B.7)$$ and
$$y_\varepsilon\equiv{\cal U}y{\cal U}^{-1}=y+\varepsilon{\cal J}^{-1}x
\eqno(B.8)$$ for infinitesimal $\varepsilon$. Adding the source term
$$h(t)=\kappa(t)K+J_x(t)x(t)+J_y(t)y(t)\eqno(B.9)$$ to the
Hamiltonian (B.1), the Schroedinger equation of the state is given
by $$i{\partial\over\partial t}|t>=h(t)|t>,\eqno(B.10)$$ where the
operators follow the time development generated by $H$, e.g.
$$x(t)=e^{iHt}x(0)e^{-iHt}\eqno(B.11)$$ and
$$y(t)=e^{iHt}y(0)e^{-iHt}.\eqno(B.12)$$ The $c$-number sources $\kappa(t)$,
$J_x(t)$ and $J_y(t)$ are adiabatically switched on in the remote past and
are switched off in the remote future. It can be shown that
$$\Big({\cal J}^{1\over 2}K{\cal J}^{-{1\over 2}}\Big)_W=
{\cal J}^{1\over 2}K{\cal J}^{-{1\over 2}},$$
with the subscript $W$ standing for the Weyl ordering.
The general solution of (B.10) reads $$|t>=U(t,t_0)|t_0>\eqno(B.13)$$ with
$$U(t,t_0)=T\exp\Big(-i\int_{t_0}^tdt^\prime h(t^\prime)\Big).\eqno(B.14)$$
Defining the generating functional of the connected Green's functions,
${\cal W}(\kappa,J)$, by
$$e^{i{\cal W}(\kappa,J)}=<|U(\infty,-\infty)|>\eqno(B.15)$$ with $|>$ the
ground state of the Hamiltonian (B.1), the previously defined
$W(J,\zeta,u,\eta,\bar\eta)$ of (3.12) at $\zeta=u=\eta=\bar\eta=0$
corresponds to ${\cal W}(0,J)$. Effecting an infinitesimal
transformation (B.7) and (B.8) with $\varepsilon(t)\to 0$ at
$t\to\pm\infty$, we have
$$i{\partial\over\partial t}|t>_\varepsilon=h_\varepsilon(t)|t>_\varepsilon,
\eqno(B.16)$$ where
$$|t>_\varepsilon={\cal U}(t)|t>\eqno(B.17)$$ and
$$h_\varepsilon(t)=(\kappa-{\partial\varepsilon\over\partial t})K
+J_x(t)x_\varepsilon(t)+J_y(t)y_\varepsilon(t).\eqno(B.18)$$ Consequently,
$$|t>_\varepsilon=U_\varepsilon(t,t_0)|t_0>_\varepsilon\eqno(B.19)$$ with
$$U_\varepsilon(t,t_0)=T\exp\Big(-i\int_{t_0}^tdt^\prime
h_\varepsilon(t^\prime)
\Big).\eqno(B.20)$$ The invariance of the Hamiltonian $H$ and its
ground state under the transformation implies that
$$<|U_\varepsilon(\infty,-\infty)-U(\infty,-\infty)|>=0,\eqno(B.21)$$
which, to linear order in $\varepsilon$, gives
$$\int_{-\infty}^\infty dt\biggl\{{\partial\varepsilon\over\partial t}
<K>_t+\varepsilon(t)\Big[J_x(t)<{gy\over 1+\lambda gy}>_t-J_y(t)
<{gx\over 1+\lambda gy}>_t\Big]\biggr\}=0,\eqno(B.22)$$ where the
canonical average $<...>_t$ is defined as $$<O>_t
={<|T[U(\infty,-\infty)O(t)]|>\over<|U(\infty,-\infty)|>}.\eqno(B.23)$$
Converting $<|U(\infty,-\infty)|>$ into the path integral and denoting
the path integral average by $<...>$ without the subscript $t$, we
have $$<{gy\over 1+\lambda gy}>_t=<{gy(t)\over 1+\lambda gy(t)}>,
\eqno(B.24)$$ $$<{gx\over 1+\lambda gy}>_t=<{gx(t)\over 1+\lambda gy(t)}>
\eqno(B.25)$$ and $$<K>_t|_{\kappa=0}=-g
<{\dot x(t)y(t)-x(t)\dot y(t)+\lambda g[x^2(t)+y^2(t)]\dot x(t)
\over 1+g^2[x^2(t)+y^2(t)]}>\vert_{\kappa=0}.\eqno(B.26)$$
The last equality requires some explanation. In the canonical form,
we may write $$<K>_t|_{\kappa=0}=-{\delta\over\delta\kappa(t)}
{\cal W}(\kappa,J)|_{\kappa=0}.\eqno(B.27)$$ On the other hand, the
operator $K$ contains the canonical momenta. Performing a Legendre
transformation of the Hamiltonian $H+h$ [15], the term of the corresponding
Lagrangian which is linear in $\kappa$ reads $$\kappa g{\dot xy-x\dot y+\lambda
g(x^2+y^2)\dot x\over 1+g^2(x^2+y^2)}\eqno(B.28)$$ and the equality (B.26)
follows from (B.27) with the path integral representation of
${\cal W}(\kappa,J)$. Putting back the $\xi$ and $z$, we find
$$<K>_t|_{\kappa=0}=<\xi(t)-\dot z(t)>\eqno(B.29)$$ and the identity (B.22)
becomes $$\int_{-\infty}^\infty dt\biggl\{{\partial\varepsilon\over\partial t}
<\xi(t)-\dot z(t)>+\varepsilon(t)\Big[J_x(t)<{gy(t)\over 1+\lambda gy(t)}>
-J_y(t)<{gx(t)\over 1+\lambda gy(t)}>\Big]\biggr\}=0.\eqno(B.30)$$
Since $\varepsilon(t)$ is arbitrary, the integral sign may be
removed after a partial integration and the Slavnov-Taylor identity
(3.22) at $u=\zeta=0$ emerges. The relation
$$<b(t)>={d\over dt}<\xi(t)-\dot z(t)>,\eqno(B.31)$$
which can be checked explicitly, is utilized in the final step.
\section{Appendix C}
To make the paper self-contained, we shall go through the path integral
of fermionic degrees of freedom, following the coherent field
treatment of Ref. [7].
Consider a pair of fermion annihilation and creation operators, $a$,
and $a^\dagger$, with anticommutator
$$\{a, a^\dagger\}=1.\eqno(C.1)$$ The combination $a^\dagger aa^\dagger a$
is not zero. But if we replace $a$ and $a^\dagger$ by a pair of
Grassmann numbers $z$ and $\bar z$, the combination $\bar zz\bar zz$
is always zero. Therefore there are ordering ambiguities when transforming
the canonical formulation for fermionic operators to the path integral.
The question is which order goes through to the path integral
simply through the above replacements $a\to z$ and $a^\dagger\to\bar z$.
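The contrast between the operator product $a^\dagger aa^\dagger a$ and its Grassmann counterpart $\bar zz\bar zz$ can be made explicit in the smallest possible setting, with the fermion mode represented by $2\times 2$ matrices and a pair of anticommuting generators modeled by two such modes (the matrix representation is an illustration choice):

```python
import numpy as np

# Matrix representation of one fermion mode: a|1> = |0>, a^dagger|0> = |1>.
a = np.array([[0.0, 1.0],
              [0.0, 0.0]])
ad = a.T  # a^dagger

# Canonical anticommutator (C.1): {a, a^dagger} = 1.
assert np.allclose(a @ ad + ad @ a, np.eye(2))

# a^dagger a a^dagger a is NOT zero: (a^dagger a)^2 = a^dagger a.
num = ad @ a
assert np.allclose(num @ num, num)
assert not np.allclose(num, 0.0)

# Two independent Grassmann generators modeled by two fermionic modes
# (Jordan-Wigner construction): z and zbar anticommute and are nilpotent,
# so zbar z zbar z = 0, unlike the operator product above.
I2 = np.eye(2)
sz = np.diag([1.0, -1.0])               # Jordan-Wigner string
z = np.kron(a, I2)                      # first mode
zbar = np.kron(sz, a.T)                 # conjugate built from the second mode
assert np.allclose(z @ zbar + zbar @ z, 0.0)
assert np.allclose(zbar @ z @ zbar @ z, 0.0)
```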
We shall discuss the systematics in the following:
For a system of $M$ fermionic degrees of freedom, represented by the
annihilation and creation operators $a_j$ and $a_j^\dagger$ with
$$\{a_i, a_j\}=0\eqno(C.2)$$ and $$\{a_i, a_j^\dagger\}=\delta_{ij}
,\eqno(C.3)$$ we introduce two sets of independent Grassmann
numbers, $z_1$, $z_2$,..., $z_M$ and $\bar z_1$, $\bar z_2$,...,
$\bar z_M$. We also specify that they anticommute with the $a$'s,
$a^\dagger$'s and commute with the ket or bra of the ground state in
the Hilbert space. Furthermore, the following integration rule is imposed
$$\int dz_j=\int d\bar z_j=0\eqno(C.4)$$ and
$$\int z_idz_j=\int d\bar z_i\bar z_j=\delta_{ij}.\eqno(C.5)$$
We define a coherent state by
$$|z_1,z_2,...,z_M>\equiv e^{\sum_ja_j^\dagger z_j}|0>\eqno(C.6)$$
and its conjugate by
$$<\bar z_1,\bar z_2,...,\bar z_M|\equiv <0|e^{\sum_j\bar z_ja_j}\eqno(C.7)$$
with $|0>$ the ground state. It follows that
$$a_j|z_1,z_2,...,z_M>=z_j|z_1,z_2,...,z_M>,\eqno(C.8)$$
$$<\bar z_1,\bar z_2,...,\bar z_M|a_j^\dagger=<\bar z_1,\bar z_2,...
\bar z_M|\bar z_j\eqno(C.9)$$ and
$$<\bar z_1,\bar z_2,...,\bar z_M|z_1,z_2,...,z_M>=e^{\sum_j\bar z_jz_j}.
\eqno(C.10)$$ Furthermore, we have the completeness relation
$$\int|z_1,z_2,...,z_M>\prod_jdz_jd\bar z_j e^{-\sum_j\bar z_jz_j}<\bar z_1,
\bar z_2,...,\bar z_M|=1.\eqno(C.11)$$ Let the Hamiltonian of the
system be $$H(a^\dagger,a)=\sum_{ij}\omega_{ij}a_i^\dagger a_j+{1\over 2}
\sum_{ii^\prime,jj^\prime}v_{ii^\prime,j^\prime j}
a_i^\dagger a_{i^\prime}^\dagger a_{j^\prime}a_j+...$$
$$+{1\over M!}\sum_{i_1,...,i_M;j_M,...,j_1}v_{i_1...i_M,j_M...j_1}
a_{i_1}^\dagger...a_{i_M}^\dagger a_{j_M}...a_{j_1},\eqno(C.12)$$
where the normal ordering with respect to the state $|0>$ is the
crucial point. It follows from (C.8) and (C.9) that
$$<\bar z_1,...,\bar z_M|H|z_1,...,z_M>=H(\bar z,z)$$ $$=
\sum_{ij}\omega_{ij}\bar z_iz_j+{1\over 2}
\sum_{ii^\prime,jj^\prime}v_{ii^\prime,j^\prime j}
\bar z_i\bar z_{i^\prime}z_{j^\prime}z_j+...$$ $$+{1\over M!}
\sum_{i_1,...,i_M;j_M,...,j_1}v_{i_1...i_M,j_M...j_1}\bar z_{i_1}...\bar z_{i_M}z_{j_M}
...z_{j_1}\eqno(C.13)$$ and therefore
$$<\bar z_1,...,\bar z_M|e^{-i\epsilon H}|z_1,...,z_M>
=e^{\sum_j\bar z_jz_j}[1-i\epsilon H(\bar z,z)+O(\epsilon^2)].
\eqno(C.14)$$
With the aid of the completeness relation (C.11), we end up with the
following path integral representation of the fermionic system
$$<\bar z_1^\prime,...,\bar z_M^\prime|e^{-itH}|z_1,...,z_M>
=\int[dz]_N\prod_{n=1}^{N-1}[d\bar zdz]_n[d\bar z]_0e^{i\epsilon\sum_n
L(n)}\eqno(C.15)$$ where $t=N\epsilon$ and $\epsilon\to 0$ at fixed
$t$, and we have made the abbreviation
$$[d\bar zdz]_n=\prod_jd\bar z_j(n)dz_j(n),\eqno(C.16)$$
$$[dz]_N=\prod_jdz_j(N)\eqno(C.17)$$ and $$[d\bar z]_0=\prod_jd\bar z_j(0).
\eqno(C.18)$$ The Lagrangian $L(n)$ reads $$L(n)=i\sum_j\bar z_j(n)\dot z_j(n)
-H(\bar z(n),z(n))\eqno(C.19)$$ with $\dot z_j(n)={1\over\epsilon}
[z_j(n+1)-z_j(n)]$.
As with bosonic operators, the ordering ambiguity here is reflected
in the difference between the Dyson-Wick contraction and the path
integral contraction at equal time. Consider a free system whose
Hamiltonian is given by (C.12) with $\omega_{ij}=\omega\delta_{ij}$ and all
$v$'s vanishing. The Dyson-Wick contraction gives
$$\lim_{t\to 0^+}<0|T(a(t)a^\dagger(0))|0>=1\eqno(C.20)$$ while
$$\lim_{t\to 0^-}<0|T(a(t)a^\dagger(0))|0>=0.\eqno(C.21)$$ The path
integral, on the other hand, gives rise to an unambiguous result
at $t=0$ since
$$S_{ij}\equiv{\int [dz]_N\prod_{n=1}^{N-1}[d\bar zdz]_n
[d\bar z]_0z_i(m)\bar z_j(m)e^{i\epsilon\sum_nL(n)}\over
\int [dz]_N\prod_{n=1}^{N-1}[d\bar zdz]_n
[d\bar z]_0e^{i\epsilon\sum_nL(n)}}$$ $$
=\delta_{ij}{1\over\epsilon}\int_{-\pi}^\pi{d\theta\over 2\pi}{i\over
i{e^{-i\theta}-1\over\epsilon}-\omega+i0^+}=0.\eqno(C.22)$$
To illustrate the caution which is needed in transforming the
canonical formulation to a path integral, we consider a soluble
gauge model whose Lagrangian is given by
$$L={1\over 2}(\dot z-\xi)^2+i\psi^\dagger(\dot\psi-ig\xi)
-m\psi^\dagger\psi\eqno(C.23)$$ with $\psi$, $\psi^\dagger$ fermionic
and $z$, $\xi$ bosonic. The gauge transformation reads
$$z\to z^\prime=z+{\alpha\over g},\eqno(C.24)$$
$$\xi\to\xi^\prime=\xi+{\dot\alpha\over g}\eqno(C.25)$$
and $$\psi\to\psi^\prime=e^{i\alpha}\psi\eqno(C.26)$$ with $\alpha$ an
arbitrary function of time. In the time axial gauge where $\xi=0$,
the Hamiltonian corresponding to (C.23) is
$$H=-{1\over 2}{\partial^2\over \partial Z^2}+m\Psi^\dagger\Psi
\eqno(C.27)$$ and the Gauss law constraint is
$$\Big(-i{\partial\over\partial Z}-g\Psi^\dagger\Psi\Big)|>=0.
\eqno(C.28)$$ The constraint can be solved explicitly: the
physical spectrum consists of two states with $\Psi^\dagger\Psi
=0,1$, the corresponding eigenvalues of $H$ being $0$ and $m+{g^2\over 2}$.
Though trivial, we still follow the transformation of (C.27) and (C.28) to
the gauge where $z=0$, with the dynamical variable $\theta$ determined
by $Z+{\theta\over g}=0$ and $\psi=e^{i\theta}\Psi$. The Hamiltonian
(C.27) and the constraint (C.28) become
$$H=-{g^2\over 2}{\partial^2\over \partial\theta^2}+m\psi^\dagger\psi
\eqno(C.29)$$ and
$$\Big(-i{\partial\over\partial\theta}-\psi^\dagger\psi\Big)|>=0.\eqno(C.30)$$
Substituting the solution of (C.30) into (C.29), we obtain
$$H_{\rm{eff.}}=m\psi^\dagger\psi+{g^2\over 2}(\psi^\dagger\psi)^2.
\eqno(C.31)$$
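As a quick sanity check of the spectrum, one can diagonalize the effective Hamiltonian (C.31) on the two-state fermionic Fock space numerically; the values of $m$ and $g$ below are arbitrary illustrative choices:

```python
import numpy as np

m, g = 0.7, 1.3                      # illustrative (hypothetical) values

n = np.diag([0., 1.])                # number operator ψ†ψ in basis (|0>, |1>)
H_eff = m*n + 0.5*g**2 * (n @ n)     # eq. (C.31)

evals = np.sort(np.linalg.eigvalsh(H_eff))
assert np.allclose(evals, [0.0, m + 0.5*g**2])
print(evals)                         # -> [0, m + g²/2]
```

The two eigenvalues $0$ and $m+g^2/2$ agree with the physical spectrum obtained from the constrained system (C.27)-(C.28).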
Following the above recipe, we convert (C.31) into a path integral
$$<\psi|e^{-itH_{\rm{eff.}}}|>=\int\prod_ndz_nd\xi_nd\psi_n
d\bar\psi_n\delta(z_n)e^{i\epsilon\sum_nL_{\rm{eff.}}(n)}<\psi_0|>
,\eqno(C.32)$$ where
$$L_{\rm{eff.}}(n)={1\over 2}(\dot z_n-\xi_n)^2+i\bar\psi_n(\dot\psi_n
-ig\xi_n\psi_n)-m\bar\psi_n\psi_n-{g^2\over 2}\bar\psi_n\psi_n
\eqno(C.33)$$ where the last term comes from the normal ordering of the
four-fermion term of (C.31). The
integration over $\xi_n$ in (C.32) will not generate quartic terms
since the combination $(\bar\psi_n\psi_n)^2$ vanishes. Applying the
Feynman rules given by the path integral (C.32) at $t\to
\infty$, we have verified explicitly that the self-energy shift
due to the interaction vanishes at one-loop order,
in agreement with the result of canonical quantization.
Finally, we come to the nonabelian gauge field. The four-fermion
Coulomb interaction term of the Christ-Lee Hamiltonian in Coulomb
gauge reads
$$H_{\rm{Coul.}}={g^2\over 2}\int d^3\vec rd^3\vec r^\prime
\psi^\dagger(\vec r)T^l\psi(\vec r)(\vec r,l|G(-\nabla^2)G|
\vec r^\prime,l^\prime)\psi^\dagger(\vec r^\prime)T^{l^\prime}
\psi(\vec r^\prime)$$ $$={g^2\over 2}\int d^3\vec rd^3\vec r^\prime
:\psi^\dagger(\vec r)T^l\psi(\vec r)(\vec r,l|G(-\nabla^2)G|
\vec r^\prime,l^\prime)\psi^\dagger(\vec r^\prime)T^{l^\prime}
\psi(\vec r^\prime):$$ $$+{g^2\over 2}\int d^3\vec r
(\vec r,l|G(-\nabla^2)G|\vec r^\prime,l^\prime)\psi^\dagger(\vec r)
T^lT^{l^\prime}\psi(\vec r).\eqno(C.34)$$ The last term becomes
${\cal V}_3$ of (4.45).
The additional term stemming from the normal ordering of fermionic
operators begins to show up at the one-loop level, unlike its
bosonic counterpart. In the case of an abelian
gauge theory, the term (4.45) corresponds to the self-Coulomb
energy of a fermion and is not observable, but here, in the nonabelian
case, it carries the coupling to the gluon fields and cannot be
ignored.
\bigskip
\references
\ref{1.}{J. Frenkel and J. C. Taylor, {\it Nucl. Phys.} {\bf B109}, 439
(1976).}
\ref{2.}{T. D. Lee, {\it Particle Physics and Introduction to Field Theory},
Harwood Academic, Chur, 1981.}
\ref{3.}{N. H. Christ and T. D. Lee, {\it Phys. Rev.} {\bf D22}, 939(1980).}
\ref{4.}{H. Cheng and E. C. Tsai, {\it Phys. Rev. Lett.} {\bf 57}, 511
(1986).}
\ref{5.}{R. Friedberg, T. D. Lee, Y. Pang and H. C. Ren, {\it Ann. Phys.}
{\bf 246}, 381 (1996).}
\ref{6.}{J.-L. Gervais and A. Jevicki, {\it Nucl. Phys.} {\bf B110}, 93 (1976).
See also footnote 9 of Ref. 3.}
\ref{7.}{B. Sakita, {\it Quantum Theory of Many-Variable Systems and Fields},
World Scientific, 1985.}
\ref{8.}{R. P. Feynman and A. R. Hibbs, {\it Quantum Mechanics and Path
Integrals}, McGraw-Hill, New York, 1965.}
\ref{9.}{K. Fujikawa, {\it Nucl. Phys.} {\bf B468}, 355 (1996).}
\ref{10.}{A. A. Slavnov, {\it Theor. Math. Phys.} {\bf 10}, 99 (1972);
J. C. Taylor, {\it Nucl. Phys.} {\bf B33}, 436 (1971).}
\ref{11.}{D. Zwanziger, {\it Nucl. Phys}. {\bf B518}, 237(1998);
L. Baulieu and D. Zwanziger, hep-th/9807024.}
\ref{12.}{H. Cheng and E. C. Tsai, {\it Chinese J. Phys.} {\bf 25}, 1 (1987);
P. Doust, {\it Ann. Phys.} {\bf 177}, 169 (1987); P. J. Doust and J. C.
Taylor, {\it Phys. Lett.} {\bf 197B}, 232 (1987).}
\ref{13.}{K. G. Wilson, {\it Phys. Rev.} {\bf D10}, 2445 (1974).}
\ref{14.}{D. Zwanziger, {\it Nucl. Phys.} {\bf B485}, 185(1997).}
\ref{15.}{T. D. Lee and C. N. Yang, {\it Phys. Rev.} {\bf 128}, 885(1962).}
\vfill\eject
\end
\section*{Introduction}
\parindent 1 cm
Theories of gravity with terms of any order in curvatures arise as part of the low energy
effective theories of the strings [1] and from the dynamics of quantum fields in a curved
spacetime background [2]. Theories of second order
(four-derivative theories in the following) have been studied more closely in the literature
because they are renormalizable [3] in four dimensions. This property spurred
Renormalization-Group studies [4-8], including attempts to get rid of the Weyl ghosts (also
known as ``poltergeists'') usually occurring in higher-derivative (HD) theories. On the practical side, HD
gravity greatly affects the effective potential and phase transitions of scalar fields in curved
space-time, with a wealth of astrophysical and cosmological properties [9]. These
phenomenological applications contributed to keep alive the theoretical interest, as illustrated
by the most comprehensive introductory study available [10].
Besides the renormalization properties [3], all that was known about the structure of the
(classical) theory was the particle content, as read out of the linear decomposition of the HD
propagator into pieces with second order (particle) poles. Some related aspects of the equations
of motion [11] were also elucidated. Definite theoretical progress came from a procedure, based
on the Legendre transformation, devised to recast four-derivative gravity as an equivalent theory
of second differential order [12]. A suitable diagonalization of the resulting theory was found
later [13] that yields the explicit independent fields for the degrees of freedom (DOF) involved
(usually including massive Weyl ghosts), thus completing the order-reducing procedure. One should notice that theories with terms of higher order in curvatures have the same DOF and propagators of the four-derivative one, since the higher terms do no contribute to the linearized theory.
An alternative order-reducing method has been proposed [14] that introduces an auxiliary field coupled to the Einstein tensor $G_{\mu\nu}$ (or to the scalar curvature $R$) and featuring a squared term. It can be shown that this method is equivalent to the one based on a Legendre transformation with respect to $G_{\mu\nu}$ (or to $R$), the auxiliary field being a redefinition of the ``momentum'' conjugate to them.
The studies [12-14] above were carried out for the (non quantizable)
Diff-invariant theory. An exploration of the method in the presence of gauge fixing terms has
been done in a simplified HD gauge field model [15]. In this paper we implement this procedure
for four-derivative gravity.
Amongst a crowd of positive and negative norm, gauge-independent and gauge ghost, massless and
massive states, the famous ``third ghosts'' arise. These subtle ghosts, missed in [4] and
properly accounted for in [5] and ever since, first emerged from a functional determinant in the
context of Path Integral quantization. Now they appear as the poltergeist partners of the usual
gauge ghosts.
In Section 1 we present our starting Diff-invariant four-derivative theory.
A very general gauge fixing term is then introduced that includes the most used ones as
particular cases. Being interested in the propagators and
in the ensuing DOF identification, we focus mainly on the free part of the Lagrangian. Self
interactions and interactions with other matter fields are
embodied in a source term and may be treated perturbatively. Then the relevant total gauge fixed
linearized theory is worked out. Section 2 deals with the order-reducing procedure that leads to
the diagonalized second-derivative equivalent theory.
The structure of the propagators and the identification of the DOF is then worked out in Section
3. The Faddeev--Popov (FP) compensating Lagrangian is studied in Section 4, where an order
reduction of the fermion sector is also carried out. Particular attention is paid here to the
identification of poles and to the striking cancellation mechanism of ghost loop contributions.
Related to this, a discussion of the BRST symmetries involved is also made. The above results are
summarized and further elucidated in the conclusion.
The definitions of the spin projectors and related formulae, the basis of local differential
operators, and the notations and conventions used throughout the paper have been collected in
Appendix I in order to render the work almost self-contained and more readable. Secondary
calculations regarding the conditions from locality on the gauge-fixing parameters and the order
reduction of the HD fermionic FP Lagrangian have been respectively moved to two Appendices.
\section{The Linearized Lagrangian}
We consider a general four-derivative theory of gravity
\begin{equation}
{\cal L}_{HD} = {\cal L}_{inv} + {\cal L}_{g}+{\cal L}_{m}\quad ,
\end{equation}
where ${\cal L}_{m}$ is the coupling with matter,
\begin{equation}
{\cal L}_{inv} = \sqrt g \ [ aR + bR^2 + c R_{\mu \nu} R^{\mu\nu} ]
\quad ,
\end{equation}
is the most general Diff-invariant gravitational Lagrangian of second order in curvatures
(the squared Riemann tensor has not been considered as long as a topologically trivial 4D
spacetime is assumed so that the Gauss-Bonnet identity holds), and
\begin{equation}
{\cal L}_{g} = \sqrt{g} \frac{1}{2}
\chi^{\mu}[h] {\cal G}_{\mu \nu}\chi^{\nu}[h]\, ,
\end{equation}
where
\begin{eqnarray}
\chi^{\mu}[h] &\equiv& A^{\mu} - \lambda D^{\mu} h \quad , \\
{\cal G}_{\mu \nu} &\equiv& \xi_1 D^{\rho}D_{\rho} g_{\mu\nu} -
\xi_2\frac{1}{2}D_{(\mu} D_{\nu)} + \xi_3 g_{\mu\nu}
+\xi_4 R_{\mu \nu} + \xi_5 R g_{\mu \nu} \quad ,
\end{eqnarray}
is a general gauge fixing term that depends on six gauge parameters and generally contains HD as
well as lower-derivative (LD) terms. One may obtain the gauge fixings used in [4]-[8] by
specializing these parameters.
\par
In order to study the propagating DOF of the theory we work the quadratic
terms in $ h_{\mu \nu} $ out of $ {\cal L}_{HD} $. Dropping
total derivatives, they read
\begin{equation}
{\cal L}_{HD} = {\cal L}^{(2)}_{inv} + {\cal L}^{(2)}_{g} +
{\cal L}_{s}
= \frac{1}{2} h^{\mu\nu}
(P_{\mu \nu,\rho\sigma}^{inv} + P_{\mu \nu,\rho\sigma}^{g} )
h^{\rho\sigma}
+{\cal L}_{s}\nonumber \quad .
\end{equation}
The source term ${\cal L}_{s}$ includes the interactions with matter
fields $\phi$ and all the self-interactions of $h_{\mu \nu}$ affected by the
Newton constant $G_{N}$.
Here and in the following the indices are raised and lowered by
$ \eta_{\mu \nu} $ and usually omitted for simplicity whenever no ambiguity
arises.
The differential operator kernel for the diff-invariant part is
\begin{equation}
P^{inv} = a \Box \left[ \frac{1}{2} P^{(2)} - P^{(S)} \right]
+ 6b \Box^2 P^{(S)} + c \Box^2 \left[ \frac{1}{2} P^{(2)} + 2P^{(S)} \right] \quad .
\end{equation}
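The spin-projector algebra used in (7) and throughout can be verified numerically, assuming the standard Barnes-Rivers definitions of $P^{(2)}$, $P^{(1)}$, $P^{(S)}$ and $P^{(W)}$ in terms of the transverse and longitudinal projectors $\theta_{\mu\nu}$ and $\omega_{\mu\nu}$ (the conventions collected in Appendix I are assumed here):

```python
import numpy as np

eta = np.diag([1., -1., -1., -1.])
k = np.array([2., 0., 0., 0.])        # a timelike test momentum
k_lo = eta @ k
k2 = k @ k_lo

omega = np.outer(k_lo, k_lo) / k2     # longitudinal projector ω_{μν}
theta = eta - omega                    # transverse projector θ_{μν}

def tp(A, B, idx):                     # rank-4 tensor product helper
    return np.einsum(idx, A, B)

P2 = 0.5*(tp(theta, theta, 'mr,ns->mnrs') + tp(theta, theta, 'ms,nr->mnrs')) \
     - tp(theta, theta, 'mn,rs->mnrs')/3.0
P1 = 0.5*(tp(theta, omega, 'mr,ns->mnrs') + tp(theta, omega, 'ms,nr->mnrs')
        + tp(omega, theta, 'mr,ns->mnrs') + tp(omega, theta, 'ms,nr->mnrs'))
PS = tp(theta, theta, 'mn,rs->mnrs')/3.0
PW = tp(omega, omega, 'mn,rs->mnrs')

# identity on symmetric rank-2 tensors: ½(η_{μρ}η_{νσ} + η_{μσ}η_{νρ})
identity = 0.5*(tp(eta, eta, 'mr,ns->mnrs') + tp(eta, eta, 'ms,nr->mnrs'))

def mult(A, B):
    # (A·B)_{μν,ρσ} = A_{μν}{}^{αβ} B_{αβ,ρσ}, indices raised with η
    return np.einsum('mnab,ac,bd,cdrs->mnrs', A, eta, eta, B)

for P in (P2, P1, PS, PW):
    assert np.allclose(mult(P, P), P)              # idempotent
for A, B in [(P2, P1), (P2, PS), (P2, PW), (P1, PS), (P1, PW), (PS, PW)]:
    assert np.allclose(mult(A, B), 0)              # mutually orthogonal
assert np.allclose(P2 + P1 + PS + PW, identity)    # complete on sym. tensors
print("Barnes-Rivers projector algebra verified")
```

The completeness relation is what allows the kernel $P$ below to be inverted spin sector by spin sector once all four projectors appear with nonvanishing coefficients.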
The gauge fixing contribution
\begin{equation}
{\cal L}^{(2)}_{g} = \frac{1}{2} ( A^{\mu} - \lambda \partial^{\mu} h )
(\xi_1 \Box \eta_{\mu\nu}
- \xi_2\partial_{\mu} \partial_{\nu} + \xi_3 \eta_{\mu\nu} )
( A^{\nu} - \lambda \partial^{\nu} h )
\end{equation}
yields
\begin{eqnarray}
P^{g} & = & - \Box \lambda^2 \left[ ( \xi_1 - \xi_2 )\Box
+ \xi_3 \right] \left(3P^{(S)}+ P^{(W)}+ P^{\{SW\}}\right) \nonumber \\
& & \mbox{} + \xi_2 \Box^2 P^{(W)}
- \Box \left[\xi_1\Box +\xi_3 \right]
\left(\frac{1}{2}P^{(1)} + P^{(W)}\right) \\
& & \mbox{} + \lambda \Box \left[ (\xi_1 - \xi_2)\Box + \xi_3 \right]
\left( 2P^{(W)} + P^{\{SW\}}\right) \nonumber\, .
\end{eqnarray}
One recognizes in (8) the linearized $\chi^{\mu}[h]$ and the $h$-independent part of
$\cal{G}_{\mu\nu}\,$, which we call ${\cal G}^{(h)}$ in the following.
\noindent{Thus} the complete HD differential operator kernel is
\begin{eqnarray}
P & = & P^{inv} + P^{g} \nonumber\\
&=& \frac{1}{2}\Box \left( c\Box+a \right) P^{(2)}
-\frac{1}{2}\Box \left( \xi_{1}\Box +\xi_{3}\right) P^{(1)}
\nonumber
\\
& & \mbox{} + \Box\left[ -a+2(3b+c)\Box -3 \lambda^{2}
\left( (\xi_{1}-\xi_{2})\Box+\xi_{3} \right)
\right] P^{(S)}
\\
& & \mbox{} -(\lambda -1)^{2} \Box\left( (\xi_{1}
-\xi_{2} )\Box +\xi_{3}\right) P^{(W)}
\nonumber
\\
& & \mbox{} -\lambda (\lambda -1)\Box \left( (\xi_{1}-\xi_{2})\Box +\xi_{3}\right)
P^{\{SW\}} \nonumber\, .
\end{eqnarray}
\noindent{B}y decomposing $ P $ in its HD and LD parts, namely
\begin{equation}
P = M\Box^2 + N\Box \quad ,
\end{equation}
where
\begin{eqnarray}
M &\equiv& \frac{c}{2} P^{(2)}- \frac{1}{2} \xi_1 P^{(1)} \nonumber \\
& & \mbox{} + \left( 2(3b+c)-3\lambda^{2}(\xi_{1}-\xi_{2})\right)P^{(S)}
-(\lambda -1)^{2}(\xi_{1}-\xi_{2})P^{(W)} \nonumber \\
& & \mbox{}
-\lambda (\lambda -1) (\xi_{1}-\xi_{2})P^{\{SW\}} \\
N & \equiv& \frac{1}{2} a P^{(2)}
- \frac{1}{2} \xi_3 P^{(1)}\nonumber \\
& & \mbox{} -\left(a +3\lambda^{2}\xi_{3} \right)P^{(S)}
-(\lambda -1)^{2}\xi_{3}P^{(W)}
-\lambda (\lambda -1)\xi_{3} P^{\{SW\}}\nonumber \quad ,
\end{eqnarray}
equation (6) may be written as
\begin{equation}
{\cal L}_{HD} = \frac{1}{2} h \Box (M\Box + N) h
+{\cal L}_{s}\quad .
\end{equation}
Dropping total derivatives, it can be given the more
convenient form
\begin{equation}
{\cal L}_{HD} [h,\Box h]
= \frac{1}{2}(\Box h)M(\Box h) + \frac{1}{2}hN(\Box h)
+{\cal L}_{s}\quad .
\end{equation}
The HD Euler's equation takes also the form
\begin{equation}
\Box (M\Box + N)^{\mu \nu ,\rho \sigma} h_{\rho \sigma} = T^{\mu \nu}\quad ,
\end{equation}
where $T^{\mu \nu}\equiv-\frac{\delta {\cal L}_{s}}{\delta h_{\mu \nu}}\;$.
\section{Second order equivalent theory}
In order to perform a Lorentz-covariant Legendre transformation [13][16]
of our HD Lagrangian, the form of (14) trivially suggests defining the
conjugate variable
\begin{equation}
\pi^{\mu\nu} = \frac{\partial {\cal L}_{HD}}{\partial (\Box h_{\mu\nu})}\quad .
\end{equation}
One finds
\begin{equation}
\pi = M (\Box h) + \frac{1}{2} Nh +O(G_{N})\quad ,
\end{equation}
where the contributions from the gravitational interactions may be
accounted for perturbatively in $G_{N}\,$, or may be simply ignored for
the analysis of the propagating DOF.
As required, (17) is invertible and gives
\begin{equation}
\Box h = M^{-1} \left[\pi - \frac{1}{2}Nh \right] \equiv F[ h , \pi ]
\quad .
\end{equation}
Notice that the operators $ M $ and $ N $ are invertible as long
as gauge fixing terms have been introduced. Otherwise they would project onto
the spin-state subspace $ 2 \oplus S $, thus being singular.\par
The Lorentz-covariant Hamiltonian-like function is then
\begin{eqnarray}
{\cal H} [ h , \pi ] & = & \pi F[ h,\pi ]
- {\cal L}_{HD}[ h ,F[ h ,\pi ]] \nonumber \\
& = & \frac{1}{2} \left[\frac{1}{2}N h - \pi\right]
M^{-1} \left[ \frac{1}{2}N h - \pi\right]-{\cal L}_{s}\quad .
\end{eqnarray}
The equations of motion turn out to be the system of canonical-like equations
\begin{eqnarray}
\Box h & = & \frac{ \partial {\cal H} }{ \partial \pi} \\
\Box \pi & = & \frac{ \partial {\cal H} }{ \partial h }
\; .
\end{eqnarray}
The familiar negative sign one would expect in (21) is
absent because the definition (16) involves second derivatives
of the field $h$ instead of the usual velocity [15].
They may also be derived by a variational principle from the so called
(now second-derivative) Helmholtz Lagrangian
\begin{equation}
{\cal L}_{H} [ h, \pi] = \pi \Box h - {\cal H} [ h, \pi ]\quad .
\end{equation}
In fact from (22) one sees that (20) is the Euler's equation for $ \pi $ and
(21) is the one for $ h $. From (20) (which is nothing but equation (18))
one may
work out $ \pi $ as given by (17). Substituting it in (21) one recovers (15),
namely the original HD equation of motion.
Mixed $ \pi - h $ terms occur in (22). The diagonalization can be
performed by
defining new tilde fields such that
\begin{eqnarray}
h & = & \tilde h + \tilde\pi \nonumber \\
\pi & = & \frac{N}{2} (\tilde h - \tilde\pi ) \; ,
\end{eqnarray}
or conversely
\begin{eqnarray}
\tilde h & = & N^{-1} \left[\frac{1}{2}Nh + \pi\right] \nonumber \\
\tilde \pi & = & N^{-1} \left[ \frac{1}{2}Nh -\pi\right]\; .
\end{eqnarray}
Then $ {\cal L}_{H} $ finally becomes the desired LD theory
\begin{eqnarray}
{\cal L}_{LD} &=& \frac{1}{2} \tilde h N \Box \tilde h -
\frac{1}{2} \tilde \pi ( N\Box+ NM^{-1}N ) \tilde \pi
+{\cal L}_{s}\quad ,
\end{eqnarray}
where
\begin{eqnarray}
NM^{-1}N&=&\frac{a^{2}}{2c}P^{(2)}-\frac{\xi_{3}^{2}}{2\xi_{1}}P^{(1)}
\nonumber\\
& & \mbox{}
+ \frac{a^{2}(\xi_{1}-\xi_{2})-3\lambda^{2}\xi_{3}^{2}2(3b+c)}
{2(3b+c)(\xi_{1}-\xi_{2})}P^{(S)}\nonumber\\
& & \mbox{} -\frac{(\lambda -1)^{2}\xi_{3}^{2}}
{\xi_{1}-\xi_{2}}P^{(W)}
\\
& & \mbox{} -\frac{\lambda (\lambda -1) \xi_{3}^{2}}{\xi_{1}-\xi_{2}}
P^{\{SW\}}\nonumber
\end{eqnarray}
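The passage from (22) to (25) via the substitution (23) can be checked in a scalar toy model, treating $\Box$, $M$ and $N$ as commuting symbols (this ignores the tensor structure, which only dresses the same algebra with projectors):

```python
import sympy as sp

h, pi, ht, pt, B, M, N = sp.symbols('h pi htilde pitilde Box M N')

# Helmholtz Lagrangian (22) with Box -> commuting symbol B, L_s = 0
H  = sp.Rational(1, 2) * (N*h/2 - pi)**2 / M          # Hamiltonian (19)
LH = pi * B * h - H                                   # eq. (22)

# diagonalization (23): h = h~ + pi~,  pi = (N/2)(h~ - pi~)
LH_diag = LH.subs({h: ht + pt, pi: N*(ht - pt)/2})

# expected LD form (25): (1/2) h~ N Box h~ - (1/2) pi~ (N Box + N M^-1 N) pi~
LD = sp.Rational(1, 2)*ht*N*B*ht - sp.Rational(1, 2)*pt*(N*B + N**2/M)*pt

assert sp.simplify(LH_diag - LD) == 0
print("diagonalization (22) -> (25) verified")
```

In the toy model $\frac{1}{2}Nh-\pi$ collapses to $N\tilde\pi$ under (23), which is why the cross terms cancel and only the diagonal kinetic and mass-like pieces survive.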
For further discussion, a most enlightening expression for (25) is obtained by separating the
gauge-dependent parts
\begin{eqnarray}
{\cal L}_{LD} &=& \frac{1}{2}\tilde{h} a \left(
\frac{1}{2}P^{(2)}-P^{(S)}
\right)\Box\tilde{h}
+\frac{1}{2} \chi[ \tilde{h} ] {\cal G}^ {\tilde{(h)}} \chi[\tilde{h}] \nonumber\\
&&\mbox{} -\frac{1}{2}\tilde{\pi}\left[
a\left( \frac{1}{2}P^{(2)}-P^{(S)} \right)\Box
+\frac {a^{2}}{2c} P^{(2)}
+ \frac{a^{2}}{2(3b+c)} P^{(S)} \right]\tilde{\pi}\\
&&\mbox{}
-\frac{1}{2} \chi[ \tilde{\pi} ] {\cal G}^{\tilde{(\pi)}} \chi[ \tilde{\pi}]
+{\cal L}_{s}
\nonumber
\end{eqnarray}
where
\begin{eqnarray}
{\cal G}_{\alpha \beta}^{(\tilde h)}
&=&\xi_{3} \theta_{\alpha \beta} +\xi_{3} \omega_{\alpha \beta}
=\xi_{3} \eta_{\alpha \beta }
\\
{\cal G}_{\alpha \beta}^{(\tilde \pi)}&=&\xi_{3}
\frac{\xi_{1}\Box+\xi_{3}}{\xi_{1}\Box} \theta _{\alpha \beta}
+\xi_{3} \frac{(\xi_{1}-\xi_{2})\Box+\xi_{3}}{(\xi_{1}-\xi_{2})\Box}
\omega_{\alpha \beta} \quad ,
\end{eqnarray}
and the form of $\chi$ has been displayed in (8).
The physical meaning is now apparent:
$\tilde h$ and $\tilde \pi$ describe the massless and the massive DOF of the theory
respectively. Notice that the gauge-invariant part of the kinetic term of $\tilde \pi$ reproduces
that of the Fierz-Pauli theory [17].
The LD Lagrangian (27) thus obtained is non-local for arbitrary
gauge parameters. However, we can have locality for a particular choice
of parameters (see Appendix II). Even for this choice, an unpleasant feature of (27) is that the
scalar subspaces $S$ and $W$ appear mixed as long as the transfer operator $P^{\{SW\}}$ occurs in
$N$ and $NM^{-1}N$.
\section{Linearized theory and propagators}
In order to avoid unessential complications due to the $S$-$W$ mixing that obscures the
identification of the propagating DOF, one may redefine the field $ h_{\mu\nu} $ as
\begin{equation}
\hat{h}_{\mu\nu} = ( Q^{-1})_{\mu\nu} ^{ \rho\sigma} \; h_{\rho \sigma}
\end{equation}
where
\begin{eqnarray}
Q(\lambda) &=& P^{(2)} + P^{(1)} +\frac{2}{3}P^{(W)}
- \frac{2}{9} \frac{(\lambda -1)}{\lambda } P^{\{SW\}}
\end{eqnarray}
is invertible and becomes a numerical matrix for $\lambda =-2\,$,
namely $Q(-2)=\bar{\eta}-\frac{1}{3}\bar{\bar{\eta}}\,$. This choice is
compulsory if we wish to avoid polluting the source term
with non-locality. The operator $ P $ transforms to
\begin{equation}
\hat{P}=QPQ= {\hat M}\Box^2 + {\hat N}\Box \quad ,
\end{equation}
where
\begin{eqnarray}
\hat {M} & \equiv& \frac{c}{2} P^{(2)}- \frac{1}{2} \xi_1 P^{(1)} \nonumber\\
& & \mbox{} + \frac{4}{27} \frac{(\lambda -1)^2}{\lambda^2} 2(3b + c)P^{(W)}
- \frac{4}{27} \frac{(\lambda -1)^4}{\lambda^2}(\xi_1 - \xi_2)P^{(S)} \quad ,
\\
\hat{N} & \equiv& \frac{1}{2} a P^{(2)}
-\frac{4}{27} a \frac{(\lambda -1)^2}{\lambda^2} P^{(W)}
- \frac{1}{2} \xi_3 P^{(1)}
- \frac{4}{27} \frac{(\lambda -1)^4}{\lambda^2}\xi_3 P^{(S)}
\nonumber
\end{eqnarray}
do not contain the operator $P^{\{SW\}}$ any longer. Then equation (13) may be written as
\begin{equation}
{\cal L}_{HD} = \frac{1}{2} \hat{h} \Box ( \hat{M}\Box +
\hat{N}) \hat{h} +{\cal L}_{s}\quad ,
\end{equation}
or, dropping total derivatives,
\begin{equation}
{\cal L}^{(2)}_{HD} [\hat{h},\Box \hat{h}]
= \frac{1}{2}(\Box \hat{h})\hat{M}(\Box \hat{h})
+ \frac{1}{2}\hat{h}\hat{N}(\Box \hat{h})
+\hat{T}\hat{h}
\quad .
\end{equation}
\bigskip
The particle interpretation of (35) is now the central question. On one hand we can start from
the HD theory (34) and, after inverting the projectors, obtain the quartic propagator
\begin{eqnarray}
\Delta^{HD}[ \hat{h}] & = & \frac{2}{(c\Box+a) \Box}P^{(2)}
+\frac{27}{4}\frac{ \lambda^2}{(\lambda -1)^2 [2(3b+c) \Box-a] \Box}P^{(W)}
\\
& & \mbox{} -\frac{2}{(\xi_1 \Box+\xi_3)\Box}P^{(1)}
-\frac{27}{4}\frac{ \lambda^2}{(\lambda-1)^4[(\xi_1 -\xi_2 )
\Box+\xi_3 ]\Box} P^{(S)}\nonumber\quad .
\end{eqnarray}
On the other hand, the quadratic propagators arising from the LD theory
(the analogue of (25)) for the new {\it hat} fields are
\begin{eqnarray}
\Delta^{LD}[ \tilde{ \hat{h} } ] & = &
\frac{2}{a\Box} P^{(2)}
-\frac{27}{4}\frac{ \lambda^2}{(\lambda -1)^2 a\Box} P^{(W)}
\nonumber\\
& & \mbox{} -\frac{2}{\xi_3 \Box}P^{(1)}
-\frac{27}{4}\frac{ \lambda^2}{(\lambda-1)^4 \xi_3 \Box} P^{(S)}\quad ,
\\
\Delta^{LD}[ \tilde{ \hat{\pi} } ] & = &
-\frac{2c}{a(c\Box+a)}P^{(2)}
+\frac{27}{4}
\frac{\lambda^2}{(\lambda -1)^2}
\frac{2(3b+c)}{a[2(3b+c)\Box-a]}
P^{(W)}
\nonumber\\
& & \mbox{} +\frac{2}{\xi_{3}}
\frac{\xi_{1}}{(\xi_{1}\Box +\xi_{3})}P^{(1)}
+\frac{27}{4}\frac{\lambda^2}{(\lambda -1)^4 \xi_{3}}
\frac{(\xi_{1} - \xi_{2})}
{[(\xi_{1} -\xi_{2} )\Box+\xi_{3} ]}
P^{(S)}\nonumber\quad .
\end{eqnarray}
As expected, the LD quadratic propagators sum up to give the HD quartic one, namely
\begin{equation}
\Delta^{HD}[\hat{h}]=\Delta^{LD}[\tilde{ \hat{h}}]+\Delta^{LD}[\tilde{ \hat{\pi}}]
\end{equation}
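The sum rule just displayed is, sector by sector, a partial-fraction identity. For instance, for the spin-2 piece of the quartic propagator, a quick symbolic check (illustrative only) reads:

```python
import sympy as sp

B, a, c = sp.symbols('Box a c', positive=True)

hd = 2/((c*B + a)*B)                  # spin-2 piece of the HD quartic propagator
ld = 2/(a*B) - 2*c/(a*(c*B + a))      # massless h-tilde piece + massive pi-tilde piece

assert sp.simplify(hd - ld) == 0      # the two quadratic poles sum to the quartic one
print(sp.apart(hd, B))                # partial fractions reproduce the same split
```

The relative minus sign of the massive term is the wrong-sign residue characteristic of the Weyl poltergeist.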
Notice that if we had not performed the $Q$--transformation, the propagators would have been
\begin{eqnarray}
\Delta^{HD}[h]&=&Q\Delta^{HD}[\hat{h}]Q \nonumber \\
\Delta^{LD}[\tilde{h}]&=&Q\Delta^{LD}[\tilde{\hat{h}} ]Q \\
\Delta^{LD}[\tilde{\pi}]&=&Q\Delta^{LD}[\tilde{\hat{\pi}}]Q \quad , \nonumber
\end{eqnarray}
with the mixing $P^{\{SW\}}$ occurring in them.
The DOF counting can be readily done on (37). Since we are dealing with a properly gauge
fixed four-derivative theory, all the fields in $\hat {h}_{\mu\nu}$ do propagate and therefore we
have a total of $20$ DOF ($10$ massless and $10$ massive). According to the dimensionality of the
respective spin subspaces, they are distributed as 5, 3, 1 and 1 DOF for the spin-2, 1,
$0_{S}$ and $0_{W}$ sectors respectively, which sum up to 10 DOF for the massless fields,
and the same for the massive ones. In the (massless) $\tilde{h}$-sector,
the spin-2 space contains the two DOF of the graviton plus three gauge
DOF, and the remaining five DOF are also gauge ones. The $\tilde{\pi}$-sector
describes the five DOF of a spin-2 poltergeist with (squared) mass
$a/c$ [13] [17], one physical scalar DOF with mass $-a/2(3b+c)$,
three third ghosts with gauge-dependent masses $\xi_{3}/\xi_{1}$ and
one third ghost with mass $\xi_{3}/(\xi_{1}-\xi_{2})\;$.
\bigskip
In the absence of gauge fixing only the projectors $P^{(2)}$ and $P^{(S)}$ are involved, and
$P^{inv}$ in (7) can be inverted only in the restricted spin subspace $2\oplus S$. In principle,
in that case one is left with eight DOF in the LD theory, namely, the massless graviton, the
massive spin-2 poltergeist and the physical scalar. This is generally so as long as critical
relationships between $a$, $b$ and $c$ are avoided [10] [13] that make the order-reducing
procedure singular. In those cases some DOF may collapse and the theory may have fewer than
eight DOF and/or larger symmetries. Fast DOF-counting recipes for gauge theories can be found in
[18]. In ordinary two-derivative gravity, each of the four gauge-group local parameters of Diff
invariance accounts for the killing of {\it two} DOF, leaving the two DOF of the graviton out
of the ten DOF of $h_{\mu\nu}\,$. In four-derivative gravity each gauge-group parameter instead
kills {\it three} DOF so that the initial twenty DOF reduce to the eight DOF quoted above. The
mechanism is well illustrated in four-derivative QED [15], where one initially has eight DOF and
the gauge invariance suppresses three of them, leaving one photon and one massive spin-1
poltergeist.
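The DOF bookkeeping of the last two paragraphs reduces to simple arithmetic:

```python
# DOF bookkeeping for Diff invariance (4 local gauge parameters)
fields = 10                     # components of h_{mu nu}

# two-derivative gravity: each gauge parameter kills 2 DOF
assert fields - 4*2 == 2        # the massless graviton

# four-derivative gravity: 10 massless + 10 massive components,
# each gauge parameter now kills 3 DOF
assert 2*fields - 4*3 == 8
assert 2 + 5 + 1 == 8           # graviton(2) + spin-2 poltergeist(5) + scalar(1)

# four-derivative QED [15]: 2*4 components, one gauge parameter kills 3
assert 2*4 - 1*3 == 5           # photon(2) + massive spin-1 poltergeist(3)
print("DOF counting consistent")
```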
\bigskip
As detailed in Appendix II, the free part of the $Q$-transformed LD theory can be made local for
a particular choice of gauge parameters. However, using $Q(\lambda)$ with $\lambda=b/(4b+c)$
moves the non-locality to the source term, namely to the interactions. This can also be avoided
by requiring that $\lambda=-2\,$, in which case $Q$ becomes a numerical matrix, but this gives
rise to a condition on the parameters $b$ and $c$ of the starting Diff-invariant theory. Leaving
aside the interpretation of such a restriction, this does not mean that we would have obtained a
sum of independent Lagrangian theories for each (massless and massive, spin 2, 1, $0_{S}$ and
$0_{W}$) particle, notwithstanding the fact that the spin subspaces appear well separated. This
is not posible, as illustrated for instance by the fact [19] that there is no second-order
tensor local theory for spin-1 fields.
\section{Faddeev-Popov compensating terms}
As usual, the gauge fixing term (3) together with the compensating (HD)
Faddeev-Popov Lagrangian can be expressed as a coboundary in the
BRST cohomology, namely
\begin{equation}
{\cal L}_{g}+{\cal L}_{FP}=-s
\left[\bar{C}^{\alpha}{\cal G}_{\alpha \beta} \chi^{\beta} [h]
+ \frac{1}{2}\bar{C}^{\alpha}{\cal G}_{\alpha \beta}{\cal B}^{\beta} \right]
\quad ,
\end{equation}
where $\bar{C}$ are FP fermion ghosts and ${\cal B}$ is an auxiliary commuting field.
\vfill
\eject
In order to study the propagators of the new fields $\bar{C}$ and $\cal B$,
it suffices to consider the linearized objects
\begin{eqnarray}
\chi^{\beta}[h]&=&\chi^{\beta \mu \nu}h_{\mu \nu}\equiv
(\eta^{\beta \mu}\partial^{\nu}
-\lambda \eta^{\mu \nu}\partial^{\beta})h_{\mu \nu}\nonumber\\
{\cal G}^{(h)}_{\alpha \beta}&\equiv&\xi_{1}\eta_{\alpha \beta}\Box
-\xi_{2}\partial_{\alpha}\partial_{\beta }
+\xi_{3}\eta_{\alpha \beta} \\
&=&(\xi_{1}\Box +\xi_{3})\theta_{\alpha \beta}
+\left[ (\xi_{1}-\xi_{2})\Box +\xi_{3} \right]\omega_{\alpha \beta}
\quad ,\nonumber
\end{eqnarray}
and the BRST symmetry given by the (linearized) Slavnov transformations
\begin{equation}
\begin{array}{l}
s h_{\mu \nu}={\cal D}_{\mu \nu ,\alpha}C^{\alpha}
\\
s C^{\alpha}=0
\\
s \bar{C}^{\alpha}= {\cal B}^{\alpha}
\\
s {\cal B}^{\alpha}=0 \quad ,
\end{array}
\end{equation}
where
\begin{equation}
{\cal D}_{\mu \nu ,\beta}=\eta_{\mu \beta}\partial_{\nu}+\eta_{\nu \beta}
\partial_{\mu}
\end{equation}
is the gauge symmetry generator. With the diagonalization
\begin{equation}
{\cal B}^{\alpha}=B^{\alpha}-\chi^{\alpha}[h]
\end{equation}
the linearized (40) becomes
\begin{equation}
{\cal L}^{(2)}_{g} +{\cal L}^{HD}_{FP}=
\frac{1}{2}\chi^{\alpha}[h]{\cal G}^{(h)}_{\alpha \beta}\chi^{\beta}[h]
+\bar{C}^{\alpha}{\cal G}_{\alpha \gamma}\chi^{\gamma \mu \nu}
{\cal D}_{\mu \nu , \beta}C^{\beta}
-\frac{1}{2}B^{\alpha}{\cal G}^{(h)}_{\alpha \beta}B^{\beta}\quad ,
\end{equation}
and one has
\begin{equation}
\begin{array}{l}
s h_{\mu \nu}={\cal D}_{\mu \nu ,\alpha}C^{\alpha}
\\
s C^{\alpha}=0
\\
s \bar{C}^{\alpha}=B^{\alpha}-\chi^{\alpha}[h]
\\
s B^{\alpha}= \chi^{\alpha\mu\nu}
{\cal D}_{\mu \nu , \beta} C^{\beta}\quad .
\end{array}
\end{equation}
Of course, they reflect the trivially Abelian gauge symmetry group $G$ to which the infinitesimal
diffeomorphisms reduce in the linearized theory. In the complete non-polynomial theory there are
couplings between $h_{\mu\nu}\,$, $C\,$, $\bar{C}$ and $ B\,$, and the non-Abelian symmetry
yields a more complicated set of $S$-transformations in which, for instance $sC^{\alpha}\neq 0$.
One should also notice that the $Q$-transformation used in Section 3 does not affect the form of
${\cal L}_{FP}$, as can be checked by computing it with the corresponding operators $\hat{{\cal
D}}=Q^{-1}{\cal D}$ and $\hat{\chi}^{\alpha\mu\nu}=({\chi Q})^{\alpha \mu\nu}$.
\bigskip
\noindent{T}he fermionic sector of the FP Lagrangian above, namely
\begin{equation}
{\cal L}^{HD}[\bar{C}C]
=\bar{C}^{\alpha} \left[ (\xi_{1} \Box +\xi_{3} )\Box \theta_{\alpha \beta}
+2(1-\lambda)((\xi_{1}-\xi_{2})\Box +\xi_{3})\Box \omega_{\alpha \beta}
\right]
C^{\beta}
\end{equation}
is higher-derivative whereas, in contrast with ordinary second-order theories, now the auxiliary
bosonic field $B$ {\it does propagate} according to the
Lagrangian
\begin{equation}
{\cal L}[B]=-\frac{1}{2}B^{\alpha}\left[
(\xi_{1}\Box +\xi_{3})\theta_{\alpha \beta}
+\left[ (\xi_{1}-\xi_{2})\Box +\xi_{3} \right]\omega_{\alpha \beta}
\right]
B^{\beta}
\end{equation}
which is already LD and always local.
We can also perform an order-reduction of ${\cal L}^{HD}[\bar{C} C]\;$ (Appendix III), yielding
\begin{eqnarray}
{\cal L}^{LD}[\bar{E} E \bar{F} F] &=&
\bar{E}^{\alpha}
\left(
\xi_{3} \theta_{\alpha \beta} + 2(1-\lambda)\xi_{3} \omega_{\alpha \beta}
\right)
\Box E^{\beta} \\ \nonumber
&&\mbox{}-\bar{F}^{\alpha}
\left( \frac{\xi_{3}}{\xi_{1}}(\xi_{1}\Box + \xi_{3} )
\theta_{\alpha \beta}
+\frac{ 2(1-\lambda)\xi_{3} }{\xi_{1}-\xi_{2}}
((\xi_{1}-\xi_{2})\Box+\xi_{3})
\omega_{\alpha \beta} \right) F^{\beta}\quad ,
\end{eqnarray}
where $\, E^{\alpha}+F^{\alpha}=C^{\alpha}\,$ and
$\,\bar{E}^{\alpha}+\bar{F}^{\alpha}=\bar{C}^{\alpha}\,$. The Lagrangian (49) is local for the same choice (see equations (102)-(104) in Appendix II)
of gauge parameters that makes ${\cal L}_{LD}$ in (25) local.
From (48) one directly reads
\begin{equation}
\Delta[B]=
\frac{\theta}{\xi_{1}\Box +\xi_{3}}
+\frac{\omega}{(\xi_{1}-\xi_{2})\Box +\xi_{3}}
\quad ,
\end{equation}
whereas the HD (oriented) propagator
\begin{eqnarray}
\Delta^{HD}[\bar{C} C]&=&
\frac{\theta}{(\xi_{1}\Box +\xi_{3})\Box}
+\frac{\omega}{2(1-\lambda)}\frac{1}{((\xi_{1}-\xi_{2})\Box +\xi_{3})
\Box}
\\
&=&\frac{\theta}{\xi_{3}} \left( \frac{1}{\Box}
-\frac{\xi_{1}}{\xi_{1}\Box +\xi_{3}}\right)
+\frac{\omega}{2(1-\lambda)\xi_{3}}
\left(\frac{1}{\Box}
-\frac{ \xi_{1}-\xi_{2} }{(\xi_{1}-\xi_{2})\Box+\xi_{3}}
\right)\nonumber
\end{eqnarray}
obtained from (47) splits into the (also oriented) LD propagators
\begin{eqnarray}
\Delta[\bar{E}E ]&=&
\frac{\theta}{\xi_{3}\Box}+\frac{\omega}
{2(1-\lambda )\xi_{3}\Box}\\
\Delta[\bar{F}F ] &=&
-\frac{\xi_{1}}{\xi_{3}(\xi_{1}\Box+\xi_{3})}\theta
-\frac{(\xi_{1}-\xi_{2})}{2(1-\lambda)\xi_{3}
((\xi_{1}-\xi_{2})\Box+\xi_{3})}\omega\quad ,
\end{eqnarray}
that can be derived from (49) as well.
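As a quick numerical sanity check (ours, not part of the derivation), the partial-fraction split that takes the HD ghost propagator (51) into the LD ones (52) and (53) can be verified channel by channel, replacing the operator $\Box$ by a rational number $x$; the function names are of course ours:

```python
from fractions import Fraction as F

# theta-channel identity behind (51)-(53):
#   1/((xi1*Box + xi3)*Box) = (1/xi3) * ( 1/Box - xi1/(xi1*Box + xi3) )
# with Box replaced by a number x; xi1, xi3 are arbitrary nonzero parameters.
def hd_prop(x, xi1, xi3):
    return F(1) / ((xi1 * x + xi3) * x)

def ld_sum(x, xi1, xi3):
    massless = F(1) / (xi3 * x)                # E-ghost pole at Box = 0
    massive = -xi1 / (xi3 * (xi1 * x + xi3))   # F-ghost pole at Box = -xi3/xi1
    return massless + massive

for x in (F(1), F(-3), F(7, 2)):
    for xi1, xi3 in ((F(2), F(5)), (F(-1), F(3))):
        assert hd_prop(x, xi1, xi3) == ld_sum(x, xi1, xi3)
print("partial-fraction split of the HD ghost propagator verified")
```

The $\omega$ channel works identically with $\xi_{1}\to\xi_{1}-\xi_{2}$ and an overall factor $1/2(1-\lambda)$.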
In (52) one counts four FP {\it fermion} ghosts $E$ and four $\bar{E}$ with
massless poles, giving eight {\it negative} loop
contributions that compensate for the eight massless gauge ghosts quoted
in Section 3. The compensation of the third ghosts has
non-trivial features which are characteristic of HD theories.
From (53) one
sees that the FP {\it fermion} ghosts $F$ and $\bar{F}$ give six
{\it negative} loop contributions with propagator poles at $\xi_{3}/\xi_{1}$ and
two at $\xi_{3}/(\xi_{1}-\xi_{2})$. This over-compensates the
(three plus one) third ghosts. Here is where
the new {\it boson} FP ghosts $B$,
propagating with (50),
come to the rescue: they give three {\it positive} contributions with
poles at $\xi_{3}/\xi_{1}$ and one at $\xi_{3}/(\xi_{1}-\xi_{2})$,
thus providing the complete cancellation
of ghost loop contributions.
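The counting above can be condensed into a bookkeeping sketch (signs only, not actual loop integrals: fermion loops count negative, boson loops positive; the dictionary labels for the poles are ours):

```python
# Massive-pole loop contributions per channel, as counted in the text:
# third ghosts (+3, +1), fermionic F/F-bar ghosts (-6, -2), boson B (+3, +1).
third_ghosts = {"xi3/xi1": +3, "xi3/(xi1-xi2)": +1}
fermion_F    = {"xi3/xi1": -6, "xi3/(xi1-xi2)": -2}
boson_B      = {"xi3/xi1": +3, "xi3/(xi1-xi2)": +1}

total = {pole: third_ghosts[pole] + fermion_F[pole] + boson_B[pole]
         for pole in third_ghosts}
assert all(n == 0 for n in total.values())
print("massive ghost loop contributions cancel:", total)
```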
This matching of the ghost masses is a consequence of the interplay of the order-reducing and
BRST procedures. The master relationship is
\begin{equation}
{{\cal G}^{(h)}}^{-1}={{\cal G}^{(\tilde{h})}}^{-1}-{{\cal G}^{(\tilde{\pi})}}^{-1}\quad ,
\end{equation}
where the massive poles of the third ghosts are displayed by ${{\cal G}^{(h)}}^{-1}$ and
${{\cal G}^{(\tilde{\pi})}}^{-1}$, the latter also having massless zero-modes. To see this it is
useful to define the differential operator
\begin{equation}
Z^{\alpha}_{\beta}\equiv \chi^{\alpha\mu\nu}{\cal D}_{\mu\nu , \beta} =
\Box\left[\theta^{\alpha}_{\beta}+2(1-\lambda)\omega^{\alpha}_{\beta}\right]
\quad ,
\end{equation}
and the differential kernels
\begin{equation}
K^{(i)}_{\alpha\beta}\equiv {\cal
G}^{(i)}_{\alpha\gamma}Z^{\gamma}_{\beta}\quad\quad\quad(i=h,\tilde{h},\tilde{\pi})
\end{equation}
occurring in the FP Lagrangians above and worked out in (47) and (49).
The poles of the (massless) gauge ghosts lie in $Z^{-1}$ whereas the operator ${{\cal
G}^{(\tilde{h})}}^{-1}$ has no poles. From (54) and (56) it follows that
\begin{equation}
{K^{(h)}}^{-1}={K^{(\tilde{h})}}^{-1}-{K^{(\tilde{\pi})}}^{-1}
\end{equation}
which also reads $ \,\Delta^{HD}[\bar{C}C]=\Delta[\bar{E}E]+\Delta[\bar{F}F]\,$. Thus the $E$
fields and the $F$-fields inherit the massless and the massive poles respectively. On the other
hand $\,\Delta[B]=-{{\cal G}^{(h)}}^{-1}\,$,$\,$ so that also the boson $B$-fields share the same
massive poles.
\bigskip
\bigskip
The symmetries of $\,{\cal L}_{g}^{LD}+{\cal L}_{FP}^{LD}\,$ are
not trivial either. The symmetry of the (invariant part of the) HD theory under the group $G$ of
the gauge variations
\begin{equation}
\delta h_{\mu \nu}={\cal D}_{\mu \nu ,\alpha}\varepsilon^{\alpha}\quad ,
\end{equation}
is inherited by the LD theory via (17) and (24) with the variations
\begin{eqnarray}
\delta\tilde{h}_{\mu \nu}&=&
\left[ \bar{\eta}^{\rho \sigma}_{\mu \nu}
+\Box(N^{-1}M)^{\rho \sigma}_{\mu \nu} \right]
{\cal D}_{\rho \sigma , \alpha} \varepsilon^{\alpha}
\nonumber \\
&=&
\left[ \frac{\xi_{1}\Box +\xi_{3}}{\xi_{3}} P^{(1) \rho \sigma}_{\mu \nu}
+\frac{(\xi_{1}-\xi_{2})\Box+\xi_{3}}{\xi_{3}} P^{(W) \rho \sigma}_{\mu \nu}
\right] {\cal D}_{\rho \sigma , \alpha} \varepsilon^{\alpha} \quad , \\
\delta\tilde{\pi}_{\mu \nu} &=&
-\left[\Box(N^{-1}M)^{\rho \sigma}_{\mu \nu} \right]
{\cal D}_{\rho \sigma , \alpha} \varepsilon^{\alpha}
\nonumber \\
&=&
-\left[ \frac{\xi_{1}\Box }{\xi_{3}} P^{(1) \rho \sigma}_{\mu \nu}
+\frac{(\xi_{1}-\xi_{2})\Box}{\xi_{3}} P^{(W) \rho \sigma}_{\mu \nu}
\right] {\cal D}_{\rho \sigma , \alpha} \varepsilon^{\alpha}\quad ,
\nonumber
\end{eqnarray}
both depending on the same four gauge-group parameters $\varepsilon^{\alpha}(x)$.
One may check that $\delta \tilde{h}_{\mu \nu}+\delta \tilde{\pi}_{\mu \nu}
=\delta h_{\mu \nu}$.
However, the {\it free} invariant part of the LD theory (25)
exhibits a larger symmetry group,
namely the fields $\tilde{h}$ and $\tilde{\pi}$ may be given
independent variations
\begin{eqnarray}
\bar{\delta}\tilde{h}_{\mu \nu}&=&{\cal D}_{\mu \nu,\alpha}\varepsilon'^{\alpha}
\\
\bar{\bar{\delta}}\tilde{\pi}_{\mu \nu}&=&{\cal D}_{\mu \nu,\alpha}\varepsilon''^{\alpha}
\quad ,
\end{eqnarray}
thus doubling the number of group parameters,
with the original symmetry as a diagonal-like subgroup $G_{1}\subset G\times G\,$, which is
isomorphic to $G$ [15].
One may then look at
\begin{equation}
{\cal L}_{g }^{LD}[\tilde{h}]\equiv \frac{1}{2}\chi[\tilde{h}]{\cal G}^{(\tilde{h})}
\chi[\tilde{h}]
\end{equation}
and
\begin{equation}
{\cal L}_{g}^{LD} [\tilde{\pi}]
=-\frac{1}{2}\chi[\tilde{\pi}]{\cal G}^{(\tilde{\pi})} \chi[\tilde{\pi}]
\quad ,
\end{equation}
occurring in (27), as separate gauge fixings for the
symmetries (60) and (61) respectively, and wonder what happens with the
whole BRST scheme.
The separate $S$-transformations would be
\begin{equation}
\begin{array}{lcccl}
s\tilde{h}_{\mu \nu}={\cal D}_{\mu \nu ,\alpha}E^{\alpha}
&&&&
s\tilde{\pi}_{\mu \nu}={\cal D}_{\mu \nu ,\alpha}F^{\alpha}
\\
sE^{\alpha}=0
&&&&
s F^{\alpha}=0
\\
s \bar{E}^{\alpha}=B'^{\alpha} -\chi^{\alpha}[\tilde{h}]
&&&&
s \bar{F}^{\alpha}=B''^{\alpha}-\chi^{\alpha}[\tilde{\pi}]
\\
s B'^{\alpha}= \chi^{\alpha\mu\nu}
{\cal D}_{\mu \nu ,\beta}E^{\beta}
&&&&
s B''^{\alpha}=\chi^{\alpha\mu\nu}
{\cal D}_{\mu \nu ,\beta}F^{\beta}\quad ,
\end{array}
\end{equation}
so we are led to write
\begin{eqnarray}
{\cal L}_{g}+{\cal L}^{\star}_{FP}&=&- s \left[
\bar{E}^{\alpha}{\cal G}_{\alpha \beta}^{(\tilde{h})} \chi^{\beta}[\tilde{h}]
+\frac{1}{2}\bar{E}^{\alpha}{\cal G}_{\alpha \beta}^{(\tilde{h})} {\cal B}'^{\beta}
\right] \\
&&\mbox{}+ s \left[
\bar{F}^{\alpha}{\cal G}_{\alpha \beta}^{(\tilde{\pi})} \chi^{\beta}[\tilde{\pi}]
+\frac{1}{2}\bar{F}^{\alpha}{\cal G}_{\alpha \beta}^{(\tilde{\pi})} {\cal B}''^{\beta}
\right] \nonumber \\
&=& \mbox{}\frac{1}{2}\chi^{\alpha} [\tilde{h}]
{\cal G}_{\alpha \beta}^{(\tilde{h})}
\chi^{\beta}[\tilde{h}]
-
\mbox{}\frac{1}{2}\chi^{\alpha}[ \tilde{\pi} ]
{\cal G}_{\alpha \beta}^{(\tilde{\pi})}
\chi^{\beta}[\tilde{\pi}] \nonumber \\
& & \mbox{} +\bar{E}^{\alpha}{\cal G}_{\alpha \rho}^{(\tilde{h})}
\chi^{\rho \mu \nu}
{\cal D}_{\mu \nu , \beta} E^{\beta}
-\bar{F}^{\alpha}{\cal G}_{\alpha \rho}^{(\tilde{\pi})}
\chi^{\rho\mu \nu}
{\cal D}_{\mu \nu , \beta} F^{\beta} \\
& & \mbox{} -\frac{1}{2}B'^{\alpha}
{\cal G}_{\alpha \beta}^{(\tilde{h})}
B'^{\beta}
+\frac{1}{2}B''^{\alpha}
{\cal G}_{\alpha \beta}^{(\tilde{\pi})}
B''^{\beta}\nonumber
\end{eqnarray}
Thus (49) agrees with the fermionic sector
of (66).
Equations (64) define two cohomologies $\{\bar{V};s \}$ and $\{ \bar{\bar{V}};s\}$ with
cohomological spaces $\bar{V}\equiv\{ \tilde{h},E,\bar{E},B' \}$ and $\bar{\bar{V}}\equiv\{
\tilde{\pi},F,\bar{F},B'' \}$ respectively, both being copies of the original $V\equiv
\{h,C,\bar{C},B\}$ of (46) with boundary operator $s$. The polynomial (65) is then an exact
cochain in the cohomology $\{\bar{V};s\}\oplus\{\bar{\bar{V}};s\}\equiv
\{\bar{V}\oplus\bar{\bar{V}};s\oplus s \}$.
The cohomology characterizing the HD theory appears as a subcohomology $\{V_{1};s_{1}\}$ of the
direct sum above. The subspace $V_{1}\subset\bar{V}\oplus \bar{\bar{V}}$ is defined by
\begin{equation}
\begin{array}{lcccl}
\tilde{h}_{\mu\nu}={\cal O}'^{\rho \sigma}_{\mu\nu }h_{\rho\sigma}
&&&&
\tilde{\pi}_{\mu\nu}={\cal O}''^{\rho \sigma}_{\mu\nu }h_{\rho\sigma}
\\
E^{\alpha}={\cal O}'^{\alpha}_{\beta}C^{\beta}
&&&&
F^{\alpha}={\cal O}''^{\alpha}_{\beta}C^{\beta}
\\
\bar{E}^{\alpha}={\cal O}'^{\alpha}_{\beta}\bar{C}^{\beta}
&&&&
\bar{F}^{\alpha}={\cal O}''^{\alpha}_{\beta}\bar{C}^{\beta}
\\
B'^{\alpha}= {\cal O}'^{\alpha}_{\beta}B^{\beta}&&&&
B''^{\alpha}= {\cal O}''^{\alpha}_{\beta}B^{\beta}
\quad ,
\end{array}
\end{equation}
where
\begin{eqnarray}
{\cal O}'^{\rho \sigma}_{\mu\nu }&\equiv& \bar{\eta}^{\rho \sigma}_{\mu \nu}
+\Box(N^{-1}M)^{\rho \sigma}_{\mu \nu} \\
{\cal O}''^{\rho \sigma}_{\mu\nu }&\equiv&
-\Box(N^{-1}M)^{\rho \sigma}_{\mu \nu} \\
{\cal O}'^{\alpha}_{\beta}&\equiv &\frac{\xi_{1}\Box +\xi_{3}}{\xi_{3}}\theta^{\alpha}_{\beta}
+\frac{(\xi_{1}-\xi_{2})\Box +\xi_{3}}{\xi_{3}} \omega^{\alpha}_{\beta}\\
{\cal O}''^{\alpha}_{\beta}&\equiv&
- \left( \frac{ \xi_{1} \Box }{\xi_{3}}
\theta^{\alpha}_{\beta}+\frac{(\xi_{1}-\xi_{2})\Box}{\xi_{3}}
\omega^{\alpha}_{\beta} \right)
\end{eqnarray}
are invertible linear operators satisfying ${\cal O}'+{\cal O}''=\delta$, and $s_{1}$ is the
restriction to $V_{1}$ of $s\oplus s$. Then this subcohomology is nothing but the original one
$\{V;s\}$ of the HD theory, since (67) defines an isomorphism $V\stackrel{\imath_{1}}{\rightarrow}
V_{1}$ and $s_{1}$ becomes $s\,$, that is
\begin{eqnarray}
s\tilde{h}_{\mu \nu}+s\tilde{\pi}_{\mu\nu}&=&
sh_{\mu\nu}
\nonumber
\\
s E^{\alpha} + s F^{\alpha} &=& sC^{\alpha}
\nonumber\\
s\bar{E}^{\alpha}+s\bar{F}^{\alpha} &=&
s\bar{C}^{\alpha}\\
sB'^{\alpha} +sB''^{\alpha}&=& sB^{\alpha}\quad ,
\nonumber
\end{eqnarray}
as a consequence of (67) and (68)-(71).
In other words we have $\imath_{1}^{-1}\circ s\oplus s \circ\imath_{1}=s$. Moreover, we recover
the Lagrangian (48) for $B$, namely
\begin{eqnarray}
{\cal L}^{\star}[B'B'']&=&
-\frac{1}{2}B'^{\alpha}{\cal G}_{\alpha \beta}^{(\tilde{h})}B'^{\beta}
+\frac{1}{2}B''^{\alpha}{\cal G}_{\alpha \beta}^{(\tilde{\pi})}B''^{\beta}
\nonumber \\
&=&
-\frac{1}{2}B^{\alpha}{\cal O}'^{\gamma}_{\alpha}
{\cal G}_{\gamma \rho}^{(\tilde{h})}{\cal O}'^{\rho}_{\beta}B^{\beta}
+\frac{1}{2}B^{\alpha}
{\cal O}''^{\gamma}_{\alpha}
{\cal G}_{\gamma \rho}^{(\tilde{\pi})}{\cal O}''^{\rho}_{\beta}B^{\beta}
\\
&=&-\frac{1}{2}B^{\alpha}{\cal G}_{\alpha \beta}^{(h)}B^{\beta}=
{\cal L}[B]\nonumber
\end{eqnarray}
The subgroup $G_{1}\subset G\times G$, associated to $\{ V_{1}; s_{1}\}$ and isomorphic to $G$,
is obtained by taking the group parameters $\varepsilon'$ and $\varepsilon''$ as functions of
four parameters $\varepsilon$ by means of the equations
\begin{eqnarray}
{\cal D}_{\mu \nu ,\alpha}\varepsilon'^{\alpha}&=&
{\cal O}'^{\rho\sigma}_{\mu\nu}
{\cal D}_{\rho \sigma , \alpha} \varepsilon^{\alpha}
\\
{\cal D}_{\mu \nu ,\alpha}\varepsilon''^{\alpha} &=&
{\cal O}''^{\rho\sigma}_{\mu\nu}
{\cal D}_{\rho \sigma , \alpha} \varepsilon^{\alpha}\quad .\nonumber
\end{eqnarray}
These are derived by imposing the relations stemming from (59) on the otherwise independent
variations (60) and (61), and yield
\begin{eqnarray}
\varepsilon'^{\alpha}&=&
{\cal O}'^{\alpha}_{\beta}\varepsilon^{\beta}\\
\varepsilon''^{\alpha}&=&
{\cal O}''^{\alpha}_{\beta} \varepsilon^{\beta} \quad ,
\end{eqnarray}
so that $\varepsilon=\varepsilon'+\varepsilon''\,$.
The subgroup $G_{1}$ is, by definition, a symmetry of the whole (non gauge-fixed) Lagrangian (and
also separately of the interaction terms) since one has
$\bar{\delta}\tilde{h}_{\mu\nu}+\bar{\bar{\delta}}\tilde{\pi}_{\mu\nu}=\delta h_{\mu\nu}\,$,
whereas $G\times G$ is broken by the interaction terms.
\section{Conclusion}
The interplay of gauge invariance and higher differential order in field theories gives rise to a
remarkable diversity of particle-like states which are encoded in the original field
variables. In the four-derivative tensor theory of gravity studied here, the doubling of the
initial conditions for the (fourth differential-order) equations of motion translates into a
doubling of the effective number of particle-like DOF obeying second differential-order evolution
equations. They describe physical (positive Fock-space norm) states together with an outburst of
massless and massive ghostly states which are unphysical because of their negative norm and/or
gauge dependence. Beyond its methodological interest, the analysis provides an enlarged context
for the traditional gauge theories and BRST symmetries of physical relevance, and also
elucidates the nature of some states already encountered in earlier classical works on
higher-derivative gravity.

Four-derivative gravity is particularly interesting to study insofar as the emphasis
traditionally given to its applications has overlooked many details of its theoretical structure.
Amongst the particle-like states of the gauge-fixed theory, there are physical ones (a massless
graviton and one scalar, reminiscent of the Brans-Dicke field), a massive spin-2 gauge
independent Weyl ghost (unphysical norm), and two families of gauge-dependent fields: the usual
massless gauge ghosts and the novel massive third ghosts.
This elusive new breed of ghosts first arose in the exponentiation of the functional
determinant of the differential operator ${\cal G}^{(h)}$.

In the presence of (generally HD) gauge fixing terms and of the corresponding compensating FP
Lagrangian, the order-reducing procedure reveals remarkable features of the underlying BRST
symmetry associated to the four-parameter gauge group $G$ of infinitesimal diffeomorphisms. In
parallel with the doubling of the fields, there is a doubling of the gauge symmetry of the {\it
free} part of the (second-order) LD equivalent theory. Out of this $G\times G$ larger symmetry,
both the interaction terms and the consistency of the BRST algebra select a diagonal-like
subgroup $G_{1}\,$, isomorphic to $G\,$, as the only symmetry of the complete LD theory, in
agreement with the occurrence of Diff-invariance as the only symmetry of the starting HD theory.
However, restricting ourselves to the free LD theory and considering its $G\times G$ symmetry,
the LD gauge-dependent terms can be viewed as separate gauge fixings for both group factors. The
(gauge-dependent) unphysical propagating DOF so introduced then appear as the respective gauge
ghosts, which are massless for the first group factor and massive for the second one, thus giving
further meaning to the famous third ghosts. Moreover, the separate symmetry of the gauge
independent part of the physical and poltergeist sectors of the LD theory also illustrates how
their kinetic terms reproduce the structure of the Einstein and Fierz-Pauli theories, thus
describing (massless and massive) spin-2 fields respectively.

In correspondence with the appearance of a new class of massive gauge ghosts, the compensating FP
Lagrangian also contains a greater number of propagating fields. These come from the HD doubling
of the FP anticommuting fermion fields and from the boson fields, which are just auxiliary
decoupled artifacts in ordinary two-derivative theories and now propagate and couple to the
gauge-independent fields. The negative loop contributions of the massive fermion FP fields yield
twice the amount needed to compensate for the third ghost loops, and it is just the positive
contributions of the boson FP fields that provide the exact balance. This striking compensation
mechanism, peculiar to HD gauge theories and easy to extrapolate to higher than four-derivative
theories, illustrates well the power and richness of the BRST procedure. Of course, checking the
exact cancellation of ghost loop contributions would require considering the actual residues of
the propagators and vertex couplings arising in the complete non-polynomial theory, a task which
is beyond the purposes of this work.

A final comment on locality is in order. From an HD local theory, the order-reducing procedure
leads to an equivalent two-derivative theory. For scalar theories, the LD counterpart is directly
local [16]. In gauge theories of vector fields there is always a choice of the gauge fixing
parameters for which it is also local [15]. For tensor fields, the example studied in this paper
tells us that obtaining an equivalent LD theory with independent free Lagrangians for the
different spin states is not compatible with locality, although one comes close to this goal by
suitably picking the gauge parameters. This obstruction is related to the more complex structure
of the constraints of the tensor field theories, like the one that prevents having a local
tensor theory of second differential-order for spin-1 fields.
\vfill
\eject
\section*{Appendix I}
We use the notations
\begin{eqnarray}
g_{\mu \nu} &=& \eta_{\mu \nu} + h_{\mu \nu} \nonumber\\
A^{\mu} & \equiv & \partial_{\nu}h^{\mu \nu} \nonumber\\
h & \equiv & h_{\mu}^{\mu} \nonumber\\
X_{(\mu \nu)} & \equiv & X_{\{\mu \nu\}} \equiv X_{\mu \nu} + X_{\nu \mu} \quad
,\nonumber\end{eqnarray}
and the Minkowski metric is $ \eta_{\mu \nu} = diag(+---) $.
\bigskip
The spin projectors are
\begin{eqnarray}
P^{(2)}_{\mu\nu,\rho\sigma}&=&\frac{1}{2}\theta_{\mu(\rho}\theta_{\nu\sigma)}
-\frac{1}{3}\theta_{\mu\nu}\theta_{\rho\sigma}\\
P^{(1)}_{\mu\nu,\rho\sigma}&=&\frac{1}{2}\theta_{\{\mu(\rho}\omega_{\nu\}\sigma)}\\
P^{(S)}_{\mu\nu,\rho\sigma}&=&\frac{1}{3}\theta_{\mu\nu}\theta_{\rho\sigma}\\
P^{(W)}_{\mu\nu,\rho\sigma}&=&\omega_{\mu\nu}\omega_{\rho\sigma}
\end{eqnarray}
They are symmetric under the interchanges
\begin{equation}
\mu\leftrightarrow\nu\quad,\quad
\rho\leftrightarrow\sigma\quad,\quad\mu\nu\leftrightarrow\rho\sigma\quad ,
\end{equation}
idempotent, orthogonal to each other, and sum up to the identity operator in the space of
symmetric two-tensors, namely
\begin{equation}\bar{\eta}_{\mu\nu,\rho\sigma}\equiv \frac{1}{2}\eta_{\mu(\rho}\eta_{\nu\sigma)}
=(P^{(2)}+P^{(1)}+P^{(S)}+P^{(W)})_{\mu\nu,\rho\sigma}\quad .
\end{equation}
These projectors are constructed using the transverse and longitudinal projectors for vector
fields
\begin{eqnarray}
\theta_{\mu\nu}&=&\eta_{\mu\nu}-\frac{\partial_{\mu}\partial_{\nu}}{\Box}\\
\omega_{\mu\nu}&=&\frac{\partial_{\mu}\partial_{\nu}}{\Box}\quad\quad\quad.
\end{eqnarray}
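As a quick check (at one arbitrarily chosen off-shell momentum; the script and its variable names are ours), $\theta$ and $\omega$ with one index raised are idempotent, mutually orthogonal, and sum to the identity, so that ordinary matrix multiplication implements the index contractions:

```python
from fractions import Fraction as F

# Minkowski metric diag(+,-,-,-) with exact rational arithmetic.
eta = [[F(0)] * 4 for _ in range(4)]
eta[0][0] = F(1)
for i in (1, 2, 3):
    eta[i][i] = F(-1)

p_up = [F(2), F(1), F(-1), F(3)]                      # sample momentum p^mu
p_dn = [sum(eta[m][n] * p_up[n] for n in range(4)) for m in range(4)]
p2 = sum(p_up[m] * p_dn[m] for m in range(4))         # p^2 = 4-1-1-9 = -7

# theta^mu_nu = delta^mu_nu - p^mu p_nu / p^2 ,  omega^mu_nu = p^mu p_nu / p^2
delta = [[F(1 if m == n else 0) for n in range(4)] for m in range(4)]
omega = [[p_up[m] * p_dn[n] / p2 for n in range(4)] for m in range(4)]
theta = [[delta[m][n] - omega[m][n] for n in range(4)] for m in range(4)]

def mul(A, B):
    return [[sum(A[m][k] * B[k][n] for k in range(4)) for n in range(4)]
            for m in range(4)]

assert mul(theta, theta) == theta                            # idempotent
assert mul(omega, omega) == omega
assert mul(theta, omega) == [[F(0)] * 4 for _ in range(4)]   # orthogonal
assert all(theta[m][n] + omega[m][n] == delta[m][n]
           for m in range(4) for n in range(4))              # completeness
print("theta/omega projector algebra verified at p^2 =", p2)
```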
We also use the transfer operators
\begin{eqnarray}
P^{(SW)}_{\mu\nu,\rho\sigma}&=&\theta_{\mu\nu}\omega_{\rho\sigma}\\
P^{(WS)}_{\mu\nu,\rho\sigma}&=&\omega_{\mu\nu}\theta_{\rho\sigma}
\end{eqnarray}
from which we define
\begin{equation}
P^{\{ SW \}}=P^{\{ WS\}}\equiv P^{(SW)}+P^{(WS)}\quad.
\end{equation}
\noindent{T}hey have non-zero products
\begin{eqnarray}
P^{(SW)}P^{(WS)}&=&3P^{(S)}\\
P^{(WS)}P^{(SW)}&=&3P^{(W)}\\
P^{(S)}P^{(SW)}&=&P^{(SW)}P^{(W)}=P^{(SW)}\\
P^{(W)}P^{(WS)}&=&P^{(WS)}P^{(S)}=P^{(WS)}\\
P^{\{ SW\} }P^{\{ SW\} }&=&3(P^{(S)}+P^{(W)})\\
P^{(S)}P^{\{ SW\} }&=&P^{\{ SW \}}P^{(W)}=P^{(SW)}\\
P^{(W)}P^{\{ SW\} }&=&P^{\{ SW\} }P^{(S)}=P^{(WS)}\quad.
\end{eqnarray}
We define also
\begin{equation}
\bar{\bar {\eta} }_{\mu\nu,\rho\sigma} \equiv
\eta_{\mu\nu}\eta_{\rho\sigma}=3P^{(S)}+P^{(W)}+P^{\{SW\}}\quad.
\end{equation}
\bigskip
Here we collect some formulae which are useful for dealing with combinations of the
operators above. Inverse:
\begin{eqnarray}
{\cal M}&=&\lambda_{2}P^{(2)} +
\lambda_{1}P^{(1)}+\lambda_{S}P^{(S)}+\lambda_{W}P^{(W)}
+\lambda_{SW}P^{\{SW\}}\nonumber\\
{\cal M}^{-1}&=&\frac{1}{\lambda_{2}}P^{(2)} +
\frac{1}{\lambda_{1}}P^{(1)}
+\frac{\lambda_{W}}{\lambda_{S}\lambda_{W}-3\lambda_{SW}^{2}} P^{(S)}
+\frac{\lambda_{S}}{\lambda_{S}\lambda_{W}-3\lambda_{SW}^{2}}P^{(W)}
\nonumber\\
& & \mbox{} -\frac{\lambda_{SW}}{\lambda_{S}\lambda_{W}-3\lambda_{SW}^{2}}
P^{\{SW\}} \quad .
\nonumber
\end{eqnarray}
\noindent{W}hen computing symmetric products like (26), operators in the subspace $S\oplus W$ of
the form
\begin{equation}
\Omega(\tau_{S},\tau_{W},\tau_{SW})=\tau_{S}P^{(S)}+\tau_{W}P^{(W)}+\tau_{SW}P^{\{SW\}}
\end{equation}
occur, for which one has the product law
\begin{eqnarray}
\Omega(\tau_{S},\tau_{W},\tau_{SW})\Omega(\lambda_{S},\lambda_{W},\lambda_{SW})
\Omega(\tau_{S},\tau_{W},\tau_{SW})&=&\\
\Omega(\tau_{S}^{2}\lambda_{S}+3\tau_{SW}^{2}\lambda_{W}
+6\tau_{S}\tau_{SW}\lambda_{SW}&,&\nonumber\\
\tau_{W}^{2}\lambda_{W}+3\tau_{SW}^{2}\lambda_{S}
+6\tau_{W}\tau_{SW}\lambda_{SW}&,&\nonumber\\
\tau_{S}\tau_{W}\lambda_{SW}+\tau_{S}\tau_{SW}\lambda_{S}
+\tau_{SW}\tau_{W}\lambda_{W}+3\tau_{SW}^{2}\lambda_{SW})&&\nonumber
\end{eqnarray}
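The product law can be checked with a faithful rational-entry matrix representation of the $S\oplus W$ operator algebra, $P^{(S)}\to E_{11}$, $P^{(W)}\to E_{22}$, $P^{(SW)}\to 3E_{12}$, $P^{(WS)}\to E_{21}$, which reproduces all the non-zero products listed above (the representation and the sample values are ours):

```python
from fractions import Fraction as F

# Omega(tS, tW, tSW) = tS*P^(S) + tW*P^(W) + tSW*P^{SW} in the 2x2
# representation P^(S)->E11, P^(W)->E22, P^(SW)->3*E12, P^(WS)->E21,
# which gives P^(SW)P^(WS) = 3 P^(S) and P^(WS)P^(SW) = 3 P^(W).
def Omega(tS, tW, tSW):
    return [[tS, 3 * tSW], [tSW, tW]]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

tS, tW, tSW = F(2), F(-1), F(3)
lS, lW, lSW = F(5), F(7), F(-2)

lhs = mul(mul(Omega(tS, tW, tSW), Omega(lS, lW, lSW)), Omega(tS, tW, tSW))
rhs = Omega(tS**2 * lS + 3 * tSW**2 * lW + 6 * tS * tSW * lSW,
            tW**2 * lW + 3 * tSW**2 * lS + 6 * tW * tSW * lSW,
            tS * tW * lSW + tS * tSW * lS + tSW * tW * lW + 3 * tSW**2 * lSW)
assert lhs == rhs
print("Omega product law verified")
```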
\bigskip
A basis of zeroth differential-order local operators with the symmetries (81) is provided
by $\bar{\eta}$ and $\bar{\bar{\eta}}$. Local second-order operators can be expanded in the basis
\begin{eqnarray}
{\cal C}_{1}\Box&:=& \left(\frac{1}{2} P^{(2)}-P^{(S)}\right) \Box\nonumber\\
{\cal C}_{2}\Box&:=&\left( \frac{1}{2}P^{(1)}+3P^{(S)}\right) \Box \nonumber\\
{\cal C}_{3}\Box&:=&\left( P^{\{SW\}}+6P^{(S)}\right) \Box \\
{\cal C}_{4}\Box&:=&\left( P^{(W)}-3P^{(S)}\right) \Box\nonumber
\end{eqnarray}
Thus, a general local LD operator has the form
\begin{eqnarray}
\Omega^{LD}&=&\sum_{i=1}^{4} \alpha_{i} {\cal C}_{i} \Box
+ a_{1} \bar{\eta}+a_{2}\bar{\bar{\eta}}\quad .
\end{eqnarray}
Einstein's (linearized) theory displays the operator ${\cal C}_{1}$. Fierz-Pauli's has the same
kinetic term and a mass term built with $\bar{\eta}-\bar{\bar{\eta}}$. When using the field basis
obtained by $Q(\lambda)$-transforming the theory, this kinetic term displays the operator
\begin{equation}
Q(\lambda) {\cal C}_{1} \Box Q(\lambda) =
\left( \frac{1}{2} P^{(2)}
-\frac{4}{27} \frac{(\lambda -1)^{2}}{\lambda^{2}}P^{(W)}\right)\Box \quad
\end{equation}
For $\lambda=-2$ it becomes local again, namely
$\left( \frac{1}{2}P^{(2)}-\frac{1}{3}P^{(W)} \right)\Box
={\cal C}_{1}\Box -\frac{1}{3}{\cal C}_{3}\Box $
which describes (linearized) gravity as properly as the original operator ${\cal C}_{1}\Box$ did.
\bigskip
\bigskip
\section*{Appendix II}
The kinetic terms for $\tilde h$ and $\tilde \pi$ in (25) contain the operator $N\Box$
which is local for arbitrary gauge parameters. In fact one has that
\begin{equation}
N=a{\cal C}_{1}
-\xi_{3}{\cal C}_{2}
-\lambda(\lambda-1)\xi_{3}{\cal C}_{3}
-\xi_{3}(\lambda-1)^{2}{\cal C}_{4}
\end{equation}
However, the ``mass term''
$ -\frac{1}{2} \tilde{\pi} NM^{-1}N \tilde{\pi} $ is local only for a choice of gauge parameters
obeying the conditions
\begin{eqnarray}
\xi_{1} &=& -c\frac{\xi_{3}^{2}}{a^{2}}\quad , \\
\frac{a^{2}}{2c}\frac{\xi_{1}-\xi_{2}}{\xi_{3}^{2}} &=& -\frac{3b+c}{4b+c}
\quad , \\
\lambda &=& \frac{b}{4b+c}\quad ,
\end{eqnarray}
that are obtained by requiring $NM^{-1}N$ to be a linear combination of
$\bar{\eta}$ and $\bar{\bar{\eta}}$. This leaves one of the parameters $\xi$
still arbitrary.
These same conditions make the fermion Lagrangian (49) local.
In view of the conditions above, a theory in which $4b+c=0$ does not have
a local LD equivalent. But this case was critical already for the complete Diff-invariant theory
since it is not regular in $R$ and a general-covariant Legendre transform (see equation (8)
of [13]) cannot be performed. With gauge fixing terms and for the
linearized field, the (just Lorentz-covariant) Legendre transform can
always be carried out (equation (17) is not singular for $4b+c=0$)
and we instead have non-locality of the LD theory.
\bigskip
When we consider the $Q$-transformed theory, the potentially non-local
operator is
\begin{eqnarray}
\hat{N}\hat{M}^{-1}{\hat N}&=&\frac{a^2}{2c}P^{(2)}
+\frac{4}{27}\frac{(\lambda -1)^2}{\lambda^{2}} \frac{a^{2}}{2(3b+c)}P^{(W)}
\nonumber\\
&-&\frac{\xi_{3}^{2}}{2\xi_{1}}P^{(1)}
-\frac{4}{27}\frac{(\lambda-1)^{4}}{\lambda^{2}}
\frac{\xi_{3}^{2}}{\xi_{1}-\xi_{2}}P^{(S)}\quad .
\end{eqnarray}
As explained before, we must take $\lambda=-2$ in order to keep
the locality of the source term. In that case $\hat{N}\Box$ remains local.
Requiring locality for $\hat{N}\hat{M}^{-1}\hat{N}$ leads to
\begin{eqnarray}
\xi_{1}&=&-\frac{1}{5}\xi_{2}\\
\xi_{3}^{2}&=&\frac{a^{2}}{5c}\xi_{2}\\
c&=&-\frac{9}{2}b
\end{eqnarray}
These conditions are the same as those found before, but now (104) yields a
condition, namely (108), on the parameters of the original gauge-invariant theory.
\bigskip
\bigskip
\section*{Appendix III}
We briefly outline the order-reduction of the higher-derivative FP Lagrangian for anticommuting
fields
\begin{equation}
{\cal L}_{HD}= \bar{C}_{\mu}\,
\left(
\Box(a_1\Box +b_1)\,\theta^{\mu \nu}
+\Box(a_2\Box +b_2)\,\omega^{\mu \nu}
\right)\,C_{\nu} +\bar{\zeta}^{\mu}C_{\mu}
+\bar{C}_{\mu}\zeta^{\mu}
\end{equation}
where $\zeta$ and $\bar{\zeta}$ are external sources which are also anticommuting. Dropping total
spacetime derivatives, conjugate momenta may be defined as the left derivatives
\begin{eqnarray}
{\cal P}^{\mu}&=&\frac{\partial^{L} {\cal L}_{HD}}
{\partial \,\Box \,\bar{C}_{\mu}}=
{\cal M}^{\mu \nu} \Box\,C_{\nu}
+\frac{1}{2}{\cal N}^{\mu \nu} C_{\nu}\\
\bar{\cal P}^{\mu}&=&\frac{\partial^{L} {\cal L}_{HD}}
{\partial \,\Box\, C_{\mu}}=
-{\cal M}^{\mu \nu} \Box \,\bar{C}_{\nu}
-\frac{1}{2}{\cal N}^{\mu \nu} \bar{C}_{\nu}
\end{eqnarray}
where
\begin{equation}
\begin{array}{lr}
{\cal M}\equiv a_1\theta +a_2\omega \quad, & {\cal N}\equiv b_1\theta +b_2\omega\quad ,
\end{array}
\end{equation}
from which
\begin{eqnarray}
\Box\, C_{\mu}& =&{\cal M}^{-1}_{\mu \nu}\left(
{\cal P}^{\nu}-\frac{1}{2}{\cal N}^{\nu \rho} C_{\rho} \right)\\
\Box \,{\bar C}_{\mu}& =&-{\cal M}^{-1}_{\mu \nu}\left(
\bar{\cal P}^{\nu}+\frac{1}{2}{\cal N}^{\nu \rho}\bar{C}_{\rho}
\right)
\end{eqnarray}
Then the ``Hamiltonian'' is
\begin{eqnarray}
{\cal H} &\equiv&(\Box \bar{C}){\cal P}+(\Box C) \bar{\cal P}-{\cal L}\nonumber\\
&=& -\left(\bar{\cal P} +\frac{1}{2}{\cal N}\bar{C}\right)
{\cal M}^{-1}
\left({\cal P} -\frac{1}{2}{\cal N}C \right)
-\bar{\zeta}^{\mu}C_{\mu}
-\bar{C}_{\mu}\zeta^{\mu}
\end{eqnarray}
With the field redefinition
\begin{equation}
\begin{array}{ll}
C=E+F&\bar{C}=\bar{E}+\bar{F}\\
{\cal P}=\frac{1}{2}{\cal N}\left( E-F\right)&
\bar{{\cal P}}=\frac{1}{2}{\cal N}\left(\bar{F}-\bar{E}\right)
\end{array}
\end{equation}
the Helmholtz Lagrangian
\begin{equation}
{\cal L}_{H}\equiv(\Box \,C)\bar{\cal P} +(\Box\, \bar{C}){\cal P} -{\cal H}
\end{equation}
becomes
\begin{eqnarray}
{\cal L}_{LD}&=& \bar{E}{\cal N} \Box\, E
-\bar{F}
\left( {\cal N}\Box +{\cal N}{\cal M}^{-1}{\cal N} \right)
F \nonumber\\
& & \mbox{} +\bar{\zeta}( E + F )
+(\bar{E}+\bar{F})\zeta\quad .
\end{eqnarray}
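As a quick consistency check on this diagonalization (dropping total spacetime derivatives), the redefinition gives
\begin{equation}
{\cal P}-\frac{1}{2}{\cal N}C=-{\cal N}F \quad , \qquad
\bar{\cal P}+\frac{1}{2}{\cal N}\bar{C}={\cal N}\bar{F} \quad ,
\end{equation}
so the ${\cal M}^{-1}$ term of $-{\cal H}$ collapses to
$-\bar{F}{\cal N}{\cal M}^{-1}{\cal N}F$, reproducing the non-derivative part of
the $F$ sector of ${\cal L}_{LD}$, while the kinetic terms
$(\Box\,C)\bar{\cal P}+(\Box\,\bar{C}){\cal P}$ combine, after integration by
parts, into $\bar{E}{\cal N}\Box\,E-\bar{F}{\cal N}\Box\,F$.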
\end{document}
\section{Introduction}
There has been a great deal of interest in the problem of reaction
front propagation in non-equilibrium systems. This issue arises in
systems ranging from flames~\cite{kpp} to bacterial colonies~\cite{bacteria}, from solidification
patterns~\cite{review} to genetics~\cite{genetics}. Most of the
theoretical work in this
area involves solving deterministic reaction-diffusion equations. Here,
we focus on effects that occur when one goes beyond this mean-field
treatment and considers the effects of fluctuations.
By now, it is clear
that there are several possible mechanisms whereby the velocity of
a deterministic reaction-diffusion front can be selected. For cases where
we propagate into a linearly unstable state, the marginal stability
criterion~\cite{marginal} suggests that the fastest stable front is the one that
is observed for all physical initial conditions. For propagation into
a metastable state, there is a unique front solution consistent with
the boundary conditions and hence there is no selection to be done. In
between, there is the case of a nonlinearly unstable state in which the
exponentially localized front is chosen. These principles have been
verified in many examples and in some cases can be rigorously
derived~\cite{Aronson}.
However, it is understood that deterministic
equations are often only approximations to the actual non-equilibrium
dynamics. This is particularly clear in the case of chemical reaction
systems where the true dynamics is a continuous time Markov process which
gives rise to a reaction-diffusion equation only in the limit of an infinite
number of particles per site~\cite{Kampen}. More generally, having a finite number of
particles gives rise to fluctuations that may be important in the
front propagation problem. It has been hypothesized in a variety of
systems~\cite{mean-dla,gene-cutoff,virus1} that the
leading effect of such fluctuations is to provide an
effective cutoff on the reaction rate at very low chemical concentrations.
If this is the case, calculations by Brunet and Derrida~\cite{BD} predict that in
the case of a system which
(in the deterministic limit) exhibits (linear) marginal stability (MS) selection,
the front velocity obeys the scaling
$v_{MS} - v \sim {1\over \ln^2 {N}}$, where $N$ is the (mean-field) number of
particles per site in the stable state. Direct simulations of the
underlying Markov processes have, in two cases to date~\cite{BD,KNS}, been consistent
with this predicted form, albeit with some uncertainty regarding the
coefficient. Also, we note in passing that the cutoff idea is the simplest
one which explains the recently discovered fact\cite{nature} that one can have
diffusive instabilities of a front in a chemical reaction system which
do not show up in a reaction-diffusion treatment thereof.
Our purpose here is to introduce a different approach for studying the role of
these fluctuations in modifying the front velocity. There is a
well-established machinery which transforms the master equation for the Markov processes of
chemical reaction systems into an associated stochastic
differential equation. This was first proposed by Doi\cite{Doi}, and clarified in
some seminal work of Peliti~\cite{Peliti}. This framework has in fact been used for
the study of critical phenomena associated with bulk transitions in
reaction dynamics~\cite{CT}, but has not been applied to the issue of front propagation
far from such a bulk transition. Here, we directly simulate the
relevant stochastic equation; this requires the analytic solution of
an (interesting in its own right) single-site problem, which then, via
a split-step method, allows us to time-step the entire spatially-extended
system. Our results to date verify the Brunet-Derrida scaling and in
fact are even consistent with the coefficient obtained by the
cutoff approach.
The outline of this work is as follows. In section II, we review the
mapping from the master equation to a Langevin equation with multiplicative
noise. Next, we solve a variety of single-site problems as a prelude to
introducing our simulation method. We then tackle the front problem
numerically and compare our findings to the results obtained by augmenting
the deterministic system with a cutoff. In order to accomplish this,
the findings of Brunet and Derrida are extended to include the effects
of finite resolution in space and time. At the end, we summarize the open
issues that we hope to address in the future.
\section{Derivation of the stochastic equation}
In this paper, we will study the following space-lattice reaction scheme:
\begin{equation}\label{split}
A \stackrel \alpha \rightarrow 2A \nonumber
\end{equation}
\begin{equation}\label{collis}
2A \stackrel \beta \rightarrow A \nonumber
\end{equation}
\begin{equation}\label{anih}
A \stackrel \lambda \rightarrow 0 \nonumber
\end{equation}
\begin{equation}\label{birth}
0 \stackrel \tau \rightarrow A \nonumber
\end{equation}
\begin{equation}\label{dif}
A_i \stackrel \mu \rightarrow A_e \nonumber
\end{equation}
where $e$ runs over the nearest-neighbor sites of site $i$; $\alpha$, $\beta$, $
\lambda$,
$\mu$, $\tau$ are rates of the corresponding reactions, i.e. probabilities of
transition per unit time. This process is described by the master
equation
\begin{eqnarray} \label{master}
\frac{dP(\{n_i\};t)}{dt}=\sum_{i} \left[ \frac{\partial P(\{n_i\};t)}{\partial t}
\bigg\vert_{\alpha}+
\frac{\partial P(\{n_i\};t)}{\partial t}\bigg\vert_{\beta}+
\frac{\partial P(\{n_i\};t)}{\partial t}\bigg\vert_{\lambda} \right. +\nonumber\\
\left. \frac{\partial P(\{n_i\};t)}{\partial t}\bigg\vert_{\tau}+
\frac{\partial P(\{n_i\};t)}{\partial t}\bigg\vert_{\mu} \right]
\end{eqnarray}
which states that the probability $P(\{n_i\};t)$ of having $n_i$ particles
on sites
$i$ at some time $t$ changes via each of the elementary processes:
\begin{enumerate}
\item one particle splitting into two
\begin{equation}\label{massplit}
\frac{\partial P(\{n_i\};t)}{\partial t}\bigg\vert_{\alpha}=\alpha[(n_i-1)
P(\ldots,n_i-1,\ldots;t)-n_i P(\ldots,n_i,\ldots;t)]
\end{equation}
\item
two particle reaction with one being annihilated
\begin{equation}\label{mascol}
\frac{\partial P(\{n_i\};t)}{\partial t}\bigg\vert_{\beta}=\beta[(n_i+1)n_i
P(\ldots,n_i+1,\ldots;t)-n_i(n_i-1)P(\ldots,n_i,\ldots;t)]
\end{equation}
\item
one particle annihilation
\begin{equation}\label{masanih}
\frac{\partial P(\{n_i\};t)}{\partial t}\bigg\vert_{\lambda}=\lambda[(n_i+1)
P(\ldots,n_i+1,\ldots;t)-n_i P(\ldots,n_i,\ldots;t)]
\end{equation}
\item
particle birth from vacuum
\begin{equation}\label{masbirth}
\frac{\partial P(\{n_i\};t)}{\partial t}\bigg\vert_{\tau}=\tau[P(\ldots,n_i-1,
\ldots;t)
-P(\ldots,n_i,\ldots;t)]
\end{equation}
\item
diffusion
\begin{equation}\label{masdif}
\frac{\partial P(\{n_i\};t)}{\partial t}\bigg\vert_{\mu}=\mu\sum_{e}[(n_e+1)
P(\ldots,n_i-1,n_e+1,\ldots;t)-n_i P(\ldots,n_i,\ldots;t)]
\end{equation}
\end{enumerate}
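The elementary processes above can be simulated directly with Gillespie's algorithm; the following single-site sketch (diffusion omitted for brevity, rate values illustrative) is a minimal version of the kind of direct Markov simulation mentioned in the introduction.

```python
import random

# Direct (Gillespie) simulation of the on-site reactions
# A -> 2A (alpha), 2A -> A (beta), A -> 0 (lambda), 0 -> A (tau);
# diffusion between sites is omitted and the rates are illustrative.
ALPHA, BETA, LAM, TAU = 1.0, 0.01, 0.1, 0.05
CHANGES = [+1, -1, -1, +1]            # change of n for each channel

def propensities(n):
    """Transition rates out of a state with n particles."""
    return [ALPHA * n, BETA * n * (n - 1), LAM * n, TAU]

def gillespie(n0, t_max, rng):
    n, t = n0, 0.0
    while True:
        a = propensities(n)
        a_tot = sum(a)                # TAU > 0, so a_tot never vanishes
        t += rng.expovariate(a_tot)   # exponential waiting time
        if t > t_max:
            return n
        r, acc = rng.random() * a_tot, 0.0
        for rate, dn in zip(a, CHANGES):
            acc += rate
            if r < acc:               # channel chosen with prob rate/a_tot
                n += dn
                break

final = gillespie(n0=50, t_max=100.0, rng=random.Random(1))
```

For the rates chosen here the population fluctuates around the mean-field fixed point $(\alpha-\lambda)/\beta$, never going negative because the $n$-dependent propensities vanish at $n=0$ for all but the birth channel.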
In this section, we provide a self-contained derivation of the
stochastic equation whose solution is directly related to the solution
of this master equation. This is by now fairly standard, but we find
it useful to include this derivation here both for completeness and
for fixing various parameters in the final Langevin system.
Following Doi\cite{Doi},
we introduce a vector in Fock space $|\{n_i\}\rangle$ and raising and lowering
operators $\hat{a}^+_i$, $\hat{a}_i$ with the properties:
\begin{eqnarray}
\hat{a}_i|\ldots, n_i, \ldots\rangle =n_i |\ldots, n_i-1, \ldots\rangle \nonumber\\
\hat{a}_i^+|\ldots, n_i, \ldots\rangle = |\ldots, n_i+1, \ldots\rangle
\end{eqnarray}
and the commutation relation
\begin{equation} \label{com}
[\hat{a}_i,\hat{a}_j^+]=\delta_{ij}
\end{equation}
We choose an initial condition for the master equation $(\ref{master})$
to be a Poisson state,
\begin{equation}
P(\{n_i\};t=0)=e^{-N_A(0)} \prod_i \frac{n_{0i}^{n_i}}{n_i!}
\end{equation}
where $N_A(0)=\sum\limits_i n_{0i} $ is the expected total number of particles.
If we define the time dependent vector
\begin{equation}
|\phi(t)\rangle=\sum_{\{n_i\}}P(\{n_i\};t)|\{n_i\}\rangle
\end{equation}
the master equation can be written in the Schr\"{o}dinger form
\begin{equation}\label{Schr}
\frac{\partial}{\partial t}|\phi(t)\rangle=-\hat{H}|\phi(t)\rangle
\end{equation}
where $\hat{H} = \sum_i \hat{H}_i$ and the latter is given by
\begin{equation}\label{H}
\hat{H}_i=-\mu \sum_e \hat{a}_i^+(\hat{a}_e-\hat{a}_i)+
\alpha [1-\hat{a}_i^+]\hat{a}_i^+\hat{a}_i-
\beta [1-\hat{a}_i^+]\hat{a}_i^+\hat{a}_i^2-
\lambda [1-\hat{a}_i^+]\hat{a}_i+
\tau [1-\hat{a}_i^+]
\end{equation}
The formal solution of this equation is
\begin{equation}
|\phi(t)\rangle =e^{-\hat{H}t}|\phi(0)\rangle=e^{-N_A(0)}e^{-\hat{H}t}e^{
\sum_i \hat{a}_i^+ n_{0i}}|0\rangle
\end{equation}
To be able to calculate average values for observables, we need to introduce
the projection state
\begin{equation}
\langle |=\langle 0 | \prod_i e^{\hat{a}_i}
\end{equation}
The inner product of this with any state $|\{n_i\}\rangle$ gives $1$.
Then any normal-ordered polynomial operator satisfies
\begin{equation}
\langle|Q(\{\hat{a}_i^+\},\{\hat{a}_i\})=\langle|Q(\{1\},\{\hat{a}_i\})
\end{equation}
Using this equation we get for any observable
\begin{equation} \label{avr}
\langle A(t) \rangle = \sum_{\{n_i\}} A(\{n_i\})P(\{n_i\};t)=\langle|
\tilde{A} (\{\hat{a}_i\})|\phi(t)\rangle
\end{equation}
where $\tilde{A} (\{\hat{a}_i\})$ is what we obtain by using
the commutation relation to normal order $A$ and thereafter setting $\hat{a}_i^+$ to
$1$.
In order to write a path integral representation for the time
evolution operator, we introduce a set of coherent states (see \cite{CO} for
a more rigorous treatment)
\begin{equation}
\| \{z_i\} \rangle = e^{\sum_i z_i \hat{a}_i^+-z_i} | 0 \rangle
\end{equation}
where $z_i$ is the complex eigenvalue of $\hat{a}_i$.
In the case of real positive $\{z_i\}$ these states are Poissonian states with
$\langle {n}_i \rangle=z_i$.
By inserting the completeness relation,
\begin{equation}
1=(\prod_i \int \frac{d^2 z_i}{\pi}e^{-|z_i|^2+z_i+z_i^*})\|z\rangle \langle
z\|
\end{equation}
where $d^2 z_i=d(Re{}z_i)d(Im{}z_i)$, into expression $(\ref{avr})$ we get
\begin{eqnarray}
\langle|
\tilde{A}(\{\hat{a}_i\})e^{-\hat{H}t}|\phi(0)\rangle=\langle|
\tilde{A}(\{\hat{a}_i\})(1-\hat{H}\Delta t)^{N}\|z^{(0)}\rangle= \nonumber \\
\langle|\tilde{A}(\{\hat{a}_i\})
(\prod_i \int \frac{d^2
z^{(N)}_i}{\pi} e^{-|z_i^{(N)}|^2+z_i^{(N)}+{z^*_i}^{(N)}})
\|z^{(N)}\rangle \langle z^{(N)}\| \times \nonumber \\
(\prod_{j=1}^N (1-\hat{H} \Delta t))
\|z^{(0)}\rangle=
\langle 0 | (\prod_{j=1}^N \prod_i \int \frac{d^2 z^{(j)}_i}{\pi})
\tilde{A}(\{z^{(N)}_i\}) e^{-S}|0 \rangle
\end{eqnarray}
where $\Delta t=t/N$, $|\phi(0)\rangle =\|z^{(0)}\rangle$ and
\begin{eqnarray}
S=\sum_{j=1}^{N}\sum_i(H_i({z^*}^{(j)},z^{(j-1)})\Delta t +
|z_i^{(j)}|^2 -{z_i^*}^{(j)} z_i^{(j-1)}-z_i^{(N)}
+z_i^{(0)})= \\
\label{daction}
\sum_{j=0}^{N-1}\sum_i\Delta t(\frac{\bar{z}_i^{(j+1)}(z_i^{(j+1)}-
z_i^{(j)})}
{\Delta t} + H_i(\bar{z}^{(j+1)}+1,z^{(j)}))
\end{eqnarray}
Here ${z_i^*}^{(j)}=\bar{z}_i^{(j)}+1$ and $H_i(\{{z_i^*}^{(j+1)}\},
\{z^{(j)}\})$
is the same function of ${z_i^*}^{(j+1)}$, $z_i^{(j)}$ as
$\hat{H}_i(\{\hat{a}_i^+\},\{\hat{a}_i\})$ is of $\hat{a}_i^+$, $\hat{a}_i$.
In the continuous time limit we get
\begin{equation}
\langle A(t_0) \rangle = \frac{\int \prod_i {\cal D} \bar{z}_i {\cal D}
z_i
A(\{z_i(t_0)\})e^{-S[\{\bar{z}_i(t)\},\{z_i(t)\};t_0]}}{\int \prod_i
{\cal D} \bar{z}_i {\cal D} z_i e^{-S[\{\bar{z}_i(t)\},\{z_i(t)\};t_0]}}
\end{equation}
with
\begin{eqnarray}
S[\{\bar{z}_i(t)\},\{z_i(t)\};t_0]=\sum_i \int_0^{t_0} dt \ \left(\bar{z}_i(t)[\frac
{d}{d t} - \mu \nabla ^2 ] z_i(t)
-\alpha (1+ \bar{z}_i(t)) \bar{z}_i(t) z_i(t) + \right. \nonumber \\
\left. \beta (1+ \bar{z}_i(t)) \bar{z}_i(t) z_i^2(t) +
\lambda \bar{z}_i(t) z_i(t) -
\tau \bar{z}_i(t) \right)
\end{eqnarray}
where $\nabla ^2$ is the lattice Laplacian, $\nabla^2 z_i(t) =\sum_e (z_e(t) - z_i(t))$.
Now, we linearize the action using a Hubbard-Stratonovich transformation
\begin{equation}
e^{\bar{z}_i^2(\alpha z_i -\beta z_i^2) dt} \sim
\int d \eta_i e^{-\frac{1}{2}\eta_i^2- \bar{z}_i \sqrt{2(\alpha z_i
-\beta z_i^2)} \eta_i \sqrt{dt} }
\end{equation}
and integrate out the $\bar{z}$ variables
\begin{eqnarray}
\langle A(t_0) \rangle \sim \int \prod_i {\cal D} \eta_i {\cal D}
z_i e^{-\frac{1}{2}
\sum_i \int_0^{t_0} dt \eta_i^2(t)}A(\{z_i(t_0)\})
\prod_{t=0}^{t_0}\delta ( dz_i(t) - \mu \nabla^2 z_i(t) dt - \nonumber\\
\alpha z_i(t)dt +
\beta z_i^2(t) dt + \lambda z_i(t) dt - \tau dt - \sqrt{2(\alpha z_i(t)
-\beta z_i^2(t))} \eta_i(t) \sqrt{dt})
\end{eqnarray}
In this expression, there are $\delta$-functions at every time; this means that
only those $z_i(t)$ which satisfy the Langevin equation
\begin{equation} \label{Langevin}
dz_i(t)= \mu \nabla^2 z_i(t) dt +
(\alpha- \lambda) z_i(t)dt -
\beta z_i^2(t) dt + \tau dt + \sqrt{2(\alpha z_i(t)
-\beta z_i^2(t))} dW_i(t)
\end{equation}
(where $W_i(t)$ is a Wiener process)
contribute to the path integral. In other words, the variables $z_i(t)$ remain
on the trajectories described by equation $(\ref{Langevin})$.
Note that this equation must be considered as an Ito stochastic differential
equation, since we can see from the form of the action $(\ref{daction})$
that updating the variables $z_i$ to time-step $j+1$ only requires knowledge
of the variables at time-step $j$. Also, we note that for $\lambda
\geq 0$ and for small enough (positive) $\tau$, if the initial conditions
specify $0\leq z_i(0)\leq\alpha/\beta$, this will remain true for
all subsequent time. Thus, equation $(\ref{Langevin})$ describes the
temporal evolution of the system as a sequence of Poissonian states~\cite{Gardiner}.
For further analysis, we rescale $(\ref{Langevin})$ with $z=u \alpha/\beta$,
$t\rightarrow t/\alpha$, $\tilde{\lambda}=\lambda/\alpha$, $\tilde{\tau}=
\tau \beta /\alpha^2$, and $\tilde{\mu}=\mu/\alpha$. If we
furthermore let $N=\alpha / \beta$ be the mean-field number of particles in the
presence of only the first two processes (no spontaneous decay or spontaneous
creation), we obtain
\begin{equation} \label{L}
du_i= \tilde{\mu} \nabla ^2 u_i dt +
(1- \tilde{\lambda})u_idt -
u_i^2 dt + \tilde{\tau} dt + \sqrt{\frac{2}{N}} \sqrt{u_i
-u_i^2} dW_i
\end{equation}
with initial conditions $0 \le u_i (0) \le 1$.
\section{Exact Solutions of Some Local Langevin Equations}
In the absence of process $(\ref{birth})$, i.e. at $\tau=0$, equation $(\ref{L})$
has an absorbing state $u=0$.
In the vicinity of this point, equation $(\ref{L})$ cannot be treated by
merely setting $dt$ to a finite time-step. Such a scheme would often
give rise to a negative $u$, due to the (very large) noise term.
One ad-hoc way to circumvent this difficulty was given by Dickman~\cite{Dickman}, who proposed
to re-introduce discreteness into the state-space in the vicinity of the
absorbing state. Although this approach appears to work (it seems to lead
to the correct critical behavior near the bulk transition of this class
of models), it seems to be a step backward; after all, the original process
was discrete and the whole purpose of using the Langevin formalism is to
provide a (hopefully more analytically tractable) continuum description.
But, one must then come up with a different scheme for updating the
stochastic variables.
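To make the difficulty near the absorbing state concrete, here is a sketch (with illustrative numbers, and the diffusive coupling omitted) of how a naive fixed-step Euler-Maruyama update of equation $(\ref{L})$ with $\tilde{\tau}=\tilde{\lambda}=0$ steps below zero at a leading-edge site:

```python
import math

# Naive Euler-Maruyama step for eq. (L) with lambda = tau = 0 at a
# leading-edge site: near the absorbing state the noise amplitude
# ~ sqrt(u) dominates the drift ~ u, so a moderate downward Wiener
# increment takes u below zero, where sqrt(2(u - u^2)/N) is undefined.
# All numbers are illustrative.
N, dt = 10.0, 0.01
u = 1e-4                                   # small leading-edge value
dW = -2.0 * math.sqrt(dt)                  # a 2-sigma downward increment
drift = (u - u ** 2) * dt                  # deterministic change, ~1e-6
noise = math.sqrt(2.0 * (u - u ** 2) / N) * dW   # ~ -9e-4, dominates
u_naive = u + drift + noise                # negative: scheme breaks down
```

Since the noise scales as $\sqrt{u}$ while the drift scales as $u$, no reduction of $\Delta t$ cures this uniformly in $u$; this is what motivates the exact treatment of the noise term below.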
Our approach is to solve exactly the stochastic part of the evolution
equation and embed this via the split-step method in a complete update
scheme for a finite time-step $\Delta t$. We will discuss the details
of this scheme in the next section. Here, we provide an analytic
solution for several (local) Langevin equations, as these results will be
needed later. Also, this solution set is of interest on its own. There is
some limited consideration of equations of this sort in the
literature~\cite{SQRT}, but
as far as we can determine, these explicit solutions for the case of physical
no-flux boundary conditions at the absorbing state have not previously appeared.
So,
we consider Langevin equations with just the noise term. Let us start with
the simplest example,
\begin{equation} \label{Lsqrt}
du=\sqrt{2 u} dW
\end{equation}
The probability density $P(u,t)$ satisfies the associated Fokker-Planck equation
\begin{equation} \label{FP}
\frac{\partial P(u,t)}{\partial t}=\frac{\partial^2}{\partial u^2} u P(u,t)
\end{equation}
with initial condition $P(u,t) \bigg\vert_{t=0}=\delta(u-u_0)$. We want
our solution to be equal to zero at $u<0$ and have no flux leaking out
of this point; this will guarantee that the total probability remains a
constant, which we will choose to be unity.
To solve this equation, we define $\psi = uP$ and Laplace transform in time to obtain
\begin{equation}
u \frac{\partial ^2 \tilde{\psi} } {\partial u^2} (s) \ - \
s \tilde{\psi} (s) \ = \
-u_0 \delta (u-u_0)
\end{equation}
Here, $\tilde{\psi} (s)$ is the transform of $\psi$.
If we let $y=2\sqrt{u}$, this can be written as
\begin{equation}
\left( \frac{\partial^2}{\partial y^2} -
\frac{1}{y}\frac{\partial}{\partial y} - s \right) \tilde{\psi} \ = \
-\frac{y_0}{2} \delta (y-y_0)
\end{equation}
The homogeneous part of this equation can be recognized as a variant of
Bessel's equation. This allows us to write down a provisional solution in the form
\begin{equation}
\tilde{\psi} (s) \ = \ \frac{y y_0}{2} K_1 (\sqrt{s} y_> ) I_1 (\sqrt{s} y_<)
\end{equation}
where $I_1$ and $K_1$ are modified Bessel functions and $y_> (y_<)$ is
the larger (smaller) of $y$ and $y_0$. Returning to the original variables,
\begin{equation}
\tilde{P} (s) \ = \ 2 \frac{\sqrt{u_0}}{\sqrt{u}} K_1
(2 \sqrt{s u_> }) I_1 (2\sqrt{s u_<} )
\end{equation}
This solution does not, of course,
vanish for $u<0$ and hence we must modify it by multiplying by $\theta (u)$.
This does not change the fact that it solves the equation away from $u=0$
but it does introduce a discontinuity of size $2 \sqrt{s u_0} K_1 (2 \sqrt{s u_0 })
$. If we look at the original equation, we see that
this leads to a $\delta$ function via $u \delta ' (u) = -\delta (u)$. This
must be compensated by adding an explicit $\delta$ function piece to the
solution. The final result is
\begin{equation}
\tilde{P} (s) \ = \ 2 \frac{\sqrt{u_0}}{\sqrt{u}} K_1
(2 \sqrt{s u_> }) I_1 (2\sqrt{s u_<} ) \\
+ \ 2 \sqrt{\frac{u_0}{s}} K_1 (2 \sqrt{s u_0 }) \delta (u)
\end{equation}
We can do the inverse transform by the usual contour integral approach.
The details are particularly unilluminating, so we merely quote the final
result
\begin{equation}\label{solsqrt}
P (u,t) \ = \ \frac{1}{t} \sqrt{\frac{u_0}{u}} e^{-(u+u_0) /t}
I_1 (\frac{2\sqrt{u u_0}}{t} )
+ e^{-u_0/t} \delta (u)
\end{equation}
One can check explicitly that this solves the equation and also that
$P$ remains normalized for all times. The $\delta$ function piece
represents accumulation at the absorbing state; as $t$ gets large, all
the probability ends up there. The regular part of
$P(u,t)$ is shown
in Figure \ref{f1}. We see that as the ratio $u_0/t$ becomes
smaller,
the distribution gradually shifts towards zero and differs
from the Gaussian expected at very short times.
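The solution $(\ref{solsqrt})$ can also be checked, and sampled exactly, by noting that a term-by-term expansion of $I_1$ exhibits $P(u,t)$ as a Poisson mixture of Gamma densities (equation $(\ref{Lsqrt})$ is a squared Bessel process of dimension zero). A sketch in Python:

```python
import numpy as np

# Exact sampling of du = sqrt(2u) dW over a time t: expanding I_1 in
# (solsqrt) term by term shows that P(u,t) is a Poisson (mean u0/t)
# mixture of Gamma(n, scale=t) densities, the n = 0 term being the
# delta function at the absorbing state.
def sample_sqrt_noise(u0, t, size, rng):
    n = rng.poisson(u0 / t, size=size)
    u = np.zeros(size)
    pos = n > 0
    u[pos] = rng.gamma(shape=n[pos], scale=t)
    return u

rng = np.random.default_rng(0)
u0, t = 1.0, 0.5
samples = sample_sqrt_noise(u0, t, size=200_000, rng=rng)
absorbed = np.mean(samples == 0.0)   # converges to exp(-u0/t)
mean_u = samples.mean()              # no drift, so <u> stays at u0
```

The fraction of exactly-zero samples reproduces the weight $e^{-u_0/t}$ of the $\delta$-function piece, and the sample mean stays at $u_0$, as it must for a driftless equation.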
For completeness, we write down the solution of Langevin equations
with additional terms. If we take the system,
\begin{equation} \label{tau}
du=\tau dt + \sqrt{2 u} dW ,
\end{equation}
the probability density is
\begin{equation}
P(u,t) \ = \ \left( \frac{u_0}{u} \right) ^{\frac{1-\tau}{2}} \
\frac{I_{\tau-1}(2
\frac{\sqrt {u u_0}}{t}) }{t} \ e^{-(u+u_0)/t}
\end{equation}
For this case, spontaneous birth from the vacuum prevents the system from
falling irreversibly to the state $u=0$. Instead, there is an integrable
power-law singularity near $u=0$ which becomes a $\delta$ function in the
$\tau \rightarrow 0$
limit; this is shown in Figure \ref{f2}.
For the system
\begin{equation}
du=\alpha u dt + \sqrt{2 u} dW \label{alpha}
\end{equation}
the probability density is
\begin{equation}
P(u,t) \ = \ e^{-\frac{\alpha u_0 e^{\alpha t}}{e^{\alpha t }-1}}
\delta (u) \ + \ \sqrt{g(t) \frac{u_0}{u}}
I_1 \left( 2
\sqrt {g(t) u u_0}\right) \ e^{-\alpha \frac{u+u_0
e^{\alpha t }} {e^{\alpha t }-1}}
\end{equation}
where $g(t)=\frac{\alpha^2 e^{\alpha t}}{ (e^{\alpha t }-1)^2}$. In this
case, the drift toward infinity gives rise to a finite total probability
(for long times) of falling into the absorbing state. One can also work
out the case of both finite $\tau$ and finite $\alpha$.
For the system
\begin{equation}\label{u-u2}
du=\sqrt{2 (u-u^2)} dW
\end{equation}
we can derive a series representation for the
probability density,
\begin{eqnarray}\label{full}
P(u,t)=
\sum_{m=1}^{+\infty} e^{-m(m+1)t} \frac{2m+1}{2m(m+1)} \sqrt{\frac{
u_0(1-u_0)}{u(1-u)}}P_m^1(2 u_0 -1)P_m^1(2 u -1) + \nonumber\\
\delta(u)[1-u_0-\sum_{m=1}^{+\infty}(-1)^m\frac{2m+1}{2m(m+1)}\sqrt{u_0(1-u_0)}
e^{-m(m+1)t} P_m^1(2 u_0 -1)] - \nonumber\\
\delta(1-u)[u_0 + \sum_{m=1}^{+\infty}\frac{2m+1}{2m(m+1)}\sqrt{u_0(1-u_0)}
e^{-m(m+1)t} P_m^1(2 u_0 -1)]
\end{eqnarray}
where $P_m^1(x)$ is the associated Legendre polynomial. Successive terms in this sum decay
rapidly because of the exponential factor, and the sum can be computed numerically
to high accuracy. Note that there are absorbing states at both $u=0$ and
$u=1$. If the initial state starts close to one of these, the probability
is almost the same as $(\ref{solsqrt})$; if the initial state is in between,
then for short times the system is almost Gaussian.
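As a numerical sanity check on $(\ref{full})$, the integral of the regular part plus the two $\delta$-function weights should equal one for any truncation order, since the odd-$m$ contributions cancel term by term. A sketch using SciPy (the check is insensitive to the phase convention adopted for $P_m^1$):

```python
import numpy as np
from scipy.special import lpmv
from scipy.integrate import quad

# Check that the series solution for du = sqrt(2(u - u^2)) dW conserves
# probability: the regular part plus the delta-function weights at u = 0
# and u = 1 must sum to one (true term by term, for any truncation M).
# scipy's phase convention for P_m^1 is used consistently throughout.
u0, t, M = 0.3, 0.5, 8

def coef(m):
    return (2 * m + 1) / (2 * m * (m + 1)) * np.exp(-m * (m + 1) * t)

def regular(u):
    pref = np.sqrt(u0 * (1 - u0) / (u * (1 - u)))
    return sum(coef(m) * pref * lpmv(1, m, 2 * u0 - 1) * lpmv(1, m, 2 * u - 1)
               for m in range(1, M + 1))

cm = [coef(m) * np.sqrt(u0 * (1 - u0)) * lpmv(1, m, 2 * u0 - 1)
      for m in range(1, M + 1)]
w0 = 1 - u0 - sum((-1) ** m * c for m, c in zip(range(1, M + 1), cm))
w1 = u0 + sum(cm)
bulk, _ = quad(regular, 0.0, 1.0)
total = w0 + w1 + bulk               # equals 1 up to quadrature error
```

The apparent $1/\sqrt{u(1-u)}$ singularity cancels against the factor $\sqrt{1-x^2}$ inside $P_m^1(2u-1)$, so the integrand is in fact a polynomial in $u$ and the quadrature converges immediately.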
\section{Front propagation}
\subsection{Numerical method}
We are interested in numerically solving equation $(\ref{Langevin})$,
for the particular case of $\tau = \lambda =0$; this is the case which
reduces in the deterministic limit to the well-studied Fisher equation~\cite{genetics}. For
this purpose, we define a function
\begin{equation}
F_{u_0,\Delta t}(u)=\int_{0^-}^u P(u',\Delta t)\,du'
\end{equation}
where $P(u,\Delta t)$ is the analytical solution of a single-site Langevin
equation such as
$(\ref{L})$. This function has values ranging from $0$ to $1$. If $y$ is a random
variable uniformly distributed on $[0,1]$, then
$u=F^{-1}_{u_0,\Delta t}(y)$
is distributed according to the corresponding truncated Langevin equation at
time $\Delta t$. The remaining parts of the
complete Langevin equation are deterministic, and for those terms we can update $u$ via
a simple Euler scheme. We can then combine these two steps; we first
compute the
change in $u$ due to fluctuations and then the change of $u$ due to the
deterministic part (using the new value of $u$). Thus
\begin{equation}
u(t+\Delta t)=F^{-1}_{u(t),\Delta t}(y) + {\cal D}\{F^{-1}_{u(t),\Delta t}(y)\}
\Delta t
\end{equation}
where ${\cal D}\{u \}$ denotes the terms remaining after consideration of the
noise term.
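For concreteness, here is a minimal sketch of one such update, using the pure square-root kernel $(\ref{solsqrt})$ for the table (the same simplification we adopt in the Results subsection); the grid sizes and parameter values are illustrative rather than our production choices:

```python
import numpy as np
from scipy.special import ive

# One split-step update with the pure square-root kernel (solsqrt) used
# for the inverse-CDF table; grid and parameter values are illustrative.
dt, u0 = 0.01, 1.0

grid = np.linspace(1e-9, u0 + 4.0, 4001)
z = 2.0 * np.sqrt(grid * u0) / dt
# regular part of (solsqrt), using ive(1,z) = I_1(z)*exp(-z) so that the
# exponent combines into the overflow-free form -(sqrt(u)-sqrt(u0))^2/t:
dens = np.sqrt(u0 / grid) / dt * ive(1, z) * \
       np.exp(-(np.sqrt(grid) - np.sqrt(u0)) ** 2 / dt)
# cumulative distribution: delta-function weight at u = 0 plus the
# trapezoid-rule integral of the regular part
cdf = np.exp(-u0 / dt) + np.concatenate(
    ([0.0], np.cumsum(0.5 * (dens[1:] + dens[:-1]) * np.diff(grid))))

def noise_step(y):
    """Inverse-CDF sample of the noise-only kernel started at u0."""
    return 0.0 if y < cdf[0] else float(np.interp(y, cdf, grid))

def deterministic(u, u_left, u_right, mu):
    return mu * (u_left - 2.0 * u + u_right) + u - u ** 2

rng = np.random.default_rng(2)
u_new = noise_step(rng.random())                       # stochastic step
u_new += deterministic(u_new, 0.9, 1.1, mu=1.0) * dt   # Euler step
```

The scaled Bessel function `ive` keeps the table construction overflow-free even when $u_0/\Delta t$ is large, and sampling $y$ below the atom weight returns exactly zero, which is what allows a site to become stuck at the absorbing state.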
It is important to note that this scheme never allows
the field to go below zero, but it does allow a variable to be
stuck at zero until it is ``lifted'' by the diffusive interaction. This
is an absolutely necessary aspect of simulating processes with an absorbing
state. Approximations which do not allow for getting ``stuck'', such as
the system-size expansion method of van Kampen~\cite{Kampen} (where the noise correlation
is taken to be related to the solution of the deterministic limit of the
equation) get this wrong and hence cannot get the correct front velocity.
This explains why the simulation results of \cite{LLM} do not at all exhibit
the anomalous $N$ dependence expected via the Brunet-Derrida cutoff argument.
As we will see, our approach is much more successful.
\subsection{Marginal stability criterion for a discretized Fisher equation}
\label{A1}
As $N$ gets large, our results should approach those of the deterministic
system. Since this problem corresponds to propagation into an unstable
region, the velocity should be given by the marginal stability approach.
As is well known, this predicts a velocity equal to $2 \sqrt{D}$,
in the continuum (in time and space) limit.
Here, we extend this result to a discrete lattice and a finite time
update scheme, so as to be able to directly compare our simulation data
with the theoretical expectation.
After linearization, the deterministic part of the discretized
equation takes the form
\begin{equation} \label{fd}
\frac{u_i^{(j+1)}-u_i^{(j)}}{\Delta t}= \mu (u_{i+1}^{(j)}-2 u_i^{(j)} +
u_{i-1}^{(j)}) + u_i^{(j)}
\end{equation}
We want to compare this equation to the usual Fisher equation with diffusion coefficient
$D$. This means that $\mu=D/h^2$; we will consider the case $D=1$. We assume
that the front moves with constant velocity $c$ and therefore the variables
$u_i^{(j)}$ show a stroboscopic picture of this motion at times $j \Delta t $ on the lattice
sites $i$. If we move with the speed of the front, we see that its shape
decays exponentially as $e^{q (ih-c j \Delta t)}$, with $q<0$ for a front moving to the right. Substitution of this
expression to $(\ref{fd})$ gives the dependence of $c$ on the decay rate $q$
\begin{equation}\label{disp}
\frac{e^{-q c \Delta t}-1}{\Delta t}\ = \ \frac{2}{h^2}(\cosh q h -1) +1
\end{equation}
The standard marginal stability argument predicts that we can determine
the decay rate and asymptotic speed of the front (for
a sufficiently localized initial state) by solving
$(\ref{disp})$ as well as its derivative with respect to $q$
\begin{equation}\label{ms}
c e^{-q c \Delta t}\ = \ - \frac{2}{h}\sinh q h
\end{equation}
Simulations directly confirm this formula as well as
the Brunet and Derrida~\cite{BD} result
(actually derived earlier by Bramson~\cite{Bramson}) that
$c_\infty-c(t) \sim 1/t$ (see Figure \ref{fas01}).
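In practice, $(\ref{disp})$ and $(\ref{ms})$ are easy to solve together by eliminating $c$: writing $q=-\lambda$ with $\lambda>0$, the dispersion relation gives $c(\lambda)$ in closed form, and the selected speed is its minimum over $\lambda$. A sketch (parameter values illustrative):

```python
import math

# Selected front speed for the discretized deterministic equation:
# writing q = -lam (lam > 0 for a front invading u = 0 to the right),
# the dispersion relation (disp) gives c explicitly,
#   c(lam) = ln(1 + dt*(2/h^2*(cosh(lam*h) - 1) + 1)) / (lam*dt),
# and the marginal-stability speed is the minimum of c over lam.
def c_of_lam(lam, h, dt):
    growth = 2.0 / h ** 2 * (math.cosh(lam * h) - 1.0) + 1.0
    return math.log(1.0 + dt * growth) / (lam * dt)

def front_speed(h, dt, lam_max=5.0, n=50_000):
    return min(c_of_lam(lam_max * (i + 1) / n, h, dt) for i in range(n))

c_ms = front_speed(h=0.01, dt=1e-4)   # approaches 2*sqrt(D) = 2 for D = 1
```

Minimizing $c(\lambda)$ is equivalent to imposing $(\ref{ms})$, since the selected decay rate is the stationary point of $c(\lambda)$; in the continuum limit the minimum sits at $\lambda=1$ with $c=2$.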
As already mentioned, it has been conjectured that the leading effect
of the fluctuations is the imposition of an effective cutoff of order $1/N$ in the
deterministic equation. To check this, we need to extend the
Brunet and Derrida result to our discretized equation. The basic idea
is that there must be a small imaginary part of the decay rate so
as to satisfy the continuity conditions at the cutoff point; this
is discussed in detail in \cite{BD}. This leads directly to
\begin{equation}
Im \ q\ = \ \frac{\pi q_0}{\ln (A N) }
\end{equation}
where $q_0$ is the solution of the marginal stability criterion and $A$ is some
constant. Since $c'(q)$ is zero at the marginal stability point, we
can find the change in velocity $\delta c$ by considering the second
derivative of the function $c(q)$ given by $(\ref{disp})$.
We thus get
\begin{equation} \label{delc}
\delta c = \frac{\pi^2 q_0 (e^{-c_0 q_0 \Delta t} \cosh q_0 h -
\frac{c_0^2 \Delta t}{2})}{\ln^2 N}
\end{equation}
Again, simulations confirm this formula (see Figure \ref{fcut}).
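Equation $(\ref{delc})$ is elementary to evaluate; taking $q_0$ as the positive decay rate, the sketch below checks that in the continuum limit it reduces to the Brunet-Derrida coefficient $C=\pi^2$:

```python
import math

# Cutoff correction (delc) to the front speed for the discretized
# system. In the continuum limit (h, dt -> 0), with q0 = 1 and c0 = 2,
# it reduces to the Brunet-Derrida result delta_c = pi^2 / ln^2 N.
def delta_c(q0, c0, h, dt, N):
    bracket = math.exp(-c0 * q0 * dt) * math.cosh(q0 * h) - c0 ** 2 * dt / 2.0
    return math.pi ** 2 * q0 * bracket / math.log(N) ** 2

dc = delta_c(q0=1.0, c0=2.0, h=1e-6, dt=1e-9, N=1e4)
```

At finite $h$ and $\Delta t$ the bracketed factor rescales the coefficient, which is why the comparison in the Results subsection must use $(\ref{delc})$ rather than the continuum value $\pi^2$.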
\subsection{Results}
We now present the results of our simulation. We chose to make one
further simplification. We use the pure square root noise term instead
of the precisely correct term given in equation (\ref{L}). We do
this for computational ease, inasmuch as the expression derived for
this case is much simpler than that of equation (\ref{full}). Since it
is only the effect of the noise near the $u=0$ absorbing state which is
crucial for altering the selected velocity, this simplification
should not be essential. Once we have done this, the resulting
equation has the nice feature that the coefficient $1/\sqrt{N}$
in front of the noise term can be removed by the re-scaling $\hat{u}=u N$.
This means that we can
simulate equation (\ref{L}) using a fixed probability table (with the same time
step) for any $N$.
To actually evaluate the probability table $F_{u_0,\Delta t}(u)$, we chose
$512$ equidistant values of $\hat{u}_0$ in the
interval from 0 to 30. For each $\hat{u}_0$, the interval of
values for $\hat{u}$ where $F_{\hat{u}_0,\Delta t=0.01}(\hat{u})$
is non-trivial was divided into $1024$ equidistant points.
The new value of
$\hat{u}$
was then determined by linear interpolation of the data from the table. For
$\hat{u}_0 > 30$, new values of $\hat{u}$ were determined using a standard
algorithm for the Gaussian distribution, since this distribution is
the asymptotic limit of equation $(\ref{solsqrt})$ when $\hat{u}_0 \gg
\Delta t$ and
$\hat{u}\gg \Delta t$.
The difference between this distribution and the exact solution is small
for $\hat{u}_0>30$.
Finally, the computation of the stochastic term was turned off for
$\hat{u}_0>10^{-3} N$. This should not affect the speed which, we have
already argued, is only sensitive to what happens near $u=0$;
this insensitivity was also checked directly by running some simulations in
which the stochastic term is included for all values of $u$.
All of our simulations were run up to the time $t=10^{6}$. Four
values of the velocity, corresponding to time intervals of
approximately $2\times 10^{5}$, were obtained so as to get an average and
some error bar. In Figure \ref{h} we show data in the form
of $c-c_{\infty}$, where $c_{\infty}$ is calculated from equation (\ref{ms}),
versus $\ln^2{N}$. Also plotted for comparison is the function $ 11/\ln^2{N}$.
Note that over many orders of magnitude of $N$, the dependence derived
by Brunet and Derrida provides a very good fit to the data.
To get a more accurate indication of the data for large $N$, we present
in Figure \ref{fh1} a series of three runs, for differing values of the spatial lattice
spacing $h$. Under the hypothesis that the stochastic system should be
precisely the same as the deterministic system with the cutoff added,
the expected limiting values at infinite $N$ are shown as triangles
on the axis. It is clearly impossible to definitively conclude that the
curves are approaching these values. On the other hand, simple
extrapolations come very close and we believe that it is more likely than
not that this hypothesis is true. This is opposite to what was conjectured
based on simulations of a discrete Markov process, where the velocity
seems to scale, albeit with a different coefficient. Given the incredibly
slow convergence of this velocity at large $N$, we are pessimistic as to
whether any purely simulational strategy would provide a definitive answer
to this question. This therefore
offers a crucial issue for future theoretical analysis to investigate.
\section{Summary}
In this work, we have shown how to use the field-theoretic mapping of
discrete Markov processes to stochastic equations for a continuous density
variable to address the role of finite-particle-number fluctuations on
the velocity of reaction fronts. Specifically, we studied a model which
leads to the well-known Fisher equation in the $N \rightarrow \infty$ limit,
where $N$ is the average number of particles per site in the stable state.
Our goal is to understand how the usual marginal stability criterion
becomes modified by these stochastic effects.
It is clear that having finite $N$ lowers the velocity at which a front (corresponding to the
invasion of the unstable state by the stable one) will propagate. One
attractive hypothesis is that the leading effect of the fluctuations is
to introduce an effective cutoff into the deterministic equation; this
idea arose independently in a model of biological
evolution~\cite{gene-cutoff} and in mean
field approaches to diffusion-limited aggregation (DLA)~\cite{mean-dla}. Brunet and
Derrida have shown that if this is the case, one should expect
$v_{MS} - v = {C \over \ln^2 {N}}$, where $C=\pi ^2$ for the case of
continuous time and space. We have extended the calculation of
$C$ to the finite lattice size, finite time-step system and compared this
prediction with direct simulations of the relevant stochastic equation.
Our results verify the form of the scaling and suggest that the
coefficient may be correct as well.
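The logarithmically slow convergence discussed above is easy to appreciate numerically. The following minimal sketch evaluates the Brunet--Derrida shortfall $v_{MS} - v = C/\ln^2 N$ with $C = \pi^2$, the continuous-time, continuous-space value (the coefficient for our lattice data is closer to 11):

```python
import math

def velocity_shift(N, C=math.pi ** 2):
    """Predicted shortfall v_MS - v = C / ln^2(N)."""
    return C / math.log(N) ** 2

# The convergence is only logarithmic in N: even enormous particle
# numbers leave a noticeable velocity deficit.
for N in (1e4, 1e8, 1e16):
    print(f"N = {N:.0e}:  v_MS - v ~ {velocity_shift(N):.4f}")
```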
One issue that is left unaddressed by our work to date concerns the
effects of higher spatial dimensionality. It is likely,
although unproven, that the velocity change will be smaller, as
the fluctuations get averaged over the transverse directions. This
seems to be the explanation for the findings of Riordan et al~\cite{doering} that
the reaction front looks mean-field like even for small $N$, in
three and four dimensions. We hope to report on this issue in the future.
Finally, we point out yet again that there is no analytic treatment
available for the velocity selection problem in the stochastic equation.
Obvious expansion methods, such as the system-size expansion, cannot work,
as they neglect the essential role of the fluctuations to push the
system back into the absorbing state at small density. We need to find
a more powerful approach!
We thank D. Kessler for many useful discussions. We also acknowledge the support of the
US NSF under grant DMR98-5735.
\section{Introduction}
Rich superclusters are the ideal environment for the detection of cluster
mergings, because the peculiar velocities induced by the enhanced local density
of the large scale structure favour the cluster-cluster and cluster-group
collisions, in the same way as the caustics seen in the simulations.
The most remarkable examples of cluster merging seen at an intermediate stage
are found in the central region of the Shapley Concentration, the richest
supercluster of clusters found within 300 h$^{-1}$ Mpc (Zucca et al. 1993;
hereafter h=H$_o$/100). On the basis
of the two-dimensional distribution of galaxies of the COSMOS/UKSTJ catalogue,
it is possible to find several complexes of interacting
clusters, which suggest that the entire
region is dynamically active. Therefore, this supercluster represents a unique
laboratory where it is possible to follow cluster mergings and to test related
astrophysical consequences. It is believed that such dynamical
events may modify the properties of the emission of cluster galaxies
favouring
the formation of radio halos, relics and wide angle tail radio sources and
the presence of post-starburst (E+A) galaxies.
\section{The A3558 cluster complex}
The A3558 cluster complex is a remarkable chain formed by the ACO
clusters A3558, A3562 and A3556, located at $\sim 14500$ km/s and
spanning $\sim 7$ h$^{-1}$ Mpc perpendicular to the line of sight
(see Fig. 1).
This structure is approximately at the geometrical centre of the Shapley
Concentration and can be considered the core of this supercluster.
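For orientation, the geometry quoted above can be checked with pure Hubble-flow distances. This is an illustrative sketch only (no peculiar-velocity corrections, $h = H_0/100$):

```python
import math

cz = 14500.0          # km/s, mean recession velocity of the complex
H0 = 100.0            # h = 1 units, so distances come out in h^-1 Mpc
d = cz / H0           # ~ 145 h^-1 Mpc
extent = 7.0          # h^-1 Mpc, projected length of the chain
angle_deg = math.degrees(extent / d)   # small-angle approximation
print(f"distance ~ {d:.0f} h^-1 Mpc; chain subtends ~ {angle_deg:.1f} deg")
```

The chain thus spans close to three degrees on the sky, which is why a mosaic of many pointings (Fig. 2) is needed to cover it.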
\begin{figure}
\centerline{\vbox{
\psfig{figure=venturif1.ps,width=0.8\hsize}
}}
\caption[]{
Wedge diagram of the sample of galaxies in the velocity range
$10000 - 24000$ km/s. The coordinate range is
$13^h 22^m 06^s < \alpha (2000)< 13^h 37^m 15^s$ and
$-32^o 22' 40''< \delta(2000) < -30^o 59' 30''$. The three pairs of straight
lines (solid, dashed and dotted) show the projection in right ascension of
1 Abell radius for the three clusters A3562, A3558 and A3556, respectively.}
\end{figure}
\begin{figure}
\centerline{\vbox{
\psfig{figure=venturif2.ps,width=0.9\hsize}
}}
\caption[]{
Contours of the galaxy density in the A3558 complex. The centres of the
superimposed circles correspond to the pointing centres of the
observations, and the circle radius is the primary beam of ATCA at
22 cm. Pointings 1 to 10 are presented in Venturi et al. 1997. The pointings
labelled HY31, HY3, HY13 and HY22 correspond to the archive data.}
\end{figure}
By the use of multifibre spectroscopy, we obtained 714 redshifts of galaxies
in this region, confirming that the complex is a single connected structure,
elongated perpendicularly to the line of sight. In particular, the number of
measured redshifts of galaxies belonging to A3558 is $307$, and thus this
is one of the best sampled galaxy clusters in the literature.
Moreover, two smaller
groups, dubbed SC 1327-312 and SC 1329-313, were found both in the optical
(Bardelli et al. 1994) and in the X--ray band (Bardelli et al. 1996).
After a substructure analysis, Bardelli et al. (1998) found that a large
number of subgroups reside in this
complex, meaning that this structure is dynamically active.
It is possible to consider two main hypotheses
for the origin of the A3558 cluster complex: one is that we are observing a
cluster-cluster collision, just after the first core-core encounter, while
the other considers repeated, incoherent group-group and group-cluster mergings
focused at the geometrical centre of a large
scale density excess (the Shapley Concentration).
The second hypothesis seems to be favoured by the presence of an excess of blue
galaxies between A3558 and A3562, i.e. in the position where the shock is
expected.
\section{The radio survey}
In order to test the effects of merging and group collision on the
radio properties of clusters of galaxies, and on the radio properties
of the galaxies within merging clusters, we started an extensive radio
survey of the A3556-A3558-A3562 chain (hereinafter the A3558 complex)
in the Shapley Concentration core. Our survey is being carried out at
22/13 cm with the Australia Telescope Compact Array (ATCA).
The main aims
of our study can be summarised as follows:
\noindent
{\it (a)} derive the bivariate radio-luminosity function for the
early type galaxies in the complex, and compare it to that of galaxies
in more relaxed environments and in the field;
\noindent
{\it (b)} search for extended radio emission, in the form of {\it relics}
or {\it radio halos}, associated with the clusters rather than with
individual galaxies and presumably consequences of merging processes;
\noindent
{\it (c)} study the physical properties of the extended radio galaxies
in the complex, in order to derive information on their age, on the
properties of the external medium, and on the projection effects
in the groups or clusters where these sources are located.
The observational data available in the optical band and at X--ray energies
ensure a global analysis of the environment and of the dynamical state of
the structure.
\subsection{Observations and Data Reduction}
Our starting catalogue is the sample observed at 22 cm.
The ATCA observations were carried out with a 128 MHz bandwidth,
and in order to
reduce bandwidth smearing effects, we took advantage of the spectral-line
mode correlation, using 32 channels. The data were reduced using the
MIRIAD package (Sault, Teuben \& Wright 1995), and the analysis of
the images was carried out with the AIPS package.
The resolution of our survey is
$\sim 10^{\prime\prime} \times 5^{\prime\prime}$, and the noise ranges from
$70 \mu$Jy to 0.2 mJy/beam. We considered as reliable detections all sources
with flux density $S \ge 5 \sigma$, i.e. $S \ge 1$ mJy.
In Fig. 2 the pointing centres are superimposed on
the optical isodensity contours. The diameter of the circles
in the figure corresponds to $\sim 33$ arcmin, which is the
size of the primary beam of the ATCA at 22 cm.
A region of $\sim 1$ deg$^2$ around the centre of
A3556 was mosaiced in 1994-1995 with ten pointing centres (Venturi et al.
1997), numbered 1 to 10 in Fig. 2. Three more
pointings were placed south of A3558, at the centre of the group SC 1329
and east of A3562 (N1, N2 and N3 respectively).
Furthermore, in order to completely cover the A3558 complex,
archive ATCA data at the same resolution and band were reduced. These
are the three fields centered on
A3558, SC1327 and A3562 (HY31, HY3 and HY22 respectively, from Reid \&
Hunstead).
Although not covered by the optical survey region, we also included archive
observations of A3560 (HY13 in Fig. 2), located half a degree
south of the A3558 complex. The proximity of this cluster to the complex
suggests that A3560 is probably infalling toward this structure and possibly
at the early stages of interaction.
We will not analyse the A3560 image in the present
paper, and the sources found in this field are not included in the
discussion; however, in the next section we present the peculiar extended
radio source
associated with the dominant giant multiple-nuclei galaxy of this cluster.
As is clear from Fig. 2, we completely and nearly
homogeneously covered the A3558 complex with
our radio survey, going from the western extreme of A3556 to the
eastern extreme of A3562. The total area covered is 2.9 deg$^2$,
corresponding to $9 \times 10^{-4}$ sr.
Such good coverage allows us to study
all the possible environments along the structure: from the regions
directly involved in the merging, where shocks, heating of gas and
galaxy-galaxy
interactions are expected to play an important role, to the more external
ones, only mildly perturbed by tidal forces.
\subsection{Radio Source Counts and Statistics}
We detected 323 radio sources with $S_{22 cm} \ge 1$ mJy.
Because of the primary beam attenuation, which increases the noise
in the external part of the field, our survey is not complete
down to this flux density limit. Taking into account this effect,
it is possible to consider our survey complete down to a flux density limit of
2.3 mJy.
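The degradation of sensitivity away from each pointing centre can be sketched with a Gaussian approximation to the primary beam (an assumption for illustration, using the $\sim 33$ arcmin diameter quoted above as the FWHM; overlaps between mosaic pointings in the real survey improve on this single-pointing estimate):

```python
import math

def primary_beam(r_arcmin, fwhm_arcmin=33.0):
    """Gaussian approximation to the ATCA 22 cm primary beam response."""
    return math.exp(-4.0 * math.log(2.0) * (r_arcmin / fwhm_arcmin) ** 2)

def effective_limit(r_arcmin, s_lim_centre_mjy=1.0):
    """Detection limit at radius r: the on-axis limit divided by the
    beam attenuation."""
    return s_lim_centre_mjy / primary_beam(r_arcmin)
```

At the half-power radius (16.5 arcmin) the 1 mJy on-axis limit degrades to 2 mJy, comparable to the 2.3 mJy completeness limit quoted above.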
Among the 323 radio sources, 69 have an optical counterpart, which corresponds
to 21\%.
27 out of the 69 identifications are in the redshift range of the Shapley
Concentration, one
is a foreground spiral galaxy (v = 9066 km sec$^{-1}$) and two are
background objects with a recession velocity v $\sim 58000$ km sec$^{-1}$.
Most of the 16 optical identifications brighter than M$_B$ = 18.5 and
without redshift information
are very likely to be part of the supercluster, so the number of
radio galaxies
in the A3558 complex may increase once more redshifts are available in
this region. The situation is summarised in Fig. 3.
\begin{figure}
\centerline{\vbox{
\psfig{figure=venturif3.ps,width=0.9\hsize}
}}
\caption[]{The radio sources detected are superimposed on the optical
isodensity contours of the A3558 complex. Filled circles represent radio
galaxies with known
redshift, filled squares represent radio galaxies without redshift
information and crosses represent radio sources without optical counterpart.}
\end{figure}
\noindent
We can make very preliminary considerations.
A first glance at Fig. 3 reveals that the radio sources are not
uniformly distributed in the A3558 complex.
Comparison between Fig. 2 and Fig. 3 ensures us that gaps in the
distribution of the radio sources do not correspond to uncovered
regions in our survey.
Moreover,
from Fig. 3 it is also clear that the radio galaxies in the complex
are not clustered in the same way as the galaxy density peaks.
We point out that the
inner 20 arcmin in A3558, the richest and most massive cluster in the
chain, contain
only two faint radio galaxies, while the much poorer cluster A3556 and the
outermost region of A3562 contain a large number of radio galaxies.
The peculiarity of A3556 at radio wavelengths was presented and discussed
in Venturi et al. 1997.
\subsection{Extended Radio Galaxies}
Four of the 27 radio galaxies belonging to the A3558 complex
exhibit extended radio emission. In addition, very extended and peculiar
emission is associated with the dominant galaxy in the A3560 cluster.
Their global properties are reported in Table 1.
\begin{table}
\caption{Properties of the Extended Galaxies in the A3558 Complex} \label{tbl-1}
\begin{center}\scriptsize
\begin{tabular}{lllllcllc}
Source & RA$_{J2000}$ & DEC$_{J2000}$ & LogP$_{1.4 GHz}$ & Radio & Optical &
Velocity & b$_j$ & Cluster \\
Name & & & W Hz$^{-1}$ & Type & ID & km s$^{-1}$ & & \\
\tableline
J1322$-$3146 & $13~22~06~~$ & $-31~36~18$ & 22.15 & WAT & E & 14254 & 14.7 &
A3556 \\
J1324$-$3138 & $13~24~01~~$ & $-31~38~~~$ & 23.05 & NAT & E & 15142 & 15.6 &
A3556 \\
J1332$-$3308 & $13~32~25~~$ & $-33~08~~~$ & 24.38 & WAT & cD & 14586 & 15.6 &
A3560 \\
J1333$-$3141 & $13~33~31.7$ & $-31~41~04$ & 23.33 & NAT & E & 14438 & 17.3 &
A3562 \\
J1335$-$3153 & $13~35~42.6$ & $-31~53~54$ & 22.60 & FRI & E & 14385 & 16.0 &
A3562 \\
\end{tabular}
\end{center}
\end{table}
All these extended sources are associated with bright elliptical galaxies, and
their radio power range is typical of low luminosity FRI radio galaxies
(Fanaroff \& Riley 1974), as is commonly found in clusters of galaxies.
\subsubsection{The wide-angle tail J1322$-$3146.}
This radio galaxy was presented and briefly discussed in Venturi
et al. 1997.
We recall here that it is very unusual, in that it
exhibits a {\it wide-angle tail} morphology, all embedded in the optical
galaxy, despite the large projected distance from the centre of A3556
(26 arcmin, corresponding to 1 h$^{-1}$ Mpc). Wide-angle tail radio galaxies
are always found at the centre of clusters: this large distance from the
cluster
centre raised the question of what bent the tails.
Reanalysis of ROSAT archive data and of the ROSAT All Sky Survey
(Kull \& Boehringer 1998)
shows that the gas distribution in the A3558 complex follows closely the
distribution of the optical galaxies reported in Fig. 2 and 3, and
has a low surface brightness extension which reaches
the location of J1322$-$3146. A detailed study of this
radio galaxy, including its dynamics and comparison with the properties
of the intergalactic medium as derived from the X--ray data, will be
presented elsewhere.
\subsubsection{The relict radio galaxy J1324$-$3138.}
A multiwavelength detailed study of this source was presented in Venturi
et al. 1998a. Here we summarise the most important conclusions, and
present further considerations on the geometry of the A3556 core.
J1324$-$3138 (see Figs. in Venturi et al. 1998a) is characterised by a steep
spectrum both in the extended emission ($\alpha = 1.3$ in the range 843 MHz -
4.9 GHz) and in the ``nuclear'' component coincident
with the optical counterpart. The source has low surface brightness and
lacks polarised emission at any frequency. Its internal pressure
and magnetic field are lower than in typical radio galaxies in the same power
range, and are intermediate between the values found along the tails of
radio sources in cluster galaxies and those in the few known relic
sources. By fitting the spectra of the extended and nuclear components
with a model taking into account reisotropisation of electrons
(Jaffe \& Perola 1973), we found that the age of last acceleration
of the emitting electrons is $\sim 10^8$ yrs.
We drew the conclusion that this radio galaxy is a
{\it dead} tailed source, in which the nuclear activity
has switched off. The final evolution of the source is presently
dominated by synchrotron losses.
We interpreted the lack of polarisation assuming
that the source is seen through a dense screen of gas,
and concluded that
it is located beyond the core of A3556 and that a major
merging event between the core of A3556 and the subgroup hosting this
radio galaxy has already taken place.
Our suggestion is that merging
triggered the radio emission in the associated optical galaxy. The old age
derived for this source is consistent with the timescale of merging.
By estimating the distance of the shock front of the merging from the centre
of A3556 (Kang et al. 1997, Ensslin et al. 1998), and
comparing with the projected one, we found that the viewing angle is
$\sim 5^{\circ}$, implying a maximum dimension for the source
of $\sim$ 1 h$^{-1}$ Mpc.
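The deprojection behind this estimate is simply $L = L_{\rm proj}/\sin\theta$ for a linear source whose axis makes an angle $\theta$ with the line of sight. As a sketch (the projected length below is a hypothetical value chosen to reproduce the quoted $\sim 1$ h$^{-1}$ Mpc):

```python
import math

def deproject(projected_length, viewing_angle_deg):
    """True length of a linear source whose axis makes an angle
    theta with the line of sight: L = L_proj / sin(theta)."""
    return projected_length / math.sin(math.radians(viewing_angle_deg))

# At theta ~ 5 deg a modest projected extent is stretched by a factor
# 1/sin(5 deg) ~ 11.5; a ~ 90 h^-1 kpc projection gives ~ 1 h^-1 Mpc.
print(deproject(0.087, 5.0))
```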
\subsubsection{The complex radio galaxy J1332$-$3308.}
Reduction of the archive 22 cm ATCA data centered on A3560 (field HY13 in
Fig. 3),
at the resolution of $10.7^{\prime\prime} \times 6.0^{\prime\prime}$
revealed the presence of an S$_{22 cm}$ = 943 mJy source,
displaced from the centre of the multiple nuclei cD galaxy located at the
cluster centre and characterised by a very unusual morphology.
In order to understand the morphology of this source, and to
study how it relates
to the dominant cluster galaxy, we requested a short ad-hoc VLA
observation at 20 cm and 6 cm, with the array in the
hybrid BnA configuration,
suitable for the low declination of the source. The results of these
observations are given in Figs. 4a and 4b, where the VLA images
at 20 cm and 6 cm respectively are superimposed on the DSS optical image.
\begin{figure}
\centerline{\vbox{
\psfig{figure=venturif4a.ps,width=0.7\hsize}
\psfig{figure=venturif4b.ps,width=0.7\hsize}
}}
\caption[]{(a) Upper. 20 cm VLA full resolution image of J1332$-$3308,
superimposed on the DSS optical image. The resolution is
$4.2^{\prime\prime} \times 3.7^{\prime\prime}$, in p.a. 61$^{\circ}$.
(b) Lower. 6 cm VLA image of J1332$-$3308 obtained with natural weight.
The resolution is
$2.1^{\prime\prime} \times 1.2^{\prime\prime}$, in p.a. 5.7$^{\circ}$.}
\end{figure}
As is clear from Figs. 4a and 4b, the morphology of J1332$-$3308 is
complex, and it is suggestive of two distinct components.
The 20 cm emission departs from the northernmost optical nucleus,
in the shape of a wide-angle tail radio galaxy. The nucleus of the
emission is visible in the 6 cm image, coincident
with the northernmost, brightest optical nucleus, and this reinforces the idea
that this radio galaxy is a wide-angle
tail source associated with an active nucleus, despite the lack of a
visible jet in the high resolution map.
The ridge of emission south of the secondary optical nucleus, well
visible at 6 cm, and the southernmost amorphous extended emission
are difficult to interpret. It seems unlikely that they are related
to the wide-angle tail component. In the 6 cm map the ridge seems to
point to the southernmost diffuse component.
The overall structure of J1332$-$3308 is reminiscent of 3C338, seen under
a different viewing angle. 3C338 is an extended classical double source
associated with the multiple nuclei galaxy NGC6166, located at the centre
of the Abell cluster A2199. The steep spectrum ridge visible in 3C338
south of the location of the presently active nucleus was interpreted
by Burns et al. 1993 as the remnant of a previous activity in the
galaxy, associated with a different optical nucleus.
The resolution and u-v coverage of our observations at 20 cm and 6 cm are
too different to carry out a spectral index
study, which is essential to derive the intrinsic properties of
the various components, including their age, and to test
our preliminary hypothesis that J1332$-$3308 is a {\it reborn} radio
galaxy, similarly to 3C338. We are currently engaged in
a detailed multifrequency study of this source, in order to disentangle
its nature.
\subsubsection{The head-tail source J1333$-$3141 and the double J1335$-$3153.}
22 cm maps of these two radio galaxies were presented in Venturi et al. 1998b.
They both belong to A3562, the easternmost cluster in the A3558 complex.
The head-tail source J1333$-$3141 is associated with a 17th magnitude
elliptical galaxy
located at $\sim 1^{\prime}$ from the centre of A3562, and it is possibly
orbiting around the cD galaxy in the potential well of the cluster.
The total extent
of the source is $\sim 1^{\prime}$ corresponding to $\sim$ 40 kpc.
The tail is straight up to $\sim 20^{\prime\prime}$ from the
nucleus, then the bending becomes relevant and the emission diffuse,
suggesting that
the two jets forming it (not visible in our map because of resolution
effects) open at this distance from the core.
In order to derive some global properties of this radio galaxy
we studied it in the wavelength range 13 -- 22 cm.
Fig. 5 shows the 13 cm flux density contours of this
source, superimposed on the DSS optical image.
\begin{figure}
\centerline{\vbox{
\psfig{figure=venturif5.ps,width=0.7\hsize}
}}
\caption[]{13 cm ATCA image of J1333$-$3141, superimposed
on the DSS optical image. The resolution is
$5.4^{\prime\prime} \times 3.2^{\prime\prime}$, in p.a. 0.7$^{\circ}$.}
\end{figure}
\begin{figure}
\centerline{\vbox{
\psfig{figure=venturif6.ps,width=\hsize}
}}
\caption[]{Ratio between the radio source counts and X--ray counts
in the A3558 complex for the complete radio sample.
The bins are the same as in Kull \& Boehringer 1998.}
\end{figure}
\noindent
The total spectral index $\alpha_{13 cm}^{22 cm}$ in this source
at the resolution of our 22 cm survey image is 0.9. This should probably
be considered an upper limit since the 13 cm u-v plane is not well
covered at the short spacings, and extended flux from the tail might
have been missed in our image. With this preliminary value of
the spectral index we derived the equipartition magnetic field and
internal pressure in the source, computed assuming a filling factor
$\phi$=1, k=1 (ratio between protons and electrons) and integrating
over a frequency range 10 MHz - 100 GHz. We obtained H$_{eq} = 3 \mu$G
and P$_{eq} = 5.5 \times 10^{-13}$ dyne cm$^{-2}$. As stated above,
these values should probably be considered lower limits, however, though
preliminary, they are consistent with the average intrinsic physical quantities
of the extended emission in cluster galaxies, and are significantly different
from what is derived for J1324$-$3138, the extended radio galaxy at the
other end of the A3558 complex.
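The two-point spectral index used above follows directly from the flux densities at the two bands. A minimal sketch, assuming the $S \propto \nu^{-\alpha}$ convention with 22 cm $\approx 1.38$ GHz and 13 cm $\approx 2.3$ GHz (the flux value is hypothetical):

```python
import math

def spectral_index(s1, s2, nu1, nu2):
    """Two-point spectral index alpha in the S ~ nu^-alpha convention."""
    return -math.log(s1 / s2) / math.log(nu1 / nu2)

# alpha = 0.9 means the 13 cm flux density is lower than the 22 cm one
# by a factor (2.3/1.38)^0.9 ~ 1.58
s22 = 100.0                                  # hypothetical flux density, mJy
s13 = s22 * (2.3 / 1.38) ** (-0.9)
print(spectral_index(s13, s22, 2.3, 1.38))   # recovers alpha = 0.9
```

Missing short-spacing flux at 13 cm biases $s_{13}$ low and hence $\alpha$ high, which is why the value quoted above is treated as an upper limit.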
J1335$-$3153 is a double source located at $\sim 12^{\prime}$ from the
centre of A3562, in the east direction. The radio morphology is asymmetric,
with the western lobe longer and more distorted than the eastern one.
It is possible that this source has a bent or distorted morphology
and that we see it face-on.
A more detailed study and comparison with the properties of the
intergalactic medium as derived from the X--ray data is in progress.
\section{Radio source distribution and X--ray emission}
It is now well established that the gas distribution in the A3558
complex perfectly matches the distribution of the galaxy density,
with galaxy density peaks coincident with gas density and temperature peaks.
This result
further reinforces the physical connection of gas and galaxies in the
Shapley Concentration core (Bardelli et al. 1994, Bardelli et al. 1996,
Ettori et al. 1997, Kull \& Boehringer 1998).
Since the 22 cm radio survey presented here covers the whole extent of
the optical and X--ray data, we carried out a comparison between
the X--ray distribution and the radio source distribution, in order
to see if the radio galaxies also peak in galaxy and gas density
peaks.
We referred to Kull \& Boehringer 1998 for our comparison, and
binned the radio sources in our survey in the same 30 bins chosen
by those authors to derive the gas density profile along the A3558 complex.
We computed and plotted the ratio between the radio source counts and
the X--ray luminosity (in counts sec$^{-1}$ arcmin$^{-2}$) in each bin,
both for the incomplete (but deeper) 1 mJy sample and for the complete
2.3 mJy sample. We comment here that the sources in our survey
obviously include the radio background. However, since it can be considered
constant over the whole region of the A3558 complex, it
does not affect the shape of the radio source distribution.
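The binning procedure can be sketched as follows (hypothetical inputs: a one-dimensional coordinate along the chain for each radio source, and the X-ray signal per bin taken from Kull \& Boehringer 1998):

```python
import numpy as np

def radio_to_xray_ratio(source_coords, bin_edges, xray_signal):
    """Bin radio sources along the chain and divide by the X-ray signal
    in the same bins.  Only the shape of the result is meaningful:
    a constant background of unrelated radio sources raises all bins
    by the same amount."""
    radio_counts, _ = np.histogram(source_coords, bins=bin_edges)
    return radio_counts / np.asarray(xray_signal, dtype=float)
```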
Our preliminary result, shown in Fig. 6, indicates that there is a deficiency
of radio sources in the X--ray peaks, i.e. in coincidence with the
cores of A3558 and A3562. Alternatively, our plot can be interpreted as
an excess of radio sources in those regions where ongoing merging is
thought to be present.
\section{Preliminary Considerations}
The most striking result of our 22 cm survey is the non-uniform
distribution of radio sources in the A3558 complex. The most
massive and relaxed cluster in the chain, A3558, exhibits very little
radio emission, while the two extremes of the chain, where interactions
and merging are supposed to dominate the dynamics, are populated by
a large number of radio galaxies. A peak in the radio source distribution is
also found in coincidence with the two groups SC1327 and SC1329, known to
be dynamically active.
The apparent anticorrelation between the radio source counts and the
X--ray emission in the chain points towards the same conclusion.
Our preliminary analysis of radio counts versus X--ray emission in the
A3558 complex
suggests that the radio galaxies seem to avoid either very high gas density
environments or evolved clusters. At this stage we cannot discriminate
between these two possibilities. Comparison and analysis of literature
data is essential to further study this effect.
The study of the extended radio galaxies in this complex has revealed
the presence of a complex source associated with the multiple nuclei
dominant galaxy in A3560, which could be the result of a restarted
radio activity, possibly connected with merging processes.
\section{Introduction}
The discovery of charm was an exciting chapter in elementary particle physics.
The theoretical motivations were strong, the predictions were crisp, and the
experimental searches ranged from inadequate to serendipitous to inspired.
Perhaps we can learn something relevant to present-day searches from those
experiences. I would like to describe the evolution of the case for charm,
some subsequent developments, and some questions which remain nearly a quarter
of a century later.
The argument for charm was most compellingly made in the context of unification
of the weak and electromagnetic interactions, briefly described in Sec.~2.
Parallel arguments based on currents (Sec.~3) and quark-lepton analogies
(Sec.~4) also played a role, while gauge theory results (Sec.~5) strengthened
the case. In the early 1970's, when electroweak theories began to be taken
seriously, theorists began to exhort their experimentalist colleagues in
earnest to seek charm (Sec.~6). In the fall of 1974, the discovery of the
$J/\psi$ provided a candidate for the lowest-lying spin-triplet charm-anticharm
bound state, and several other circumstances hinted strongly at the existence
of charm (Sec.~7). Nonetheless, not everyone was persuaded by this
interpretation, and it remained for open charm to be discovered before
lingering doubts were fully resolved (Sec.~8).
Some progress in the post-discovery era is briefly noted in Sec.~9, while
some current questions are posed in Sec.~10. A brief epilogue in Sec.~11 asks
whether the search for charm offers us any lessons for the future. Part of the
author's interest in (recent) history stems from a review, undertaken with Val
Fitch, of elementary particle physics in the second half of the Twentieth
Century \@ifnextchar[{\@tempswatrue\@citex}{\@tempswafalse\@citex[]}{AIPIOP}, which is to be issued in a second edition in a year or
two.
\section{Electroweak Unification}
The Fermi theory of beta decay \cite{EF} involved a pointlike interaction (for
example, in the decay $n \to p e^- \bar \nu_e$ of the neutron). This
feature was eventually recognized as a serious barrier to its use in higher
orders of perturbation theory. By contrast, quantum electrodynamics (QED),
involving photon exchange, was successfully used for a number of higher-order
calculations, particularly following its renormalization by Feynman, Schwinger,
Tomonaga, and Dyson \cite{SS}.
Attempts to describe the weak interactions in terms of particle exchange date
back to Yukawa \cite{Yuk}. A theory of weak interactions involving exchange of
charged spin-1 bosons was written down by Oskar Klein in 1938 \cite{Klein}, to
some extent anticipating that of Yang and Mills \cite{YM} describing
self-interacting gauge particles.
Once the $V-A$ theory of the weak interactions had been established in 1957
\cite{VA}, descriptions involving exchange of charged vector bosons were
proposed \cite{VB}. These tried to unify charged vector bosons (eventually
called $W^\pm$) with the photon ($\gamma$) within a single SU(2) gauge
symmetry. However, the (massless) photon couples to a vector current, while
the (massive) $W$'s couple to a $V-A$ current. The SU(2) symmetry was
inadequate to discuss this difference. Its extension by Glashow in 1961
\cite{SG} to an SU(2) $\times$ U(1) permitted the simultaneous description of
electromagnetic and charge-changing weak interactions at the price of
introducing a new {\it neutral} massive gauge boson (now called the $Z^0$)
which coupled to a specific mixture of $V$ and $A$ currents for each quark and
lepton.
The Glashow theory left unanswered the mechanism by which the $W^\pm$ and $Z$
were to acquire their masses. This was provided by Weinberg \cite{Wbg} and
Salam \cite{AS} through the Higgs mechanism \cite{Higgs}, whereby the SU(2)
$\times$ U(1) was broken spontaneously to the U(1) of electromagnetism. Proofs
of the renormalizability of this theory, due in the early 1970's to G. 't
Hooft, M. Veltman, B. W. Lee, and J. Zinn-Justin \cite{renorm}, led to intense
interest in its predictions, including the existence of charge-preserving weak
interactions due to exchange of the hypothetical $Z^0$ boson. By 1973, a
review by E. Abers and B. W. Lee \cite{AL} already was available as a guide to
searches for neutral weak currents and other phenomena predicted by the new
theory.
\section{Currents}
Let $Q_l^{(+)}$ be the spatial integral of the time-component of the
charge-changing leptonic weak current, so that $Q_l^{(+)} |e^- \rangle_L =
|\nu_e \rangle_L$, where the subscript $L$ denotes a left-handed particle. It
is a member of an SU(2) algebra, since it is just an isospin-raising operator.
Defining $Q_l^{(-)} = [Q_l^{(+)}]^\dag$, we can form $2 Q_{3l} \equiv
[Q_l^{(+)}, Q_l^{(-)}]$ and then find that the algebra closes: $[Q_{3l},
Q_l^{(\pm)}] = \pm Q_l^{(\pm)}$.

In order to describe the decays of strange and
non-strange hadrons in a unified way with a suitably normalized weak hadronic
current, M. Gell-Mann and M. L\'evy \cite{GL} proposed in 1960 that the
corresponding hadronic charge behaved as $Q_h^{(+)} |n \cos \theta + \Lambda
\sin \theta \rangle = |p \rangle$, with $\sin \theta \simeq 0.2$. Such a
current also is a member of an SU(2) algebra. This allowed one to
simultaneously describe the apparent suppression of strange particle decay
rates with respect to strangeness-preserving weak interactions, and to account
for small violations of weak universality in beta-decay, which had become
noticeable as a result of radiative corrections \cite{Sirlin}.
In 1963 N. Cabibbo adopted the idea of the Gell-Mann -- L\'evy current by
writing the weak current as
\begin{equation}
J^{\mu(+)} = \cos \theta J_{ \Delta S = 0}^{\mu (+)}
+ \sin \theta J_{ \Delta S = 1}^{\mu (+)}
\end{equation}
and using the newly developed flavor-SU(3) symmetry \cite{GMN} to evaluate
its matrix elements between meson and baryon states. In the language
of the $u,d,s$ quarks this corresponded to writing the hadronic charge-changing
weak currents as
$$
Q_h^{(+)} = \left[ \begin{array}{c c c} 0 & \cos \theta & \sin \theta \\
0 & 0 & 0 \\
0 & 0 & 0 \\
\end{array} \right]~~,~~~Q_h^{(-)} = [Q_h^{(+)}]^\dag~~,~~~
$$
\begin{equation}
Q_{3h} \equiv \frac{1}{2}[Q_h^{(+)}, Q_h^{(-)}] = \frac{1}{2}
\left[ \begin{array}{c c c}
1 & 0 & 0 \\
0 & - \cos^2 \theta & - \sin \theta \cos \theta \\
0 & - \sin \theta \cos \theta & - \sin^2 \theta \\
\end{array} \right]
\end{equation}
Again, the algebra closes: $[Q_{3h}, Q_h^{(\pm)}] = \pm Q_h^{(\pm)}$, so the
Cabibbo current is suitably normalized. A good fit to weak semileptonic decays
of baryons and mesons was found in this manner, with $\sin \theta \simeq 0.22$.
As a student, I sometimes asked about the interpretation of $Q_{3h}$, which has
strangeness-changing pieces! A frequent answer, reminiscent of the Wizard of
Oz, was: ``Pay no attention to that [man behind the screen]!'' The neutral
current was supposed just to close the algebra, not to have physical
significance.
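The closure of the Cabibbo charge algebra, including the strangeness-changing
entries of $Q_{3h}$, is easy to verify numerically. The following sketch is
illustrative only; it assumes nothing beyond the matrices written above:

```python
import numpy as np

th = np.arcsin(0.22)               # Cabibbo angle, sin(theta) ~ 0.22
c, s = np.cos(th), np.sin(th)

# Charge-raising hadronic charge in the (u, d, s) basis
Qp = np.array([[0., c, s],
               [0., 0., 0.],
               [0., 0., 0.]])
Qm = Qp.T                          # Q^(-) = [Q^(+)]^dagger (real matrix)
Q3 = 0.5 * (Qp @ Qm - Qm @ Qp)     # Q_3 = (1/2)[Q^(+), Q^(-)]

# SU(2) closure: [Q_3, Q^(+/-)] = +/- Q^(+/-)
assert np.allclose(Q3 @ Qp - Qp @ Q3, Qp)
assert np.allclose(Q3 @ Qm - Qm @ Q3, -Qm)

# Q_3 nevertheless contains strangeness-changing (d <-> s) entries:
assert abs(Q3[1, 2] + 0.5 * s * c) < 1e-12
```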
\section{Quark-Lepton Analogies: Quartet Models}
Very shortly after the advent of the Cabibbo theory, a number of proposals
\cite{BjG,Hara,MO} sought to draw a parallel between the weak currents of
quarks and leptons in order to remove the strangeness-changing neutral currents
just mentioned. Since the electron and muon each were seen to have their own
distinct neutrino \cite{Danby}, why shouldn't quarks be paired in the same way?
This involved introducing a quark with charge $Q=2/3$, carrying its own quantum
number, conserved under strong and electromagnetic but not weak interactions.
As a counterpoise to the ``strangeness'' carried by the $s$ quark, the new
quantum number was dubbed ``charm'' by Bjorken and Glashow. The analogy then
has the form
\begin{equation}
\left[ \begin{array}{c} \nu_e \\ e^- \end{array} \right]
\left[ \begin{array}{c} \nu_\mu \\ \mu^- \end{array} \right]
\Leftrightarrow
\left[ \begin{array}{c} u \\ d \end{array} \right]
\left[ \begin{array}{c} c \\ s \end{array} \right]~~~.
\end{equation}
The matrix elements of the hadronic $Q^{(+)}$ (we omit the subscript $h$) were
then
\begin{equation} \label{eqn:mat}
\langle u|Q^{(+)}|d \rangle = \langle c|Q^{(+)}|s \rangle = \cos \theta~~,~~~
\langle u|Q^{(+)}|s \rangle = - \langle c|Q^{(+)}|d \rangle = \sin \theta~~~,
\end{equation}
while those of $Q_3$ were
\begin{equation}
\langle u|Q_3|u \rangle = \langle c|Q_3|c \rangle = -\langle d|Q_3|d \rangle =
-\langle s|Q_3|s \rangle = \frac{1}{2}~~~,
\end{equation}
with all off-diagonal ({\it flavor-changing}) elements equal to zero. Here, as
before, $\sin \theta \simeq 0.22$. Bjorken and Glashow were the first to call
the isospin doublet of non-strange charmed mesons ``$D$'' (for ``doublet''),
with $D^0 = c \bar u$ and $D^+ = c \bar d$.
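In the quartet model, the absence of flavor-changing neutral currents holds for
any value of the Cabibbo angle, as one can check directly (a minimal numerical
sketch, not part of the original papers):

```python
import numpy as np

th = np.arcsin(0.22)
c, s = np.cos(th), np.sin(th)

# Charge-raising weak charge in the (u, c, d, s) basis (matrix elements above):
Qp = np.zeros((4, 4))
Qp[0, 2], Qp[0, 3] = c, s      # <u|Q+|d> = cos theta, <u|Q+|s> = sin theta
Qp[1, 2], Qp[1, 3] = -s, c     # <c|Q+|d> = -sin theta, <c|Q+|s> = cos theta
Qm = Qp.T
Q3 = 0.5 * (Qp @ Qm - Qm @ Qp)

# The neutral charge is diagonal -- diag(1/2, 1/2, -1/2, -1/2) -- for ANY
# Cabibbo angle: all flavor-changing neutral-current elements vanish.
assert np.allclose(Q3, np.diag([0.5, 0.5, -0.5, -0.5]))
```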
\section{Gauge theory results}
The promotion of electroweak unification to a genuine gauge theory permitted
quantitative predictions of the properties of the fourth quark.
\subsection{The Glashow-Iliopoulos-Maiani (``GIM'') paper}
Taking his gauge theory of electroweak interactions seriously, Glashow in 1970
together with J. Iliopoulos and L. Maiani observed that the quartet model of
weak hadronic currents banished flavor-changing neutral currents in lowest
order and suppressed them in higher orders of perturbation theory \cite{GIM}. Thus, for
example, higher-order contributions to $K^0$--$\bar K^0$ mixing, expected to
diverge in the $V-A$ theory or in a gauge theory without the charmed quark,
would now be cut off by $m_c$, where $m_c$ is the mass of the charmed quark. In
this manner an upper limit on the charmed quark mass of about 2 GeV was
deduced. In view of the predominant coupling (\ref{eqn:mat}) of the charmed
quark to the strange quark, charmed particles should decay mainly to strange
particles, with a lifetime estimated to be about $\tau_{\rm charm} \simeq
10^{-13}$ s.
The GIM paper contained a number of other specific predictions about the
properties of charmed particles. Among these were:
\begin{itemize}
\item A branching ratio of the charmed meson $D^0 = c \bar u$ to $K^- \pi^+$
of a few percent;
\item Strong production of charm-anticharm pairs;
\item Direct leptons in charm decays;
\item Charm production in neutrino reactions;
\item Neutral flavor-preserving currents;
\item The observability of a $Z^0$ in the direct channel of $e^+ e^-$
annihilations.
\end{itemize}
These were all to be borne out over the next few years. The discovery of the
$Z^0$ took longer, and was first made in a hadron rather than a lepton collider
\cite{Zz}.
\subsection{Anomalies}
Once the electroweak theory was on firm theoretical grounds, it was noticed by
several authors in 1972 \cite{BIM,GG,GJ} that contributions to various
triangle diagrams involving fermion loops had to cancel. For the electroweak
theory it was sufficient to consider the sum over all fermion species of
$I_{3L} Q^2$, where $I_{3L}$ is the weak isospin of the left-handed states and
$Q$ is their electric charge. For the first family of quarks and leptons
the cancellation is arranged as follows:
\begin{center}
\begin{tabular}{l c c c c c}
Fermion: & $\nu_e$ & $e^-$ & $u$ & $d$ & Sum \\
Contribution: & $\frac{1}{2}(0)^2$ & $-\frac{1}{2}(-1)^2$ &
$\frac{1}{2}\left( 3 \right) \left(\frac{2}{3}\right)^2$ &
$-\frac{1}{2}\left( 3 \right) \left(-\frac{1}{3}\right)^2$ & 0 \\
Equal to: & 0 & $-\frac{1}{2}$ & $\frac{2}{3}$ & $-\frac{1}{6}$ & 0 \\
\end{tabular}
\end{center}
The corresponding cancellation for the second family reads
\begin{equation}
\nu_\mu + \mu + c + s = 0~~~,
\end{equation}
so that the charmed quark was {\it required} for such a cancellation, given the
existence of the muon and the strange quark.
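The cancellation condition can be tabulated for both families in a few lines;
the following sketch uses exact fractions, with the color factor 3 for quarks
as the only input beyond the charges:

```python
from fractions import Fraction as F

def anomaly(species):
    """Sum over left-handed fermions of I_3L * Q^2 (times a color factor)."""
    return sum(i3 * q * q * nc for (i3, q, nc) in species)

# Entries: (I_3L, electric charge, color factor)
family1 = [(F(1, 2), F(0), 1),       # nu_e
           (F(-1, 2), F(-1), 1),     # e^-
           (F(1, 2), F(2, 3), 3),    # u
           (F(-1, 2), F(-1, 3), 3)]  # d
assert anomaly(family1) == 0

# Second family without charm: the sum fails to vanish ...
no_charm = [(F(1, 2), F(0), 1),      # nu_mu
            (F(-1, 2), F(-1), 1),    # mu^-
            (F(-1, 2), F(-1, 3), 3)] # s
assert anomaly(no_charm) == F(-2, 3)
# ... and the charmed quark restores the cancellation:
assert anomaly(no_charm + [(F(1, 2), F(2, 3), 3)]) == 0
```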
\subsection{Rare kaon decays}
In a landmark 1973 paper, M. K. Gaillard and B. W. Lee \cite{GaL} took the
charmed quark seriously in calculating a host of processes involving kaons
to higher order in the new electroweak theory. These included $K^0$--$\bar
K^0$ mixing and numerous rare decays such as $K_L \to (\mu^+ \mu^-,~
\gamma \gamma,~\pi^0 e^+ e^-,~\pi^0 \nu \bar \nu,~\ldots)$ and $K^+ \to
(\pi^+ e^+ e^-,~\pi^+ \nu \bar \nu,~\ldots)$. The analyses of $K^0$--$\bar
K^0$ mixing and $K_L \to \mu^+ \mu^-$ indicated that $m_c^2 - m_u^2$ obeyed
a strong upper bound, while the failure of $K_L \to \gamma \gamma$ to be
appreciably suppressed indicated that $m_u^2 \ll m_c^2$. Together these
results supported the GIM estimate of $m_c \le 2$ GeV and considerably
strengthened an earlier bound by Lee, J. Primack, and S. Treiman \cite{LPT}.
\section{Exhortations}
K. Niu and collaborators already had candidates for charm in emulsion as early
as 1971 \cite{Niu}. These results, taken seriously by theorists in Japan
\cite{Ogawa,KM}, will be mentioned again presently. Meanwhile, in the West,
theorists besides GIM began to urge their experimental colleagues to find
charm. C. Carlson and P. Freund \cite{CF} discussed, among other things, the
properties of a narrow charm-anticharm bound state. George Snow \cite{Snow}
listed a number of features of charm production and decays. Through an
interest in hadron spectroscopy, I became involved late in 1973 in these
efforts in collaboration with Gaillard and Lee. We started to look at charm
production and detection in hadron, neutrino, and electron-positron reactions.
It quickly became clear that a new quark, even one as light as 2 GeV, could
have been overlooked.
Glashow spoke on charm at the 1974 Conference on Experimental Meson
Spectroscopy, held at Northeastern University \cite{EMS}. In addition to the
properties mentioned in the earlier GIM paper, he told his experimental
colleagues to expect:
\begin{itemize}
\item Charm lifetimes ranging between $10^{-13}$ and $10^{-12}$ s;
\item Comparable branching ratios for semileptonic and hadronic decays;
\item An abundance of strange particles in the final state;
\item Dileptons in neutrino reactions (with the second lepton due to charm
decay).
\end{itemize}
He ended with the following charge to his colleagues:
\begin{quote} {\tt \centerline{WHAT TO EXPECT AT EMS-76}
\smallskip
There are just three possibilities:\\
1. Charm is not found, and I eat my hat.\\
2. Charm is found by hadron spectroscopers, and we celebrate.\\
3. Charm is found by outlanders, and you eat your hats.}
\end{quote}
In the summer of 1974, Sam Treiman, then an editor of Reviews of Modern
Physics, pressed Ben Lee, Mary Gaillard, and me to write up our results with
the comment: ``It's getting urgent.'' Our review of the properties of charmed
particles was eventually published in the April 1975 issue \cite{GLR}. Better
late than never. By then we were able to add an appendix dealing with the new
discoveries. The body of our article (``GLR'') was written before them. Our
conclusions, most of which I mentioned at a Gordon Conference late in the
summer of 1974, were as follows:
\begin{quote}{\tt
We have suggested some phenomena that might be indicative of charmed particles.
These include:\\
(a) ``direct'' lepton production,\\
(b) large numbers of strange particles,\\
(c) narrow peaks in mass spectra of hadrons,\\
(d) apparent strangeness violations,\\
(e) short tracks, indicative of particles with lifetime of order $10^{-13}$
sec.,\\
(f) di-lepton production in neutrino reactions,\\
(g) narrow peaks in $e^+e^-$ or $\mu^+ \mu^-$ mass spectra,\\
(h) transient threshold phenomena in deep inelastic leptoproduction,\\
(i) approach of the $(e^+ e^- \to {\tt hadrons})/(e^+ e^- \to \mu^+ \mu^-)$
ratio [``$R$''] to $3 \frac{1}{3}$, perhaps from above, and\\
(j) any other phenomena that may indicate a mass scale of 2 - 10 GeV.}
\end{quote}
A couple of these bear explanation. ``Apparent strangeness violations''
can occur in the transitions $c \leftrightarrow d$; otherwise strangeness would
directly track charm (aside from a sign; the convention is that the strangeness
of an $s$ quark is $-1$, while the charm of a charmed quark is $+1$). ``Narrow
peaks in $e^+e^-$ or $\mu^+ \mu^-$ mass spectra'' were not just dreamt up out
of the blue; we were aware of an effect in muon pairs at a mass around 3.5 GeV
\cite{oldJ} which could have been the lowest spin-triplet $c \bar c$ bound
state. John Yoh remembers hearing this interpretation from Mary K. Gaillard
in the Fermilab cafeteria in August of 1974. Our estimate of the width of this
state was about 2 MeV, based on extrapolating the Okubo-Zweig-Iizuka (OZI)
rule \cite{OZI} which suppressed ``hairpin'' quark diagrams. An early prediction by
T. Appelquist and H. D. Politzer \cite{AP} of the properties of $c \bar c$
bound states used QCD to anticipate a narrower spin-triplet than GLR.
I invited Glashow to the University of Minnesota in October of 1974 to speak on
charm and much else (including grand unified theories, which he was then
developing with Howard Georgi \cite{GUTs}). An unpersuaded curmudgeonly
astronomer turned to a younger colleague in the audience, whispering: ``When do
they bring in the men in white coats?'' The timing could not have been better.
Charm was to be discovered within a month.
\section{Hidden (and Not-So-Hidden) Charm}
As was suspected even before the days of QCD and asymptotic freedom, the ratio
$R \equiv \sigma(e^+ e^- \to {\rm hadrons})/\sigma(e^+ e^- \to \mu^+ \mu^-)$
probes the sum $\sum Q^2$ of the squared charges of quarks pair-produced at a
given c.m. energy. Thus, above the resonances $\rho$, $\omega$, and $\phi$
which are features of low-energy $e^+ e^-$ annihilations into hadrons, one
expected to see $R = 3[(2/3)^2 + (-1/3)^2 + (-1/3)^2] = 2$, corresponding to
the three light quarks $u$, $d$, and $s$. With wide errors, the ADONE Collider
at Frascati found this to be the case. (See \cite{ADONE} for earlier
references.)
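The lowest-order parton-model value of $R$ is just three times the sum of
squared quark charges; as a minimal sketch:

```python
from fractions import Fraction as F

def R_parton(quark_charges):
    """Lowest-order parton-model ratio: R = 3 * sum of squared quark charges."""
    return 3 * sum(q * q for q in quark_charges)

u, d, s, c = F(2, 3), F(-1, 3), F(-1, 3), F(2, 3)
assert R_parton([u, d, s]) == 2            # below charm threshold
assert R_parton([u, d, s, c]) == F(10, 3)  # 3 1/3 above charm threshold
```

The step $\Delta R = 4/3$ between the two values is the charm contribution
discussed later in the text.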
In 1972 the Cambridge Electron Accelerator (CEA) was converted to an
electron-positron collider. At energies above 3 GeV the cross section for $e^+
e^- \to {\rm hadrons}$, instead of falling with the expected $1/E_{\rm c.m.}^2$
behavior characteristic of pointlike quarks, was found to remain approximately
constant \cite{CEA}. At $E_{\rm c.m.} = 4$ GeV, $R$ was $4.9 \pm 1.1$, while
it rose to $6.2 \pm 1.6$ at $E_{\rm c.m.} = 5$ GeV \cite{Richter}. These
results were confirmed, with higher statistics, at the SPEAR machine
\cite{Richter}. At the 1974 International Conference on High Energy Physics,
Burt Richter voiced concern about the validity of the naive quark
interpretation of $R$.
The London Conference was distinguished by various precursors of charm in
addition to the rise in $R$ just mentioned. Deep inelastic scattering of muon
neutrinos was occasionally seen (in about 1\% of events) to lead to a pair of
oppositely-charged muons. One muon carried the lepton number of the incident
neutrino; the second could be the prompt decay product of charm. This
interpretation was mentioned by Ben Lee at the end of D. Cundy's rapporteur's
talk \cite{dimuons}. Leptons produced at large transverse momenta \cite{lepts}
were due in part to prompt decays of charmed particles. John Iliopoulos
\cite{JI} not only laid out a number of the predictions for properties of
charmed particles, but bet anyone a case of wine that they would be discovered
by the next (1976) International Conference on High Energy Physics. Though he
recalls several takers, they never paid off.
On November 11, 1974, the simultaneous discovery of the lowest-lying $^3S_1$
charm-anticharm bound state, with a mass of 3.1 GeV/$c^2$, was announced by
Samuel C. C. Ting and Burt Richter. Ting's group, inspired in part by the
suggestion of a peak in an earlier experiment \cite{oldJ} and in part by an
innate confidence that lepton-pair spectra would reveal new physics, collided
protons produced at the Brookhaven Alternating-Gradient Synchrotron (AGS) with
a beryllium target to produce electron-positron pairs whose effective mass
spectrum was then studied with a high-resolution spectrometer \cite{J}. The
new particle they discovered was called ``$J$'' (the character for ``Ting''
in Chinese). Richter's group, working at SPEAR, wished to re-check anomalies
in the cross section for electron-positron annihilations that had shown up in
earlier running around a center-of-mass energy of 3 GeV. By carefully
controlling the beam energy, they were able to map out the peak of a narrow
resonance at 3.1 GeV \cite{psi}, which they called ``$\psi$'', a continuation
of the vector-meson series $\rho,\omega,\phi, \ldots$. The dual name $J/\psi$
has been preserved. I was made aware of these discoveries by a call from Ben
Lee on November 11. They certainly looked like charm to me, as well as to a
number of other people \cite{AP,Charms}.
However, a large portion of the community offered alternative interpretations
\cite{PRLrefs}. Some potential objections to charm (see the next Section) were
worth putting to experimental tests (e.g., by finding singly-charmed particles
\cite{JLRDPF}. However, I doubt the situation was ever as grave as implied by
the comment made to me in March of 1975 by Dick Blankenbecler at SLAC:
\begin{quote}
{\tt Don't give up the ship. It has just begun to sink.}
\end{quote}
\section{Open Charm}
In 1971, well before the discovery of the $J/\psi$, there were intimations of
particles carrying a single charmed quark through the short tracks they left in
emulsions, as studied by K. Niu and collaborators at Nagoya \cite{Niu}. The
best candidate appears now to be an example of the rare decay $D^+ \to \pi^+
\pi^0$. Tony Sanda reminded us in this meeting \cite{Sanda} that by the 1975
International Conference on Cosmic Ray Physics this group had accumulated
\cite{ICRC} a significant sample of such ``short-lived particles.''
A candidate for the charmed baryon now called $\Lambda_c$ (as well as for
the decay $\Sigma_c \to \Lambda_c \pi$) was reported in neutrino interactions
in 1975 \cite{Lambdac}. The properties of the $\Lambda_c$ and $\Sigma_c$ were
very close to those anticipated by an analysis of charmed-particle spectroscopy
\cite{DGG} which appeared shortly after the discovery of the $J/\psi$.
Despite these indications, as well as the discovery of a candidate for
the first radial excitation (``$\psi'$'') of the $J/\psi$ \cite{psip} just 10
days after the observation of the $\psi$ in $e^+ e^-$ collisions, the charm
interpretation of the $J/\psi$ and $\psi'$ required several key tests to be
passed.
\subsection{Where was the $D \to \bar K \pi$ decay?}
The decays of charmed nonstrange mesons, with predicted masses of nearly 2
GeV/$c^2$, could involve a wide variety of final states, so that any individual
two-body (e.g., $D^0 \to K^- \pi^+$) or three-body (e.g., $D^+ \to K^- \pi^+
\pi^+$) mode should have a branching ratio of a few percent \cite{GIM}.

GLR attempted to quantify this using a current algebra model to estimate
multiple-pion production \cite{GLR}. Unfortunately we used a value of the pion
decay constant $f_\pi$ high by $\sqrt{2}$ \cite{LQR}, and neglected other modes
besides $\bar K + n \pi$ \cite{EQ}. Our results implied ${\cal B}(D^0 \to K
\pi)$ of nearly 50\% for a 2 GeV/$c^2$ charmed particle, clearly an
overestimate both in hindsight and intuitively (see, e.g., \cite{GIM}). Our
result was quoted in the report \cite{NoD} of an initial SPEAR search which
failed to find charmed particles, and may have led to overconfidence in some
other proposed experiments \cite{NoC} which failed to find charm. Subsequent
calculations (also taking into account the non-zero pion mass), based both on
the current algebra matrix element and on a statistical model \cite{Fermi},
found smaller $D \to \bar K \pi$ branching ratios than GLR \cite{LQR}.
\subsection{Why did $R$ rise beyond its predicted value of $3 \frac{1}{3}$?}
The rise in $R$ observed at 4 GeV and higher was {\it too large} to be
accounted for by charm, for which the prediction was $\Delta R = 3 Q_c^2 =
4/3$. The resolution of this problem was that pairs of $\tau$ leptons
\cite{tau}, whose threshold is $E_{\rm c.m.} = 2 m_\tau c^2 \simeq 3.56$ GeV,
were also contributing to $R$. These $\tau$ leptons also diluted the rise in
kaon multiplicity expected above charm threshold. This coincidence had all the
aspects of a mystery thriller \cite{HH}; the near-degeneracy of charm and
$\tau$ production thresholds is one of those effects (like the comparable
masses of the muon and pion) that seems just to have been put in to make the
problem harder. The value of $R$ is still a bit large in comparison with
theoretical expectations in the range covered by SPEAR \cite{Rval}.
\subsection{Where were the predicted electric dipole transitions from the
$\psi'$ to P-wave levels?}
The lowest P-wave charmonium levels (now called $\chi_c$) were predicted to lie
between the 1S and 2S levels \cite{Pmasses}. Thus, one expected to be able
to see the electric dipole transitions $\psi' \to \gamma \chi_c$, leading to
monochromatic photons. Initial inclusive searches using a NaI(Tl) detector at
SPEAR did not turn up these transitions \cite{Hof}, leading to some concern.
The problem turned out to be more experimentally demanding than originally
suspected. By looking for the cascade transitions $\psi' \to \gamma \chi_c \to
\gamma \gamma J/\psi$, the DASP group, working at the DORIS storage ring at
DESY, presented the first results \cite{DASPchi} for the $\chi_{c1} = ~^3P_1$
level (with some possible admixture of $\chi_{c2} = ~^3P_2$). By looking for
events of the form $\psi' \to \gamma \chi_c \to \gamma + (\pi \pi, K \bar K,
\ldots)$ and reconstructing the mass of the final hadronic state, the Mark I
group at SPEAR \cite{SPEARchi} detected states corresponding to both
$\chi_{c2}$ and $\chi_{c0} = ~^3P_0$.
\subsection{Discovery of the $D$ mesons}
By 1975, estimates based on the mass of the $J/\psi$, on QCD \cite{DGG}, and on
potential models incorporating coupled-channel effects \cite{ECC} predicted $D$
masses in the range of 1.8 to 1.9 GeV/$c^2$, so that the rise in $R$ could, at
least in part, be accounted for by $D \bar D$ threshold. Glashow urged Gerson
Goldhaber to re-examine the negative search results \cite{Riordan}. Together
with F. M. Pierre and other collaborators, Goldhaber incorporated
time-of-flight information to improve kaon identification, and found peaks in
$D^0 \to K^- \pi^+$ and $K^- \pi^+ \pi^- \pi^+$ \cite{Gh}, corresponding to a
mass which we now know to be 1.863 GeV/$c^2$. Low-multiplicity decays of the
$D^+$ were also seen shortly thereafter \cite{Dplus}.
The first discoveries of $D$ mesons were announced in June of 1976. This would
have been too late for the 1976 Meson Conference, which was traditionally held
in April, so Glashow could have lost his bet made at the 1974 Conference
\cite{EMS}. (See, however, \cite{Lambdac}.) But meson spectroscopy was
entering a slower period, and the next conference was not held until 1977.
Since charm had clearly been discovered by outlanders, the participants were
obliged to eat their (candy) hats, graciously distributed by the conference
organizers.
\subsection{The $\tau$ as interloper}
What about the $\tau$ lepton, whose appearance complicated the interpretation
of the SPEAR results? It destroyed the anomaly cancellation mentioned
earlier! As a result, a new pair of quarks with charges 2/3 and $-1/3$, named
top and bottom by Harari \cite{HH}, had to be postulated. Just such a quark
pair had been invented earlier (in 1973) by Kobayashi and Maskawa \cite{KM}
in order to explain the observed CP violation in kaon decays. The discovery of
these quarks is another story, of which Fermilab has a right to be proud but
which we shall not mention further here.
\subsection{Total rate vs. purity in charm detection}
A question which arose in the search for charmed particles is being played out
again as present and future searches are planned. Is it better to work in a
relatively clean environment with limited rate, or in an environment where rate
is not a problem but backgrounds are high? For charm in the mid-1970's, the
choice lay between the reaction $e^+ e^- \to \gamma^* \to c \bar c$,
contributing $\Delta R = 4/3$ above charm threshold, and fixed-target
proton-proton collisions at 400 GeV/$c$, with $\sigma_{c \bar c} = {\cal
O}(10^{-3}) \sigma_{\rm tot}$ but overall greater charm production rates than
in $e^+ e^-$ collisions. (The CERN Intersecting Storage Rings (ISR) were also
running at that time, providing proton-proton c.m. energies of up to 63 GeV but
with limited rates compared to fixed-target experiments.)
After much time and effort, the balance eventually tipped in favor of fixed
target hadron (or photon) collisions. (In photon collisions the photon can
couple directly to a charm-anticharm pair via the electric charge, leading to
diffractive production.) Two advances that greatly enhanced the ability to
isolate charm were the use of the soft pion in $D^* \to D \pi$ decays
\cite{Nussinov} and the impressive growth in vertex detection technology
\cite{vertex}.
\paragraph*{Soft pion tagging.}
The lowest-lying $^1S_0$ and $^3S_1$ bound states of a charmed quark and a
nonstrange antiquark are called $D$ and $D^*$, respectively. Their masses are
such that $D^{*0}$ can decay to $D^0 \gamma$ and just barely to $D^0 \pi^0$,
while $D^{*+}$ can decay to $D^+ \gamma$ and just barely to $D^+ \pi^0$ or
$D^0 \pi^+$. In the last case, the charged pion has a very low momentum with
respect to the $D^0$, and can be used to ``tag'' it. One takes a hypothetical
set of $D^0$ decay products and combines them with the ``tagging'' pion. If
the decay products really came from a $D^0$, the difference in effective masses
of the products with and without the extra pion should be $M(D^{*+}) - M(D^0)
\simeq 145$ MeV/$c^2$. This method not only can help to see the $D^0$, but
can tell whether it was produced as a $D^0$ or a $\bar D^0$, since the only
low-mass combinations are $\pi^+ D^0$ or $\pi^- \bar D^0$. This distinction
is important if one wishes to study $D^0$--$\bar D^0$ mixing or suppressed
decay modes of the $D^0$ (where the flavor of the decay products does not
necessarily indicate the flavor of the decaying state).
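In modern terms, the tag amounts to a cut on the mass difference; a minimal
sketch follows, with hypothetical candidate masses and an assumed resolution
window (the mass difference $M(D^{*+}) - M(D^0) \simeq 145$ MeV/$c^2$ is the
value quoted above):

```python
# Sketch of the D* "soft pion" tag: accept a D0 candidate when the mass
# difference m(K pi pi_soft) - m(K pi) lies near M(D*+) - M(D0).
DELTA_M = 145.4   # MeV/c^2: M(D*+) - M(D0)
WINDOW = 1.0      # MeV/c^2: assumed experimental resolution (hypothetical)

def is_dstar_tagged(m_with_pion, m_without_pion):
    """True if the candidate is consistent with D*+ -> D0 pi+ (soft pi+)."""
    return abs((m_with_pion - m_without_pion) - DELTA_M) < WINDOW

# Hypothetical candidate: m(K pi) = 1864.8, m(K pi pi_soft) = 2010.3 MeV/c^2
assert is_dstar_tagged(2010.3, 1864.8)      # delta m = 145.5 -> tagged
assert not is_dstar_tagged(2000.0, 1864.8)  # delta m = 135.2 -> rejected
```

The charge of the tagging pion then labels the neutral meson as $D^0$
($\pi^+$) or $\bar D^0$ ($\pi^-$), as described above.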
\paragraph*{Vertex detection.}
The earliest technique for detecting the short tracks made by charmed
particles, nuclear emulsions, was successfully used in Fermilab E-531 for the
detection of charmed particles produced in neutrino interactions, has been used
by Fermilab E-653 for the study of decays of charmed and $B$ mesons, and is
still in use for detecting decays of $\tau$ leptons produced in
neutrino-oscillation experiments \cite{CHORUS}. It has profited greatly from
automatic scanning methods introduced by Niu's group at Nagoya. Still, it can
be subject to systematic errors, such as a bias against long neutral decay
paths.
When it was realized that charmed particles could have lifetimes less than
$10^{-12}$ s, numerous attempts were made to improve the resolution of
existing devices such as bubble chambers and streamer chambers. Some of these
are described in \cite{vertex}.
In the late 1970's, electronic spectrometers such as the OMEGA spectrometer at
CERN began to be equipped with new, high-resolution silicon vertex detectors.
These devices had the advantages of radiation hardness, excellent spatial
resolution, and electronic readout, making them {\it the} technique of choice
for resolving the tracks of short-lived particles in the busy environments of
hadro- and electroproduction. Experiments which have profited from this
technique over the years include CERN WA-82, WA-89, WA-92 and Fermilab E-687,
E-691, E-769, E-791, and E-831 (FOCUS).
\section{Examples of Further Progress}
\subsection{Emulsion results}
Emulsion studies of neutrino- and hadroproduction of charmed particles have
displayed the variation of lifetimes among charmed particles, measured the
decay constant $f_{D_s}$ of the charmed-strange meson $c \bar s \equiv D_s$,
and set limits on neutrino oscillations. The scanning techniques pioneered by
the Nagoya group are beginning to be disseminated so that many institutions can
analyze future results.
\subsection{Excited charmed mesons}
A meson containing a single heavy quark and a light antiquark is like a
hydrogen atom of the strong interactions. The heavy quark corresponds to the
nucleus, while the antiquark (and its accompanying glue) correspond to the
electron and electromagnetic field.
The lowest orbitally excited states of charmed mesons follow an interesting
pattern rather different from that in charm-anticharm bound states. In
$c \bar c$ levels, the charge-conjugation parity $C = (-1)^{L+S}$ prevents
the mixing of spin-singlet and spin-triplet levels with the same $L$. Thus,
the properties of levels are best calculated by first coupling the $c$ and
$\bar c$ spins to $S=0$ or 1 and then coupling $S$ with the orbital angular
momentum $L$ to total angular momentum $J$. One thus labels the states by
$^{2S+1}[L]_J$, where $[L] \equiv S,~P,~D,~F,~\ldots$ for $L = 0,~1,~2,~3,~
\ldots$. In heavy-light states, however, nothing prevents mixing of $^1P_1$
and $^3P_1$ levels, and there is a favored pattern in the limit that the heavy
quark's mass approaches infinity \cite{DGG,JLRP,HQETP}. One first couples the
light antiquark's spin $s = 1/2$ to the orbital angular momentum $L=1$ to
obtain the total angular momentum $j=1/2,~3/2$ carried by the light quark. One
then couples $j$ to the heavy quark's spin $S_Q =1/2$ to obtain two pairs of
levels, as shown in Table 1.
\begin{table}
\caption{Lowest orbitally-excited charmed mesons.}
\begin{center}
\begin{tabular}{c c c c c} \hline
$j$ & $J = j - \frac{1}{2}$ state & $J = j + \frac{1}{2}$ state & $l(D^{(*)}
\pi)$ & Width \\
\hline
1/2 & $? \to D \pi$ & $? \to D^* \pi~^a$ & 0 & Broad \\
3/2 & $D(2420) [\to D^* \pi$] & $D(2460) [\to D^{(*)} \pi$] & 2 & Narrow \\
\hline
\end{tabular}
\end{center}
\leftline{$^a$Candidate exists (see below).}
\end{table}
The $j=1/2$ states are expected to decay to $D^{(*)} \pi$ via S-waves and thus
to be broad and hard to find, while the $j=3/2$ states should decay via D-waves
and thus should be narrower and more easily distinguished from background.
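The coupling scheme described above can be restated compactly; the following is a sketch of the standard heavy-quark-limit counting (the $J^P$ assignments follow from the couplings, with $D(2420)$ and $D(2460)$ identified with the narrow $j=3/2$ pair):

```latex
% Coupling scheme in the heavy-quark limit: light degrees of freedom first,
\vec{\jmath} = \vec{s}_q + \vec{L}\,, \qquad
s_q = \textstyle\frac{1}{2}\,,\ L = 1
\;\Longrightarrow\; j = \textstyle\frac{1}{2},\,\frac{3}{2}\,;
% then the heavy-quark spin,
\vec{J} = \vec{\jmath} + \vec{S}_Q \;\Longrightarrow\;
J^P = (0^+,\,1^+)_{j=1/2}\,, \qquad (1^+,\,2^+)_{j=3/2}\,,
% with parity P = (-1)^{L+1} = + for these P-wave levels.
```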
The first orbitally excited charmed mesons were reported by the ARGUS
Collaboration \cite{ARP} in 1985. Since then, considerable progress has been
made on these states by the ARGUS, CLEO, LEP, and fixed-target Fermilab
collaborations, with the properties of the $j=3/2$ states well mapped out.
There is now a candidate for a broad ($j=1/2$) state, with spin-parity $J^P =
1^+$, mass $M = 2.461^{+0.041} _{-0.034} \pm 0.010 \pm 0.032$ GeV, and width
$\Gamma = 290^{+101}_{-79} \pm 26 \pm 36$ MeV \cite{broad}.
\subsection{Charmonium with antiprotons}
The ability to control the energy of an antiproton beam, first in the CERN
ISR \cite{ISR} and then in the Fermilab Antiproton Accumulator Ring \cite{ACC},
permitted the study of charmonium states through direct-channel production on
a gas-jet target. A series of experiments studied the production and decay
of states like the $\eta_c$ (the $^1S_0$ charmonium ground state), the
$J/\psi$, and the $\chi_c$ levels, and led to the discovery of the $h_c$, the
$^1P_1$ level. Precise measurements of masses and decay widths were made,
and an earlier claim \cite{etacp} for the $2^1S_0$ level, the $\eta_c'$, has
been disproved. The search for this state, as well as for possible narrow
$c \bar c$ levels above $D \bar D$ threshold, continues at Fermilab as well
as elsewhere (see, e.g., \cite{DELetac}).
\subsection{Photo- and hadroproduction with vertex detection}
An impressive series of fixed-target experiments has refined the technique
of vertex detection using silicon strips or pixels \cite{vtxexps}, obtaining
unparalleled numbers of charmed particles. Among the significant results are
detailed studies of lifetime differences among charmed particles, ranging
from greater than $10^{-12}$ s for the $D^+$ to less than $10^{-13}$ s
for the $\Omega_c = css$.
\subsection{Electron-positron collisions}
The ARGUS and CLEO Collaborations continued to contribute significant results
on charmed particles produced in $e^+ e^-$ collisions, with results still
flowing from CLEO on such topics as the leptonic decay of the $D_s$ \cite{Ds}
and the spectroscopy of charmed baryons \cite{Charmedb}.
\section{Examples of Current Questions}
\subsection{Lifetime hierarchies}
The charmed-particle lifetimes mentioned in the previous Section, with
\begin{equation}
\tau(\Omega_c) < \tau(\Lambda_c) < \tau(D^0) \simeq \tau(D_s) < \tau(D^+)
\end{equation}
varying by more than a factor of 10, continue to be a mild source of concern to
theorists. The above hierarchy is better understood \cite{Shifman,NS} than
that in strange particle decays, where lifetimes vary by more than a factor of
$600 \simeq \tau(K_L)/\tau(K_S)$. However, the same methods which appear to
have described the charm lifetime hierarchy do not explain why $\tau(\Lambda_b)
/\tau(B^{+,0}) < 0.8$, whereas a ratio more like 0.9 to 0.95 is expected. It
appears that non-perturbative effects, probably the main feature of the
lifetime differences for kaons and still important for charmed particles,
continue to have some residual effects even for the decays of the heavy $b$
quark.
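For orientation, the size of the spread can be illustrated numerically. The lifetime values below are rough, representative numbers chosen only to be consistent with the bounds quoted above (greater than $10^{-12}$ s for the $D^+$, less than $10^{-13}$ s for the $\Omega_c$); they are not measurements taken from this text.

```python
# Illustrative (approximate, hand-picked) charmed-particle lifetimes in seconds;
# chosen only to respect the ordering and bounds stated in the text.
lifetimes = {
    "Omega_c": 7e-14,
    "Lambda_c": 2.0e-13,
    "D0": 4.1e-13,
    "Ds": 5.0e-13,
    "D+": 1.05e-12,
}

# Recover the hierarchy tau(Omega_c) < tau(Lambda_c) < tau(D0) ~ tau(Ds) < tau(D+)
ordered = sorted(lifetimes, key=lifetimes.get)
print(ordered)  # ['Omega_c', 'Lambda_c', 'D0', 'Ds', 'D+']

# The spread exceeds the factor of 10 mentioned in the text:
spread = lifetimes["D+"] / lifetimes["Omega_c"]
print(f"spread ~ {spread:.0f}")  # ~ 15 with these illustrative values
```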
\subsection{Decay constants}
The latest average for the $D_s$ decay constant \cite{Stone} is $f_{D_s} = 255
\pm 21 \pm 28$ MeV, based on observation of the decays $D_s \to \mu \nu,~\tau
\nu$. We still need the values of the other heavy meson decay constants:
$f_D$, $f_B$, and $f_{B_s}$. Lattice \cite{Draper} and QCD sum rule \cite{SN}
predictions for these quantities exist. The value of $f_{D_s}$ is consistent
with predictions, though a bit on the high side. The value of $f_D$ is in
principle accessible with present CLEO data samples \cite{LKGpc}. One would
like to be able to distinguish between the quark-model prediction \cite{QMP}
$f_{D_s}/ f_D \simeq 1.25$ and the lattice/sum rule predictions of this ratio,
which range between 1.1 and 1.2. One may be able to isolate $D^+ \to \mu^+
\nu_\mu$ via the kinematics of the decay $D^{*+} \to \pi^0 D^+$ \cite{JRFD}.
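The extraction of $f_{D_s}$ from the $\mu\nu$ and $\tau\nu$ modes rests on the standard purely leptonic rate, which is not written out in the text but is worth recalling:

```latex
% Purely leptonic decay rate used to extract f_{D_s} from D_s -> mu nu, tau nu:
\Gamma(D_s \to \ell \nu_\ell) \;=\;
\frac{G_F^2}{8\pi}\, f_{D_s}^2\, m_\ell^2\, M_{D_s}
\left( 1 - \frac{m_\ell^2}{M_{D_s}^2} \right)^{\!2} |V_{cs}|^2 ,
% where the helicity-suppression factor m_ell^2 favors the tau nu and
% mu nu modes over e nu.
```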
\subsection{Excited $D$ mesons}
Using heavy-quark symmetry, we can relate the properties of a meson containing
a heavy quark $Q$ and a light antiquark $\bar q$ to those where $Q$ is replaced
by another heavy quark $Q'$. Thus, further study of excited $D = c \bar q$
mesons would give us information about the corresponding $\bar B = b \bar q$
states. The properties of P-wave $b \bar q$ (``$B^{**}$'') mesons would be
very useful for ``tagging'' neutral $B$'s \cite{tags}, since a $\bar B^0$
resonates with a $\pi^-$ to form a $B^{**-}$ while a $B^0$ resonates with a
$\pi^+$ to form a $B^{**+}$.
\subsection{Charm-anticharm mixing and CP violation}
Both mixing and CP-violating effects are expected to be far smaller for charmed
particles than for $B$'s \cite{charmCP}. Since these effects are easier to
study in the charm system (at least in hadronic production, where charm
production is much easier than $b$ production), they are thus ideal for
displaying beyond-standard-model physics, since the standard-model effects
are so much smaller.
\section{Lessons?}
Should we be learning from history, or will we always be fighting the last war?
The search for charm has possible lessons, perhaps to be taken with a grain of
salt, for theory, experiment, and their synthesis in the form of future
searches.
\subsection{Theory}
The optimism of theorists was justified in the search for charm. The charmed
quark indeed was light, $m_c \simeq 1.5$ GeV/$c^2$. Perturbative QCD was at
least a qualitative guide to the properties of charmonium and charmed
particles. The discovery of the first quark with mass substantially exceeding
that of the QCD scale was a tremendous boost to the idea (already strongly
suggested by deep inelastic scattering) that fundamental quarks needed to be
taken seriously.
\subsection{Experiment}
Many searches for charmed particles were harder than people thought. Sometimes
they were aided by sheer instrumental ``overkill,'' as in the case of the
superb mass resolution attained in the experiment which discovered the $J$
particle. Sometimes the choice of a fortunate channel also helped, as in the
production of the $\psi$ by $e^+ e^-$ collisions with carefully controlled beam
energies, or in the choice of the $e^+ e^-$ decay mode in which to observe the
$J$. Advances in instrumentation proved crucial, whether in the use of
particle identification to pull out the initial $D^0$ signal from background
or the study of charmed particles in high-background environments using vertex
detection.
\subsection{Future searches}
I do not see as clear a path in future searches as there was toward charm. In
the case of supersymmetry, for example, the landscape looks very different.
There is a wide variety of predictions, and one is looking for the whole
supersymmetric system at once. Alternate schemes for solving the problems
addressed by supersymmetry (e.g., dynamical electroweak symmetry breaking)
are not yet even formulated in a self-consistent manner. Perhaps that makes the
searches for physics beyond the standard model, which will be addressed in
future experiments here at Fermilab and elsewhere, even more exciting.
\section*{Acknowledgments}
I wish to thank Val Fitch for a pleasant collaboration on Ref.~\cite{AIPIOP},
Ikaros Bigi and Joel Butler for the chance to take this trip down memory lane,
and Peter Cooper, Mary K. Gaillard, John Iliopoulos, Scott Menary, Chris Quigg,
Tony Sanda, Lincoln Wolfenstein, and John Yoh for helpful comments. This work
was supported in part by the United States Department of Energy under Contract No.
DE FG02 90ER40560.
\newpage
\section{Introduction}
In the last decade, deep radio surveys (Condon \& Mitchell 1984; Windhorst
1984; Windhorst et~al. 1985) have pointed out the presence of
a new population of radio sources appearing below a few mJy and
responsible for the observed flattening in the differential
source counts (normalized to Euclidean ones).
Several scenarios have been developed to
interpret this ``excess'' in the number of faint radio sources,
invoking a non--evolving population of local ($z < 0.1$) low--luminosity
galaxies (Wall et~al. 1986), strongly--evolving normal spirals (Condon 1984,
1989) and actively star--forming galaxies (Windhorst et~al. 1985, 1987;
Danese et~al. 1987; Rowan--Robinson et~al. 1993). The latter scenario is
supported by the existing optical identification works performed for the
sub--mJy population. These works have, in fact,
shown that the sub--mJy sources are mainly identified with faint blue
galaxies (Kron, Koo \& Windhorst 1985; Thuan \& Condon 1987), often showing
peculiar optical morphologies indicative of interaction and merging phenomena
and spectra similar to those of the star--forming galaxies detected by IRAS
(Franceschini et~al. 1988; Benn et~al. 1993). However, since the majority of
these objects have faint optical counterparts, visible only in deep CCD
exposures (down to $B \sim$ 24--25), all these works are based on small
identification fractions. For example, the Benn et~al. spectroscopic
sample, despite being the largest such sample available in the
literature, corresponds to slightly more than 10 per cent of the total radio
sample.
In order to better understand the nature of the sub--mJy radio galaxy
population on the basis of a larger identification fraction than the ones
obtained so far, we performed deep photometric and spectroscopic
identifications for a faint radio source sample in the ``Marano Field''.
Here we present the results of the identification of 68 objects, which
represent the total radio sample obtained by joining together the two $S > 0.2$
mJy complete samples at 1.4 and 2.4 GHz in the ``Marano Field''
(see Gruppioni et~al. 1997). We were able to reach a relatively high
identification fraction with respect to previous works, since we optically
identified 63\% of the 68 radio sources and we obtained spectra for 34 of them
($\sim$50\% of the total sample). These constitute the highest identification
fractions so far available in the literature for sub--mJy radio samples.
The paper is structured as follows:
in section 2 we describe the radio sample; in section 3 we present the
photometric data and optical identifications; in section 4 we present the
spectroscopic results, including spectral classification for the optical
counterparts and notes on individual objects;
in section 5 we discuss the radio and optical properties of the faint radio
source population; in the last two sections we discuss our results and
present our conclusions.
\section{The Radio Sample}
Deep radio surveys with the Australia Telescope Compact Array (ATCA) have been
carried out at 1.4 and 2.4 GHz, with a limiting flux of $\sim$0.2 mJy at each
frequency, in the ``Marano Field'' (centered at $\alpha(2000) = 03^h 15^m 09^s,
\delta(2000) = -55^{\circ} 13^{\prime} 57^{\prime \prime}$), for which deep
optical and X--ray data are also available.\\
The two radio samples, complete at the 5$\sigma_{local}$ level, consist of 63
and 48 sources respectively at 1.4 and 2.4 GHz. The main results of the
radio data analysis are extensively described by Gruppioni et~al. (1997).
The 1.4 GHz differential source counts show the flattening below about 1 mJy
found by previous authors (Condon \& Mitchell 1984; Windhorst et~al. 1985) and
considered as the typical feature of the sub--mJy population.
From the study of the spectral index distribution as a function of flux, a
significant flattening of the spectral index toward fainter fluxes has been
found for the higher frequency selected sample (2.4 GHz), while the median
spectral index ($\alpha_{med}$) is consistent with remaining constant at $\sim
$0.8 ($f_{\nu} \propto \nu^{-\alpha}$) for the sample selected at 1.4 GHz.
However, at both frequencies a significant number of sources with inverted
spectrum do appear at flux densities $\la 2$ mJy. In particular, objects
with inverted spectra constitute $\sim$13\% of the total 1.4 GHz sample and
$\sim$25\% of the total 2.4 GHz one. For the latter sample this percentage
increases to $\sim$40\% for $S < 0.6$ mJy.
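With the convention $f_{\nu} \propto \nu^{-\alpha}$ adopted above, the two-point spectral index between the survey frequencies follows directly from the flux ratio. A minimal sketch (the example fluxes are taken from Table~1; small differences from the tabulated $\alpha$ presumably reflect the exact observing frequencies):

```python
import math

def spectral_index(s1, s2, nu1=1.4, nu2=2.4):
    """Two-point spectral index alpha, with f_nu ~ nu**(-alpha)."""
    return math.log(s1 / s2) / math.log(nu2 / nu1)

# Source 01 of Table 1: S_1.4 = 8.28 mJy, S_2.4 = 4.86 mJy
alpha = spectral_index(8.28, 4.86)
print(round(alpha, 2))  # 0.99, close to the tabulated 0.97

# An inverted spectrum corresponds to alpha < 0, i.e. flux rising with frequency:
print(spectral_index(0.54, 0.79) < 0)  # True (source 05)
```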
The total radio source sample considered in this paper for the optical
identification work consists of the 1.4 GHz complete sample joined together
with the 5 sources detected only at 2.4 GHz above the 5$\sigma_{local}$ level.
\section{Photometry and Optical Identification}
For the identification of our radio sources we used the photometric
data already available for the ``Marano Field''. These data consist
of \hbox{ESO~3.6--m} \ plates in the bands $U_K$, $J_K$ and $F_K$ (Kron 1980)
covering an area of $\sim$0.69~sq.deg. (the plates include the entire radio
field, that is $\sim$0.36 sq.~deg.) and reaching limiting magnitudes of
\hbox{$J_K \sim$ 22.5} and \hbox{$U_K,F_K \sim$ 22.0}. The data from the plates
have been utilized for the selection and definition of a complete sample of
faint quasars with $J \leq 22.0$, the MZZ sample (Marano, Zamorani
\& Zitelli 1988; Zitelli et~al. 1992). Moreover, the inner part of the field
($\sim$15$^{\prime}$ radius) has been observed with the ESO NTT telescope,
with deep CCD exposures in the $U$, $B$, $V$ and $R$ bands, down to
limiting magnitudes of $U \sim 23.5$, $B \sim 25.0$, $V \sim 24.5$ and $R \sim 24.0$.
With successive observing runs a rather complete coverage of the central
part of the field (corresponding to $\sim$60\% of the area covered by the radio
observations) has been obtained in the $V$ and $R$ bands (see Figure~1).
\begin{figure}
\centerline{
\psfig{figure=fig1.eps,width=8cm}
}
\caption{\label{fig1} CCD coverage of the inner part of the ``Marano Field''
in the $V$ and $R$ bands. The area covered by
the radio observations is represented by the dashed box. The filled dots are the 68
radio sources.}
\end{figure}
A multi--colour catalog based on the deep CCD observations has been created.
The images were reduced in a standard way using {\tt IRAF}\footnote{{\tt IRAF}
is distributed by the National Optical Astronomy Obser\-vatories, which are
operated by AURA Inc. for the NSF.} and for the data analysis
(detection, photometry and star/galaxy classification)
we used SExtractor (Bertin and Arnouts 1996). More details on the data
reduction and analysis can be found elsewhere (Mignoli, 1997) and a full
description of the catalog is in preparation.
We used the CCD overlapping regions to check the photometric errors and
homogeneity in the detection and morphological classification.
In the $B$ band the typical errors are $\sim$0.05 mag up to
$B = 23$, growing to 0.15--0.20 at the limiting magnitude of $B = 25$.
In the $R$ band the error ranges from $\sim$$0.07$ at $R = 22$ to $\sim$$0.20$ at
$R = 24$.
For the identification of the 50 radio sources which
fall in the $R$ band CCD frames, we used the magnitudes and
positions of the optical sources given in the catalog.
For the 18 radio sources outside the area covered
by CCD frames, we used the magnitudes and positions of the optical
sources taken from the plates data. In this case the $U_K$, $J_K$ and $F_K$
magnitudes of the photographic system have been transformed to $U$, $B$, $V$
and $R$ magnitudes of the standard Johnson/Cousin system of the CCD data,
according to the transformation formulae given by Kron (1980) and Koo \& Kron
(1982):
$$ U \approx U_K \qquad {V+R\over2} \approx F_K $$
$$ B = J_K + 0.20(J_K - F_K) $$
$$ V = F_K + 0.34(J_K - F_K) $$
The correctness of this transformation between the two photometric systems
has been checked using the
objects detected both in the plates and in the CCD frames: in all the four
bands the average of the magnitude differences is less than 0.05 mag,
implying a good consistency between the CCD and the plates photometry,
whereas the scatter of the points gives an estimate of the random errors of
$\sim$0.10~mag for the $B,V$ bands and of $\sim$0.15~mag for the $U,R$ bands.
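The transformation formulae above can be sketched as a small helper, with $R$ recovered from the $(V+R)/2 \approx F_K$ relation. This is a direct transcription of the quoted formulae, not code from the paper:

```python
def plates_to_johnson(U_K, J_K, F_K):
    """Convert Kron photographic magnitudes to Johnson/Cousins U, B, V, R,
    following Kron (1980) and Koo & Kron (1982) as quoted in the text."""
    U = U_K
    B = J_K + 0.20 * (J_K - F_K)
    V = F_K + 0.34 * (J_K - F_K)
    R = 2.0 * F_K - V  # from (V + R) / 2 ~ F_K
    return U, B, V, R

# For an object with J_K = F_K (zero photographic colour) B, V and R coincide:
print(plates_to_johnson(20.0, 21.0, 21.0))  # (20.0, 21.0, 21.0, 21.0)
```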
\begin{table*}
\centering
\begin{minipage}{150mm}
\caption{Optical Identifications}
\begin{tabular}{ccrrrrrrrrrc}
& & & & \\ \hline \hline
& & & & & \\
\multicolumn{1}{c}{N$_{1.4}$} & \multicolumn{1}{c}{N$_{2.4}$} &
\multicolumn{1}{c}{$S_{1.4}$} & \multicolumn{1}{c}{$S_{2.4}$} &
\multicolumn{1}{c}{$\alpha_{r}$} &
\multicolumn{1}{c}{$U$} & \multicolumn{1}{c}{$B$} & \multicolumn{1}{c}{$V$} &
\multicolumn{1}{c}{$R$} & \multicolumn{1}{c}{$\Delta$} &
\multicolumn{1}{c}{$L$} & \multicolumn{1}{c}{Envir} \\
& & & & \\
& & (mJy) & (mJy) & & & & & & ($^{\prime \prime}$) & & \\
& & & & \\ \hline
& & & & \\
01 & 01 & 8.28 & 4.86 & 0.97 & &$>$~22.5*& &$>$~21.8*& & & \\
02 &$--$& 0.62 & $<~$0.53 & $>$~0.28 & &$>$~22.5*& &$>$~21.8*& & & \\
03 & 02 & 26.40 & 26.14 & 0.02 & 19.5* & 19.3* & 18.0* & 17.1* & 1.5 & 58.9 & GR \\
04 &$--$& 0.67 & $<~$0.48 & $>$~0.62 & & &$>$~24.50&$>$~24.00& & & \\
05 & 03 & 0.54 & 0.79 & -0.68 & & &$>$~24.50& 23.21 & 1.0 & 5.7 &~GR? \\
& & & & & &$>$~22.5*& 23.34 & 22.03 & 2.1 & 4.5 & \\
06 &$--$& 0.78 & 0.40 & 1.21 & 17.6* & 17.9* & 17.0* & 16.9* & 2.4 & 28.8 & \\
07 & 04 & 0.80 & 1.01 & -0.42 & 21.4* & 21.6* & 21.04 & 20.60 & 3.2 & 1.8 & \\
08 &$--$& 0.39 & $<~$0.28 & $>$~0.60 & 19.2* & 19.9* & 19.66 & 19.43 & 0.3 & 52.4 & \\
09 & 05 & 1.52 & 2.21 & $-$0.68 & 20.4* & 19.8* & 18.34 & 17.65 & 1.1 & 57.2 & \\
$--$& 06 & $<~$0.32 & 0.46 & $<-$0.66 & & &$>$~24.50& 24.60 & 1.4 & $--$ & \\
10 & 07 & 20.02 & 15.90 & 0.42 & & &$>$~24.50&$>$~24.00& & & \\
11 &$--$& 0.49 & $<~$0.50 & $>-$0.03 & 21.6* & 22.2* & 21.4* & 20.9* & 2.8 & 2.2 & \\
12 &$--$& 0.50 & 0.32 & 0.83 &$>$~23.50&$>$~24.50&$>$~24.50&$>$~24.00& & & \\
13 & 08 & 0.60 & 0.51 & 0.28 & 19.8* & 20.0* & 19.1* & 18.8* & 0.6 & 71.0 & \\
14 & 09 & 2.91 & 1.60 & 1.09 & &$>$~22.5*& &$>$~21.8*& & & \\
15 & 10 & 158.00 & 95.94 & 0.90 & 19.39 & 19.91 & 19.77 & 19.41 & 1.1 & 40.3 & \\
$--$& 11 & 0.20 & 0.30 & $-$0.72 & & &$>$~24.50&$>$~24.00& & & \\
16 & 13 & 1.33 & 0.83 & 0.85 & & &$>$~24.50&$>$~24.00& & & \\
17 & 12 & 3.08 & 1.76 & 1.02 & & 23.1* & 22.3* & 21.7* & 1.1 & 4.9 & \\
18 &$--$& 0.28 & $<~$0.20 & $>$~0.64 & 21.0* & 20.9* & 19.7* & 19.2* & 1.6 & 44.1 & ~D~ \\
19 & 14 & 6.27 & 5.31 & 0.30 &$>$~23.50&$>$~24.50&$>$~24.50&$>$~24.00& & & \\
20 & 15 & 0.38 & 0.43 & $-$0.24 & &$>$~22.5*& 23.50 & 22.12 & 0.9 & 9.9 & CL \\
21 & 16 & 1.02 & 1.00 & 0.04 & & &$\sim$25.0&$\sim$24.0&1.0 & $--$ & \\
22 &$--$& 0.47 & 0.28 & 0.97 & 21.3* & 22.2* & 20.77 & 20.07 & 1.1 & 12.0 & \\
23 &$--$& 0.40 & $<~$0.24 & $>$~0.92 & &$>$~22.5*& &$>$~21.8*& & & \\
24 &$--$& 0.46 & $<~$0.36 & $>$~0.48 & 20.0* & 20.4* & 20.01 & 19.60 & 1.1 & 18.3 & \\
25 & 17 & 3.30 & 2.68 & 0.38 & 23.75 & 23.72 & 21.98 & 20.85 & 0.5 & 12.3 & ~D~ \\
& & & & &$>$~23.50& 24.50 & 24.25 & 23.30 & 3.6 & 0.5 & \\
& & & & & 22.75 & 23.20 & 22.41 & 21.57 & 3.6 & 0.8 & \\
$--$& 18 & 0.20 & 0.33 & $-$0.88 & &$>$~22.5*& 23.85 & 23.72 & 4.8 & 0.0 & \\
26 & 19 & 0.45 & 0.48 & $-$0.11 & & 23.1* & 21.6* & 20.05 & 1.2 & 16.9 & ~D~ \\
27 & 20 & 2.60 & 1.98 & 0.49 & 20.0* & 19.6* & 17.81 & 17.10 & 0.5 & 70.0 & \\
28 & 21 & 1.66 & 0.98 & 0.95 & &$>$~22.5*& &$>$~21.8*& & & \\
29 &$--$& 0.62 & $<~$0.44 & $>$~0.61 & & &$>$~24.50&$>$~24.00& & & \\
30 & 22 & 6.32 & 3.88 & 0.89 & 23.11 & 23.68 & 22.73 & 21.78 & 0.4 & 6.2 & GR \\
& & & & &$>$~23.50& 24.40 & 23.36 & 22.47 & 3.5 & 0.6 & \\
31 & 23 & 0.92 & 0.52 & 1.05 & &$>$~22.5*& 22.78 & 21.58 & 2.4 & 2.7 & ~D~ \\
& & & & & &$>$~22.5*& 22.55 & 21.93 & 4.3 & 0.1 & \\
32 & 24 & 0.50 & 0.32 & 0.85 &$>$~23.50& 25.30 & 23.80 & 22.50 & 1.7 & 5.8 & GR \\
& & & & &$>$~23.50&$>$~24.50& 23.75 & 23.15 & 1.5 & 3.3 & \\
33 & 25 & 4.83 & 2.68 & 1.07 &$>$~23.50&$>$~24.50&$>$~24.50&$>$~24.00& & & \\
34 & 26 & 0.40 & 0.34 & 0.28 &$>$~23.50&$>$~24.50&$>$~24.50& 23.50 & 3.1 & 0.6 & \\
35 &$--$& 0.41 & 0.25 & 0.87 &$>$~23.50& 24.56 & 23.86 & 23.06 & 4.3 & 0.1 & \\
36 & 27 & 1.17 & 0.75 & 0.80 & & &$\sim$24.5&$\sim$24.3&1.4 & & \\
37 & 28 & 1.01 & 0.62 & 0.87 & & &$>$~24.50&$>$~24.00& & $--$ & \\
38 & 30 & 1.25 & 0.71 & 1.02 & 22.50 & 21.85 & 20.35 & 19.32 & 1.8 & 13.3 & \\
39 & 29 & 23.10 & 15.70 & 0.70 & & &$>$~24.50&$>$~24.00& & & \\
40 & 31 & 0.60 & 0.36 & 0.93 &$>$~23.50&$>$~24.50&$>$~24.50&$>$~24.00& & & \\
41 & 32 & 1.45 & 2.17 & $-$0.73 &$>$~23.50&$>$~24.50&$>$~24.50& 23.92 & 2.0 & 3.1 & \\
42 & 33 & 30.12 & 19.74 & 0.77 &$>$~23.50&$>$~24.50&$>$~24.50&$>$~24.00& & & \\
43 & 34 & 0.42 & 0.32 & 0.47 & 20.73 & 20.58 & 19.30 & 18.60 & 1.4 & 47.1 & \\
44 & 35 & 15.08 & 9.59 & 0.82 & & &$>$~24.50&$>$~24.00& & & \\
45 &$--$& 0.20 & 0.21 & $-$0.07 &$>$~23.50& 25.10 &$>$~24.50&$>$~24.00& 0.5 & $--$ & \\
46 &$--$& 0.50 & $<~$0.47 & $>$~0.11 &$>$~23.50&$>$~24.50&$>$~24.50&$>$~24.00& & & \\
47 &$--$& 0.47 & 0.27 & 0.98 &$>$~23.50& 22.86 & 21.39 & 20.05 & 1.1 & 16.2 & GR \\
& & & & & 21.50 & 21.86 & 21.38 & 20.75 & 4.8 & 0.1 & \\
\hline
\end{tabular}
\end{minipage}
\end{table*}
\begin{table*}
\centering
\begin{minipage}{150mm}
\contcaption{}
\begin{tabular}{ccrrrrrrrrrc}
& & & & \\ \hline \hline
& & & & & \\
\multicolumn{1}{c}{N$_{1.4}$} & \multicolumn{1}{c}{N$_{2.4}$} &
\multicolumn{1}{c}{$S_{1.4}$} & \multicolumn{1}{c}{$S_{2.4}$} &
\multicolumn{1}{c}{$\alpha_{r}$} &
\multicolumn{1}{c}{$U$} & \multicolumn{1}{c}{$B$} & \multicolumn{1}{c}{$V$} &
\multicolumn{1}{c}{$R$} & \multicolumn{1}{c}{$\Delta$} &
\multicolumn{1}{c}{$L$} & \multicolumn{1}{c}{Envir} \\
& & & & \\
& & (mJy) & (mJy) & & & & & & ($^{\prime \prime}$) & & \\
& & & & \\ \hline
& & & & \\
48 & 36 & 0.54 & 0.34 & 0.82 & 19.01 & 18.98 & 17.96 & 17.40 & 0.6 & 92.6 & \\
49 & 37 & 9.04 & 6.66 & 0.55 &$>$~23.50&$>$~24.50&$>$~24.50&$>$~24.00& & &~CL? \\
50 &$--$& 0.28 & 0.18 & 0.81 & 19.11 & 19.44 & 18.48 & 18.01 & 0.5 & 60.6 & \\
51 & 38 & 0.49 & 0.37 & 0.52 & &$>$~22.5*& 23.10 & 21.90 & 0.5 & 9.5 & ~D~ \\
& & & & & &$>$~22.5*& 23.62 & 22.11 & 2.3 & 3.6 & \\
52 & 39 & 2.06 & 1.40 & 0.70 & &$>$~22.5*& 23.20 & 22.70 & 1.5 & 5.4 & \\
53 & 40 & 1.54 & 1.79 & $-$0.28 &$>$~23.50&$>$~24.50& 23.00 & 21.45 & 1.5 & 8.7 & CL \\
54 & 41 & 0.91 & 0.68 & 0.53 & &$>$~22.5*& &$>$~21.8*& & & \\
55 &$--$& 0.31 & 0.18 & 0.99 & 20.6* & 20.4* & 19.0* & 18.22 & 1.2 & 38.3 & ~D~ \\
56 & 42 & 2.38 & 1.87 & 0.44 & &$>$~22.5*& & 22.01 & 0.5 & 14.2 & \\
57 & 43 & 1.71 & 1.07 & 0.85 & 22.69 & 23.14 & 22.81 & 22.09 & 0.9 & 7.9 & \\
58 & 44 & 1.29 & 0.91 & 0.63 & 21.2* & 21.3* & 20.20 & 19.68 & 4.1 & 1.2 & \\
& & & & &$>$~23.50&$>$~24.50& & 22.76 & 4.2 & 0.1 & \\
& & & & &$>$~23.50&$>$~24.50&$>$~24.50&$>$~24.00& & & \\
$--$& 45 & 0.15 & 0.33 & $-$1.42 & &$>$~22.5*& &$>$~21.8*& & & \\
59 &$--$& 0.38 & 0.30 & 0.43 & &$>$~22.5*& &$>$~21.8*& & & \\
60 & 46 & 0.94 & 0.42 & 1.46 & 23.25 & 23.23 & 22.75 & 21.15 & 2.0 & 5.1 & GR \\
$--$& 47 & $<~$0.28 & 0.31 & $<-$0.21 & 22.0* & 21.9* & 20.5* & 19.8* & 3.6 & 2.8 & \\
61 &$--$& 0.31 & $<~$0.20 & $>$~0.74 & 20.1* & 20.8* & 20.1* & 19.8* & 0.8 & 31.7 & \\
62 & 48 & 0.45 & 0.41 & 0.19 & &$>$~22.5*& &$>$~21.8*& & & \\
63 &$--$& 0.35 & $<~$0.28 & $>$~0.38 & 22.0* & 22.2* & 20.6* & 20.2* & 0.3 & 31.7 & \\
& & & & & 21.6* & 20.1* & 20.5* & 19.5* & 1.5 & 12.8 & \\
& & & & \\ \hline \hline
\end{tabular}
\end{minipage}
\end{table*}
A summary of the results of the optical identification and photometry
for all the 68 radio sources is given in Table~1. The table is arranged
as follows. The source numbers in the 1.4 and 2.4 GHz catalogs of
Gruppioni et~al. (1997), the radio fluxes at the two frequencies and the radio spectral index are listed in the
first five columns. The following four columns give respectively the
$U$, $B$, $V$ and $R$ magnitudes for all the optical counterparts within 5 arcsec
from the radio position. The magnitudes reported in the table
followed by an asterisk ($\ast$) are those obtained by the photographic plates.
There are a few cases where the $U$ and $B$ magnitudes are
missing: since the CCD frames in these bands are smaller than in the $V$ and
$R$ ones, the $U$ and $B$ CCD data do not cover the entire area covered by the $V$
and $R$ CCDs.
Thus for the faintest sources, not visible on the plates, it has not
been possible to measure these magnitudes.
The offset (in arcsec) of the optical counterpart, its likelihood ratio and a note
about the environment (D=double, GR=group or CL=cluster)
are listed in the last three columns.
The likelihood ratio technique adopted for source
identification is the one described by Sutherland \& Saunders (1992), where the likelihood
ratio ($L$) is the probability of finding the true optical counterpart at exactly
that position and with exactly that magnitude, relative to the probability of finding a
similar chance background object. As the probability distribution of the positional errors
we adopted a Gaussian distribution with a standard deviation of 1.5 arcsec. This value of
$\sigma$ is slightly larger than the mean radio positional error (see Gruppioni et~al. 1997),
so as to take into account the combined effect of radio and optical positional uncertainties.
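A schematic sketch of this likelihood ratio, written in the Sutherland \& Saunders (1992) form $L = q(m)\,f(r)/n(m)$ with $f(r)$ the circular Gaussian positional-error distribution of $\sigma = 1.5$ arcsec adopted above, is given below; `q_of_m` and `n_of_m` are hypothetical placeholders for the counterpart magnitude distribution and the background surface density:

```python
import math

def likelihood_ratio(r_arcsec, mag, q_of_m, n_of_m, sigma=1.5):
    """Schematic Sutherland & Saunders (1992) likelihood ratio:
    L = q(m) * f(r) / n(m), where f(r) is a circular Gaussian
    positional-error distribution of standard deviation sigma
    (1.5 arcsec, as adopted in the text).

    q_of_m: expected magnitude distribution of true counterparts
            (hypothetical placeholder).
    n_of_m: surface density of background objects of magnitude m
            (hypothetical placeholder).
    """
    f_r = math.exp(-r_arcsec**2 / (2.0 * sigma**2)) / (2.0 * math.pi * sigma**2)
    return q_of_m(mag) * f_r / n_of_m(mag)
```

Candidates whose $L$ exceeds the adopted threshold would then be retained as likely identifications.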
For each optical candidate we also evaluated the reliability ($REL$), by
taking into account the presence or absence of other optical candidates for the same radio
source (Sutherland \& Saunders 1992). Once $L$ has been computed for all the optical
candidates, one has to choose the best threshold value $L_{th}$ to discriminate
between
spurious and real identifications. This is done by studying how the completeness ($C$) and
reliability ($R$) of the identification sample vary as a function of $L_{th}$.
The best choice
for $L_{th}$ is the value which maximizes the estimator $(C+R)/2$. For this purpose, we
defined $C$ and $R$ as functions of $L_{th}$ according to the formulae given by de Ruiter,
Willis \& Arp (1977). Since we performed our optical identifications on two different
kinds of optical images (with different limiting magnitudes), we applied the likelihood
ratio method in two separate steps to two separate sub--samples of our total identification
sample. First, we computed $L$ and $REL$ for each optical counterpart visible on the plates ($R
\simeq 21.8$) within 15 arcsec of the radio source (we chose a relatively large search
radius so as to obtain significant statistics for the evaluation of $L$). For this ``bright''
sub--sample we then computed $C$ and $R$ for different values of $L_{th}$, obtaining as best
values $L_{th} = 1.5$, $C = 95.7$\% and $R = 89.8$\%. We then applied the same method
to the optical candidates
visible only on our CCD exposures, having 21.8 ${_<\atop^{\sim}} ~R~ {_<\atop^{\sim}}$ 24. For optical
candidates fainter than $R \simeq 24$ we were not able to give any reliable estimate of $L$
and $REL$, since our optical catalogue is fairly incomplete at this limit.
There are four such faint optical candidates in our identification sample, all
of them within
1.5 arcsec from the radio position. Also for the
``fainter'' identification sub--sample we found $L_{th} = 1.5$ as the best choice, with
corresponding $C = 89.6$\% and $R = 82.0$\%. With this threshold we have
28 likely identifications
brighter than $R \simeq 21.8$ and 8 with $21.8 {_<\atop^{\sim}} R {_<\atop^{\sim}} 24$ (plus four additional
possibly good identifications with objects fainter than $R \sim 24$, too faint
for a reliable determination of their likelihood ratio). The reliability ($REL$)
of each of these optical identifications is always relatively high ($> 80$\%), except for the
cases where more than one optical candidate with $L > 1.5$ is present for the same source.
As shown in the last column of Table 1,
a significant fraction (${_>\atop^{\sim}}$35\%) of the radio sources with reliable identification
occurs in pairs or groups; moreover many of
these objects show a peculiar optical morphology, suggesting
an enhanced radio emission due to interaction or merging phenomena.
This is in agreement with the results obtained by Kron, Koo \& Windhorst (1985)
and Windhorst et~al. (1995) in the optical identification of sub--mJy, or
even $\mu$Jy, radio sources.
\section{Spectroscopy}
\subsection{Observations}
Spectroscopic observations of 34 of the 36 optical
counterparts with likelihood ratio greater than 1.5 have been carried out
at the \hbox{ESO~3.6--m} \ telescope.
This sub--sample of 34 sources consists of all
28 objects with a reliable optical counterpart on the
photographic plates and of 6 of the 8 objects with an optical
counterpart on the CCDs having $R \leq 23.5$.
The spectroscopic observations have been performed in two different
observing runs, October 29, 30 and 31 1995, and November 12 1996, with
the EFOSC1 spectrograph (Enard and Delabre, 1992). The spatial scale at~the
detector (RCA 512$^2$, ESO CCD \#8) was 0.61 arcsec~pixel$^{-1}$.
The spectral ranges covered were usually 3600--7000~\AA \ for the blue objects
at $\sim$6.3~\AA/pix resolution (using the grism B300) and 6000--9200~\AA \
for the red objects at $\sim$7.7~\AA/pix resolution (grism R300).
In a few cases (for some bright or puzzling objects) we obtained spectra with
both instrumental configurations in order to cover a larger spectral domain.
The slit width was between 1.5 and 2.0 arcsec, in order to optimize the balance
between the fraction of the object's light within the aperture (set by the seeing)
and the sky--background contribution. Because of this relatively narrow slit,
no effort was made to achieve spectrophotometric precision.
The exposure times varied from a minimum of 10 minutes for the
brighter optical counterparts, to a maximum of 2 hours for the fainter ones
(with $R$ close to 23).
\subsection{Data Reduction}
Data reduction has been entirely performed with the NOAO
``Long--slit'' package in IRAF.
For every observing night, a bias frame was constructed averaging ten
``zero exposures'' taken at the beginning and at the end of each night.
The pixel-to-pixel gains were calibrated using flat fields obtained from an
internal quartz lamp. The background was removed by subtracting a sky--spectrum
obtained by fitting the intensities measured along the spatial direction in
the columns adjacent to the target position. Finally, one--dimensional
spectra were obtained using an optimal extraction (Horne 1986) in order
to maximize the signal--to--noise ratio even for the fainter objects.
Wavelength calibration was carried out using Helium--Argon lamps taken at the
beginning and at the end of each night. From the position of the sky lines
in the scientific frames, we estimated the accuracy of the wavelength calibration
to be about 2~\AA.
During each night, two standard stars were observed for flux--calibration purposes.
\subsection{Optical Spectra and Classification}
From our spectroscopic observations we were able to obtain reliable redshifts
for 29 of the 34 observed optical candidates. This corresponds to
$\sim$43\% of the original radio sample. All the 34 spectra are presented in
Figure~2, together with the corresponding optical images, on which
the contour levels of the radio emission are superimposed.
The redshifts have been determined by Gaussian fitting of the emission lines
and via cross--correlation for the absorption--line cases. For the
cross--correlation we used the template spectra of Kinney et~al. (1996).
These spectra represent a variety of galaxy spectral types --- from
early to late--type and starbursts --- and cover a wide spectral range, from
UV to near--IR, very useful for our sample with a wide range of galaxy
redshifts, up to z~$\approx$~1.
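As an illustration of the emission--line redshift measurement, the sketch below estimates $z$ from a single [O{\tt II}]$\lambda$3727 line, using a flux--weighted centroid as a simple stand--in for the Gaussian fit described above (the function name and the crude continuum subtraction are our own illustrative choices):

```python
import numpy as np

O2_REST = 3727.0  # [O II] rest wavelength in Angstrom

def redshift_from_line(wave, flux, rest_wave=O2_REST):
    """Estimate z from a single emission line: the line centre is
    taken as the flux-weighted centroid of the continuum-subtracted
    spectrum (a stand-in for a full Gaussian fit)."""
    excess = np.clip(flux - np.median(flux), 0.0, None)  # crude continuum removal
    centre = np.sum(wave * excess) / np.sum(excess)
    return centre / rest_wave - 1.0
```

A line centred at 4845~\AA \ identified with [O{\tt II}] would thus yield $z \approx 0.3$.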
\begin{figure*}
\begin{minipage}{160mm}
\vspace{20cm}
\caption{\label{fig2} EFOSC1 spectra of the 34 spectroscopically observed objects.
The abscissae are wavelengths in \AA, while the ordinates are monochromatic fluxes in
arbitrary units.
Below each spectrum, the corresponding $R$ CCD image (when
available, otherwise the $F_K$ photographic plate image) is shown.
Contour levels of the radio emission,
corresponding to 2, 4, 6, 8, 12, 15, 20, 30, 50, 75 and 100 $\sigma$,
are superimposed on each optical image. The size of each image is
1$\times$1 arcmin (in a few cases, where the object was close to the
limit of the CCD, only one of the two dimensions is 1 arcmin) except for
\# 15--10 and \# 38--30, whose images are 1.5$\times$1.5 arcmin because
of the radio emission extent.}
\end{minipage}
\end{figure*}
The results of the spectroscopic analysis are presented in Table~2, which has
the following format: the first three columns repeat the radio source number (in both
the 1.4 and 2.4 GHz samples) and the $R$
magnitude, with the same convention as in Table~1.
The measured redshift, whenever determined, and the list of the
detected emission lines are given in the following columns. The next two columns
give the rest--frame equivalent width of [O{\tt II}]$\lambda$3727, with its associated error,
followed by the 4000~\AA \ break index\footnote{The 4000~\AA \ break index, as
defined by Bruzual (1983), is the ratio of the average flux density $f_\nu$
in the rest--frame bands 4050--4250~\AA \ and 3750--3950~\AA.}.
The ``spectral classification'', the ``final classification'' (based also on
colours) and a short comment are in the last three columns.
The distinction between spectroscopic types given in column 9 was
based on spectra, continuum indices and visual morphologies. We
divided the objects into several broad classes: Early--type galaxies ({\it Early}),
Late--type galaxies ({\it Late}), AGNs, stars and unclassified objects.
The classification {\it Late(S)} indicates galaxies in which the detected emission
lines allowed some analysis of line ratios and these ratios are consistent
with the lines being due to star formation.
Because of the faint magnitudes and the relatively low signal--to--noise ratios
of our spectra, a more detailed spectral classification was very difficult to
obtain. Moreover, at this stage,
a number of spectra remain unclassified. The final classification reported in column 10
is based also on colours (see next section).
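The 4000~\AA \ break index defined in the footnote above amounts to a ratio of band-averaged flux densities; a minimal sketch (the function and array names are our own) is:

```python
import numpy as np

def d4000(wave_rest, f_nu):
    """Bruzual (1983) 4000 A break index: ratio of the average flux
    density f_nu in the rest-frame band 4050-4250 A to that in the
    band 3750-3950 A."""
    red = (wave_rest >= 4050.0) & (wave_rest <= 4250.0)
    blue = (wave_rest >= 3750.0) & (wave_rest <= 3950.0)
    return f_nu[red].mean() / f_nu[blue].mean()
```

A flat $f_\nu$ spectrum gives $D(4000) = 1$; the early--type galaxies in Table~2, with $D(4000) \approx 2$, thus have roughly twice as much flux just above the break as just below it.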
\begin{table*}
\centering
\begin{minipage}{170mm}
\caption{Spectroscopic Results}
\begin{tabular}{ccrcllrclll}
& & & & & \\ \hline \hline
& & & & & \\
\multicolumn{1}{c}{N$_{1.4}$} &
\multicolumn{1}{c}{N$_{2.4}$} &
\multicolumn{1}{c}{R} &
\multicolumn{1}{c}{z} &
\multicolumn{2}{l}{Emission Lines} &
\multicolumn{1}{c}{W$_0$[O{\tt II}]} &
\multicolumn{1}{c}{D(4000)} &
\multicolumn{1}{l}{Class} &
\multicolumn{1}{l}{Class} &
\multicolumn{1}{l}{Comment} \\
& & & & \\
& & & & & \multicolumn{1}{l}{Measured} &\multicolumn{1}{c}{(\AA)}& &\multicolumn{1}{l}{Spectral}
&\multicolumn{1}{l}{Final} & \\
& & & & \\ \hline
& & & & \\
03 & 02 & 17.1* & 0.094 & no & & & 2.03 & Early & Early & \\
05 & 03 & 22.03 & ? & no & & & & Early? & Early & low S/N\\
06 &$--$& 16.9* & 0.000 & no & & & & Star & Star & G6 type \\
07 & 04 & 20.60 & ? & no & & & & BL~Lac? & BL~Lac? & \\
08 &$--$& 19.43 & 2.166 & yes& Ly$\alpha$, ~C{\tt IV}, ~C{\tt III}] & & & AGN1 & AGN1 & x--ray source\\
09 & 05 & 17.65 & 0.165 & no & & & 2.06 & Early & Early & \\
11 &$--$& 20.9* & 0.368 & yes& [O{\tt II}], ~H$\beta$, ~[O{\tt III}], ~H$\alpha$ & 50.8$\pm$8.1 & 1.35 & Late(S) & Late(S) & \\
13 & 08 & 18.8* & 0.229 & yes& [O{\tt II}], ~H$\beta$, ~[O{\tt III}] & 51.3$\pm$1.3 & 1.32 & Late(S) & Late(S) & \\
15 & 10 & 19.41 & 1.663 & yes& C{\tt IV}, ~C{\tt III}] & & & AGN1 & AGN1 & x--ray source\\
17 & 12 & 21.7* & 1.147 & yes& [O{\tt II}] & 18.5$\pm$0.8 & 1.24 & Unclass. & Early & \\
18 &$--$& 19.2* & 0.209 & yes& [O{\tt II}], ~H$\beta$, ~[O{\tt III}] & 17.2$\pm$0.5 & 1.47 & Late(S) & Late(S) & \\
20 & 15 & 22.12&$\sim$0.7&no & & & & Early? & Early & z from cluster \\
22 &$--$& 20.07 & 0.255 & yes& [O{\tt II}] & 25.7$\pm$2.0 & 1.67 & Late & Late & \\
24 &$--$& 20.02 & 0.280 & yes& [O{\tt II}], ~H$\beta$, ~[O{\tt III}] & 40.5$\pm$7.1 & 1.43 & Late(S) & Late(S) & \\
25 & 17 & 20.85 & 0.688 & yes& [O{\tt II}], ~[O{\tt III}] & 18.7$\pm$0.7 & 1.92 & AGN2 & AGN2 & Seyfert~2 gal.\\
26 & 19 & 20.05 & 0.551 & no & & &$>$1.9& Early & Early & \\
27 & 20 & 17.10 & 0.217 & no & & & 2.04 & Early & Early & \\
30 & 22 & 21.78 & 0.957 & yes& Mg{\tt II}, ~[O{\tt II}] & 6.7$\pm$1.2 & 1.75 & Unclass. & Early & x--ray source\\
31 & 23 & 21.58 & 0.757 & no & & & 1.64 & Early & Early & \\
32 & 24 & 22.50 & 0.814 & yes& [O{\tt II}] & 9.4$\pm$1.1 & 1.58 & Unclass. & Early & \\
38 & 30 & 19.32 & 0.387 & no & & & 1.97 & Early & Early & x--ray source\\
43 & 34 & 18.60 & 0.219 & no & & & 1.49 & Early & Early & noisy spectr. \\
47 &$--$& 20.05 & 0.579 & no & & &$>$2.1& Early & Early & \\
48 & 36 & 17.40 & 0.154 & yes& [O{\tt II}], ~H$\beta$, ~[O{\tt III}] & 7.6$\pm$0.9 & 1.61 & Late & Late & Bright Spiral \\
50 &$--$& 18.01 & 0.255 & yes& [O{\tt II}], ~H$\beta$, ~[O{\tt III}] & 18.7$\pm$0.8 & 1.32 & Late(S) & Late(S) & \\
51 & 38 & 21.90 & ? & no & & & & Early? & Early & \\
52 & 39 & 22.70 & 1.259 & yes& [O{\tt II}] &111.4$\pm$9.7 & & Unclass. & Early & \\
53 & 40 & 21.45 & 0.809 & yes& [O{\tt II}] & 28.4$\pm$2.0 & 1.92 & Unclass. & Early & merging \\
55 &$--$& 18.22 & 0.276 & yes& [O{\tt II}], ~H$\alpha$ & 6.9$\pm$0.4 & 1.74 & Late? & Late? & \\
57 & 43 & 22.09 & ? & & & & & Unclass. & Late? & low S/N\\
60 & 46 & 21.15 & 0.702 & no & & & 2.02 & Early & Early & \\
$--$& 47 & 19.8* & 0.275 & yes& [O{\tt II}], ~H$\alpha$ & 11.0$\pm$1.6 & 1.63 & Late? & Late? & \\
61 &$--$& 19.8* & 2.110 & yes& Ly$\alpha$, ~C{\tt IV}, ~C{\tt III}] & & & AGN1 & AGN1 & \\
63 &$--$& 20.2* & 0.203 & yes& [O{\tt II}] & 15.7$\pm$1.6 & 1.18 & Late & Late & \\
& & & & \\ \hline \hline
\end{tabular}
\end{minipage}
\end{table*}
A preliminary distinction between the different spectroscopic classes was based on
the spectral lines only. Thus, we first divided the
objects into those which show only absorption lines, those which show emission
lines, and those which show no spectral features at all. The ones that show only
absorption lines are most likely to be early--type galaxies.
For the emission--line objects we attempted a classification separating objects
in which emission lines are probably produced by star--formation, from those in
which an active galactic nucleus is present.
Four objects have been classified as AGN: three of them have strong broad
lines and unresolved optical image, so they have been classified as QSOs
or type~1 AGN, while the fourth one, with only narrow lines, is likely to be
a type~2 Seyfert galaxy on the basis of its line intensity ratios.
In order to produce an objective classification of the narrow
emission--line objects, we tentatively used the diagnostic diagrams described
by Baldwin, Phillips and Terlevich (1981) and by Rola, Terlevich and Terlevich
(1997), the latter for the higher redshift sources (up to $z \approx 0.7$).
Unfortunately, in a few cases, the observable spectral range accessible
for high redshift galaxies makes these methods useless,
and the same applies to poor S/N spectra.
The five spectra which allow the use of the diagnostic diagrams (i.e. which show more
than one line) and which fall into the H{\tt II}/starburst region ({\it Late(S)}) all show
a rest--frame [O{\tt II}] \ equivalent width greater than 15~\AA, suggesting strong star formation.
Five other galaxies clearly showing late--type spectra, for which the diagnostic
diagrams could not be applied, have been classified as {\it Late}. They are all at
relatively low redshift ($z < 0.3$) and, on average, have low [O{\tt II}] equivalent
widths. In a couple of cases H$\alpha$ is also detected.
Five emission--line spectra remained unclassified at this stage, because
they could not be unambiguously assigned to any of the above categories
({\it Early}, {\it Late}, AGN, etc.) on the basis
of their spectra alone. They are all relatively high--$z$ ($z > 0.8$) galaxies, whose redshift
determination was mainly based on a single emission line identified with
[O{\tt II}]$\lambda$3727.
For their classification we used their colours, as well as their absolute magnitudes and
radio luminosities (see next section).
For 5 objects, showing no obvious absorption or
emission lines in their spectra, it was not possible to determine a
redshift, although for one of them (\# 20--15) we assumed a redshift
from nearby galaxies, which are very likely to form a cluster (see notes on
individual sources reported below).
Four of these objects have $R > 21.8$. Therefore, although we could determine
a redshift for two objects fainter than $R = 21.8$, we can consider this
magnitude as the approximate limit of our spectroscopic sample.
Three of the objects for which no redshift was determined (\# 05--03,
\# 20--15, \# 51--38) are tentatively classified as {\it Early} on the basis
of their red spectra, without prominent emission features.
Instead, \# 57--43, a relatively blue object with
two possible emission lines in its spectrum (not
identified with any obvious spectral feature), remained unclassified at this stage.
The last object (\# 07--04) shows an extremely
blue spectrum without any distinguishable line or structure, despite the
relatively good S/N.
The spectrum shape, together with its inverted radio spectrum and optical
colours, make it a possible BL Lacertae object.
To summarize the results of our spectral classification, we subdivided the 34
spectroscopically observed objects into the following populations:
\begin{description}
\item [\bf{12~}] early--type galaxies showing only absorption lines (or no detectable
lines at all, but with red spectra);
\item [\bf{~5~}] late--type objects, with continua typical of an evolved stellar population,
but showing modest [O{\tt II}] \ (and, in some cases, H$\alpha +$N{\tt II}) emission lines;
\item [\bf{~5~}] star--forming emission line galaxies with more than one line in their spectra
and W([O{\tt II}]) $>$ 15~\AA;
\item [\bf{~5~}] Active Galactic Nuclei, consisting of 3 broad--line QSOs, 1
Seyfert 2 galaxy and a possible BL Lac object;
\item [\bf{~1~}] star;
\item [\bf{~6~}] spectroscopically unclassified objects, 5 of which have a redshift
determined mainly on the basis of a single
emission line.
\end{description}
\subsection{Notes on individual sources}
Brief comments on the optical and/or radio properties are given for all the objects
in Table 2 and for a few additional sources.
\begin{description}
\item [\bf{03--02}] Bright early--type galaxy, in a group; the close--by galaxy
at $\sim 12^{\prime \prime}$ north of the radio position has the same redshift.
\item [\bf{05--03}] The most likely identification ($L=5.7$) is with a very faint galaxy,
classified as {\it Early},
with a low S/N spectrum for which a redshift determination was not possible.
It is likely to be a member of a compact group of galaxies, since four other objects
are within 5$^{\prime \prime}$. The brightest
one ($L=4.5$) has a tentative redshift of 0.165, but, due to its much brighter magnitude
with respect to the other members of the group, it is more likely to be a foreground object.
\item [\bf{06--00}] Bright G star.
\item [\bf{07--04}] Blue object without any obvious spectral feature in its spectrum.
Its colours ($U-B=-0.2$, $B-V=0.56$, $V-R=0.44$),
together with its inverted radio spectrum ($\alpha_r = -0.42$), may suggest that this
object is a BL Lac AGN. However, this identification has the lowest likelihood ratio
in our sample ($L=1.8$) because of its relatively large distance from the radio position
(3.2 arcsec).
\item [\bf{08--00}] Broad--line quasar (Zitelli et~al. 1992; MZZ7801), which is also
an X--ray emitter.
\item [\bf{09--05}] Bright early--type galaxy with radio emission probably powered by a mini--AGN
in its nucleus, as suggested by its inverted radio spectrum ($\alpha_r = -0.68$).
\item [\bf{11--00}] Blue, emission--line galaxy with line ratios consistent with the
lines being due to star--formation.
\item [\bf{13--08}] Emission--line galaxy with line ratios consistent with the
lines being produced by a strong star--formation activity.
\item [\bf{15--10}] Broad--line quasar (MZZ5571), also X--ray emitter (Zamorani et al. in preparation).
\item [\bf{17--12}] High--$z$ (1.147) galaxy, whose redshift determination is based on the
presence of a single, relatively strong (EW=18.5~\AA) emission--line, identified with [O{\tt II}]$\lambda$3727.
\item [\bf{18--00}] Emission line galaxy with a starburst--like spectrum; the southern
companion is at the same redshift and shows a late--type galaxy spectrum.
\item [\bf{20--15}] Faint, red galaxy surrounded by a number of galaxies with similar colours.
It is likely to be at the center of a cluster.
The cross--correlation analysis does not lead to any statistically significant redshift
determination. The shape of the continuum at $\sim$8000~\AA \ suggests the possible
presence of the Ca~H$+$K break at $z \sim 1.04$, but this identification is not
supported by any other emission or absorption line in the spectrum either. For this reason,
the redshift of this object has not been determined from its spectrum, but it
has been tentatively estimated from the redshifts we measured for three other cluster
members ($z \simeq 0.7$). This object has an inverted radio spectrum ($\alpha_r = -0.24$).
Due to its red colours, we classified this object as an {\it Early} galaxy.
\item [\bf{22--00}] Noisy spectrum, with only a relatively strong [O{\tt II}] \ (EW=25.7~\AA) but
a quite reddened continuum; a brighter galaxy, 19$^{\prime \prime}$ south of the radio
position has the same redshift.
\item [\bf{24--00}] Late--type spectrum galaxy with lines produced by strong star--formation
activity.
\item [\bf{25--17}] Very red emission--line object, classified as a Seyfert 2 on the basis of its
line ratios, while its colours are more typical of an evolved elliptical/S0 galaxy. It has a
companion at the same redshift within a few arcsec, so interaction could be partially responsible
for the enhanced radio and line emission.
\item [\bf{26--19}] Early--type galaxy in a faint, inverted--spectrum
($\alpha_r = -0.11$) radio source.
\item [\bf{27--20}] Bright early--type galaxy at relatively low redshift ($z = 0.217$).
\item [\bf{30--22}] Triple radio source identified with a high redshift galaxy ($z = 0.957$), which is
also an X--ray source. In the UV portion of the spectrum, some possible broad lines
(Mg{\tt II}$\lambda$2798 and C{\tt II}]$\lambda$2326) indicate the presence of an active nucleus,
but the continuum shortward of 5500~\AA \ is noisy
and shows a suspicious drop. Its colours are consistent with this object being an early--type galaxy.
It is surrounded by other fainter objects, suggesting the presence of a group, but all of them are
too faint for any redshift determination.
\item [\bf{31--23}] The faint early--type galaxy suggested as the most likely identification
is the closest to the radio position. An equally
faint, but bluer galaxy with strong [O{\tt II}] \ at the same redshift is at 4.3 arcsec from the
radio source. Other faint galaxies within a few arcsec suggest the presence of a group.
\item [\bf{32--24}] A compact group of faint galaxies coincides with the radio position.
The most likely identification is with a high redshift galaxy ($z = 0.814$)
showing a moderate [O{\tt II}] \ emission--line in its spectrum. The colours are
consistent with this object being an early--type galaxy.
\item [\bf{38--30}] Early--type galaxy coincident with the central component of a triple radio source,
which is also an X--ray emitter.
\item [\bf{43--34}] Early--type spectrum galaxy (but with a noisy spectrum!) and a disk--like morphology.
The spectrum is almost a straight line over the whole observed range and its shape is very
similar to those of the young, reddened galaxies found by Hammer et~al. (1997).
\item [\bf{47--00}] Early--type galaxy at the center of a small group at $z = 0.579$. The object at 5$^{\prime \prime}$ is a blue compact galaxy at the same
redshift.
\item [\bf{48--36}] Bright spiral galaxy, spectrally classified as {\it Late}. Its line--ratios suggest
current star--formation activity, although the spectrum is dominated by old stellar continuum.
\item [\bf{49--37}] A classical double radio source with no obvious optical identification. The CCD image
suggests a possible association with a faint cluster.
\item [\bf{50--00}] Late--type spectrum galaxy, with line ratios consistent with the lines
being due to star--formation activity.
\item [\bf{51--38}] Faint pair of sources, possibly forming a merging system. Despite a reasonable
S/N in their spectra, it was not possible to identify any obvious structure, nor
to obtain a reliable redshift determination. Due to their
red colours, this pair has been associated to the {\it Early} class.
\item [\bf{52--39}] Very faint, high--$z$ (1.259) emission line galaxy, whose redshift determination was
based on a single, strong emission--line (EW = 111~\AA), identified with
[O{\tt II}]$\lambda$3727. The significant detection of continuum shortward of the line seems to exclude the
Ly$\alpha$ \ hypothesis. The colours of this object, together with its radio and optical luminosities are
consistent with it being a high--$z$ elliptical galaxy.
\item [\bf{53--40}] Very faint, extremely red ($B-R > 3$) [O{\tt II}] \ emitting
galaxy at relatively high $z$ (0.809).
This is probably a close merging system, since its CCD image shows that it has two faint
nuclei. Moreover, it is likely to be surrounded by a faint cluster. Due to its very red colours
and to the inverted spectrum of its radio emission ($\alpha = -0.28$), it is likely
to be an evolved object with a mini--AGN in its nucleus, whose radio and line emission are enhanced
by the on--going merging.
\item [\bf{55--00}] Late--type, H$\alpha$ emitting galaxy with peculiar optical morphology.
This is a very puzzling object, since its distorted optical and radio morphologies
strongly suggest that this galaxy is interacting with another nearby disky galaxy (optical and
radio tails connecting the two objects are clearly visible). However, their redshifts imply a
significant cosmological distance between them ($\Delta v = 6534$ km s$^{-1}$), which makes the
suggested interaction rather
unlikely. Moreover, the optical counterpart of the radio source has a close companion at the same
redshift (at a distance of 6.1 arcsec).
\item [\bf{56--42}] This galaxy, although relatively bright ($R = 22.01$)
and with a high likelihood ratio, has not been
spectroscopically observed, since it was not in the area covered by the CCD exposures and was close to
the detection limits of the photographic plates. However, it fell by chance into a 5--min CCD frame
taken during the last spectroscopic run, so we could determine its magnitude and position.
\item [\bf{57--43}] Blue unclassified object. The spectrum shows two possible emission features
(a broad one at $\sim$8820~\AA \ and a narrow one at $\sim$7250~\AA) which we were not able to
identify with any obvious spectral line. It is possible
that one of the two lines is spurious, since the spectrum has a very low S/N. Due to its colours
and spectral shape this object has been associated to the {\it Late} class.
\item [\bf{58--44}] The most likely identification for this object is a bright star but, since
its likelihood ratio is $< 1.5$, we consider it a mis--identification.
\item [\bf{60--46}] Relatively high--$z$ (0.702) early--type galaxy, possibly in a faint group:
another object at 6.2$^{\prime \prime}$ has the same redshift and several fainter ones are within
$\sim8^{\prime \prime}$.
\item [\bf{00--47}] Late--type, H$\alpha$ emitting galaxy with a spectrum very similar to that of
55--00.
\item [\bf{61--00}] Broad--line quasar (MZZ8668).
\item [\bf{63--00}] Complex optical image constituted by two superimposed sources in the plates,
whose redshifts ($z = 0.203$ and $z = 0.654$) show that they are not related. Both galaxies
have emission line
spectra, the most likely identification being with the galaxy at lower redshift and with moderate
[O{\tt II}] \ emission (EW = 15.7). The other galaxy is bluer, at higher redshift and with
strong [O{\tt II}] \ line (EW = 33.8).
\end{description}
\section{Radio and Optical Properties of the Faint Radio Galaxy Population}
Colour--redshift diagrams are presented in figure 3, while
radio spectral index--radio flux, ~radio~flux--optical~magnitude, magnitude--redshift
and radio luminosity--absolute magnitude plots are presented in Figs 4$a$, $b$, $c$,
and $d$. In both figures the objects are plotted with different symbols according to their spectral
classification, while the spectrally unclassified galaxies are represented by a filled dot
surrounded by either a circle or a square, indicating the colour classification from figure 3. The dashed
lines in figure 4$b$ represent different values of the radio--to--optical ratio $R$, defined as
$R = S \times 10^{\frac{(m-12.5)}{2.5}}$, where $S$ is the radio flux in mJy and $m$ is the
apparent magnitude.
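The radio--to--optical ratio just defined can be evaluated directly; a one--line sketch (with our own function name) is:

```python
def radio_optical_ratio(s_mjy, mag):
    """R = S * 10**((m - 12.5) / 2.5), with S the radio flux in mJy
    and m the apparent magnitude, as defined in the text."""
    return s_mjy * 10.0 ** ((mag - 12.5) / 2.5)
```

Thus a 1 mJy source with $m = 17.5$ has $R = 100$, i.e. $\log R = 2$.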
For most objects the colours are consistent with their spectral classification (see
fig. 3). The different classes of objects are discussed individually below.
\begin{figure}
\centerline{
\psfig{figure=fig3.eps,width=17cm}
}
\caption{\label{fig3} $V-R$ ($a$) and $B-R$ ($b$) colour versus redshift
for the extragalactic identifications. The three quasars at $z > 1.5$ are not shown. The
different symbols represent the different classes of objects: the empty circles stand for
{\it Early} galaxies, the squares for {\it Late} galaxies (empty for normal and filled for
star--forming galaxies), the empty triangles for AGNs (the type 2 Seyfert galaxy and the
possible BL Lac object)
and the filled dots for spectrally unclassified objects. The latter have
either a circle or a square around the dot, indicating classification from these diagrams.
At the right side of these figures, the colours of the objects without a redshift
determination are also plotted.
The different curves correspond to the color--redshift relations
for galaxies derived from Bruzual \& Charlot (1993) models and represent two different models
for elliptical galaxies (solid lines), Sab--Sbc spirals (dashed line), Scd--Sdm spirals
(long dashed line) and starburst galaxies (dotted--dashed line). The parameters of these
models are given in Table 3 in Pozzetti et al. (1996).}
\end{figure}
\begin{figure*}
\begin{minipage}{170mm}
\centerline{
\psfig{figure=fig4.eps,width=17cm}
}
\caption{\label{fig4} Radio spectral index vs. radio flux ($a$) and radio flux vs. $R$ magnitude
($b$) for all radio sources. Symbols are the same as in figure 3, with the addition of filled
triangles for quasars, an empty star for the star, diagonal crosses for objects with optical ID
but no spectrum and vertical crosses for empty fields (arrows in panel $b$).
The dotted lines in panel $b$ are different radio--to--optical ratios, corresponding
to $logR = 1.5, 2.5, 3.5, 4.5, 5.5$.
Redshift vs. $R$ magnitude ($c$) and radio luminosity vs. absolute $R$ magnitude ($d$) for the
identifications. The dashed line in panel $c$ is the best--fit R--$z$
relation for {\it Early} galaxies in the sample.}
\end{minipage}
\end{figure*}
\subsection{Early--type galaxies}
In this section we will include in the group of early--type galaxies both those with a
redshift determination and those for which we were unable to measure the redshift
(\# 05--03, \# 20--15 and \# 51--38, plotted on the right side of figure 3) but
whose red colours are consistent with those of the early--type class.
The colour--redshift (figs. 3), magnitude--redshift (fig. 4$c$) and radio
luminosity--absolute magnitude (fig. 4$d$) diagrams for our
early--type objects (all the empty circles) are consistent with those expected
for redshifted elliptical and S0 galaxies.
The radio luminosities for all these galaxies are in the
range $10^{23.0} < P_{1.4~GHz} < 10^{24.8}$ W Hz$^{-1}$ ($H_0 = 50$ km s$^{-1}$ Mpc$^{-1}$,
$q_0 = 0.0$), consistent with them being Fanaroff--Riley I galaxies.
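For reference, the sketch below converts a 1.4 GHz flux and a redshift into the monochromatic luminosity quoted here, using the $q_0 = 0$ luminosity distance $D_L = (c/H_0)\,z\,(1 + z/2)$ with $H_0 = 50$ km s$^{-1}$ Mpc$^{-1}$ and a $(1+z)^{\alpha-1}$ k--correction for $S \propto \nu^{-\alpha}$; the default $\alpha = 0.7$ is an assumed typical spectral index, not a value taken from the text:

```python
import math

C_KM_S = 2.998e5      # speed of light in km/s
H0 = 50.0             # km/s/Mpc, as adopted in the text
M_PER_MPC = 3.086e22  # metres per megaparsec

def radio_power(s_mjy, z, alpha=0.7):
    """1.4 GHz monochromatic luminosity in W/Hz for q0 = 0:
    D_L = (c/H0) * z * (1 + z/2), with a (1+z)**(alpha - 1)
    k-correction for S ~ nu**(-alpha). alpha = 0.7 is an assumed
    typical spectral index, not from the text."""
    d_l = (C_KM_S / H0) * z * (1.0 + z / 2.0) * M_PER_MPC  # metres
    s_si = s_mjy * 1.0e-29  # mJy -> W m^-2 Hz^-1
    return 4.0 * math.pi * d_l**2 * s_si * (1.0 + z) ** (alpha - 1.0)
```

With these conventions a 1 mJy source at $z = 0.3$ has $\log P_{1.4} \approx 23.7$, inside the quoted range.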
Consistent with previous results, the early--type galaxies are the dominant population
at $S > 1$ mJy. However, at variance with what was found by other authors (see, for example,
Benn et al. 1993), we find a significant number of early--type galaxies also in the
flux range $0.2 \leq S \leq 1$ mJy. A more detailed discussion of the relative
importance of different types of galaxies as a function of radio flux and optical
magnitude will be given in section 6.
Below 2 mJy, about 13\% of our radio sources have an inverted radio spectrum (see figure 4$a$).
Even if not all of them have been optically identified, it appears that most of these
objects belong to the early--type class, in agreement with the results of
Hammer et~al. (1995), who found an even higher fraction of early--type
galaxies with inverted radio spectra among their identified $\mu$Jy radio sources
($S_{4.86~GHz} > 16~ \mu$Jy). The suggested
presence of a low--luminosity AGN in the nuclei of these objects, responsible for the observed radio
emission, applies also to the galaxies in our sample, which can be the ``bright'' counterpart
of the Hammer et~al. $\mu$Jy sources (also with very faint optical magnitude, in the
range $23~ {_<\atop^{\sim}} ~V~ {_<\atop^{\sim}} ~26$). Our early--type
galaxies with non--inverted radio spectra are probably powered by an active nucleus, too, since they all have
absolute magnitude greater than $M_R = -21.5$ and relatively high radio luminosity
and other plausible sources of radio emission (HII regions, planetary
nebulae, supernova remnants) cannot account for the observed radio luminosity in galaxies
brighter than this magnitude (Sadler, Jenkins \& Kotany 1989; Rees 1978; Blandford \& Rees 1978).
\subsection{Late--type and Star--forming Galaxies}
Figures 3 and 4 show that late--type and
star--forming galaxies occupy a narrow range at low--moderate redshift ($0.15 < z < 0.4$) and
most of them have relatively bright apparent magnitudes ($R < 21.0$, $B < 22.5$),
faint radio fluxes ($S_{1.4~GHz} < 1$ mJy) and relatively low radio luminosities
($L_{1.4~GHz} < 10^{23.5}$ W Hz$^{-1}$).
All the galaxies classified as {\it Late(S)} (i.e. with significant star formation)
on the basis of their spectra have blue colours, consistent with those of late spirals.
The galaxies classified as {\it Late} are, instead, redder and some of them
(\# 55--00, \# 00--47 and \# 63--00) have colours close to the locus expected for
elliptical galaxies. Despite this, we
will retain their spectral classification.
Our {\it Late(S)} galaxies are part of the starburst population at low/intermediate
redshift found in almost all the published sub--mJy identification works (e.g. Windhorst et~al. 1985;
Benn et~al. 1993), whose radio and optical properties they fully resemble. Also their radio spectral
indices (which are all steep, see fig.~4$a$) are consistent with their radio emission
being due to synchrotron emission
from supernova remnants, the main source of radio emission at 1.4 GHz in starburst galaxies, with
typical spectral index in the range $0.5 - 0.9$ ($S_{\nu} \propto \nu^{-\alpha}$). The radio luminosities
of these star--forming galaxies occupy the range $10^{22.7} < L_{1.4~GHz} < 10^{23.5}$ W Hz$^{-1}$,
similar to (but narrower than) that found by Benn et~al. (1993) for their starburst objects.
The {\it Late} galaxies occupy about the same range of radio and optical luminosities and all but one
of them have steep radio spectral indices.
The blue unclassified object \# 57--43 has been assigned to the {\it Late} class on the basis of its colour
properties. However, its radio flux and apparent magnitude lie outside the ranges occupied
by all the other {\it Late} galaxies in our sample, so its classification remains quite uncertain.
\subsection{AGNs}
Three objects (\# 08--00, \# 15--10 and \# 61--00) have broad emission lines and are in the MZZ quasar
sample (Zitelli et al. 1992). They are the highest redshift objects in our sample ($z = 2.166, 1.663$ and 2.110
respectively) and
their absolute optical magnitudes range from $-$25.9 to $-$27.2, all brighter than the limit $M_B = -23$ often
used to separate Seyferts and quasars. They are probably hosted by elliptical galaxies, though only one of them
(15--10, which is also the brightest radio source in our sample) can be defined as radio--loud on the basis of
the ratio between the radio and optical luminosities (Gruppioni and
Zamorani, in preparation). Their radio powers are in the range
10$^{25.3} - 10^{27.8}$ W Hz$^{-1}$, comparable to those of lower redshift 3CR and 5C quasars.
One object (\# 25--17) has only narrow lines and was classified as a Seyfert 2 on the basis of its emission line ratios.
The radio and optical properties of this object resemble those of the high redshift radio galaxies in our sample
(described in the following section). In fact, its radio luminosity is 10$^{24.9}$ W Hz$^{-1}$, well inside the
range occupied by our high--$z$ galaxies, and its colours, as well, are as red as those of
evolved ellipticals.
\subsection{High--redshift Emission--Line Galaxies}
The group of 5 galaxies at relatively large redshifts ($z > 0.8$) whose spectra were left
unclassified in section 4.3 is composed of intrinsically powerful radio sources.
Their radio powers occupy the range $10^{24.4} < L_{1.4~GHz} < 10^{25.7}$ W Hz$^{-1}$.
In this range of
radio powers, far too high for classical late--type and star--forming galaxies,
Fanaroff--Riley class I
and class II radio galaxies (FRI and FRII) coexist in roughly equal numbers
(Zirbel and Baum 1995).
All these five radiogalaxies are close to the dividing line between FRI and FRII galaxies in the
radio luminosity--absolute magnitude plane (see, for example, Ledlow and Owen 1996). At the
relatively poor angular resolution of our radio data, a morphological classification is not
possible: three of them are unresolved with a typical upper limit to the size of a few arcsec,
one consists of a slightly resolved single component, and one (\# 30--22) is a triple radio source
which, however, does not resemble the classical FRII double radio sources, since the three
components are not aligned with each other.
Thus, our higher redshift
and more powerful radio sources are very unlikely to be star--forming galaxies, even though they
might have been classified as such on the basis of their emission lines (in most cases just
one emission line, identified with [OII]).
Four out of five of these galaxies have EW([OII]) in the range 6--28~ \AA,
the only exception being \# 52--39, which has an EW larger than 100~ \AA.
Since at the magnitudes and redshifts of these galaxies $\sim$80\% of the
field galaxies have EW([OII])$> 15~$\AA\ (Hammer et al. 1997),
we conclude that the star formation in these galaxies is not particularly
strong. This would be even more true if some, if not most, of the
emission line flux were due not to stellar but to nuclear ionization.
The relatively low [OII] emission would be consistent with these
radiogalaxies belonging to the FRI class, with \# 52--39 being the only good
candidate for an FRII classification. In fact, for a given absolute
magnitude, line luminosity in FRII radiogalaxies is significantly
higher than in FRI galaxies, while the latter have only slightly higher
line luminosity than normal ``radio quiet'' elliptical galaxies
(Zirbel and Baum 1995).
Figure 3 shows that the colours for these five galaxies are reasonably
consistent with those expected at their redshift for early type galaxies
(ellipticals or early spirals), while they are significantly redder than
those expected for late type spirals or starburst-dominated galaxies.
For this reason, and taking into account also the continuity with
the other early-type galaxies at lower redshift in the redshift--magnitude
plane (figure 4$c$) and their relatively high radio luminosity (figure 4$d$),
we are confident that, despite the presence of [OII] emission in their
spectra, they are physically
unrelated to the star-forming galaxies identified at low-moderate redshift.
However, we cannot exclude that some of these red galaxies at relatively
high redshift ($z > 0.8$) are highly obscured galaxies, similar to the
heavily reddened starbursts at $z \leq 1$ recently detected in the
mid/far--infrared by ISO and in the sub--mm by SCUBA
(Hammer \& Flores 1998; Lilly et al. 1998). These objects, with star
formation rates derived from infrared data in excess of 100 M$_{\odot}$ yr$^{-1}$,
are far from being classical star--forming galaxies. Similarly to our red
radio sources, they have red colours and relatively faint [OII]
emission lines (Hammer et al. 1995). Unlike our high redshift
red galaxies, however, they have smaller radio--to--optical ratios.
\subsection{Radio sources without identification}
For 6 radio sources we have likely optical counterparts, but no
spectroscopic data. Five of them (\# 00--06, \# 21--16, \# 36--27,
\# 41--32 and \# 45--00) are at the limit of our CCD
exposures ($R \sim 24$ and $B \sim 25$), while one (\# 56--42), at
the limit of our plates, has $R \sim 22$.
Twenty--eight additional radio sources have no optical counterpart either
in the plates (9 objects, $R \geq 21.8$) or in the CCD data (19 objects,
$R \geq 24$). The location of these objects in the radio flux -- optical
magnitude plane is shown in figure 4$b$ (crosses and arrows). Fourteen of
these objects have $S > 1$ mJy and fourteen have $S < 1$ mJy; most of them have
steep radio spectra ($\alpha > 0.5$). Figure 4$b$ shows that
almost all these objects have a radio to optical ratio significantly higher
than that typical of late type galaxies, including those with
significant star--formation. We therefore conclude that most of them
are likely to be associated with early--type galaxies. This is consistent
with the fact that early--type galaxies constitute the large majority
of the identifications with objects fainter than $B \sim 22.5$.
In the following discussion we will focus only on the sample of 19
radio sources without reliable optical
counterpart on CCD data. Under the assumption that our unidentified
objects are early--type galaxies, we used the $z - R$ magnitude relation
defined by the objects in figure 4$c$, and other similar relations from
larger samples (Vigotti, private communication, for a sample which
contains data for about 100 3CR and B3 radiogalaxies) to estimate
their expected redshifts. The redshift -- magnitude
relation in the $R$ band has a significantly larger scatter than the corresponding
relation in the $K$ band, because the $R$ band probes the rest-frame UV luminosity,
in which high--$z$ radiogalaxies show a large intrinsic scatter (Dunlop
et al. 1989) and which is more affected by possible AGN contamination, recent
star formation or dust. For this reason, given an optical magnitude, only
a relatively large redshift range, rather than a single redshift, can be estimated.
For the magnitude corresponding to the limits in
the CCD data, this range turns out to be $1.2 {_<\atop^{\sim}} z {_<\atop^{\sim}} 3.0$.
The corresponding radio luminosities would be in the range
$\log P_{1.4~GHz} = 25.6 \pm 0.6$ at $z \sim 2$.
Of course, we cannot exclude that a few of the unidentified radio sources
are, instead, associated with starburst galaxies. If so, however, they
should be truly extreme objects in terms of their ratio between radio and
optical luminosities. With just one exception (\# 57--43, whose association
with the {\it Late} class is rather uncertain; see Section 4.4), all the
late--type galaxies in our sample have $1.5~ {_<\atop^{\sim}} ~\log R~ {_<\atop^{\sim}} ~3.0$,
while all but one of the unidentified radio sources have $\log R~ {_>\atop^{\sim}} ~3.3$.
Radio observations of large and complete samples of spiral galaxies show
that the fraction of such galaxies with $\log R$ higher than this
value is $< 10^{-3}$ (Condon 1980; see also Hummel 1981). Given the
number of galaxies with $R \leq 22$ in the entire area covered by our radio
data ($\sim$5400), and assuming that 50\% of them can be considered part of the
starburst class, we can qualitatively estimate that the number of such
galaxies which could have been detected with $\log R \geq 3.3$ is at most of the
order of a few (i.e. $< 3$). Obviously, this argument {\it assumes} that the
radio--to--optical ratio for starburst galaxies does not undergo a strong
evolution with redshift at $z {_>\atop^{\sim}} 1$.
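The order-of-magnitude estimate in the preceding paragraph can be written out explicitly; the inputs are exactly the numbers quoted in the text ($\sim$5400 galaxies with $R \leq 22$, an assumed 50\% starburst fraction, and the Condon 1980 upper limit on the fraction of spirals with large $\log R$).

```python
n_r22 = 5400          # galaxies with R <= 22 over the area covered by the radio data
f_starburst = 0.5     # assumed fraction belonging to the starburst class
f_high_logr = 1e-3    # upper limit on the fraction of spirals with log R >= 3.3 (Condon 1980)

n_expected = n_r22 * f_starburst * f_high_logr
assert n_expected < 3   # "at most of the order of a few"
```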
In any case, to shed light on the nature of our unidentified radio sources
deeper optical observations would be needed, but if they indeed are high--$z$
radio galaxies the best observing band would be in the near infrared, where
both the K-correction effects and the dispersion in the redshift--magnitude
relation are much smaller than in the optical.
\section{Discussion}
In previous works it has been shown that the optically brighter
part of the radio source population at sub--mJy level is
composed largely of starburst galaxies at relatively low redshifts.
Although these results were based on a small fraction of spectroscopic
identifications (Benn et al. 1993), it has often been unduly assumed that
they would still hold for the entire population of sub--mJy radio sources.
Our data, based on a significantly higher fraction of optical
identifications (close to 50\%) although on a relatively small sample
of radio sources, do not support this assumption.
In fact, we find that early--type galaxies (including the high--$z$
emission--line galaxies, which are probably the faint end of the
more powerful elliptical radio galaxy population like the 3CR)
are (44 $\pm$ 16)\% of all the radio sources fainter than 1 mJy
identified with galaxies in our sample. In the same radio flux interval
Benn et~al. found a dominance of blue narrow emission line
galaxies and a percentage of early--type galaxies of only about 8\%
(7/84 early--type against 76/84 star--forming galaxies!). The reason for
this discrepancy is very likely the deeper optical magnitude limit
reached in our identification work with respect to previous ones ($B \simeq
24$, to be compared with $B \simeq 22.3$ of Benn et al. 1993). In fact, our
sample suggests an abrupt increase in the fraction of identifications with
early--type galaxies at around $B \simeq 22.5$, which is just above the
magnitude limit of the Benn et al. sample (see figure 4b).
Dividing the sub-mJy sample into two sub--samples (brighter and fainter than
$B = 22.5$), the fraction of early--type galaxies with respect to the total
number of radio sources spectroscopically identified with galaxies increases
from (9 $\pm$ 9)\%, in good agreement with the Benn et al. results
in the same magnitude range, to about 100\%.
Moreover if, as discussed in Section 5.5, also most of the
unidentified radio sources are likely to be associated with high redshift
elliptical radio galaxies, the total fraction of early--type galaxies
in our sub--mJy sample can be estimated to be of the order of
(60 -- 70)\%. This fraction is in good agreement with the prediction
of the model for the evolution of faint radio sources developed by
Rowan--Robinson et al. (1993). Integrating the radio luminosity functions
of spiral galaxies, derived from the Benn et al. sample, and elliptical
galaxies (Dunlop and Peacock 1990) and testing various models for the
evolutionary laws of the spiral luminosity function, they indeed find that
ellipticals still contribute about 60\% of the integrated counts to a
radio limit of 0.1 mJy. Previous models for the interpretation of
the sub--mJy radio counts, based on older luminosity functions and
different models for the evolution, predicted a substantially lower fraction
of early type galaxies in the same flux range (see, for example, Danese et
al. 1987).
Although the percentages of early and late type galaxies we estimated from our
data are in agreement with the predictions of the Rowan--Robinson et al.
models, the redshift distribution of our sample of late type galaxies
(all of them with $z < 0.4$) appears not to be consistent with the
predictions of the same models, in which a non-negligible tail of high
redshift galaxies is expected (see figure 6 in Rowan--Robinson et al. 1993).
Although with relatively large errors, because of the small size of our
sample, the ``local'' volume--densities of our late type galaxies are
consistent with the radio luminosity function of spiral galaxies
computed by Rowan--Robinson et al. (1993). If our conclusion that
most of the unidentified radio sources are likely to be associated with
high redshift elliptical radio galaxies is correct, this would imply
a smaller amount of evolution for the radio luminosity function of late
type galaxies than that assumed in Rowan--Robinson et al. models.
Alternatively, agreement with Rowan--Robinson et al. models could be
obtained only if a significant fraction of our unidentified radio sources
were instead to be classified as starburst galaxies. If actually placed at the
high redshift predicted by these models, their radio powers, in the range
$10^{24} - 10^{25}$ W Hz$^{-1}$, would require implausibly high star formation
rates, in excess of a few thousand M$_{\odot}$ yr$^{-1}$, on the basis of the
relation between star formation rate and non--thermal radio emission
(Condon 1992). Moreover, their radio--to--optical ratios, significantly higher
than those of brighter late type galaxies, would suggest a radio emission
mechanism different from that of local starburst galaxies, probably not
directly related to the star formation episodes. In any case, larger and
fainter samples of identifications would be needed
in order to choose between these alternatives.
In this respect, it is interesting to compare our results at sub--mJy level
with the existing data at $\mu$Jy level, where the preliminary identification
results obtained for the very few existing samples (Hammer et al. 1995;
Windhorst et al. 1995; Richards et al. 1998)
are still somewhat unclear. These papers have shown
that the population of $\mu$Jy radio sources is a mixture of star--forming
galaxies, ellipticals and AGNs. Given the faint magnitude of the optical
counterparts, the exact fraction of each category is not well defined yet.
\begin{figure}
\centerline{
\psfig{figure=fig5.eps,width=9cm}
}
\caption{\label{fig5} 1.4 GHz flux vs. $V$ magnitude for our identifications
(symbols as in fig. 3) and for Hammer et al. (1995) and Windhorst et al.
(1995) counterparts of $\mu$Jy sources (represented, respectively, by
asterisks and diagonal crosses). The 1.4 GHz fluxes for the
Windhorst et al. objects have been obtained assuming the median spectral index
of the Hammer et al. sample ($\alpha_{med} = 0.2$).
The thick arrows at $V = 26.0$ are the lower limits of Hammer et al., while the
thick arrows at $V = 26.5$ are the lower limits of Windhorst et al.
The dotted lines correspond to the same radio--to--optical ratios as in
figure 4$b$.}
\end{figure}
In figure 5 we show the 1.4 GHz flux versus $V$ magnitude for our data and
the Hammer et al. and Windhorst et al. ones (for which $V$ magnitudes are
available). The 1.4 GHz fluxes have been computed using the 1.5--5 GHz
spectral indices reported by Hammer et al. for their objects and assuming the
median value of the Hammer et al. spectral index distribution ($\alpha_{med}
= 0.2$) for the Windhorst et
al. ones. The dotted lines correspond to the same radio--to--optical ratios
as in figure 4$b$. The figure shows that the fraction of radio sources with
large radio--to--optical ratios, typical of the more powerful radio sources,
decreases in the $\mu$Jy samples. For $\log R > 3.5$ this fraction is larger
than 50\% (37/68) in our sample, while it is smaller than 35\% (17/51) for the
$\mu$Jy samples.
Conversely, most of the $\mu$Jy radio sources have the same radio--to--optical
ratios as our low redshift star--forming and elliptical galaxies.
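The spectral-index extrapolation used to place the Windhorst et al. sources at 1.4 GHz is a simple power law; a minimal sketch, with an illustrative input flux (the individual source fluxes are not listed here):

```python
def extrapolate_flux(s_obs, nu_obs_ghz, nu_target_ghz, alpha):
    """Move a flux density along a power law S_nu ∝ nu^-alpha."""
    return s_obs * (nu_target_ghz / nu_obs_ghz) ** (-alpha)

# a 4.86 GHz flux of 50 microJy moved to 1.4 GHz with the median index alpha = 0.2
s_14 = extrapolate_flux(50.0, 4.86, 1.4, 0.2)
# a non-inverted (alpha > 0) spectrum is brighter at the lower frequency
assert s_14 > 50.0
```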
\section{Conclusions}
Optical identifications down to $R \sim 24$ have been performed for a sample
of 68 radio sources
brighter than 0.2 mJy at 1.4 or 2.4 GHz. About 60\% of the radio sample have a
likely optical counterpart on deep CCD exposures or ESO plates.
Even in the CCD data, reaching $R \sim 24$ and $B \sim 25$, 19 out of 50
sources are not identified.
Spectra have been obtained for 34 optical counterparts brighter than
$R \simeq 23.5$. The spectra provided enough information
to determine object type and redshift in most cases (29 objects).
This percentage of spectroscopic identifications is the highest
obtained so far for radio sources in this radio flux range.
The objects are a mixture of classical early--type galaxies (E/S0), with no
detectable emission lines, covering the redshift range 0.1--0.8, star--forming
and late--type galaxies at moderate redshifts ($z < 0.4$), emission--line
galaxies at relatively high $z$ ($> 0.8$) and AGNs. The star--forming
galaxies are very similar in colour, luminosity and spectral
properties to those found in other sub--mJy surveys (i.e. Benn et al.
1993). Contrary to previous results, star--forming galaxies do not constitute
the main population in our identification sample. In fact, even at sub--mJy
level the majority of our radio sources are identified with early--type
galaxies. This apparent discrepancy with previous results is due to the
fainter magnitude limit reached in our spectroscopic identifications. In fact,
the fraction of sub--mJy early--type galaxies in our sample abruptly increases
around $B \simeq 22.5$, which is approximately the magnitude limit reached by
previous identification works. Moreover, the high--$z$ emission--line galaxies
have spectra, colours, and absolute magnitudes similar to those of
the classical bright elliptical radio galaxies found in surveys carried out at
higher radio fluxes. Their radio luminosity ($10^{24.4} < L_{1.4~GHz} < 10^{25.7}$
W Hz$^{-1}$),
far too high for classical star--forming galaxies, is in the range where
FRI and FRII radiogalaxies coexist in roughly equal numbers. These objects are
therefore likely to constitute the faint radio luminosity end of the distant
elliptical radio galaxy population, thus further increasing the fraction of
early--type galaxies in our identified sample. Moreover, using mainly the
large radio--to--optical ratio and the information from the available
limits on the optical magnitudes of the unidentified radio sources, we
conclude that the great majority of them are likely to be early--type galaxies,
at $z > 1$. Our classification for these faint objects can be tested
with photometric and spectroscopic observations in the near infrared.
If correct, it would suggest that the evolution of the radio
luminosity function of spiral galaxies, including starbursts, might not be
as strong as suggested in previous evolutionary models.
\section{ACKNOWLEDGEMENTS}
Support for this work was provided by ASI (Italian Space Agency) through
contracts 95--RS--152 and ARS--96--70.
\section{The Gamma Matrices}
\setcounter{equation}{0}
In this appendix we give our conventions for the gamma matrices. We
follow closely
the conventions of \cite{green}; however, some relabeling of the
coordinates will be required.
The $32\times 32$ gamma matrices are in the Majorana representation
and are purely imaginary. They are
\begin{eqnarray}
\Gamma^0&=&\tau_2\times I_{16}\nonumber\\
\Gamma^I&=&i \tau_1\times \gamma^I, \;\;\;\;\;\;\;\; I=1,...8\nonumber\\
\Gamma^9&=&i \tau_3\times I_{16}\nonumber\\
\Gamma^{10}&=&i \tau_1\times \gamma^9
\end{eqnarray}
where $\tau_i$ are the Pauli matrices, $I_x$ are $x\times x$
identity matrices and the $16\times 16$ real
matrices $\gamma^I$
satisfy
\be
\{\gamma^{{I}},\gamma^{{J}}\}=2\delta^{{I}{J}},\; \;\;\;\;\;\;\;
\; {I},{J} =1,...8.
\end{equation}
and
\be
\gamma^9=\prod_{I=1}^{8}\gamma^{{I}}.
\end{equation}
This ensures that
\be
\{\Gamma^\mu,\Gamma^\nu\}=-2\eta^{\mu\nu}.
\end{equation}
We now construct the $spin(8)$ Clifford algebra.\footnote{
This construction is that presented in Appendix 5.B of Ref.\cite{green}}
The matrices $\gamma^{{I}}$ take the form
\begin{eqnarray}
\gamma^{\hat{I}}&=&\pmatrix{0& \tilde{\gamma}^{{\hat{I}}}\cr
-\tilde{\gamma}^{{\hat{I}}}&0\cr },\ {\hat{I}}=1,...7,\nonumber\\
\gamma^{8}&=&\pmatrix{I_{8}& 0\cr
0&-I_{8}\cr },
\end{eqnarray}
where the $8\times 8$ matrices $\tilde{\gamma}^{{\hat{I}}}$ are
antisymmetric and explicitly given by
\begin{eqnarray}
\tilde{\gamma}^1&=&-i \tau_2\times\tau_2\times\tau_2\nonumber\\
\tilde{\gamma}^2&=&i I_2\times\tau_1\times\tau_2\nonumber\\
\tilde{\gamma}^3&=&i I_2\times\tau_3\times\tau_2\nonumber\\
\tilde{\gamma}^4&=&i \tau_1\times\tau_2\times I_2\nonumber\\
\tilde{\gamma}^5&=&i \tau_3\times\tau_2\times I_2\nonumber\\
\tilde{\gamma}^6&=&i \tau_2\times I_2\times\tau_1\nonumber\\
\tilde{\gamma}^7&=&i \tau_2\times I_2\times\tau_3
\end{eqnarray}
It follows that $\gamma^{9}$ is given by
\be
\gamma^{9}=\pmatrix{0&-I_{8}\cr
-I_{8}&0\cr }.
\end{equation}
Furthermore
\be
\Gamma^+=\frac{1}{\sqrt{2}}\pmatrix{i & -i \cr i & -i \cr}\times
I_{16},\;\;\;\;\;\;
\Gamma^-=\frac{1}{\sqrt{2}}\pmatrix{-i & -i \cr i & i \cr}\times I_{16},
\end{equation}
such that
\be
(\Gamma^+)^2=(\Gamma^-)^2=0,\;\;\;\;\;\; \{ \Gamma^+,\Gamma^-\}=2.
\end{equation}
Then it is straightforward to show that the condition $\Gamma^+\theta=0$
leads to
\be
\theta=\pmatrix{S_1\cr S_2 \cr S_1 \cr S_2 \cr}.
\end{equation}
Moreover, it follows that
\begin{eqnarray}
&\bar{\theta}\Gamma^\mu\partial\theta=0&,\;\;\;\;\;\;\;\;\;\;\;\;
\mbox{unless}\;\;\mu=-\nonumber\\
&\bar{\theta}\Gamma^{\mu\nu}\partial\theta=0&,\;\;\;\;\;\;\;\;\;\;\;\;
\mbox{unless}\;\;\mu\nu=-M
\end{eqnarray}
where $\bar{\theta}=\theta^\dagger\Gamma_0=\theta^{T}\Gamma_0\;$ ($\theta$
is real). Finally notice that
\be
(\Gamma^\mu)^\dagger=\Gamma^0\Gamma^\mu\Gamma^0,\;\;\;\;\;\;\; \;
\Gamma^{11}=\prod_{\mu=0}^{9}\Gamma^{{\mu}}=i\Gamma^{10}.
\end{equation}
\newpage
\section{Introduction}
There have been a large number of papers on the application of string
theory and M-theory to cosmology \cite{cosmoa}-\cite{cosmog}. In the
present paper
we will study the cosmology of
toroidally compactified M-theory, and argue that
some of the singularities encountered in the low energy field theory
approximation can be resolved by U-duality. We argue that near any of
the singularities we study, new light states appear, which cannot be
described by the low energy field theory. The matter present
in the original universe decays into these states and the universe
proceeds to expand into a new large geometry which is described by a
different effective field theory.
In the course of our presentation we will have occasion to investigate the
moduli space of M-theory with up to ten compactified rectilinear toroidal
dimensions and vanishing three form potential. We believe that the results
of this investigation are extremely interesting. For tori of dimension $d
\leq 8$, we find that all noncompact regions of the moduli space can be
mapped into weakly coupled Type II string theory, or 11D SUGRA, at large
volume. For $d \leq 7$ this result is a subset of the results of Witten
\cite{witten} on toroidal compactification of string theory. The result
for $ d= 9$ is more interesting. There we find a region of moduli space
which cannot be mapped into well understood regions. We argue that a
spacetime cannot be in this regime if it satisfies the Bekenstein bound.
For the ten torus the moduli space can be viewed as a $9+1$ dimensional
Minkowski space. The interior of the future light cone is the region that
can be mapped into Type II or 11D (we call the region which can be so
mapped the safe domain of the moduli space). We again argue that the
other regions violate the Bekenstein bound, in the form of the
cosmological holographic principle of \cite{lenwilly}. Interestingly, the
pure gravity Kasner solutions lie precisely on the light cone in moduli
space. The condition that homogeneous perturbations of the Kasner
solutions by matter lie inside the light cone of moduli space is precisely
that the energy density be positive.
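The statement that vacuum Kasner solutions sit on the light cone can be illustrated directly (our addition, with an illustrative choice of exponents): the vacuum conditions $\sum p_i = \sum p_i^2 = 1$ automatically make the exponent vector null with respect to one convenient normalization of the Lorentzian quadratic form on the logarithmic scale factors, $Q(p) = (\sum p_i)^2 - \sum p_i^2$, reflecting the De~Witt-metric signature discussed above.

```python
from fractions import Fraction as F

def is_kasner(p):
    """Vacuum Kasner conditions: sum p_i = sum p_i**2 = 1."""
    return sum(p) == 1 and sum(x * x for x in p) == 1

def dewitt_norm(p):
    """Quadratic form of Lorentzian signature on the logarithmic scale
    factors: Q(p) = (sum p_i)**2 - sum p_i**2."""
    s = sum(p)
    return s * s - sum(x * x for x in p)

# a non-trivial 10-exponent Kasner solution: nine expanding directions, one contracting
p = [F(1, 5)] * 9 + [F(-4, 5)]
assert is_kasner(p)
assert dewitt_norm(p) == 0        # null: on the light cone in moduli space
```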
However, every Kasner solution interpolates between the past and future
light cones. Thus, M-theory appears to define a natural {\it arrow of
time} for cosmological solutions in the sense that it is reasonable to
define the future as that direction in which the universe approaches the
safe domain. Cosmological solutions appear to interpolate between a
past where the holographic principle cannot be satisfied and a future
where it can.
We argue that the $9+1$ dimensional structure, which we derive purely
group theoretically, \footnote{and thus presumably the structure of the
(in)famous hyperbolic algebra $E_{10}$, about which we shall have nothing
to say in this paper, besides what is implicit in our use of its Weyl
group.} is intimately connected to the De~Witt metric on the moduli space.
In particular, in the low energy interpretation, the signature of the
space is a consequence of the familiar fact that the conformal factor has
a negative kinetic energy in the Einstein action. Thus, the fact that the
duality group and its moduli space are exact properties of M-theory tells
us that this structure of the low energy effective action has a more
fundamental significance than one might have imagined. The results of
this section are the core of the paper and the reader of limited patience
should concentrate his attention on them.
In Section 4 of the paper we speculatively generalize our arguments to
moduli spaces with less SUSY. We argue that the proper arena for the
study of M-theoretic cosmology is to be found in moduli spaces of compact
ten manifolds with three form potential, preserving some SUSY. The
duality group is of course the group of discrete isometries of the metric
on moduli space. We argue that this is always a $p+1$ dimensional Lorentz
manifold, where $p$ is the dimension of the moduli space of appropriate,
static, SUSY preserving solutions of 11D SUGRA in 10 compact dimensions,
restricted to manifolds of unit volume. We discuss moduli spaces with
varying amounts of SUSY, raise questions about the adequacy of the 11D
SUGRA picture in less supersymmetric cases, and touch on the vexing puzzle
of what it means to speak about a potential on the moduli space in those
cases where SUSY allows one. In the Appendix we discuss how the even
self-dual lattices $\Gamma_8$ and $\Gamma_{9,1}$ appear in our
framework.
\section{Moduli, Vacua, Quantum Cosmology and Singularities}
\subsection{Some Idiosyncratic Views on General Wisdom}
M-theorists have traditionally occupied themselves with moduli spaces of
Poincar\'e invariant SUSY vacua. It was hoped that the traditional field
theoretic mechanisms for finding stable vacuum states would uniquely pick
out a state which resembled the world we observe.
This point of view is very hard to maintain after the String Duality
Revolution. It is clear that M-theory has multiparameter families of
exact quantum mechanical SUSY ground states, and that the first
phenomenological question to be answered by M-theory is why we do not live
in one of these SUSY states. It is logical to suppose that the answer to
this question lies in cosmology. That is, the universe is as it is not
because this is the only stable endpoint to evolution conceivable in
M-theory but also because of the details of its early evolution. To
motivate this proposal, recall that in infinite Poincar\'e invariant space
time of three or more dimensions, moduli define superselection sectors.
Their dynamics is frozen and solving it consists of minimizing some
effective potential once and for all, or just setting them at some
arbitrary value if there is no potential. Only in cosmology do the moduli
become real dynamical variables. Since we are now convinced that
Poincar\'e invariant physics does not destabilize these states, we must
turn to cosmology for an explanation of their absence in the world of
phenomena.
The focus of our cosmological investigations will be the moduli which are
used to parametrize SUSY compactifications of M-theory. We will argue that
there is a natural Born-Oppenheimer approximation to the theory in which
these moduli are the slow variables. In various semiclassical
approximations the moduli arise as zero modes of fields in compactified
geometries. One of the central results of String Duality was the
realization that various aspects of the space of moduli could be discussed
(which aspects depend on precisely how much SUSY there is) even in regions
where the notions of geometry, field theory and even weakly coupled string
theory, were invalid. {\it The notion of the moduli space is more robust
than its origin in the zero modes of fields would lead us to believe.}
The moduli spaces of solutions of the SUGRA equations of motion that
preserve eight or more SUSYs, parametrize exact flat directions of the
effective action of M-theory. Thus they can perform arbitrarily slow
motions. Furthermore, their action is proportional to the volume of the
universe in fundamental units. Thus, once the universe is even an order
of magnitude larger than the fundamental scale we should be able to treat
the moduli as classical variables. They provide the natural definition of
a semiclassical time variable which is necessary to the physical
interpretation of a generally covariant theory. In this and the next
section we will concentrate on the case with maximal SUSY. We relax this
restriction in section 4. There we also discuss briefly the confusing
situation of four or fewer SUSYs where there can be a potential on the
moduli space.
In this paper we will always use the term moduli in the sense outlined
above. They are the slowest modes in the Born-Oppenheimer approximation to
M-theory which becomes valid in the regime we conventionally describe by
quantum field theory. In this regime they can be thought of as special
modes of fields on a large smooth manifold. However, we believe that
string duality has provided evidence that the moduli of supersymmetric
compactifications are exact concepts in M-theory, while the field
theoretic (or perturbative string theoretic) structures from which they
were derived are only approximations. The Born-Oppenheimer
approximation for the moduli is likely to be valid even in regimes
where field theory breaks down.
The first task in understanding an M-theoretic cosmology is to discuss
the dynamics of the moduli. After
that we can begin to ask when and how the conventional picture of quantum
field theory in a classical curved spacetime becomes a good approximation.
\subsection{Quantum Cosmology}
The subject of Quantum Cosmology is quite confusing. We will try to be
brief in explaining our view of this arcane subject. There are two issues
involved in quantizing a theory with general covariance or even just time
reparametrization invariance. The first is the construction of a Hilbert
space of gauge invariant physical states. The second is describing the
physical interpretation of the states and in particular, the notion of
time evolution in a system whose canonical Hamiltonian has been set equal
to zero as part of the constraints of gauge invariance. The first of these
problems has been solved only in simple systems like first quantized
string theory or Chern-Simons theory, including pure $2+1$ dimensional
Einstein gravity. However, it is a purely mathematical problem, involving
no interpretational challenges. We are hopeful that it will be solved in
M-theory, at least in principle, but such a resolution clearly must await
a complete mathematical formulation of the theory.
The answer to the second problem depends on a semiclassical
approximation. The principle of time reparametrization invariance forces
us to base our measurements of time on a physical variable. If all
physical variables are quantum mechanical, one cannot expect the notion of
time to resemble the one we are used to from quantum field theory. It is
well understood \cite{bfstbruba}-\cite{bfstbrubc} how to derive a
conventional time dependent Schr\"odinger equation for the quantum
variables from the semiclassical approximation to the constraint equations
for a time reparametrization invariant system. We will review this for
the particular system of maximally SUSY moduli.
In fact, we will restrict our attention to the subspace of moduli space
described by rectilinear tori with vanishing three form.
In the language of 11D SUGRA, we are
discussing metrics of the Kasner form
\eqn{metric}{ds^2 = - dt^2 + L_i^2 (t) (dx^i)^2 }
where the $x^i$ are ten periodic coordinates with period $1$.
When restricted to this class of metrics, the Einstein Lagrangian has
the form
\eqn{lag}{\mathcal L = V\left[
\sum_i {\dot{L}_i^2 \over L_i^2} -
\left(\sum_i {\dot{L}_i \over L_i}\right)^2
\right],}
where $V$, the volume, is the product of the $L_i$.
In choosing to write the metric in these coordinates, we have lost the
equation of motion obtained by varying the variable $g_{00}$. This is
easily restored by imposing the constraint of time reparametrization
invariance. The Hamiltonian $E_{00}$ derived from (\ref{lag}) should
vanish on
physical states. This gives rise to the classical Wheeler-De~Witt equation
\eqn{wd}{
2E_{00} = \left(\sum_i \frac{\dot{L}_i}{L_i}\right)^2
- \sum_i \left(\frac{\dot{L}_i}{L_i}\right)^2 = 0 ,}
which in turn leads to a naive quantum Wheeler-De~Witt equation:
\eqn{qwd}{{1\over 4V}\left(\sum_i \Pi_i^2 - {1\over 9}\Big(\sum_i \Pi_i\Big)^2
\right)\Psi = 0.}
That is, we quantize the system by converting the unconstrained phase
space variables (we choose the logarithms of the $L_i$ as canonical
coordinates)
to operators in a function space. Then physical states
are functions satisfying the partial differential equation (\ref{qwd}).
There are complicated mathematical questions involved in constructing an
appropriate inner product on the space of solutions, and related
problems of operator ordering. In more complex systems it is
essential to use the BRST formalism to solve these problems.
We are unlikely to be able to resolve these questions before discovering
the full nonperturbative formulation of M-theory. However, for our
present semiclassical considerations these mathematical details are not
crucial.
We have already emphasized that when the volume of the system is large
compared to the Planck scale,
the moduli behave classically. It is then possible to use the time
defined by a particular classical solution (in a particular coordinate
system in which the solution is nonsingular for almost all time).
Mathematically what this means is
that in the large volume limit, the solution to the Wheeler-De~Witt
equation takes the form
\eqn{semicsoln}{ \psi_{W\!K\!B} (c) \Psi (q , t[c_0]) }
Here $c$ is shorthand for the variables which are treated by classical
mechanics, $q$ denotes the rest of the variables and $c_0$ is some
function of the classical variables which is a monotonic function of
time. The wave function $\Psi$ satisfies a time dependent Schr\"odinger
equation
\eqn{schrod} {i \partial_t \Psi = H(t) \Psi}
and it is easy to find an inner product which makes the space of its
solutions into a Hilbert space and the operators $H(t)$ Hermitian.
In the case where the quantum variables $q$ are quantum fields on the
geometry defined by the classical solution, this approximation is
generally called Quantum Field Theory in Curved Spacetime. We emphasize
however that the procedure is very general and depends only on the
validity of the WKB approximation for at least one classical variable,
and the fact that the Wheeler-De~Witt equation is a second-order
hyperbolic PDE, with one timelike coordinate. These facts are derived
in the low energy approximation of M-theory by SUGRA. However, we will
present evidence in the next section that they are consequences of the
U-duality symmetry of the theory and therefore possess a validity beyond
that of the SUGRA approximation.
From the low energy point of view, the hyperbolic nature of the equation
is a consequence of the famous negative sign for the kinetic energy of the
conformal factor in Einstein gravity, and the fact that the kinetic
energies of all the other variables are positive. It means that the
moduli space has a Lorentzian metric.
\subsection{Kasner Singularities and U-Duality}
The classical Wheeler-De~Witt-Einstein equation for Kasner metrics takes
the form:
\eqn{cwde}{\left({\dot{V} \over V}\right)^2 - \sum_{i=1}^{10} \left({\dot{L}_i \over
L_i}\right)^2 = 0}
This should be supplemented by equations for the ratios of individual
radii $R_i; \prod_{i=1}^{10} R_i =1 $. The latter take the form of
geodesic motion with friction
on the manifold of $R_i$ (which we parametrize {\it
e.g.} by the first nine ratios)
\eqn{nlsigma}{ \partial_t^2 R_i +
\Gamma^i_{jk} \partial_t R_j \partial_t R_k + \partial_t (\ln V)\,
\partial_t R_i = 0.}
$\Gamma$ is the Christoffel symbol of the metric $G_{ij}$ on the
unimodular moduli space. We write the equation in this general form
because many of our results remain valid when the rest of the variables
are restored to the moduli space, and even
generalize to the less supersymmetric moduli
spaces discussed in section 4. By introducing a new time variable
through
$V(t) \partial_t = - \partial_s$ we convert this equation into
nondissipative geodesic motion on moduli space. Since the
``energy'' conjugate to the variable $s$ is conserved, the energy
of
the nonlinear $\sigma$-model in cosmic time (the negative term in the
Wheeler-De~Witt equation) satisfies
\eqn{en} {\partial_t E = - 2\, \partial_t (\ln V)\, E}
whose solution is
\eqn{ensol}{ E = {E_0 \over V^2}}
Plugging this into the Wheeler-De~Witt equation we find that $V \sim t$
(for solutions which
expand as $t \rightarrow \infty$). Thus, for this class of solutions we
can choose
the volume as the monotonic variable $c_0$ which defines the time in the
quantum theory.
For the Kasner moduli space, we find that the solution of the equations
for
individual radii
are
\eqn{kassoln}{R_i (t) = L_{planck} (t/t_0 )^{p_i}}
where
\eqn{kascond}{\sum p_i^2 = \sum p_i = 1}
Note that the equation (\ref{kascond}) implies that at least one of the
$p_i$ is
negative
(we have again restricted attention to the case where the volume expands
as
time goes to
infinity).
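For completeness, the negativity claim can be made explicit with a two-line estimate (our spelling-out of the standard argument):

```latex
% If all p_i were non-negative, \sum_i p_i = 1 would force p_i \le 1, hence
\sum_i p_i^2 \;\le\; \Big(\max_i p_i\Big)\sum_i p_i \;\le\; \sum_i p_i \;=\; 1 ,
% with \sum_i p_i^2 = 1 attained only when a single p_j = 1 and the rest
% vanish -- the degenerate (flat) Kasner solution.
```

Thus every non-degenerate expanding solution obeying (\ref{kascond}) has at least one negative exponent, i.e. at least one shrinking radius.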
It is well known that all of these solutions are singular
at both infinite and zero time.
Note that if we add a matter or radiation energy density to
the system
then it dominates the system in the infinite volume limit and changes
the
solutions for the
geometry there. However, near the singularity at vanishing volume both
matter and radiation
become negligible (despite the fact that their densities are
becoming infinite)
and the solutions retain their Kasner form.
All of this is true in 11D SUGRA. In M-theory we know that many regions
of moduli space which are apparently singular in 11D SUGRA can be
reinterpreted as living in large spaces described by weakly coupled Type
II string theory or a dual version of 11D SUGRA. The vacuum Einstein
equations are of course invariant under these U-duality transformations.
So one is led to believe that many apparent singularities of the Kasner
universes are perfectly innocuous.
Note however that phenomenological matter and radiation densities which
one might add to the equations are not invariant under duality. The
energy density truly becomes singular as the volume goes to zero. How
then are we to understand the meaning of the duality symmetry? The
resolution is as follows. We know that when radii go to zero, the
effective field theory description of the universe in 11D SUGRA becomes
singular due to the appearance of new low frequency states. We also know
that the singularity in the energy densities of matter and radiation
implies that scattering cross sections are becoming large. Thus, it seems
inevitable that phase space considerations will favor the rapid
annihilation of the existing energy densities into the new light degrees
of freedom. This would be enhanced for Kaluza-Klein-like modes, whose
individual energies are becoming large near the singularity.
Thus, near a singularity with a dual interpretation, the contents of the
universe will be rapidly converted into new light modes, which have a
completely different view of what the geometry of space is\footnote{After
this work was substantially complete, we received a paper, \cite{riotto},
which proposes a similar view of certain singularities. See also
\cite{rama}.}. The
most
effective description of the new situation is in terms of the transformed
moduli and the new light degrees of freedom. The latter can be described
in terms of fields in the reinterpreted geometry. We want to emphasize
strongly the fact that the moduli do not change in this transformation,
but are merely reinterpreted. This squares with our notion that they are
exact concepts in M-theory. By contrast, the fields whose zero modes they
appear to be in a particular semiclassical regime, do not always make
sense. The momentum modes of one interpretation are brane winding modes
in another and there is no approximate way in which we can consider both
sets of local fields at the same time. Fortunately, there is also no
regime in which both kinds of modes are at low energy simultaneously, so
in every regime where the time dependence is slow enough to make a low
energy approximation, we can use local field theory.
This mechanism for resolving cosmological singularities leads naturally to
the question of precisely which noncompact regions of moduli space can be
mapped into what we will call the {\it safe domain} in which the theory
can be interpreted as either 11D SUGRA or Type II string theory with radii
large in the appropriate units. The answer to this question is, we
believe, more interesting than the idea which motivated it. We now turn
to the study of the moduli space.
\section{The Moduli Space of M-Theory on Rectangular Tori}
In this section, we will study the structure of the moduli space
of M-theory compactified on various tori $T^k$ with $k\leq 10$. We
are especially interested in noncompact regions of this space which
might represent either singularities or large universes. As above,
the three-form potential $A_{MNP}$ will be
set to zero and the circumferences of the cycles of the torus
will be expressed as the exponentials
\eqn{radiiexp}{ {L_i \over L_{planck}} = t^{p_i} ,\qquad
i=1,2, \dots, k.}
The remaining coordinates $x^0$ (time) and $x^{k+1}\dots x^{10}$ are
considered to be infinite and we never dualize them.
So the radii are encoded in the logarithms $p_i$. We will study limits of
the moduli space in various directions which correspond to keeping
$p_i$ fixed and sending $t\to\infty$ (the change to $t\to 0$
is equivalent to $p_i\to -p_i$ so we do not need to study
it separately).
We want to emphasize that our discussion of asymptotic domains of moduli
space is complete, even though we restrict ourselves to rectilinear tori
with vanishing three form. Intuitively this is because the moduli we
leave out are angle variables. More formally, the full moduli space is
a homogeneous space. Asymptotic domains of the space correspond to
asymptotic group actions, and these can always be chosen in the Cartan
subalgebra. The $p_i$ above can be thought of as parametrizing a
particular Cartan direction in $E_{10}$.\footnote{We thank E.~Witten for a
discussion of this point.}
\subsection{The 2/5 transformation}
M-theory has dualities which allow us to identify the vacua with
different $p_i$'s. A subgroup of this duality group is the $S_k$ which
permutes the $p_i$'s.
Without loss of generality, we can assume that
$p_1\leq p_2\leq \dots \leq p_k$. We will assume this in most of
the text.
The full group that leaves invariant rectilinear tori with
vanishing three form is the Weyl group of the noncompact $E_k$ group
of SUGRA. We will denote it by $\Rut_k$. We will give an elementary
derivation of the properties of this group for the convenience of
the reader. Much of this is review, but our results about the
boundaries of the fundamental domain of the action of $\Rut_k$ with $k =
9,10$ on the moduli space, are new.
$\Rut_k$ is generated
by the permutations, and one other transformation which acts as follows:
\eqn{rutdef}{(p_1,p_2,\dots, p_k)
\mapsto
(p_1-{2s\over 3},
p_2-{2s\over 3},
p_3-{2s\over 3},
p_4+{s\over 3},
\dots,
p_k+{s\over 3}).}
where $s=(p_1+p_2+p_3)$.
Before explaining why the transformation (\ref{rutdef}) is a
symmetry of M-theory, let us point out several of its properties.
\begin{itemize}
\item The total sum $S=\sum_{i=1}^k p_i$ changes to $S\mapsto
S+(k-9)s/3$.
So if $s<0$, the sum increases for $k<9$, decreases for $k>9$
and is left invariant for $k=9$.
\item If we consider all $p_i$'s to be integers which are
equal modulo 3, this property will hold also after
the 2/5 transformation. The reason is that, due to the assumptions, $s$ is a multiple
of three and the coefficients $-2/3$ and $+1/3$ differ by an integer.
As a result, from any initial integer $p_i$'s we get $p_i$'s
which are multiples of $1/3$ which means that all the matrix elements
of matrices in $\Rut_{k}$ are integer multiples of $1/3$.
\item The order of $p_1,p_2,p_3$ is not changed (the difference
$p_1-p_2$ remains constant, for instance). Similarly,
the order of $p_4,p_5,\dots, p_k$ is unchanged. However the
ordering between $p_{1...3}$ and $p_{4...k}$ can change in general.
By convention, we will follow each 2/5 transformation{} by a permutation which places
the $p_i$'s in ascending order.
\item The bilinear quantity $I= (9 - k) \sum (p_i^2) + (\sum p_i )^2 = (10
- k) \sum(p_i^2) +
2 \sum_{i < j} p_i p_j$ is left invariant by $\Rut_k$.
\end{itemize}
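As a quick consistency check, the transformation (\ref{rutdef}) and the properties listed above are easy to verify numerically. The following sketch is our own illustration (function names are ours, not from the text); it uses exact rational arithmetic via Python's `fractions`, implements the 2/5 transformation{} on the three smallest entries followed by the conventional re-sorting, and checks the invariance of $I$ together with the stated change of $S$:

```python
from fractions import Fraction

def two_five(p):
    """Apply (rutdef) to the three smallest entries of p, then re-sort."""
    p = sorted(p)
    s = p[0] + p[1] + p[2]
    q = [x - Fraction(2, 3) * s for x in p[:3]] \
      + [x + Fraction(1, 3) * s for x in p[3:]]
    return sorted(q)

def invariant(p):
    """I = (9 - k) sum p_i^2 + (sum p_i)^2, invariant under Rut_k."""
    k = len(p)
    return (9 - k) * sum(x * x for x in p) + sum(p) ** 2

# an arbitrary k = 7 example
p = [Fraction(x) for x in (-5, -4, -1, 0, 2, 3, 7)]
s = sum(sorted(p)[:3])
q = two_five(p)

assert invariant(q) == invariant(p)                    # I is preserved
assert sum(q) - sum(p) == Fraction(len(p) - 9, 3) * s  # S -> S + (k-9)s/3
```

Working over the rationals keeps the "integer multiples of $1/3$" structure of $\Rut_k$ matrices exact, with no floating-point noise.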
The fact that 2/5 transformation{} is a symmetry of M-theory can be proved as follows.
Let us interpret $L_1$ as the M-theoretical circle of a type IIA string
theory. Then the simplest duality which gives us a theory of the same kind
(IIA) is the double T-duality. Let us perform it on the circles $L_2$
and $L_3$. The claim is that if we combine this double T-duality
with a permutation of $L_2$ and $L_3$ and interpret the new $L_1$ as the
M-theoretical circle again, we get precisely (\ref{rutdef}).
Another illuminating way to view the transformation 2/5 transformation{} is to
compactify M-theory on a three torus. The original M2-brane and the
M5-brane wrapped on the three torus are both BPS membranes in eight
dimensions. One can argue that there is a duality transformation
exchanging them \cite{ofer}. In the limit in which one of the cycles of
the $T^3$ is small, so that a type II string description becomes
appropriate, it is just the double T-duality of the previous paragraph.
The fact that this transformation plus permutations generates $\Rut_k$ was
proven by the authors of \cite{elitzur} for $k \leq 9$, see also
\cite{pioline}.
\subsection{Extreme Moduli}
There are three types of boundaries of the toroidal moduli space which
are amenable to detailed analysis. The first is the limit in which
eleven-dimensional supergravity becomes valid. We will
denote this limit as 11D. The other two limits are weakly coupled
type IIA and type IIB theories in 10 dimensions. We will call the domain
of asymptotic moduli space which can be mapped into one of these limits,
the safe domain.
\begin{itemize}
\item For the limit 11D, all the radii must be greater than $L_{planck}$.
Note that for $t\to\infty$ it means that all the radii are much greater
than $L_{planck}$. In terms of the $p_i$'s, this is the inequality $p_i>0$.
\item For type IIA, the dimensionless coupling constant
$g_s^{IIA}$ must be smaller than 1 (much smaller for $t\to\infty$)
and all the remaining radii must be greater than $L_{string}$ (much
greater for $t\to\infty$).
\item For type IIB, the dimensionless coupling constant
$g_s^{IIB}$ must be smaller than 1 (much smaller for $t\to\infty$)
and all the remaining radii must be greater than $L_{string}$ (much
greater for $t\to\infty$), including the extra radius whose momentum
arises as the number of wrapped M2-branes on the small $T^2$ in the
dual 11D SUGRA picture.
\end{itemize}
If we assume the canonical ordering of the radii, i.e. $p_1\leq p_2\leq
p_3\leq \dots \leq p_k$, we can simplify these requirements as follows:
\begin{itemize}
\item 11D: $0<p_1$
\item IIA: $p_1<0<p_1+2p_2$
\item IIB: $p_1+2p_2<0<p_1+2p_3$
\end{itemize}
To derive this, we have used the familiar relations:
\eqn{fama}{ {L_1\over L_{planck}}=(g_s^{IIA})^{2/3}=
\left({L_{planck}\over L_{string}}\right)^2=
\left({L_1\over L_{string}}\right)^{2/3}}
for the 11D/IIA duality ($L_1$ is the M-theoretical circle) and similar
relations for the 11D/IIB case ($L_1<L_2$ are the parameters of the
$T^2$ and $L_{IIB}$ is the circumference of the extra circle):
\begin{eqnarray}
{L_1\over L_2}=g_s^{IIB},\quad
1={L_1L_{string}^2\over L_{planck}^3}={g_s^{IIB}L_2L_{string}^2\over L_{planck}^3}=
{L_{IIB}L_1L_2\over L_{planck}^3},\\
\frac{1}{g_s^{IIB}}\left(\frac{L_{planck}}{L_{string}}
\right)^4=\frac{L_1L_2}{L_{planck}^2}=\frac{L_{planck}}{L_{IIB}}=
(g_s^{IIB})^{1/3}\left(L_{string}\over L_{IIB}\right)^{4/3}\label{famb}
\end{eqnarray}
Note that the regions defined by the inequalities above cannot overlap,
since the regions are defined by $M,M^c\cap A,A^c\cap B$ where
$A^c$ means the complement of a set.
Furthermore, assuming $p_i\leq p_{i+1}$, it is easy to show that
$p_1+2p_3<0$ implies $p_1+2p_2<0$, and that $p_1+2p_2<0$ implies
$3p_1<0$, i.e. $p_1<0$.
This means that (neglecting the boundaries where
the inequalities are saturated) the region outside
$\mbox{11D}\cup\mbox{IIA}\cup\mbox{IIB}$ is defined simply by
$p_1+2p_3<0$. The latter characterization of the safe domain
of moduli space will simplify our discussion considerably.
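The case analysis above is compact enough to express directly in code. This small classifier is our illustration (names are ours); it sorts the exponents and applies the three inequalities in turn, with the unsafe region appearing as the single condition $p_1+2p_3<0$ and the measure-zero boundaries assigned by the non-strict comparisons:

```python
def domain(p):
    """Classify an exponent vector by the limits derived in the text:
    11D (0 < p1), IIA (p1 < 0 < p1 + 2 p2), IIB (p1 + 2 p2 < 0 < p1 + 2 p3),
    and 'unsafe' otherwise (p1 + 2 p3 < 0).  Boundaries are neglected,
    as in the text."""
    p = sorted(p)                 # canonical ordering p1 <= p2 <= ...
    if p[0] > 0:
        return "11D"
    if p[0] + 2 * p[1] > 0:       # here p1 <= 0, so p1 < 0 < p1 + 2 p2
        return "IIA"
    if p[0] + 2 * p[2] > 0:       # here p1 + 2 p2 <= 0
        return "IIB"
    return "unsafe"

assert domain([1] * 9) == "11D"
assert domain([-1] + [2] * 8) == "IIA"
assert domain([-4, 1, 3] + [3] * 6) == "IIB"
assert domain([-3, -1, 0] + [5] * 6) == "unsafe"
```

The elif-style chain is valid precisely because, as shown above, $p_1+2p_2<0$ already implies $p_1<0$, so the three safe regions are mutually exclusive.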
\vspace{3mm}
The invariance of the bilinear form defined above gives an important
constraint on the action of $\Rut_k$ on the moduli space. For $k=10$ it
is easy to see that, considering the $p_i$ to be the coordinates of a ten
vector, it defines a Lorentzian metric on this ten dimensional space.
Thus the group $\Rut_{10}$ is a discrete subgroup of $O(1,9)$. The
direction in this space corresponding to the sum of the $p_i$ is timelike,
while the hyperplane on which this sum vanishes is spacelike. We can
obtain the group $\Rut_9$ from the group $\Rut_{10}$ by taking $p_{10}$ to
infinity and considering only transformations which leave it invariant.
Obviously then, $\Rut_9$ is a discrete subgroup of the transverse Galilean
group of the infinite momentum frame. For $k \leq 8$ on the other hand,
the bilinear form is positive definite and $\Rut_k$ is contained in
$O(k)$. Since the latter group is compact, and there is a basis in which
the $\Rut_k$ matrices are all integers divided by $3$, we conclude that in
these cases $\Rut_k$ is a finite group. In a moment we will show that
$\Rut_9$ and {\it a fortiori} $\Rut_{10} $ are infinite. Finally we note
that the 2/5 transformation{} is a spatial reflection in $O(1,9)$. Indeed it squares to
$1$ so its determinant is $\pm 1$. On the other hand, if we take all but
three coordinates very large, then the 2/5 transformation{} of those coordinates is very
close to the spatial reflection through the plane $p_1 + p_2 + p_3 = 0$,
so it is a reflection of a single spatial coordinate.
\FIGURE[l]{\epsfig{file=t-two.eps}\caption{The structure of the
moduli space for $T^2$.}\label{myfigure}}
\vspace{3mm}
We now prove that $\Rut_9$ is infinite.
Start with the first vector of $p_i$'s given below and iterate
(\ref{rutdef}) on the three smallest radii (a strategy which we will use
all the time) -- and sort $p_i$'s after each step, so that their index
reflects their order on the real line. We get
\eqn{ninefinite}{
\begin{array}{lcr}
(-1,-1,-1,&-1,-1,-1,&-1,-1,-1)\\
(-2,-2,-2,&-2,-2,-2,&+1,+1,+1)\\
(-4,-4,-4,&-1,-1,-1,&+2,+2,+2)\\
(-5,-5,-5,&-2,-2,-2,&+4,+4,+4)\\
{ }&\vdots&{ }\\
(3\times (2-3n),&3\times (-1),&3\times (3n-4))\\
(3\times (1-3n),&3\times (-2),&3\times (3n-2))
\end{array}
}
so the entries grow (linearly) to infinity.
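One can reproduce this iteration mechanically. The following sketch (ours; exact arithmetic with `fractions.Fraction`) starts from nine entries equal to $-1$, reproduces the first rows of the table, and confirms that the sum is conserved while the largest entry grows without bound:

```python
from fractions import Fraction

def two_five(p):
    """(rutdef) on the three smallest entries, then re-sort."""
    p = sorted(p)
    s = p[0] + p[1] + p[2]
    return sorted([x - Fraction(2, 3) * s for x in p[:3]]
                  + [x + Fraction(1, 3) * s for x in p[3:]])

p = [Fraction(-1)] * 9
seen = [tuple(p)]
for _ in range(50):
    p = two_five(p)
    seen.append(tuple(p))

assert seen[1] == (-2,) * 6 + (1,) * 3                 # second row of the table
assert seen[2] == (-4,) * 3 + (-1,) * 3 + (2,) * 3     # third row
assert all(sum(row) == -9 for row in seen)             # sum p_i conserved (k = 9)
assert max(p) > 50                                     # linear growth: Rut_9 is infinite
```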
\subsection{Covering the Moduli Space}
We will show that there is a useful strategy which can be used to
transform any point $\{p_i\}$ into the safe domain in the case
of $T^k$, $k<9$. The strategy
is to perform iteratively 2/5 transformation{s} on the three smallest radii.
Assuming that $\{p_i\}$ is outside the safe domain, i.e.
$p_1+2p_3<0$ ($p_i$'s are sorted so that $p_i\leq p_{i+1}$),
it is easy to see that $p_1+p_2+p_3<0$ (because $p_2\leq p_3$).
As we said below the equation (\ref{rutdef}),
the 2/5 transformation{} on $p_1,p_2,p_3$ always increases the total
sum $\sum p_i$ for $p_1+p_2+p_3<0$. But this sum cannot increase
indefinitely because the group $\Rut_k$ is finite for
$k<9$. Therefore the iteration process must terminate at some
point. The only way this can happen is that
the assumption $p_1+2p_3<0$ no longer holds, which means that
we are in the safe domain. This completes the proof for $k<9$.
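The proof for $k<9$ is constructive, and the construction can simply be run. The sketch below is our illustration (names ours): it iterates the 2/5 transformation{} on the three smallest entries until the safe-domain condition $p_1+2p_3\geq 0$ holds, which the argument above guarantees happens after finitely many steps:

```python
from fractions import Fraction

def two_five(p):
    """(rutdef) on the three smallest entries, then re-sort."""
    p = sorted(p)
    s = p[0] + p[1] + p[2]
    return sorted([x - Fraction(2, 3) * s for x in p[:3]]
                  + [x + Fraction(1, 3) * s for x in p[3:]])

def dualize(p):
    """Iterate until p1 + 2 p3 >= 0.  Terminates for k < 9 because sum(p)
    strictly increases while the orbit under the finite group Rut_k is finite."""
    p = sorted(Fraction(x) for x in p)
    steps = 0
    while p[0] + 2 * p[2] < 0:
        p = two_five(p)
        steps += 1
        assert steps < 1000, "should terminate quickly for k < 9"
    return p, steps

q, n = dualize([-7, -5, -4, 0, 1, 2, 3])   # an arbitrary unsafe k = 7 point
assert q[0] + 2 * q[2] >= 0                # landed in the safe domain
assert n >= 1
```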
\vspace{3mm}
For $k=9$ the proof is more difficult. The group $\Rut_9$ is infinite
and furthermore, the sum of all $p_i$'s does not change. In fact
the conservation of $\sum p_i$ is the reason that only points with
$\sum p_i>0$ can be dualized to the safe domain.
To see this, note that if $p_1+2p_3\geq 0$, then also $3p_1+6p_3\geq 0$
and consequently
\eqn{ninep}{p_1+p_2+p_3+p_4+p_5+p_6+p_7+p_8+p_9 \geq
p_1 +p_1+p_1 + p_3+p_3+p_3+p_3+p_3+p_3\geq 0.}
This inequality is saturated only if all $p_i$'s are equal
to each other. If their sum vanishes, each $p_i$ must then vanish.
But we cannot obtain a zero vector from a nonzero vector
by 2/5 transformation{s} because they are nonsingular. If the sum $\sum p_i$ is
negative, it is also clear that we cannot reach the safe domain.
However, if $\sum_{i=1}^9 p_i>0$, then we can map the region of moduli space
with $t \rightarrow \infty $ to the safe domain.
We will prove it for rational $p_i$'s only. This assumption compensates
for the fact that the order of
$\Rut_9$ is infinite.
Assuming $p_i$'s rational is however sufficient
because we will see that a finite product of 2/5 transformation{s} brings us
to the safe domain. But a composition of a finite number
of 2/5 transformation{s} is a continuous map from ${\mathbb R}^9$ to ${\mathbb R}^9$, so at least a
``ray'' part of a neighborhood of each rational point can also be dualized
to the safe domain. Because ${\mathbb Q}^9$ is dense in ${\mathbb R}^9$, our argument
proves the result for general values of $p_i$.
From now on we assume that the
$p_i$'s are rational numbers. Everything is scale invariant so we
may multiply them by a common denominator to make them integers. In fact, we choose
them to be integer multiples of three since in that case we will have integer
$p_i$'s even after 2/5 transformation{s}. The numbers $p_i$ are now integers equal
modulo 3 and their sum is positive. We will define a critical quantity
\eqn{cq}{C=\sum_{1\leq i<j\leq 9} (p_i-p_j)^2.}
This is {\it a priori} an integer greater than or equal to zero
which is invariant
under permutations. What happens to $C$ if we make
a 2/5 transformation{} on the radii $p_1,p_2,p_3$? The differences
$p_1-p_2$, $p_1-p_3$, $p_2-p_3$ do not change and this holds for
$p_4-p_5$, \dots $p_8-p_9$, too. The only contributions to
(\ref{cq}) which are changed are from $3\cdot 6=18$ ``mixed'' terms like
$(p_1-p_4)^2$. Using (\ref{rutdef}),
\eqn{rutcq}{(p_1-p_4) \mapsto (p_1-\frac{2s}3) -(p_4+\frac{s}3)=
(p_1-p_4)-s}
so its square
\eqn{rs}{(p_1-p_4)^2\mapsto [(p_1-p_4)-s]^2=(p_1-p_4)^2 - 2s(p_1-p_4)
+s^2}
changes by $- 2s(p_1-p_4) +s^2$. Summing over all 18 terms we get
($s=p_1+p_2+p_3$)
\eqn{delta}{\Delta C= -2s[6(p_1+p_2+p_3)-3(p_4+\dots+p_9)]+18s^2
=6s^2 + 6s\left(\Big(\sum_{i=1}^9 p_i\Big)-s\right)=6s\sum_{i=1}^9 p_i.}
But this quantity is strictly negative because $\sum p_i$ is positive
and $s<0$ (we define the safe domain with boundaries, $p_1+2p_3\geq 0$).
This means that $C$ defined in (\ref{cq}) decreases after each
2/5 transformation{} on the three smallest radii. Since it is a non-negative integer,
it cannot decrease indefinitely.
Thus the assumption $p_1+2p_3<0$ becomes invalid after a finite number
of steps and we reach the safe domain.
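The monotonicity argument can be checked directly. In this sketch (ours; the starting vector is an arbitrary choice of integers $\equiv 0 \bmod 3$ with positive sum, as in the proof) every 2/5 transformation{} changes $C$ by exactly $6s\sum p_i$, which is strictly negative outside the safe domain:

```python
from fractions import Fraction

def two_five(p):
    """(rutdef) on the three smallest entries, then re-sort."""
    p = sorted(p)
    s = p[0] + p[1] + p[2]
    return sorted([x - Fraction(2, 3) * s for x in p[:3]]
                  + [x + Fraction(1, 3) * s for x in p[3:]])

def C(p):
    """The critical quantity sum_{i<j} (p_i - p_j)^2 of eq. (cq)."""
    return sum((p[i] - p[j]) ** 2
               for i in range(len(p)) for j in range(i + 1, len(p)))

p = sorted(Fraction(x) for x in (-6, -6, -3, -3, 0, 0, 3, 6, 12))
total = sum(p)                               # conserved for k = 9, here +3
while p[0] + 2 * p[2] < 0:                   # outside the safe domain
    s = p[0] + p[1] + p[2]
    c_before = C(p)
    p = two_five(p)
    assert C(p) - c_before == 6 * s * total  # Delta C = 6 s sum(p_i), eq. (delta)
    assert C(p) < c_before                   # strictly decreasing
assert sum(p) == total
```

Since $C$ is a non-negative integer that decreases by $|6s\sum p_i|$ at each step, the loop is guaranteed to exit, which is exactly the statement proved above.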
The mathematical distinction between the two regions of the moduli space
according to the sign of the sum of nine $p_i$'s, has a satisfying
interpretation in terms of the holographic principle. In the safe
domain, the volume of space grows in the appropriate
Planck units, while in the region with
negative sum it shrinks to zero. The holographic principle tells us
that in the former region we are allowed to describe many of the states
of M-theory in terms of effective field theory while in the latter
region we are not. The two can therefore not be dual to each other.
Now let us turn to the fully compactified case. As we pointed out, the
bilinear form $I \equiv 2\sum_{i < j} p_i p_j$ defines a Lorentzian
signature metric on the vector space whose components are the $p_i$. The
2/5 transformation{} is a spatial reflection and therefore the group
$\Rut_{10}$ consists of orthochronous Lorentz transformations.
Now consider a vector in the safe domain. We can write it as
\eqn{safevec}{(-2, -2 + a_1, 1 + a_2, \ldots, 1+a_9 )S,
\qquad S\in{\mathbb R}^+}
where the $a_i$ are positive. It is easy to see that $I$ is positive
on this configuration. This means that only the inside of the light
cone can be mapped into the safe domain. Furthermore, since $\sum p_i$
is positive in the safe domain and the transformations are
orthochronous, only the interior of the
future light cone in moduli space can be mapped
into the safe domain.
We would now like to show that the entire interior of the forward light
cone can be so mapped. We use the same strategy of rational coordinates
dense in ${\mathbb R}^{10}$. If we start outside the safe domain, the sum of the
first three $p_i$ is negative. We again pursue the strategy of doing a
2/5 transformation{} on the first three coordinates and then reordering
and iterating. For the case of $\Rut_9$ the sum of the coordinates was
an invariant, but here it decreases under the 2/5 transformation{} of the three
smallest coordinates, if their sum is negative.
But $\sum p_i$ is (starting from rational values
and rescaling to get integers congruent modulo three as before) a
positive integer and must remain so after $\Rut_{10}$ operations.
Thus, after a finite number of iterations, the
assumption that the sum of the three smallest coordinates is negative
must fail, and we are in the safe domain. In fact, we generically enter
the safe domain before this point. The complement of the safe domain
always has negative sum of the first three coordinates, but there are
elements in the safe domain where this sum is negative.
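As an illustration of the $k=10$ statement (our sketch, with an arbitrarily chosen starting vector), take exponents inside the future light cone of $I=2\sum_{i<j}p_ip_j$, i.e. with $I>0$ and $\sum p_i>0$, and iterate; the sum now decreases but stays positive until the safe domain is reached, while the Lorentzian form $I$ is preserved:

```python
from fractions import Fraction

def two_five(p):
    """(rutdef) on the three smallest entries, then re-sort."""
    p = sorted(p)
    s = p[0] + p[1] + p[2]
    return sorted([x - Fraction(2, 3) * s for x in p[:3]]
                  + [x + Fraction(1, 3) * s for x in p[3:]])

def I(p):
    """The Lorentzian bilinear form: (sum p_i)^2 - sum p_i^2 = 2 sum_{i<j} p_i p_j."""
    return sum(p) ** 2 - sum(x * x for x in p)

p = sorted(Fraction(x) for x in (-3, 0, 0, 3, 3, 3, 3, 3, 3, 3))
assert I(p) > 0 and sum(p) > 0       # inside the future light cone
steps = 0
while p[0] + 2 * p[2] < 0:           # outside the safe domain
    p = two_five(p)
    steps += 1
    assert sum(p) > 0                # orthochronous: the sum stays positive
assert I(p) > 0                      # the invariant form is preserved
assert p[0] + 2 * p[2] >= 0          # reached the safe domain
```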
It is quite remarkable that the bilinear form $I$ is proportional to
the Wheeler-De~Witt Hamiltonian for the Kasner solutions:
\eqn{wdI}{\frac{I}{t^2}=\left(\sum_i \frac{dL_i/dt}{L_i}\right)^2
- \sum_i\left(\frac{dL_i/dt}{L_i}\right)^2=\frac{2}{t^2}\sum_{i<j}p_ip_j.}
The solutions themselves thus lie precisely on the future light cone in
moduli space. Each solution has two asymptotic regions ($t \rightarrow
0,\infty$ in (\ref{metric})), one of which is in the past light cone and
the other in the future light cone of moduli space. The structure of the
modular group thus suggests a natural arrow of time for cosmological
evolution. The future may be defined as the direction in which the
solution approaches the safe domain of moduli space. All of the Kasner
solutions then, have a true singularity in their past, which cannot be
removed by duality transformations.
Actually, since the Kasner solutions are on the light cone, which is
the boundary of the safe domain, we must add
a small homogeneous energy density to the system in order to make this
statement correct. The condition that we can map into the safe domain
is then the statement that this additional energy density is positive.
Note that in the safe domain, and if the equation of state of this matter
satisfies (but does not saturate)
the holographic bound of \cite{lenwilly}, this energy density
dominates the evolution of the universe, while near the singularity, it
becomes negligible compared to the Kasner degrees of freedom. The assumption
of a homogeneous negative energy density is manifestly incompatible with
Einstein's equations in a compact flat universe so we see that the
spacelike domain of moduli space corresponds to a physical situation
which cannot occur in the safe domain.
The backward lightcone of the asymptotic moduli space is, as we have
said, visited by all of the classical solutions of the theory.
However, it violates the holographic principle of \cite{lenwilly} if we
imagine that the universe has a constant entropy density per comoving
volume. We emphasize that in this context, entropy means the logarithm
of the dimension of the Hilbert space of those states which can be given
a field theoretic interpretation and thus localized inside the volume.
Thus, there is again a clear physical reason why the unsafe domain of
moduli space cannot be
mapped into the safe domain.
Note again that matter obeying the holographic bound of
\cite{lenwilly} in the future cannot alter the nature of the solutions
near the true singularities.
\vspace{3mm}
To summarize: the U-duality group $\Rut_{10}$ divides the asymptotic
domains of moduli space into three regions, corresponding to the
spacelike and future and past timelike regimes of a Lorentzian manifold.
Only the future lightcone can be understood in terms of weakly coupled SUGRA or
string theory. The group theory provides an exact M-theoretic meaning for the
Wheeler-De~Witt Hamiltonian for moduli. Classical solutions of the low
energy effective equations of motion with positive energy density for
matter distributions lie in the timelike region of moduli space and
interpolate between the past and future light cones.
We find it remarkable that the purely group theoretical considerations
of this section seem to capture so much of the physics of toroidal
cosmologies.
\section{Moduli Spaces With Less SUSY}
We would like to generalize the above considerations to situations
which preserve less
SUSY. This enterprise immediately raises some questions, the first of which is
what we mean by SUSY. Cosmologies with compact spatial sections have no
global symmetries in the standard sense
since there is no asymptotic region in which one can define the generators.
We will define a cosmology with a certain amount
of SUSY by first looking for Euclidean
ten manifolds and three form field configurations which are solutions of the
equations of 11D SUGRA and have a certain number of Killing spinors.
The first approximation to cosmology will
be to study motion on a moduli space of
such solutions.
The motivation for this is that at least
in the semiclassical approximation we are guaranteed
to find arbitrarily slow motions of the moduli.
In fact, in many cases, SUSY
nonrenormalization theorems guarantee that the semiclassical
approximation becomes valid for
slow motions because the low energy effective Lagrangian of the
moduli is to a large
extent determined by SUSY. There are however a number of
pitfalls inherent in our approach.
We know that for some SUSY algebras, the moduli space of
compactifications to four or six
dimensions is not a manifold. New moduli can appear at
singular points in moduli space
and a new branch of the space, attached to the old one at the
singular point, must
be added. There may be cosmologies which traverse from one branch to
the other in the
course of their evolution. If that occurs, there will be a point at
which the moduli space approximation breaks down.
Furthermore, there are many examples of SUSY vacua of M-theory which
have not yet been
continuously connected to the 11D limit, even through
a series of ``conifold''
transitions such as those described above \cite{islands}.
In particular, it has been suggested that there
might be a completely isolated vacuum state of M-theory \cite{evadine}.
Thus it might not be possible to imagine that all cosmological solutions
which preserve a given amount of SUSY are continuously connected to the
11D SUGRA regime.
Despite these potential problems, we think it is worthwhile to
begin a study of compact,
SUSY preserving, ten manifolds. In this paper we will only study
examples where the
three form field vanishes. The well known local condition for a Killing
spinor, $D_{\mu} \epsilon = 0$, has as its local integrability condition
\eqn{killspin}{R_{\mu\nu}^{ab} \gamma_{ab} \epsilon = 0}
Thus, locally the curvature must lie in a subalgebra of the Lie
algebra of $Spin (10)$ which
annihilates a spinor. The global condition is that the holonomy
around any
closed path must lie in a subgroup which preserves a spinor.
Since we are dealing with
11D SUGRA, we always have both the ${\bf 16}$ and ${\bf\overline{16}}$
representations
of
$Spin (10)$ so SUSYs
come in pairs.
For maximal SUSY the curvature must vanish identically and the space
must be a torus.
The next possibility is to preserve half the spinors and this is
achieved
by manifolds
of the form $K3 \times T^7$ or orbifolds of them by freely acting
discrete
symmetries.
We now jump to the case of 4 SUSYs. To find examples, it is convenient to
consider the decompositions $Spin (10) \supseteq
Spin (k) \times Spin (10-k) $.
The ${\bf 16}$ is then a tensor product of two lower dimensional spinors.
For
$k=2$, the holonomy must be contained in $SU(4) \subseteq Spin (8)$ in
order to preserve a spinor, and it then preserves two (four once the
complex conjugate representation is taken into account). The corresponding
manifolds are products of Calabi-Yau fourfolds with two tori, perhaps
identified by the action of a freely acting discrete group. This moduli
space is closely related to that of F-theory compactifications to four
dimensions with minimal four dimensional SUSY. The three spatial
dimensions are then compactified on a torus. For $k=3$ the holonomy must
be in $G_2 \subseteq Spin (7)$. The manifolds are, up to discrete
identifications, products of Joyce manifolds and three tori. For $k=4$
the holonomy is in $SU(2) \times SU(3)$. The manifolds are free orbifolds
of products of Calabi-Yau threefolds and K3 manifolds. This moduli space
is that of the heterotic string compactified on a three torus and
Calabi-Yau three fold. The case $k=5$ does not lead to any more examples
with precisely 4 SUSYs.
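\par For reference, the holonomy classification just described may be summarized as follows (SUSYs are counted with the ${\bf 16}\oplus{\bf\overline{16}}$ doubling noted above, and all entries are understood up to freely acting discrete identifications; $J_7$ denotes a Joyce seven manifold and $CY_n$ a Calabi-Yau $n$-fold):

```latex
\begin{center}
\begin{tabular}{clll}
$k$ & holonomy & ten manifold & SUSYs \\
\hline
--  & trivial                    & $T^{10}$              & 32 \\
--  & $SU(2)$                    & $K3 \times T^{7}$     & 16 \\
2   & $SU(4) \subseteq Spin(8)$  & $CY_{4} \times T^{2}$ & 4  \\
3   & $G_{2} \subseteq Spin(7)$  & $J_{7} \times T^{3}$  & 4  \\
4   & $SU(2) \times SU(3)$       & $K3 \times CY_{3}$    & 4  \\
\end{tabular}
\end{center}
```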
It is possible that M-theory contains U-duality transformations which map
us between these classes. For example, there are at least some examples
of F-theory compactifications to four dimensional Minkowski space which
are dual to heterotic compactifications on threefolds. After further
compactification on three tori we expect to find a map between the $k=2$
and $k=4$ moduli spaces.
To begin the study of the cosmology of these moduli spaces we restrict the
Einstein Lagrangian to metrics of the form
\eqn{modmet}{ds^2 = - dt^2 +
g_{AB} (t) dx^A dx^B}
where the euclidean signature metric $g_{AB}$ lies
in one of the moduli spaces. Since all of these are spaces of solutions
of the Einstein equations they are closed under constant rescaling of the
metric. Since they are spaces of restricted holonomy, this is the only
Weyl transformation which relates two metrics in a moduli space. Therefore
the equations (\ref{cwde}) and (\ref{nlsigma}) remain valid, where
$G_{ij}$ is now the De~Witt metric on the restricted moduli space of unit
volume metrics.
It is clear that the metric on the full moduli space still has Lorentzian
signature in the SUGRA approximation. In some of these cases of lower
SUSY, we expect the metric to be corrected in the quantum theory.
However, we do not expect these corrections to alter the signature of the
metric. To see this note that each of the cases we have described has a
two torus factor. If we decompactify the two torus, we expect a low
energy field theoretic description as three dimensional gravity coupled to
scalar fields and we can perform a Weyl transformation so that the
coefficient of the Einstein action is constant. The scalar fields must
have positive kinetic energy and the Einstein term must have its
conventional sign if the theory is to be unitary. Thus, the
decompactified moduli space has a positive metric. In further
compactifying on the two torus, the only new moduli are those contained in
gravity, and the metric on the full moduli space has Lorentzian signature.
Note that as in the case of maximal SUSY, the region of the moduli space
with large ten volume and all other moduli held fixed, is in the future
light cone of any finite point in the moduli space. Thus we suspect that
much of the general structure that we uncovered in the toroidal moduli
space, will survive in these less supersymmetric settings.
The most serious obstacle to this generalization appears in the case
of 4 (or fewer) supercharges. In that case, general arguments do not
forbid the appearance of a potential in the Lagrangian for the moduli.
Furthermore, at generic points in the moduli space one would expect
the energy density associated with that potential to be of order the
fundamental scales in the theory. In such a situation, it is difficult
to justify the Born-Oppenheimer separation between moduli and high
energy degrees of freedom. Typical motions of the moduli on their
potential have frequencies of the same order as those of
the ultraviolet degrees of freedom.
We do not really have a good answer to this question. Perhaps the
approximation only makes sense in regions where the potential is small.
We know that this is true in extreme regions of moduli space in which
SUSYs are approximately restored.
However, it is notoriously difficult to stabilize the system
asymptotically far into such a region. This difficulty is particularly
vexing in the context of currently popular ideas
\cite{horwita}-\cite{horwitc} in
which the fundamental scale of M-theory is taken to be orders of
magnitude smaller than the Planck scale.
\section{Discussion}
We have demonstrated that the modular group of toroidally compactified
M-theory prescribes a Lorentzian structure in the moduli space which
precisely mirrors that found in the low energy effective Einstein
action.
We argued that a similar structure will be found for moduli
spaces of lower SUSY, although the precise details could not be worked
out because the moduli spaces and metrics on them generally receive
quantum corrections. As a consequence the mathematical structure of the
modular group is unknown. Nonetheless we were able to argue that it
will be a group of isometries of a Lorentzian manifold.
Thus, we argue that the generic mathematical structure discussed in
minisuperspace\footnote{A term which we have avoided up to this point
because it is confusing in a supersymmetric theory.}
approximations to quantum cosmology based on the low
energy field equations actually has an exact meaning in M-theory.
We note however that the detailed structure of the equations will be
different in M-theory, since the correct minisuperspace is a moduli
space of static, SUSY preserving solutions.
The Lorentzian structure prescribes a natural arrow of time, with a
general cosmological solution interpolating between a singular past
where the holographic principle is violated and a future described by
11D SUGRA or weakly coupled string theory where low energy effective
field theory is a good approximation to the gross features of the
universe. Note that it is {\it not} the naive arrow of time of any given
low
energy gravity theory, which points from small volume to large volume.
Many of the safe regions of moduli space are singular from the point of
view of any given low energy effective theory. We briefly described how
those singularities are avoided in the presence of matter.
We believe that the connections we have uncovered are important and
suggest that there are crucial things to be learned from cosmological
M-theory even if we are only interested in microphysics. We
realize that we have only made a small beginning in
understanding the import of these observations.
Finally, we want to emphasize that our identification of moduli spaces
of SUSY preserving static solutions of SUGRA (which perhaps deserve
a more exact, M-theoretical characterization) as the appropriate
arena for early universe cosmology, provides a new starting point for
investigating old cosmological puzzles. We hope to report some progress
in the investigation of these modular cosmologies in the near future.
\acknowledgments
We are grateful to Greg Moore and Ed Witten
for valuable discussions. The work of Tom
Banks and Lubo\v{s} Motl was supported in part by the DOE under grant
number DE-FG02-96ER40559. The work of Willy Fischler was supported in part
by the Robert Welch Foundation and the NSF under grant number PHY-9219345.
\section{Introduction}
The plane optical wavefronts of a distant background light source become rippled
when they cross a perturbation. For a distant perturbation,
the focusing of light at the observer changes with the curvature of
the ripples. This is the usual geometrical scintillation effect; it
accounts, {\it e.g.}, for the twinkling of stars seen through
atmospheric turbulence.
\par The question whether gravitational waves can cause the light emitted by a
distant source to scintillate is an old problem.
In general relativity, it is well known from the early works by
Zipoy (1966), Zipoy \& Bertotti (1968) and Bertotti \& Trevese (1972)
that gravitational waves have no
focusing property to the first order in their amplitude.
\par However, it has been recently pointed out by Faraoni (1996) that a
first-order scintillation effect can be expected in scalar-tensor theories of
gravity \footnote{ On these theories, see, {\it e.g.}, Will (1993) and
Damour \& Esposito-Far\`ese (1992), and references therein.}.
Furthermore, recent improvements in observational
techniques renew the interest in the search for gravitational scintillation
(Labeyrie 1993) and related effects (Fakir 1995).
\par The aim of the present work is to make a detailed analysis of the
scintillation effect in monoscalar-tensor theories for a monochromatic
electromagnetic wave propagating in a weak gravitational field.
We adopt the point of view that the physical metric is the metrical tensor
$g_{\mu \nu}$ to which matter is universally coupled. This basic assumption
defines the usual ``Jordan-Fierz'' frame. We find a scintillation
effect proportional to the value of the scalar field perturbation at the
observer.
\par Our result contrasts with the zero effect found by Faraoni \& Gunzig
(1998) by using the ``Einstein'' conformal frame, in which the original physical
metric $g_{\mu \nu}$ is replaced by a conformal one \footnote{A clear
distinction between the ``Einstein'' frame and the ``Jordan'' frame may be found,
{\it e.g.}, in Damour \& Nordtvedt (1993)}.
However, their negative conclusion is seemingly due to the fact that the
authors do not take into account the changes in areas and other physical
variables induced by the conformal transformation (Damour \&
Esposito-Far\`ese 1998).
\par The paper is organized as follows. In Sect.2, we give the notations and we
recall the fundamental definitions. In Sect.3, we construct the theory of
gravitational scintillation for a very distant light source emitting quasi
plane electromagnetic waves. Our calculations are valid for any metric theory
of gravity in the limit of the geometrical optics approximation. We obtain the
variation with respect to time of the photon flux received by a freely falling
observer as a sum of two contributions: a change in the scalar amplitude of the
electromagnetic waves, that we call a geometrical scintillation, and a change
in the spectral shift. We express each of these contributions in the form of an
integral over the light ray arriving to the observer. In Sect.4, we study the
scintillation within the linearized weak-field approximation. We show that the
geometrical scintillation is related to the Ricci tensor only. Thus we recover
as a parti\-cular case the conclusions previously drawn by Zipoy and Zipoy \&
Bertotti for gravitational waves in general relativity. Moreover, we show that
the contribution due to the change in the spectral shift is entirely determined
by the curvature tensor. In Sect.5, we apply the results of Sect.4 to the
scalar-tensor theories of gravity. We prove that these theories predict a
scintillation effect of the first order, proportional to the amplitude of the
scalar perturbation. Furthermore, we find that this effect has a local
character: it depends only on the value of the scalar field at the observer.
Finally, we briefly examine the possibility of observational tests in Sect.6.
\section {Notations and definitions}
The signature of the metric tensor $g_{\mu \nu}(x)$ is assumed to be
{\scriptsize $(+~-~-~-)$}. Indices are lowered with $g_{\mu \nu}$ and raised
with $g^{\mu \nu}$.
\par Greek letters run from 0 to 3. Latin letters are
used for spatial coordinates only: they run from 1 to 3.
A comma (,) denotes an ordinary partial
differentiation. A semi-colon (;) denotes a covariant partial differentiation
with respect to the metric; so $g_{\mu \nu; \rho} = 0$.
Note that for any function $F(x)$, $F_{;\alpha}~=~F_{, \alpha}$.
\par Any vector field $w^{\rho}$ satisfies the following identity
\begin{equation}
w^{\rho}_{\verb+ +;\mu ;\nu} - w^{\rho}_{\verb+ +;\nu ;\mu} = -
R^{\rho}_{. \sigma \mu \nu} w^{\sigma}
\end{equation}
where $R^{\rho}_{. \sigma \mu \nu}$ is the Riemann curvature
tensor (note that this identity may be regarded as defining the curvature
tensor). The Ricci tensor is defined by
\begin{equation}
R_{\mu \nu} = R^{\lambda}_{. \mu \lambda \nu}
\end{equation}
\par Given a quantity $P$, $\overline{P}$ denotes its complex conjugate.
\par The subscripts $em$ and $obs$ in formulae stand respectively for emitter
and observer.
\par The constant $c$ is the speed of light and $\hbar$ is the Planck constant
divided by $2 \pi$.
\section{General theory of the gravitational scintillation}
In a region of spacetime free of electric charge, the propagation equations for
the electromagnetic vector potential $A_{\mu}$ are ({\it e.g.}, Misner {\it et
al.} 1973)
\begin{equation}
A^{\mu ;\alpha}_{\verb+ +;\alpha} - R_{\verb+ +\alpha}^{\mu} A^{\alpha}
= 0
\end{equation}
when $A^{\mu}$ is chosen to obey the Lorentz gauge condition
\begin{equation}
A^{\mu}_{\verb+ +;\mu} = 0
\end{equation}
It is convenient here to treat $A_{\mu}$ as a complex vector.
Hence the electromagnetic field tensor $F_{\mu \nu}$ is given by
\begin{equation}
F_{\mu \nu} = {\cal{R}}e(A_{\nu;\mu}- A_{\mu;\nu})
\end{equation}
The corresponding electromagnetic energy-momentum tensor is defined by
\begin{equation}
T^{\mu \nu} = \frac{1}{4 \pi} \left[-F^{\mu \rho} F^{\nu}_{.\rho}
+ \frac{1}{4} F^{\alpha \beta} F_{\alpha \beta} g^{\mu \nu} \right]
\end{equation}
where $F_{. \rho}^{\nu} = g^{\nu \lambda}~F_{\lambda \rho}$. The components of
this tensor satisfy the conservation equations
$T^{\mu \nu}_{\verb+ +;\nu} = 0$ as a consequence of Eqs.(3).
\par For an observer located at the spacetime point $x$ and moving with the
unit 4-velocity $u^{\alpha}$, the density of electromagnetic energy flux is
given by the Poynting vector
\begin{equation}
{\cal{P}}^{\mu}(x,u) = c T^{\mu \nu}(x) u_{\nu}(x)
\end{equation}
and the density of electromagnetic energy as measured by the observer is
\begin{equation}
\mu_{el}(x,u) = T^{\mu \nu}(x) u_{\mu} u_{\nu}
\end{equation}
In this paper, we use the geometrical optics approximation. So we assume
that there exist wave solutions to Eqs.(3) which admit a development of the form
\begin{equation}
A^{\mu}(x,\xi) = [a^{\mu}(x) + O(\xi)] \exp(\frac{i}{\xi} \hat{S}(x))
\end{equation}
where $a^{\mu}(x)$ is a slowly varying complex vector amplitude, $\hat{S}(x)$
is a real function and $\xi$ a dimensionless parameter which tends to zero as
the typical wavelength of the wave becomes shorter and shorter. A solution like
(9) represents a quasi plane, locally monochromatic wave of high frequency
(Misner {\it et al.} 1973).
\par Let us define the phase $S$ and the vector field $k_{\alpha}$ by the
relations
\begin{equation}
S(x,\xi) = \frac{1}{\xi} \hat{S}(x)
\end{equation}
and
\begin{equation}
k_{\alpha} = S_{, \alpha}
\end{equation}
Inserting (9) into Eqs.(3) and (4), then retaining only the leading terms of
order $\xi^{-2}$ and $\xi^{-1}$, yields the fundamental
equations of geometrical optics
\begin{equation}
k^{\alpha} k_{\alpha} = 0
\end{equation}
\begin{equation}
k^{\alpha}a^{\mu}_{; \alpha} = -\frac{1}{2} a^{\mu} k^{\alpha}_{; \alpha}
\end{equation}
with the gauge condition
\begin{equation}
k_{\alpha} a^{\alpha} = 0
\end{equation}
\par Light rays are defined to be the curves whose tangent vector field is
$k^{\alpha}$. So the parametric equations $x^{\alpha} = x^{\alpha}(v)$ of the
light rays are solutions to the differential equations
\begin{equation}
\frac{dx^{\alpha}}{dv} = k^{\alpha}(x^{\lambda}(v))
\end{equation}
where $v$ is an affine parameter. Differentiating Eq.(12) and noting that
\begin{equation}
k_{\alpha;\beta} =k_{\beta ;\alpha}
\end{equation}
follows from (11), it is easily seen that $k^{\alpha}$ satisfies the
propagation equations
\begin{equation}
k^{\alpha}k_{\beta;\alpha}=0
\end{equation}
These equations, together with (12), show that the light rays are null
geodesics.
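\par Equations (12) and (17) are easy to verify symbolically in the simplest situation. The sketch below (our flat-space illustration, with $c=1$, signature $(+,-,-,-)$ and the outgoing spherical phase $S = t - r$, which is our choice of example rather than anything taken from the text) checks that $k_\alpha = S_{,\alpha}$ is null and satisfies the geodesic equation:

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z', real=True)
coords = (t, x, y, z)
eta_inv = sp.diag(1, -1, -1, -1)          # inverse metric, signature (+,-,-,-)
S = t - sp.sqrt(x**2 + y**2 + z**2)       # outgoing spherical phase, c = 1

k_dn = sp.Matrix([sp.diff(S, c) for c in coords])   # k_alpha = S_{,alpha}, Eq.(11)
k_up = eta_inv * k_dn

# Eq.(12): the wave vector is null.
null = sp.simplify((k_up.T * k_dn)[0])
assert null == 0

# Eq.(17): k^alpha k_{beta;alpha} = 0 (covariant = partial in flat space),
# i.e. the rays are null geodesics.
for b in range(4):
    assert sp.simplify(sum(k_up[a] * sp.diff(k_dn[b], coords[a])
                           for a in range(4))) == 0
print("k is null and satisfies the geodesic equation")
```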
\par Inserting (9) into (5) and (6) gives the approximate expression for
$F_{\mu \nu}$
\begin{equation}
F_{\mu \nu} = {\cal{R}}e[i(k_{\mu} a_{\nu} - k_{\nu} a_{\mu}) e^{iS}]
\end{equation}
and for $T^{\mu \nu}$ averaged over a period
\begin{equation}
T^{\mu \nu} = \frac{1}{8 \pi} a^2 k^{\mu} k^{\nu}
\end{equation}
where $a$ is the scalar amplitude defined by
\footnote{We introduce a minus sign in (20) because Eqs.(12) and (14)
imply that $a^{\mu}$ is a space-like vector when the electromagnetic field is
not a pure gauge field.}
\begin{equation}
a = (-a^{\mu}\overline{a}_{\mu})^{1/2}
\end{equation}
\par From (7) and (19), it is easily seen that the Poynting vector is
proportional to the null tangent vector $k^{\mu}$.
This means that the energy of the wave is transported
along each ray with the speed of light. Let us denote by ${\cal{F}}(x,u)$ the
energy flux received by an observer located at $x$ and moving with the
4-velocity $u^{\alpha}$: by definition, ${\cal{F}}(x,u)$ is the amount of
radiating energy flowing per unit proper time across a unit surface orthogonal
to the direction of propagation. It follows from (8) and (19) that
\begin{equation}
{\cal{F}}(x,u) = c \mu_{el} (x,u) = \frac{c}{8 \pi} a^2(x)
(u^{\mu}k_{\mu})^2_{obs}
\end{equation}
\par This formula enables us to determine the photon flux ${\cal{N}}(x,u)$
received by the observer located at $x$ and moving with the 4-velocity
$u^{\alpha}$. Since the 4-momentum of a
photon is $p^{\mu} = \hbar k^{\mu}$, the energy of the photon as measured by the
observer is $cp^{\mu} u_{\mu} = c\hbar (u^{\mu} k_{\mu})$. We have therefore
\begin{equation}
{\cal{N}}(x,u)=\frac{1}{8 \pi \hbar} a^2(x) (u^{\mu} k_{\mu})_{obs}
\end{equation}
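\par Since $c\hbar(u^{\mu}k_{\mu})$ is the photon energy measured by the observer, Eq.(22) simply divides the energy flux by $hc/\lambda$. A numerical illustration (the source parameters below are ours, not taken from the text):

```python
import math

hbar = 1.054571817e-34     # J s
c = 2.99792458e8           # m / s

# Eq.(22): N = F / (c hbar (u.k)), i.e. the photon flux is the energy flux
# divided by the measured photon energy  c hbar (u.k) = h c / lambda.
# Illustrative numbers: a 550 nm source with F = 1e-9 W / m^2.
lam = 550e-9
E_photon = 2 * math.pi * hbar * c / lam      # h c / lambda, in joules
F = 1e-9
N = F / E_photon
assert abs(E_photon - 3.61e-19) < 0.01e-19
print(f"photon energy ~ {E_photon:.3e} J, flux N ~ {N:.3e} photons / (m^2 s)")
```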
The spectral shift $z$ of a light source (emitter) as measured by an observer
is given by ({\it e.g.} G.F.R. Ellis, 1971)
\begin{equation}
1+z = \frac{(u^{\mu} k_{\mu})_{em}}{(u^{\nu} k_{\nu})_{obs}}
\end{equation}
Consequently, the photon flux ${\cal{N}}(x,u)$ may be written as
\begin{equation}
{\cal{N}}(x,u)=\frac{1}{8 \pi \hbar} a^2(x)
\frac{( u^{\mu} k_{\mu})_{em}}{1+z}
\end{equation}
\par The scalar amplitude $a$ can be written in the form of an integral along
the light ray $\gamma$ joining the source to the observer located at $x$.
Multiplying Eq.(13) by $\overline{a}_{\mu}$ yields the propagation equation
for $a$
\begin{equation}
k^{\alpha} a_{; \alpha} \equiv \frac{da}{dv} = -\frac{1}{2} a k^{\alpha}_{;
\alpha}
\end{equation}
where $d/dv$ denotes the total differentiation of a scalar function along
$\gamma$. Then, integrating (25) gives
\begin{equation}
a_{|obs} = a_{|x_0} \exp \left( -\frac{1}{2} \int \limits_{v_{x_0}}^{v_{obs}}
k^{\alpha}_{; \alpha} \verb+ +dv \right)
\end{equation}
where $x_0$ is an arbitrary point on the light ray $\gamma$.
\par In the following, we consider that the light source is at spatial
infinity. We suppose the existence of coordinate systems $x^{\alpha}$ such
that on any
hypersurface $x^0 = const.$, $|g_{\mu \nu}~-~\eta_{\mu \nu}|~=~O(1/r)$ when
$r~=~[\sum_{i=1}^3~(x^i)^2]^{1/2}~\rightarrow~\infty$, with
$\eta_{\mu \nu}~=~diag(1,-1,-1,-1)$.
We require that in such coordinate systems the quantities $k_{\alpha ; \beta}$,
$k_{\alpha ; \beta ; \gamma}$ and $a_{;\alpha}$ respectively fulfill the
asymptotic conditions
\begin{equation}
\left\lbrace
\begin{array}{ccc}
k_{\alpha; \beta} (x_0) & = & O( 1/|v_{x_0}|^{1+p}) \\
& & \\
k_{\alpha; \beta ; \gamma} (x_0) & = & O(1/|v_{x_0}|^{2+p}) \\
& & \\
a_{;\alpha} (x_0) & = & O(1/|v_{x_0}|^{1+p})
\end{array}
\right.
\end{equation}
when $v_{x_0} \rightarrow - \infty$, with $p > 0$.
Moreover, we assume that the scalar amplitude
$a_{|x_0}$ in Eq.(26) remains bounded when $v_{x_0} \rightarrow - \infty$ and
we put
\begin{equation}
\lim_{v_{x_0} \rightarrow - \infty} a_{|x_0} = a_0
\end{equation}
It results from these assumptions that $a_{|obs}$ may be written as
\begin{equation}
a_{|obs} = a_{0} \exp \left( -\frac{1}{2} \int \limits_{-\infty}^{v_{obs}}
k^{\alpha}_{; \alpha} \verb+ +dv \right)
\end{equation}
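\par Eq.(29) can be illustrated on the flat-space outgoing spherical wave (our example): there the expansion is $k^{\alpha}_{;\alpha} = 2/r$ and the radius itself may serve as affine parameter, so the exponential factor reproduces the familiar inverse-distance law $a \propto 1/r$:

```python
import sympy as sp

v, r0, V = sp.symbols('v r0 V', positive=True)

# Along an outgoing ray of the flat-space spherical wave S = t - r, one may
# take r(v) = r0 + v as the affine parametrization; the expansion is
# theta = k^alpha_{;alpha} = 2/r.
theta = 2 / (r0 + v)
attenuation = sp.exp(-sp.Rational(1, 2) * sp.integrate(theta, (v, 0, V)))

# Eq.(29) then gives  a(obs) = a0 * r0 / (r0 + V),  i.e. a ~ 1/r.
assert abs(float(attenuation.subs({r0: 3, V: 5})) - sp.Rational(3, 8)) < 1e-12
print("a falls off as 1/r for the spherical wave")
```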
\par Now, let us differentiate $k^{\alpha}_{;\alpha}$ with respect to $v$
along $\gamma$. Applying (1) and (2), then taking (16) and (17) into
account, we obtain the relation (Sachs 1961)
\begin{equation}
\frac{d}{dv}(k^{\alpha}_{;\alpha}) = - k^{\alpha;\beta} k_{\alpha;\beta} -
R_{\alpha \beta} k^{\alpha} k^{\beta}
\end{equation}
As a consequence, we can write
\begin{equation}
\int \limits_{-\infty}^{v_{obs}} k^{\alpha}_{; \alpha} \verb+ +dv
= - \int \limits_{-\infty}^{v_{obs}} dv \int \limits_{-\infty}^{v}
[R_{\alpha \beta}(x^{\lambda} (v')) k^{\alpha} k^{\beta} +
k^{\alpha;\beta} k_{\alpha;\beta}] dv'
\end{equation}
The convergence of the integrals is ensured by conditions (27).
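\par Relation (30), and hence the integrand of (31), can be verified symbolically for the flat-space spherical wave $S = t - r$ (our illustration; there $R_{\alpha\beta} = 0$, so the focusing is driven entirely by the $k^{\alpha;\beta}k_{\alpha;\beta}$ term):

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z', real=True)
coords = (t, x, y, z)
eta_inv = sp.diag(1, -1, -1, -1)          # inverse Minkowski metric
S = t - sp.sqrt(x**2 + y**2 + z**2)       # outgoing spherical phase, c = 1
k_dn = sp.Matrix([sp.diff(S, c) for c in coords])
k_up = eta_inv * k_dn

grad = sp.Matrix(4, 4, lambda a, b: sp.diff(k_dn[a], coords[b]))   # k_{a;b}
theta = sum(sp.diff(k_up[a], coords[a]) for a in range(4))         # k^a_{;a}
shear2 = sum(eta_inv[a, m] * eta_inv[b, n] * grad[m, n] * grad[a, b]
             for a in range(4) for b in range(4)
             for m in range(4) for n in range(4))                  # k^{a;b} k_{a;b}
dtheta = sum(k_up[a] * sp.diff(theta, coords[a]) for a in range(4))  # d theta/dv

pt = {t: 0, x: 1, y: 2, z: 2}             # a sample point with r = 3
assert sp.simplify(theta.subs(pt)) == sp.Rational(2, 3)    # theta = 2/r
assert sp.simplify(shear2.subs(pt)) == sp.Rational(2, 9)   # k^{a;b}k_{a;b} = 2/r^2
assert sp.simplify((dtheta + shear2).subs(pt)) == 0        # Eq.(30) with R_{ab} = 0
print("relation (30) verified for the spherical wave")
```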
\par Equations (29) and (31) allow us to determine the factor $a^2(x)$ in
${\cal{N}}(x,u)$ from the energy content of the regions crossed by the light
rays and from the geometry of the rays themselves.
\par It is well known that $1/(1+z)$ (or $(1+z)$) can also be obtained in the
form of an integral along the light ray $\gamma$ (see {\it e.g.} Ellis 1971 or
Schneider {\it et al.} 1992). However, the corresponding formula will not be
useful for our discussion and we will not develop it here.
\par In fact, the scintillation phenomenon consists in a variation of
${\cal{N}}$ with respect to time. For this reason, it is more convenient to
calculate the total derivative of ${\cal{N}}$ along the world-line
${\cal{C}}_{obs}$ of a given observer, moving at the point $x$ with the
4-velocity $u^{\alpha}$.
\par Given a scalar or tensorial quantity $F$, we denote by $\dot{F}$ the total
covariant differentiation along ${\cal{C}}_{obs}$ defined by
\begin{equation}
\dot{F} \equiv u^{\lambda} F_{;\lambda} = \frac{\nabla F}{ds}
\end{equation}
where $ds = (g_{\mu \nu} dx^{\mu} dx^{\nu})^{1/2}$ is the line element between
two events $x^{\mu}$ and $x^{\mu}+dx^{\mu}$ on ${\cal{C}}_{obs}$.
\par In Eq.(24), the quantity $c\hbar(u^{\mu}k_{\mu})_{em}$ is the energy of a photon
emitted by an
atom of the light source as measured by an observer comoving with this atom. So
$(u^{\mu}k_{\mu})_{em}$ is a constant which depends only on the nature of
the atom (this constant characterizes the emitted spectral line). Consequently,
the change in the photon flux with respect to time is simply due to the change
in the scalar amplitude $a$ and to the change in the spectral shift $z$.
From (24), we obtain at each point $x$ of ${\cal{C}}_{obs}$
\begin{equation}
\frac{\dot{{\cal{N}}}}{{\cal{N}}} = 2 \frac{\dot{a}}{a} + (1+z) \frac{d}{ds}
\left( \frac{1}{1+z} \right)
\end{equation}
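\par Eq.(33) is just the logarithmic derivative of Eq.(24), with $(u^{\mu}k_{\mu})_{em}$ held constant along ${\cal{C}}_{obs}$; a one-line symbolic check (ours):

```python
import sympy as sp

s = sp.Symbol('s', real=True)
a = sp.Function('a', positive=True)(s)    # scalar amplitude along C_obs
z = sp.Function('z')(s)                   # spectral shift along C_obs
C = sp.Symbol('C', positive=True)         # (u.k)_em / (8 pi hbar), a constant

N = C * a**2 / (1 + z)                    # Eq.(24)
lhs = sp.diff(N, s) / N
rhs = 2 * sp.diff(a, s) / a + (1 + z) * sp.diff(1 / (1 + z), s)
assert sp.simplify(lhs - rhs) == 0        # Eq.(33)
print("Eq.(33) is the logarithmic derivative of Eq.(24)")
```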
\par Henceforth, we shall call the contribution $2\dot{a}/a$ in Eq.(33) the
geometrical scintillation because the variations in $a$ are related to the
focusing properties of light rays by gravitational fields
(see G.F.R.Ellis 1971 and references therein; see also Misner {\it et al.}
1973).
\par Let us now try to find expressions for $\dot{a}/a$ and $\frac{d}{ds}
(1+z)^{-1}$ in the
form of integrals along $\gamma$. In what follows, we assume that at each of
its points $x(v)$ the ray $\gamma$ meets a vector field $v^{\mu}$ which
satisfies the boundary condition
\begin{equation}
v^{\mu}(x_{obs}) = u^{\mu}_{obs}
\end{equation}
Let us emphasize that $v^{\mu}$ can be chosen arbitrarily at any point $x$
which does not belong to the world line ${\cal{C}}_{obs}$ (for example,
$v^{\mu}(x)$ could be the unit 4-velocity of an observer at $x$, an
assumption which is currently made in cosmology; however we shall make a more
convenient choice for $v^{\mu}$ in what follows).
\par It results from the boundary conditions (27) and (34) that $\dot{a}/a$ may
be written as
\begin{equation}
\left.\frac{\dot{a}}{a}\right|_{obs} = \int \limits_{-\infty}^{v_{obs}} \frac{d}{dv}
[v^{\mu}(\ln a)_{;\mu}] dv
\end{equation}
\par Thus we have to transform the expression
\begin{equation}
\frac{d}{dv} [v^{\mu}(\ln a)_{;\mu}] = k^{\alpha}( v^{\mu}
(\ln a)_{;\mu})_{;\alpha}
\end{equation}
taken along $\gamma$. Of course, we must take into account the propa\-gation
equation (25) which could be rewritten as
\begin{equation}
k^{\alpha}(\ln a)_{;\alpha} = - \frac{1}{2} k^{\alpha}_{;\alpha}
\end{equation}
Noting that
\begin{equation}
k^{\alpha}( v^{\mu} (\ln a)_{;\mu})_{;\alpha} = k^{\alpha} v^{\mu}
(\ln a)_{;\mu ;\alpha} + k^{\alpha} v^{\mu}_{;\alpha} (\ln a)_{;\mu}
\end{equation}
then using the relation
\begin{equation}
F_{;\alpha;\beta} = F_{;\beta;\alpha}
\end{equation}
which holds for any scalar $F$, we find
\begin{equation}
\frac{d}{dv} [v^{\mu}(\ln a)_{;\mu}] = v^{\mu} [k^{\alpha} (\ln
a)_{;\alpha}]_{;\mu} + [k,v]^{\mu} (\ln a)_{;\mu}
\end{equation}
where the bracket $[k,v]$ of $k^{\alpha}$ and $v^{\beta}$ is the vector defined
by
\begin{equation}
[k,v]^{\mu} \equiv k^{\alpha} v^{\mu}_{;\alpha} - v^{\alpha} k^{\mu}_{;\alpha}
\end{equation}
Taking (37) into account, it is easily seen that
\begin{equation}
\frac{d}{dv} [v^{\mu}(\ln a)_{;\mu}] = -\frac{1}{2} v^{\mu}
k^{\alpha}_{;\alpha ;\mu} + [k,v]^{\mu} (\ln a)_{;\mu}
\end{equation}
\par Now, using the identity (1) and the definition (2) yields
\begin{equation}
\frac{d}{dv} [v^{\mu}(\ln a)_{;\mu}] = \frac{1}{2} R_{\mu \nu} k^{\mu} v^{\nu}
- \frac{1}{2} v^{\mu} k^{\alpha}_{;\mu ;\alpha} +[k,v]^{\mu} (\ln a)_{;\mu}
\end{equation}
\par Let us try to write the term
$- \frac{1}{2} v^{\mu} k^{\alpha}_{;\mu ;\alpha}$ in the form of an integral
along $\gamma$. In agreement with (27), we have at any point
$x(v)$ of $\gamma$:
\begin{equation}
v^{\mu} k^{\alpha}_{;\mu ;\alpha} = \int \limits_{-\infty}^{v}
\frac{d}{dv} (v^{\mu} k^{\alpha}_{;\mu ;\alpha}) dv = \int
\limits_{-\infty}^{v} k^{\lambda}
(v^{\mu}k^{\alpha}_{;\mu ;\alpha})_{;\lambda}dv
\end{equation}
\par A tedious but straightforward calculation using (1), (2) and (17) leads
to the following result
\begin{eqnarray}
\lefteqn{- \frac{d}{dv}(v^{\mu} k^{\alpha}_{;\mu ;\alpha}) =
(R_{\rho \sigma ; \mu}-R_{\mu \rho; \sigma}) v^{\mu} k^{\rho} k^{\sigma}} \nonumber \\
& & \verb+ + + R_{\rho \sigma} k^{\rho} v^{\mu} k^{\sigma}_{;\mu} \\
& & \verb+ + + v^{\mu} (k^{\alpha; \beta} k_{\alpha ; \beta})_{;\mu}
- [k,v]^{\mu} k^{\alpha}_{;\mu;\alpha} \nonumber
\end{eqnarray}
\par In the above formulae $v^{\mu}$ is an arbitrary vector. So we can
choose $v^{\mu}$ so that the transport equations
\footnote{These equations mean that $v^{\mu} = \alpha \eta^{\mu}$, where
$\alpha=const.$ and $\eta^{\mu}$ is a connection vector of the
system of light rays associated with the phase function $S$ (see, {\it e.g.},
Schneider {\it et al.} 1992).}
\begin{equation}
[k,v]^{\mu} = 0
\end{equation}
are satisfied along the ray $\gamma$. Since (46) is a system of first order
partial differential equations in $v^{\mu}$, there exists one and only one
solution satisfying the boundary conditions (34). With this choice, $2
\dot{a}/a$ is given by the integral formula:
\begin{eqnarray}
\lefteqn{\left. 2\frac{\dot{a}}{a} \right|_{obs} = \int
\limits_{-\infty}^{v_{obs}}
R_{\mu \nu} k^{\mu} v^{\nu} dv + \int \limits_{-\infty}^{v_{obs}} dv
\int \limits_{-\infty}^{v}
[(R_{\rho \sigma ; \mu} - R_{\mu \rho ; \sigma})
v^{\mu} k^{\rho} k^{\sigma} } \nonumber \\
& & \verb+ + + R_{\rho \sigma} k^{\rho} v^{\mu} k^{\sigma}_{;\mu} \\
\nonumber \\
& & \verb+ + + v^{\mu} (k^{\alpha ; \beta} k_{\alpha ; \beta})_{;\mu}]dv'
\nonumber
\end{eqnarray}
\par Now we look for an integral form for the total derivative $\frac{d}{ds}
(1+z)^{-1}$ along ${\cal{C}}_{obs}$.
Henceforth, we suppose for the sake of simplicity that
the observer is freely falling, {\it i.e.} that ${\cal{C}}_{obs}$ is a timelike
geodesic. So we have
\begin{equation}
\dot{u}^{\alpha} = u^{\lambda} u^{\alpha}_{;\lambda} = 0
\end{equation}
\par Since $(u^{\mu} k_{\mu})_{em}$ is a constant characterizing the observed
spectral line (see above), it follows from (23) and (48) that
\begin{equation}
\frac{d}{ds}\left( \frac{1}{1+z} \right)_{obs} = \frac{1}
{(u^{\alpha}k_{\alpha})_{em}} (u^{\mu} u^{\nu} k_{\mu;\nu})_{obs}
\end{equation}
\par Given an arbitrary vector field $v^{\mu}$ fulfilling the boundary
condition (34), Eq.(49) may be written as
\begin{equation}
\frac{d}{ds}\left( \frac{1}{1+z} \right)_{obs} =
\frac{1}{(u^{\alpha}k_{\alpha})_{em}} \int \limits_{-\infty}^{v_{obs}}
k^{\lambda}(v^{\mu} v^{\nu} k_{\mu;\nu})_{;\lambda} dv
\end{equation}
\par Using (1), (17) and (41), a straightforward calculation gives the general
formula
\begin{eqnarray}
\lefteqn{\frac{d}{ds}\left( \frac{1}{1+z} \right)_{obs} =
\frac{1}{(u^{\alpha}k_{\alpha})_{em}}
\int \limits_{-\infty}^{v_{obs}} \{-R_{\mu \rho \nu \sigma} v^{\mu} v^{\nu}
k^{\rho} k^{\sigma}} \nonumber \\
\\
& & \verb+ + +(k^{\lambda} v^{\mu}_{;\lambda})(v^{\nu} k_{\mu;\nu}) +
v^{\mu}[k,v]^{\nu} k_{\mu;\nu} \} dv \nonumber
\end{eqnarray}
which holds for any freely falling observer.
\par Now let us choose for $v^{\mu}$ the vector field defined by (46) and (34).
We obtain
\begin{eqnarray}
\lefteqn{(1+z)\frac{d}{ds} \left( \frac{1}{1+z} \right)_{obs} =
\frac{1}{(u^{\lambda}k_{\lambda})_{obs}}
\int \limits_{-\infty}^{v_{obs}} [-R_{\mu \rho \nu \sigma} v^{\mu} v^{\nu}
k^{\rho} k^{\sigma}} \nonumber \\
\\
& &\verb+ + + v^{\mu} v^{\nu} k^{\alpha}_{;\mu} k_{\alpha;\nu}] dv
\nonumber
\end{eqnarray}
\section{Weak-field approximation}
Now we assume the gravitational field to be very weak. So we put
\begin{equation}
g_{\mu \nu} = \eta_{\mu \nu} + h_{\mu \nu}
\end{equation}
where $h_{\mu \nu}$ is a small perturbation of the flat spacetime
metric $\eta_{\mu \nu}$, and we systematically discard the
terms of order $h^2,h^3,...$ in the following calculations. Thereafter, we
suppose that any quantity $T$ (scalar or tensor) may be written as
\begin{equation}
T = T^{(0)} + T^{(1)} + O(h^2)
\end{equation}
where $T^{(0)}$ is the unperturbed quantity in flat spacetime and $T^{(1)}$
denotes the perturbation of first-order with respect to $h_{\mu \nu}$.
Henceforth, indices will be lowered with $\eta_{\mu \nu}$ and
raised with $\eta^{\mu \nu}=\eta_{\mu \nu}$.
\par We shall put for the sake of simplicity
\begin{equation}
K_{\mu} = k^{(0)}_{\mu} = S^{(0)}_{,\mu}
\end{equation}
\par Neglecting the first order terms in $h$, Eq.(12) gives
$K^{\alpha}K_{\alpha} = 0$, whereas Eq.(17) reduces to the equation of a null
geodesic in flat spacetime related to Cartesian coordinates
\begin{equation}
K^{\alpha} K_{\beta, \alpha} = 0
\end{equation}
\par In agreement with the assumptions made in Sect.3 to obtain Eqs.(29) and
(31), we
consider that at the zeroth order in $h_{\mu \nu}$, the light emitted by the
source is described by a plane monochromatic wave in a flat spacetime. So we
suppose that the quantities $K_{\mu}$, $a^{(0) \mu}$ and consequently $a^{(0)}$
are constants throughout the domain of propagation.
\par Moreover, we regard as negligible all the perturbations of gravitational
origin in the vicinity of the emitter (this hypo\-thesis is natural for a source
at spatial infinity) and the quantity $a_0$ in Eqs.(28) and (29) is given
consequently by
\begin{equation}
a_0 = a^{(0)} = const.
\end{equation}
Furthermore, it follows from $K_{\mu}=const.$ that $k_{\alpha;\beta}~=~O(h)$.
Therefore, terms like $k^{\alpha}_{;\mu}k_{\alpha;\nu}$ or
$R_{\rho \sigma}k^{\rho} v^{\mu} k^{\sigma}_{;\mu}$ are of second order and
can be systematically disregarded.
\par According to our general assumption in this section, the unit 4-velocity
of the observer may be expanded as
\begin{equation}
u^{\alpha}_{obs} = U^{\alpha} + u^{(1)\alpha}_{obs} + O(h^2)
\end{equation}
at any point of ${\cal{C}}_{obs}$, with the definition
\begin{equation}
U^{\alpha} = u^{(0)\alpha}_{obs}
\end{equation}
It follows from (48) and from $g_{\alpha \beta} u^{\alpha} u^{\beta} = 1$ that
\begin{equation}
U^{\alpha} = const.
\end{equation}
and
\begin{equation}
\eta_{\alpha \beta} U^{\alpha} U^{\beta} = 1
\end{equation}
\par From these last equations, we recover the fact that the unperturbed motion
of a freely falling observer is a time-like straight line in Minkowski
space-time.
\par We now need to determine the quantities $v^{\mu}$ occurring in Eqs.(47) and (52)
at the lowest order. An elementary calculation shows that, in
Eqs.(46), the covariant differentiation may be replaced by the ordinary
differentiation. So we have to solve the system
\begin{equation}
\frac{dv^{\alpha}}{dv} = v^{\mu} k^{\alpha}_{,\mu}
\end{equation}
together with the boundary conditions (34).
\par Assuming the expansion
\begin{equation}
v^{\mu} = v^{(0)\mu} + v^{(1)\mu} + O(h^2)
\end{equation}
it is easily seen that the unique solution of (62) and (34) is such that at any
point of the light ray $\gamma$, the components $v^{(0)\mu}$ are constants
given by
\begin{equation}
v^{(0)\mu} = U^{\mu}
\end{equation}
\par Neglecting all the second order terms in (47) and (52), we finally obtain
\begin{eqnarray}
\lefteqn{\left. 2\frac{\dot{a}}{a} \right|_{obs} =
\int \limits_{-\infty}^{v_{obs}}
R^{(1)}_{\mu \nu} K^{\mu} U^{\nu} dv} \nonumber \\
\\
& & \verb+ + + \int \limits_{-\infty}^{v_{obs}} dv \int \limits_{-\infty}^{v}
(R^{(1)}_{\rho \sigma , \mu} - R^{(1)}_{\mu \rho , \sigma})
U^{\mu} K^{\rho} K^{\sigma} dv' \nonumber
\end{eqnarray}
and
\begin{equation}
(1+z)\frac{d}{ds}\left( \frac{1}{1+z} \right)_{obs} =
-\frac{1}{U^{\lambda}K_{\lambda}} \int \limits_{-\infty}^{v_{obs}}
R^{(1)}_{\mu \rho \nu \sigma} U^{\mu} U^{\nu} K^{\rho} K^{\sigma} dv
\end{equation}
all the integrations being performed along the unperturbed path of light.
\par In Eq.(66) $R^{(1)}_{\mu \rho \nu \sigma}$ denotes the linearized curvature
tensor of the metric $g_{\mu \nu} = \eta_{\mu \nu} + h_{\mu \nu}$, {\it i.e.}
\begin{equation}
R^{(1)}_{\mu \rho \nu \sigma} = - \frac{1}{2}(h_{\mu \nu,\rho \sigma} +h_{\rho
\sigma, \mu \nu} - h_{\mu \sigma , \nu \rho} - h_{\nu \rho , \mu \sigma})
\end{equation}
and $R^{(1)}_{\mu \nu}$ is the corresponding linearized Ricci tensor
\begin{equation}
R^{(1)}_{\mu \nu} = \eta^{\alpha \beta} R^{(1)}_{\alpha \mu \beta \nu}
\end{equation}
\par It is worth noting that the components $R^{(1)}_{\mu \rho \nu \sigma}$
and $R^{(1)}_{\mu \nu}$ are gauge-invariant quantities. Indeed, under an
arbitrary infinitesimal coordinate transformation $x^{\alpha} \rightarrow
x'^{\alpha} = x^{\alpha}+ \xi^{\alpha}(x)$, $h_{\mu \nu}(x)$
transforms into
$h'_{\mu \nu}(x) = h_{\mu \nu}(x) -\xi_{\mu, \nu} - \xi_{\nu,
\mu}$, and it is easily checked from (67) and (68) that
\begin{equation}
R^{(1)}_{\mu \rho \nu \sigma}(h'_{\alpha \beta}) =
R^{(1)}_{\mu \rho \nu \sigma}(h_{\alpha \beta})
\end{equation}
\begin{equation}
R^{(1)}_{\mu \nu}(h'_{\alpha \beta}) =
R^{(1)}_{\mu \nu}(h_{\alpha \beta})
\end{equation}
This feature ensures that the right-hand sides of Eqs. (65) and (66) are
gauge-invariant quantities.
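As an aside, the gauge invariance (69)-(70) can be verified mechanically from definition (67). The sketch below (using the sympy computer algebra system; the component functions are arbitrary placeholders, not quantities from this paper) checks that every component of $R^{(1)}_{\mu \rho \nu \sigma}$ is unchanged under $h_{\mu \nu} \rightarrow h_{\mu \nu} - \xi_{\mu,\nu} - \xi_{\nu,\mu}$:

```python
import sympy as sp

# Coordinates, a generic symmetric perturbation h_{mn}, and a gauge vector xi_m,
# all represented by arbitrary smooth functions of x^0..x^3 (placeholders).
x = sp.symbols('x0:4')
h = [[sp.Function('h%d%d' % (min(m, n), max(m, n)))(*x) for n in range(4)]
     for m in range(4)]
xi = [sp.Function('xi%d' % m)(*x) for m in range(4)]

def d2(f, a, b):
    return sp.diff(f, x[a], x[b])

def R1(h, m, r, n, s):
    # Linearized curvature tensor, Eq. (67)
    return -sp.Rational(1, 2) * (d2(h[m][n], r, s) + d2(h[r][s], m, n)
                                 - d2(h[m][s], n, r) - d2(h[n][r], m, s))

# Gauge-transformed perturbation h'_{mn} = h_{mn} - xi_{m,n} - xi_{n,m}
hp = [[h[m][n] - sp.diff(xi[m], x[n]) - sp.diff(xi[n], x[m]) for n in range(4)]
      for m in range(4)]

# Every component of R1 is unchanged: the third derivatives of xi cancel
# pairwise because partial derivatives commute.
assert all(sp.expand(R1(hp, m, r, n, s) - R1(h, m, r, n, s)) == 0
           for m in range(4) for r in range(4) for n in range(4) for s in range(4))
```

The cancellation rests entirely on the commutativity of partial derivatives, which is why the invariance is exact at first order.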
\par Equation (65) reveals that the first order geometrical scintillation
effect depends upon the gravitational field through the Ricci tensor only.
On the other hand, it follows from (66) that the part of the scintillation due
to the spectral shift depends upon the curvature tensor.
\par These properties have remarkable consequences in general relativity.
Suppose that the light ray $\gamma$ travels
in regions entirely free of matter. Since the linearized Einstein equations are
in a vacuum
\begin{equation}
R^{(1)}_{\mu \nu} = 0
\end{equation}
it follows from Eq.(65) that
\begin{equation}
2\frac{\dot{a}}{a} = 0 + O(h^2)
\end{equation}
As a consequence, $\dot{{\cal{N}}}/{\cal{N}}$ reduces to the contribution of
the change in the spectral shift
\begin{equation}
\frac{\dot{{\cal{N}}}}{{\cal{N}}} =
-\frac{1}{U^{\lambda}K_{\lambda}} \int \limits_{-\infty}^{v_{obs}}
R^{(1)}_{\mu \rho \nu \sigma} U^{\mu} U^{\nu} K^{\rho} K^{\sigma} dv
\end{equation}
\par From (72), we recover the conclusion previously drawn by Zipoy (1966) and
Zipoy \& Bertotti (1968): within general rela\-tivity, gravitational waves
produce no first order geometrical scintillation.
\section{Application to the scalar-tensor theories}
The general theory developed in the above sections is valid for any metric
theory of gravity. Let us now examine the implications of Eqs.(65) and (66)
within the scalar-tensor theories of gravity.
\par The class of theories that we consider here is described by the action
\footnote{For details see Will (1993) and
references therein. The factor $-(16 \pi c)^{-1}$ in the gravitational action
is due to the fact that we use the definition of the energy-momentum tensor
given in Landau \& Lifshitz (1975).}
\begin{eqnarray}
\lefteqn{
{\cal{J}} = -\frac{1}{16 \pi c} \int d^4x \sqrt{|g|} \left[\Phi R -
\frac{\omega(\Phi)}{\Phi} \Phi^{,\alpha} \Phi_{,\alpha} \right ]} \nonumber \\
\\
& & \verb+ + + {\cal{J}}_{m}(g_{\mu \nu},\psi_m) \nonumber
\end{eqnarray}
where $R$ is the Ricci scalar curvature ($R=g^{\mu \nu}R_{\mu \nu}$), $\Phi$ is
the scalar gravitational field, $g$ is the determinant of the metric components
$g_{\mu \nu}$, $\omega(\Phi)$ is an arbitrary function of the scalar field $\Phi$, and
${\cal{J}}_{m}$ is the matter action. We assume that ${\cal{J}}_{m}$ is a
functional of the metric and of the matter fields $\psi_m$ only. This means
that ${\cal{J}}_{m}$ does not depend explicitly upon the scalar field $\Phi$
(it is the assumption of universal coupling between matter and metric).
\par We consider here the weak-field approximation only. So we assume that the
scalar field $\Phi$ is of the form
\begin{equation}
\Phi = \Phi_0 + \phi
\end{equation}
where $\phi$ is a first order perturbation of an averaged constant value
$\Phi_0$. Consequently, the field equations deduced from
(74) reduce to the following system:
\begin{equation}
R^{(1)}_{\mu \nu} = 8 \pi \Phi^{-1}_0 (T^{(0)}_{\mu \nu} -\frac{1}{2}
T^{(0)}\eta_{\mu
\nu}) + \Phi_0^{-1} (\phi_{,\mu \nu} + \frac{1}{2} \Box \phi \eta_{\mu
\nu})
\end{equation}
\begin{equation}
\Box \phi = \frac{8 \pi}{2\omega (\Phi_0) + 3} T^{(0)}
\end{equation}
where $T^{(0)}_{\mu \nu}$ is the energy-momentum tensor of the matter fields
$\psi_m$ at the lowest order, $T^{(0)}=\eta^{\alpha
\beta}T^{(0)}_{\alpha \beta}$ and $\Box$ denotes the
d'Alembertian operator on Minkowski spacetime: $\Box \phi = \eta^{\alpha
\beta} \phi_{,\alpha \beta}$.
\par It is easily seen that any solution $h_{\mu \nu}$ to the field equations
(76) is given by \footnote{This transformation can be suggested by the
conformal transformation of the metric which passes from the Jordan-Fierz frame to
the Einstein frame.}
\begin{equation}
h_{\mu \nu} = h^{E}_{\mu \nu} - \frac{\phi}{\Phi_0} \eta_{\mu \nu}
\end{equation}
where $h^{E}_{\mu \nu}$ is a solution to the equations
\begin{equation}
R^{(1)}_{\mu \nu}(h^{E}_{\alpha \beta})
= 8 \pi \Phi^{-1}_0 (T^{(0)}_{\mu \nu} - \frac{1}{2} T^{(0)}\eta_{\mu \nu})
\end{equation}
which are simply the linearized Einstein equations with an effective
gravitational constant $G_{eff} = c^4 \Phi_0^{-1}$. Indeed, inserting (78) in
(67) yields the following expression for the curvature tensor
\begin{eqnarray}
\lefteqn{R^{(1)}_{\mu \rho \nu \sigma}(h_{\alpha \beta}) =
R^{(1)}_{\mu \rho \nu \sigma}(h^{E}_{\alpha \beta})}
\nonumber \\
\\
& & \verb+ + + \frac{1}{2} \Phi_0^{-1}
(\eta_{\mu \nu} \phi_{,\rho \sigma} + \eta_{\rho \sigma} \phi_{,\mu \nu}
- \eta_{\mu \sigma} \phi_{,\nu \rho} - \eta_{\nu \rho} \phi_{,\mu \sigma})
\nonumber
\end{eqnarray}
from which one deduces the Ricci tensor
\begin{equation}
R^{(1)}_{\mu \nu}(h_{\alpha \beta}) = R^{(1)}_{\mu \nu}(h^{E}_{\alpha \beta})
+\Phi_0^{-1} (\phi_{,\mu \nu} + \frac{1}{2} \eta_{\mu \nu} \Box \phi)
\end{equation}
Then substituting for $R^{(1)}_{\mu \nu}(h_{\alpha \beta})$ from its expression
(81) into the field equations (76) gives Eqs.(79), thus proving the proposition.
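The algebra leading from (78) and (67) to (80)-(81) can likewise be checked symbolically. The following sketch (sympy again; the component functions $h^{E}_{\mu \nu}$ and $\phi$ are arbitrary placeholders) verifies Eq.(81) component by component:

```python
import sympy as sp

# Verify Eq. (81): with h = hE - (phi/Phi0) eta, the linearized Ricci tensor
# acquires the extra term Phi0^{-1}(phi_{,mu nu} + (1/2) eta_{mu nu} Box phi).
x = sp.symbols('x0:4')
eta = sp.diag(1, -1, -1, -1)          # signature (+,-,-,-), Eq. (59)
Phi0 = sp.symbols('Phi0', positive=True)
phi = sp.Function('phi')(*x)
hE = [[sp.Function('hE%d%d' % (min(m, n), max(m, n)))(*x) for n in range(4)]
      for m in range(4)]

def d2(f, a, b):
    return sp.diff(f, x[a], x[b])

def R1(h, m, r, n, s):                # Eq. (67)
    return -sp.Rational(1, 2) * (d2(h[m][n], r, s) + d2(h[r][s], m, n)
                                 - d2(h[m][s], n, r) - d2(h[n][r], m, s))

def Ricci1(h, m, n):                  # Eq. (68); eta^{ab} is diagonal
    return sum(eta[a, a] * R1(h, a, m, a, n) for a in range(4))

h = [[hE[m][n] - phi / Phi0 * eta[m, n] for n in range(4)] for m in range(4)]
box_phi = sum(eta[a, a] * d2(phi, a, a) for a in range(4))

for m in range(4):
    for n in range(4):
        extra = (d2(phi, m, n) + sp.Rational(1, 2) * eta[m, n] * box_phi) / Phi0
        assert sp.expand(Ricci1(h, m, n) - Ricci1(hE, m, n) - extra) == 0
```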
\par The decomposition (78) of the gravitational perturbation $h_{\mu \nu}$
implies that each term contributing to $\dot{{\cal{N}}}/{\cal{N}}$ can be split
into a part built from the Einsteinian perturbation $h_{\mu \nu}^{E}$ only
and into an other part built from the scalar field $\phi$ alone. In what
follows, we use the superscript ${ST}$ for a functional of a solution $(h_{\mu
\nu},\phi)$ to the field equations (76)-(77) and the superscript ${E}$ for the same
kind of functional evaluated only with the corresponding solution $h_{\mu
\nu}^{E}$.
\par In order to perform the calculation of the integrals (65) and (66), we note
that $K^{\mu} F_{,\mu}$ is the usual total derivative of the quantity $F$ along
the unperturbed ray path, which implies that
\begin{equation}
\int \limits_{-\infty}^{v_{obs}} K^{\mu} F_{,\mu} dv = F_{obs} - F_{(-\infty)}
\end{equation}
\par The 4-vector $K^{\mu}$ (assumed here to be future oriented, {\it i.e.}
such that $K^0 > 0$) gives the direction of propagation of the light coming from
the observed source. For a given observer moving with the 4-velocity $U^{\mu}$,
let us put
\begin{equation}
N^{\mu} = (\eta^{\mu \nu} - U^{\mu} U^{\nu})
\frac{K_{\nu}}{(U^{\lambda}K_{\lambda})} = \frac{K^{\mu}}
{(U^{\lambda}K_{\lambda})}- U^{\mu}
\end{equation}
We have $\eta^{\mu \nu} N_{\mu}N_{\nu} = -1$. Since $N^{\mu}$ is orthogonal to
$U^{\mu}$ by construction, $N^{\mu}$ can be identified with the unit 3-vector
$\vec{N}$ giving the direction of propagation of the light ray in the usual
3-space of the observer.
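The stated properties of $N^{\mu}$ follow directly from (83): $\eta_{\mu \nu} N^{\mu} N^{\nu} = K \!\cdot\! K/(U \!\cdot\! K)^2 - 2 + 1 = -1$ since $K^{\mu}$ is null and $U^{\mu}$ is a unit vector. A short numerical illustration (the particular vectors are arbitrary choices, not taken from the paper):

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])

def dot(a, b):
    # Minkowski scalar product with signature (+,-,-,-)
    return a @ eta @ b

U = np.array([1.25, 0.75, 0.0, 0.0])   # unit timelike 4-velocity: dot(U,U)=1
K = np.array([2.0, 1.2, 1.6, 0.0])     # null wave vector: dot(K,K)=0
assert abs(dot(U, U) - 1.0) < 1e-12 and abs(dot(K, K)) < 1e-12

N = K / dot(U, K) - U                   # Eq. (83)
assert abs(dot(N, N) + 1.0) < 1e-12     # N is unit spacelike
assert abs(dot(U, N)) < 1e-12           # N is orthogonal to U
```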
\par Using (76), (77), (80) and the assumption $\phi_{(-\infty)} = 0$, it is
easily seen that (65) and (66) can be respectively written as
\begin{equation}
\left. 2\frac{\dot{a}}{a} \right|^{ST}_{obs} =
\left. 2\frac{\dot{a}}{a} \right|^{E}_{obs} + \left.
\frac{\dot{\phi}}{\Phi_0}\right|_{obs}
\end{equation}
and
\begin{eqnarray}
\lefteqn{(1+z)\frac{d}{ds}\left( \frac{1}{1+z} \right)^{ST}_{obs} =
(1+z)\frac{d}{ds}\left( \frac{1}{1+z} \right)^{E}_{obs}} \nonumber \\
\\
& &\verb+ ++ \frac{1}{2\Phi_0} (\dot{\phi} - \vec{N}.\vec{\nabla} \phi)_{obs} \nonumber
\end{eqnarray}
where
\begin{equation}
\dot{\phi} = U^{\mu} \phi_{,\mu}
\end{equation}
and
\begin{equation}
\vec{N}.\vec{\nabla} \phi=N^{\mu}\phi_{,\mu}
\end{equation}
\par As a consequence, the rate of variation in the photon flux as received by
the observer is given by the general formula
\begin{equation}
\left.\frac{\dot{{\cal{N}}}}{{\cal{N}}}\right|^{ST}_{obs} =
\left.\frac{\dot{{\cal{N}}}}{{\cal{N}}}\right|^{E}_{obs} +
\frac{1}{2 \Phi_0}(3 \dot{\phi} - \vec{N}.\vec{\nabla} \phi)_{obs}
\end{equation}
\par In a vacuum $(T^{(0)}_{\mu \nu} = 0)$, the metric $h^{E}_{\mu \nu}$
satisfies the linearized Einstein field equations (71) and Eq.(84) reduces to
\begin{equation}
2\left.\frac{\dot{a}}{a} \right|^{ST}_{obs} =
\left. \frac{\dot{\phi}}{\Phi_0}\right|_{obs}
\end{equation}
\par In Eq.(88), $({\dot{\cal{N}}}/{\cal{N}})^E_{obs}$ is reduced to the term
given by Eq.(73), where $R^{(1)}_{\mu \rho \nu \sigma}$ is constructed with
$h^{E}_{\mu \nu}$.
\par It follows from (89) that contrary to general relativity, the scalar-tensor
theories (defined by (74)) predict the existence of a first-order geometrical
scintillation effect produced by gravitational waves. This effect is
proportional to the amplitude of the scalar perturbation. It should be noted
that an effect of the same order of magnitude is also due to the change in the
spectral shift.
\par Finally, let us briefly examine the case where the scalar wave $\phi$ is
locally plane (a reasonable assumption if the source of gravitational
waves is far from the observer). Thus we can put in the vicinity of the observer
located at the point $x_{obs}$
\begin{equation}
\phi = \phi(u)
\end{equation}
where $u$ is a phase function which admits the expansion
\begin{equation}
u(x) = u(x_{obs}) + L_{\mu}(x^{\mu}-x^{\mu}_{obs})
+ O(|x^{\mu}-x^{\mu}_{obs}|^2)
\end{equation}
with
\begin{equation}
L_{\mu} = const.
\end{equation}
\par It follows from Eq.(77) with $T^{(0)} = 0$ that $L_{\mu}$ is a null vector
of Minkowski spacetime.
\par Replacing $K_{\mu}$ by $L_{\mu}$ in (83) defines the
spacelike vector $P^{\mu}$, which can be identified with the unit 3-vector
$\vec{P}$ giving the direction of propagation of the scalar wave in the
3-space of the observer. Then introducing the angle $\theta$ between
$\vec{N}$ and $\vec{P}$, a simple calculation yields
\begin{equation}
\left.\frac{\dot{{\cal{N}}}}{{\cal{N}}}\right|^{ST}_{obs} =
\left.\frac{\dot{{\cal{N}}}}{{\cal{N}}}\right|^{E}_{obs} +
(1 + \cos^2\frac{\theta}{2}) \left. \frac{\dot{\phi}}{\Phi_0}
\right|_{obs}
\end{equation}
\par This formula shows that the contribution of the scalar wave to the
scintillation cannot be zero, whatever the direction of observation of the
distant light source.
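The passage from (88) to (93) is elementary: for the plane wave (90)-(92) one has $\vec{N}.\vec{\nabla} \phi = -\dot{\phi} \cos \theta$, and $(3+\cos \theta)/2 = 1 + \cos^2(\theta/2)$. A one-line symbolic check (sympy; \texttt{phidot} stands for $\dot{\phi}$):

```python
import sympy as sp

theta, phidot, Phi0 = sp.symbols('theta phidot Phi0', positive=True)

# For a locally plane scalar wave phi = phi(u), phi_{,mu} = phi'(u) L_mu, so
#   phidot          = U^mu phi_{,mu}
#   N . grad(phi)   = N^mu phi_{,mu} = -phidot * cos(theta),
# theta being the angle between the ray direction N and the wave direction P.
scalar_term = (3 * phidot - (-phidot * sp.cos(theta))) / (2 * Phi0)  # from Eq. (88)
claimed = (1 + sp.cos(theta / 2)**2) * phidot / Phi0                 # Eq. (93)
assert sp.simplify(scalar_term - claimed) == 0
```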
\section{Are observational tests possible?}
\par It follows from our formulae that the scintillation effects specifically
predicted by scalar-tensor theories are proportional to the amplitude of the
scalar field perturbation at the observer.
This {\it local} character casts serious doubt on the detectability of these
effects, since the scalar field perturbation is very small for a localized
source of gravitational waves.
\par Indeed, one can put in most cases $\phi/\Phi_0 \sim \alpha^2 h$, where
$\alpha^2$ is a dimensionless constant coupling the scalar field with the
metric gravitational field (see, {\it e.g.}, Damour \& Esposito-Far\`ese 1992).
Experiments in the solar system and observations of binary pulsars like
PSR 1913+16 indicate that $\alpha^2<10^{-3}$.
Consequently, setting
$h \sim 10^{-22}$ for gravitational waves emitted by localized sources gives
$\phi/\Phi_0 < 10^{-25}$ and the effect is much too weak to be detected.
\section*{Acknowledgements}
The authors would like to acknowledge V.~Faraoni for useful
discussions. A helpful comment of the referee on the orders of magnitude is
also acknowledged. Finally, one of the authors (C.B.) would like to thank
G.~Esposito-Far\`ese for stimulating discussions.
\section{Introduction}
In recent years a number of part-of-speech taggers have been developed
for German. \cite{Lezi96} list 6 taggers (all of which work with
statistical methods) and provide comparison figures. They report that
for a ``small'' tagset the accuracy of these 6 taggers varies from
92.8\% to 97\%. But these figures do not tell us much about the
comparative behavior of the taggers since the figures are based on
different tagsets, different training corpora, and different test
corpora. A more rigorous approach to comparison is necessary to obtain
valid results. Such an approach has been presented by \cite{Teuf96}.
They have developed an elaborate methodology for comparing taggers
including tagger evaluation, tagset evaluation and text type evaluation.
\begin{description}
\item[Tagger evaluation]
Tests that assess the impact of different tagging methods
by comparing the performance of different taggers on the same training
and test data, using the same tagset.
\item[Tagset evaluation]
Tests that assess the impact of tagset modifications on the
results, by using different versions of a given tagset on the same
texts.
\item[Text type evaluation]
Tests that assess the impact of linguistic differences
between training texts and application texts, by using texts from
different text types in training and testing, tagsets and taggers being
otherwise unchanged.
\end{description}
In this paper we will focus on ``Tagger evaluation'' for the most part,
and only in section \ref{TextTypeEval} will we briefly sidestep to
``Text type evaluation''.
\cite{Teuf96} used their methodology only on two statistical taggers for
German, the Xerox HMM tagger \cite{Cutt92} and the TreeTagger
\cite{Schm95,Schm96}. In contrast, we will compare one of these
statistical taggers, the TreeTagger, to a rule-based tagger for German,
the Brill-Tagger \cite{Bril92,Bril94a}. Such a comparison is worthwhile
since \cite{Samu97} have shown for English that their rule-based tagger,
a constraint grammar tagger, outperforms any known statistical tagger.
\section{Our Tagger Evaluation}
For our evaluation we used a manually tagged corpus of around 70'000
tokens which we obtained from the University of
Stuttgart.\footnote{Thanks to Uli Heid for making this corpus available
to us.} The texts in that corpus are taken from the Frankfurter
Rundschau, a daily newspaper. We split the corpus into a $7/8$ training
corpus (60'710 tokens) and a $1/8$ test corpus (8'887 tokens) using a
tool supplied by Eric Brill that divides a corpus sentence by sentence.
The test corpus then contains sentences from many different sections of
the corpus. The average rate of ambiguity in the test corpus is 1.50.
That means that for any token in the test corpus there is, on average, a
choice of 1.5 tags in the lexicon, provided the token is in the lexicon. 1342
tokens from the test corpus are not present in the training corpus and
are therefore not in the lexicon (these are called ``lexicon gaps'' by
\cite{Teuf96}).
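The ambiguity and lexicon-gap figures just quoted are straightforward to compute once the corpus is read as (token, tag) pairs. The following sketch is purely illustrative (the function name and the mini-corpus are invented; the actual data are those described above):

```python
from collections import defaultdict

def lexicon_stats(train_pairs, test_pairs):
    """Build a lexicon from the training portion only, then report the
    average tag ambiguity of known test tokens and the number of lexicon
    gaps (test tokens never seen in training)."""
    lexicon = defaultdict(set)
    for token, tag in train_pairs:
        lexicon[token].add(tag)
    known = [token for token, _ in test_pairs if token in lexicon]
    gaps = len(test_pairs) - len(known)
    avg_ambiguity = sum(len(lexicon[t]) for t in known) / len(known)
    return avg_ambiguity, gaps

# Toy illustration (an invented mini-corpus, not the Frankfurter Rundschau data):
train = [("die", "ART"), ("die", "PRELS"), ("Katze", "NN"), ("schlaeft", "VVFIN")]
test = [("die", "ART"), ("Katze", "NN"), ("Hund", "NN")]
print(lexicon_stats(train, test))  # (1.5, 1): average ambiguity 1.5, one gap
```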
The corpus is tagged with the STTS, the Stuttgart-T\"ubingen TagSet
\cite{Schi95,Thie96}. This tagset consists of 54 tags, including 3 tags
for punctuation marks. We modified the tagset in one little aspect. The
STTS contains one tag for both digit-sequence numbers (e.g.\ {\it 2, 11,
100}) and letter-sequence numbers ({\it two, eleven, hundred}). The tag
is called CARD since it stands for all cardinal numbers. We added a new
tag, CARDNUM, for digit-sequence numbers and restricted the use of CARD
to letter-sequence numbers. The assumption was that this move makes it
easier for the taggers to recognize unknown numbers, most of which will
be digit-sequence numbers.
\subsection{Training the TreeTagger}
In a first phase we trained the TreeTagger with its standard parameter
settings as given by the author of the tagger.\footnote{These parameters
are explained in the README file that comes with the tagger.} That is,
it was trained with
\begin{enumerate}
\item Context length set to 2 (number of preceding words forming the
tagging context). Context length 2 corresponds to a trigram context.
\item Minimal decision tree gain set to 0.7. If the information gain at
a leaf node of the decision tree is below this threshold, the node is
deleted.
\item Equivalence class weight set to 0.15. This weight of the
equivalence class is based on probability estimates.
\item Affix tree gain set to 1.2. If the information gain at a leaf of
an affix tree is below this threshold, it is deleted.
\end{enumerate}
The training took less than 2 minutes and created an output file of 630
kByte. Using the tagger with this output file to tag the test corpus
resulted in an error rate of 4.73\%. Table \ref{SchmidResults} gives an
overview of the errors.
\begin{table*}\begin{center}
\begin{tabular}{|r|rr||rr|rr|rr|}
\hline
ambiguity & tokens & in \% & correct & in \% & LE & in \% & DE & in \%
\\
\hline \hline
0 & 1342 & 15.10 & 1128 & 84.05 & 214 & 15.95 & 0
& 0.00 \\
1 & 5401 & 60.77 & 5330 & 98.69 & 71 & 1.31 & 0
& 0.00 \\
2 & 993 & 11.17 & 929 & 93.55 & 3 & 0.30 & 61
& 6.14 \\
3 & 795 & 8.95 & 757 & 95.22 & 0 & 0.00 & 38
& 4.78 \\
4 & 260 & 2.93 & 240 & 92.31 & 0 & 0.00 & 20
& 7.69 \\
5 & 96 & 1.08 & 83 & 86.46 & 0 & 0.00 & 13
& 13.54 \\
\hline
total & 8887 & 100.00 & 8467 & 95.27 & 288 & 3.24 & 132
& 1.49 \\
\hline
\end{tabular}\end{center}
\caption{Error statistics of the TreeTagger}
\label{SchmidResults}
\end{table*}
Column 1 lists the ambiguity rates, i.e.\ the number of tags available
to a token according to the lexicon. Note that the lexicon was built
solely on the basis of the training corpus. From columns 1 and 2 we
learn that 1342 tokens from the test corpus were not in lexicon, 5401
tokens in the test corpus have exactly one tag in the lexicon, 993
tokens have two tags in the lexicon and so on. Column 3, labelled
`correct', displays the number of tokens correctly tagged by the
TreeTagger. It is obvious that the correct assignment of tags is most
difficult for tokens that are not in the lexicon (84.05\%) and for
tokens that are many ways ambiguous (86.46\% for tokens that are 5-ways
ambiguous).
The errors made by the tagger can be split into lexical errors (LE;
column 4) and disambiguation errors (DE; column 5). Lexical errors occur
when the correct tag is not available in the lexicon. All errors for
tokens not in the lexicon are lexical errors (214). In addition there
are a total of 74 lexical errors in the ambiguity rates 1 and 2 where
the correct tag is not in the lexicon. On the contrary, disambiguation
errors occur when the correct tag is available but the tagger picks the
wrong one. Such errors can only occur if the tagger has a choice among
at least two tags. Thus we get a rate of 3.24\% lexical errors and
1.49\% disambiguation errors adding up to the total error rate of
4.73\%.
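The split into lexical and disambiguation errors can be stated precisely as a small procedure. The sketch below is illustrative (the names are ours, not part of either tagger's toolkit): an error counts as lexical when the gold tag is absent from the token's lexicon entry (unknown tokens included), and as a disambiguation error otherwise.

```python
def classify_errors(lexicon, gold_pairs, predicted_tags):
    """Split tagging errors into lexical errors (LE) and disambiguation
    errors (DE). LE: the correct tag is not available in the token's
    lexicon entry (unknown tokens fall here). DE: the correct tag was
    available but the tagger picked another one."""
    le = de = 0
    for (token, gold), guess in zip(gold_pairs, predicted_tags):
        if guess == gold:
            continue
        if gold in lexicon.get(token, set()):
            de += 1
        else:
            le += 1
    return le, de

# Toy illustration: "Hund" is a lexicon gap, so its error counts as lexical.
lexicon = {"die": {"ART", "PRELS"}, "Katze": {"NN"}}
gold = [("die", "ART"), ("Katze", "NN"), ("Hund", "NN")]
predicted = ["PRELS", "NN", "ADJA"]
print(classify_errors(lexicon, gold, predicted))  # (1, 1)
```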
It should be noted that this error rate is higher than the error rate
given for the TreeTagger in \cite{Teuf96}. There, the TreeTagger had
been trained over 62'860 tokens and tested over 13'416 tokens of a
corpus very similar to ours (50'000 words from the Frankfurter Rundschau
plus 25'000 words from the Stuttgarter Zeitung). \cite{Teuf96} report on
an error rate of only 3.0\% for the TreeTagger. It could be that they
were using different training parameters, which are not listed in the
paper. But more likely they were using a more complete lexicon: they
report only 240 lexicon gaps among the 13'416 test tokens.
\subsection{Training the Brill-Tagger}
In parallel with the TreeTagger we trained the Brill-Tagger with our
training corpus using the following parameter settings. Since we had
some experience with training the Brill-Tagger we set the parameters
slightly different from the Brill's suggestions.\footnote{The
suggestions for the tagging parameters of the Brill-Tagger are given in
the README file that is distributed with the tagger.}
\begin{enumerate}
\item The threshold for the best found lexical rule was set to 2. The
learner terminates when the score of the best found rule drops below
this threshold. (Brill suggests 4 for a training corpus of 50K-100K
words.)
\item The threshold for the best found contextual rule was set to 1. The
learner terminates when the score of the best found rule drops below
this threshold. (Brill suggests 3 for a training corpus of 50K-100K
words.)
\item The bigram restriction value was set to 500. This tells the rule
learner to only use bigram contexts where one of the two words is among
the 500 most frequent words. A higher number will increase the accuracy
at the cost of further increasing the training time. (Brill suggests
300.)
\end{enumerate}
Training this tagger takes much longer than training the TreeTagger. Our
training step took around 30 hours (!!) on a Sun Ultra-Sparc
workstation. It resulted in:
\begin{enumerate}
\item a fullform lexicon with 14'147 entries (212 kByte)
\item a lexical-rules file with 378 rules (9 kByte)
\item a context-rules file with 329 rules (8 kByte)
\item a bigram list with 42'279 entries (609 kByte)
\end{enumerate}
Using the tagger with this training output to tag the test corpus
resulted in an error rate of 5.25\%. Table \ref{BrillResults} gives an
overview of the errors.
\begin{table*}\begin{center}
\begin{tabular}{|r|rr||rr|rr|rr|}
\hline
ambiguity & tokens & in \% & correct & in \% & LE & in \% & DE & in \%
\\
\hline \hline
0 & 1342 & 15.10 & 1094 & 81.52 & 248 & 18.48 & 0
& 0.00 \\
1 & 5401 & 60.77 & 5330 & 98.69 & 71 & 1.31 & 0
& 0.00 \\
2 & 993 & 11.17 & 906 & 91.24 & 3 & 0.30 & 84
& 8.46 \\
3 & 795 & 8.95 & 758 & 95.35 & 0 & 0.00 & 37
& 4.65 \\
4 & 260 & 2.93 & 245 & 94.23 & 0 & 0.00 & 15
& 5.77 \\
5 & 96 & 1.08 & 87 & 90.62 & 0 & 0.00 & 9
& 9.38 \\
\hline
total & 8887 & 100.00 & 8420 & 94.75 & 322 & 3.62 & 145
& 1.63 \\
\hline
\end{tabular}\end{center}
\caption{Error statistics of the Brill-Tagger}
\label{BrillResults}
\end{table*}
It is striking that the overall result is very similar to that of the
TreeTagger. A closer look, however, reveals interesting differences. The
TreeTagger is clearly better than the Brill-Tagger in dealing with
unknown words (i.e.\ tokens not in the lexicon). There, the TreeTagger
reaches 84.05\% correct assignments which is 2.5\% better than the
Brill-Tagger. On the opposite side of the ambiguity spectrum the
Brill-Tagger is superior to the TreeTagger in disambiguating between
highly ambiguous tokens. For 4-way ambiguous tokens it reaches 94.23\%
correct assignments (a plus of 1.9\% over the TreeTagger) and even for
5-way ambiguous tokens it still reaches 90.62\% correct tags which is
4.1\% better than the TreeTagger.
\subsection{Error comparison}
We then compared the types of errors made by both taggers. An error type
is defined by the tuple {\tt (correct tag, tagger tag)}, where {\tt
correct tag} is the manually assigned tag and {\tt tagger tag} is the
automatically assigned tag. Both taggers produce about the same number
of error types (132 for the TreeTagger and 131 for the Brill-Tagger).
Table \ref{ErrorTypes} lists the most frequent error types for both
taggers. The biggest problem for both taggers is the distinction between
proper nouns (NE) and common nouns (NN). This corresponds with the
findings in \cite{Teuf96}. Proper and common nouns occur in very similar
distributional contexts in German and are therefore difficult for the
taggers to distinguish.
\begin{verbatim}
er wollte auch Weber/NN?/NE? einstellen
\end{verbatim}
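Counting error types of the form {\tt (correct tag, tagger tag)} amounts to a simple frequency count over mismatched positions; a minimal sketch (tag sequences invented for illustration):

```python
from collections import Counter

def error_types(gold_tags, tagger_tags):
    """Frequency of error types (correct tag, tagger tag) over all
    positions where the tagger disagrees with the gold standard."""
    return Counter((g, t) for g, t in zip(gold_tags, tagger_tags) if g != t)

types = error_types(["NE", "NN", "VVFIN", "NE"], ["NN", "NN", "VVINF", "NN"])
print(types.most_common(2))  # [(('NE', 'NN'), 2), (('VVFIN', 'VVINF'), 1)]
```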
\begin{table*}\begin{center}
\begin{tabular}{|r|l|l||r|l|l|}
\hline
\multicolumn{3}{|c||}{TreeTagger errors} &
\multicolumn{3}{|c|}{Brill-Tagger errors} \\
\hline
number & correct tag & tagger tag & number & correct tag & tagger tag \\
\hline
48 & NE & NN & 54 & NE & NN \\
21 & VVINF & VVFIN & 31 & NN & NE \\
20 & NN & NE & 19 & VVFIN & VVINF \\
17 & VVFIN & VVINF & 19 & VVFIN & ADJA \\
10 & VVPP & VVFIN & 17 & VVINF & VVFIN \\
10 & VVFIN & VVPP & 15 & VVPP & VVFIN \\
8 & CARDNUM & VMPP & 11 & VVPP & ADJD \\
7 & ADJD & VVFIN & 11 & ADJD & VVFIN \\
7 & ADJD & ADV & 8 & VVINF & ADJA \\
\hline
\end{tabular}\end{center}
\caption{Most frequent error types}
\label{ErrorTypes}
\end{table*}
The second biggest problem results from the distinction between
different forms of full verbs: finite verbs (VVFIN), infinitive verbs
(VVINF), and past participle verbs (VVPP). This problem is caused by the
limited `window size' of both taggers. The TreeTagger uses trigrams for
its estimations, and the Brill-Tagger can base its decisions on up to
three tokens to the right and to the left. This is rather limited if we
consider the possible distance between the finite verb (in second
position) and the rest of the verb group (in sentence final position) in
German main clauses. In addition, the taggers cannot distinguish between
main and subordinate clause structure.
\begin{verbatim}
... weil wir die Probleme schon kennen/VVFIN.
Wir sollten die Probleme schon kennen/VVINF.
\end{verbatim}
A third frequent error type arises between verb forms and adjectives
(ADJA: adjective used as an attribute, inflected form; ADJD: adjective
in predicative use, typically uninflected form). It might be surprising
that the Brill-Tagger has so much difficulty telling apart a finite verb
and an inflected adjective (19 errors). But this can be explained by
looking at the lexical rules learned by this tagger. These rules are
used by the Brill-Tagger to guess a tag for unknown words
\cite{Bril94a}. And the first lexical rule learned from our training
corpus says that a word form ending in the letter {\tt e} should be
treated as an adjective (ADJA). Of course this assignment can be
overridden by other lexical rules or contextual rules, but these
obviously miss some 19 cases.
On the other hand, it is surprising that the TreeTagger eight times
assigns the past participle modal verb tag (VMPP) to tokens that are in
fact digit-sequence cardinal numbers (CARDNUM). There are 10 additional cases
where a digit-sequence cardinal number was interpreted as some other tag
by the TreeTagger. But there are only 3 similar errors for the
Brill-Tagger since its lexical rules are well suited to recognize
unknown digit-sequence numbers.
\section{Using an external lexicon}
Let us sum up the results of the above comparison and see if we can
improve tagging accuracy by using an external lexicon. The above
comparison showed that:
\begin{enumerate}
\item The Brill-Tagger is better in recognizing special symbol items
such as digit-sequence cardinal numbers, and it is better in
disambiguating tokens which are many-ways ambiguous in the lexicon.
\item The TreeTagger is better in dealing with unknown word forms.
\end{enumerate}
At first sight it seems easiest to improve the Brill-Tagger by reducing
its unknown word problem. We employed the Gertwol system \cite{Ling94},
a wide-coverage morphological analyzer, to fill up the tagger lexicon
before tagging starts. That means we extracted all unknown word
forms\footnote{Unknown word forms in the test corpus are all tokens not
seen in the training corpus.} from the test corpus and had Gertwol
analyse them. From the 1342 unknown tokens we get 1309 types which we
feed to Gertwol. Gertwol is able to analyse 1205 of these types.
Gertwol's output is mapped to the respective tags, and every word form
with all possible tags is added temporarily to the tagger lexicon. In
this way the tagger starts tagging the test corpus with an almost
complete lexicon. The remaining lexicon gaps are the few words Gertwol
cannot analyse. In our test corpus 109 tokens remain unanalysed.
Our experiments showed a slight improvement in accuracy (about 0.5\%),
but far less than we had expected. The alternative of filling up
the tagger lexicon by training over the whole corpus resulted in an
improvement of around 3.5\%, i.e.\ an excellent tagger accuracy of more
than 98\%. Note that we only used the lexicon filled in this way but the
rules as learned from the training corpus alone. But, of course, it is
an unrealistic scenario to know in advance (i.e.\ during tagger
training) the text to be tagged.
The difference between using a large external `lexicon' such as Gertwol
and using the internal vocabulary is due to two facts. First, Gertwol
increases the average ambiguity of tokens since it gives every possible
tag for a word form. The internal vocabulary will only provide the tag
occurring in the corpus. Second, in case of multiple tags for a word form
the Brill-Tagger needs to know the most likely tag. This is very
important for the Brill-Tagger algorithm. But Gertwol gives all possible
tags in an arbitrary order. One solution is to sort Gertwol's output
according to overall tag probabilities. These can be computed from the
frequencies of every tag in the training corpus irrespective of the word
form. Using these rough probabilities improved the results in our
experiments by about 0.2\%. This means that the best result for
combining Gertwol with the Brill-Tagger is at 95.45\% accuracy.
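The reordering of Gertwol's output by overall tag probability can be
sketched as follows; the data structures are illustrative assumptions
about the lexicon format, not our actual implementation:

```python
from collections import Counter

def tag_frequencies(training_tags):
    """Overall tag probabilities, irrespective of the word form."""
    counts = Counter(training_tags)
    total = sum(counts.values())
    return {tag: n / total for tag, n in counts.items()}

def order_lexicon_entry(word, gertwol_tags, tag_prob):
    """Put the globally most frequent tag first, since the Brill-Tagger
    treats the first tag of a lexicon entry as the most likely one."""
    ranked = sorted(gertwol_tags, key=lambda t: tag_prob.get(t, 0.0),
                    reverse=True)
    return word, ranked

probs = tag_frequencies(["NN", "NN", "NN", "ART", "NE", "NN"])
print(order_lexicon_entry("Weber", ["NE", "NN"], probs))
```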
In almost the same way we can use the external lexicon with the
TreeTagger. We add all types as analysed by Gertwol to the TreeTagger's
lexicon. Then, unlike the Brill-Tagger, the TreeTagger is retrained with
the same parameters and input files as above except for the extended
lexicon. The Brill-Tagger loads its lexicon for every tagging process,
and the lexicon can therefore be extended without retraining the tagger.
The TreeTagger, on the other hand, integrates the lexicon during
training into its `output file'. It must therefore be retrained after
each lexicon extension.
Extending the lexicon improves the TreeTagger's accuracy by around 1\%
to 96.29\%. Table \ref{ExtendedLexResults} gives the results for the
TreeTagger with the extended lexicon.
\begin{table*}\begin{center}
\begin{tabular}{|r|rr||rr|rr|rr|}
\hline
ambiguity & tokens & in \% & correct & in \% & LE & in \% & DE & in \%
\\
\hline \hline
0 & 109 & 1.23 & 72 & 66.06 & 37 & 33.94 & 0
& 0.00 \\
1 & 6307 & 70.97 & 6209 & 98.45 & 98 & 1.55 & 0
& 0.00 \\
2 & 1224 & 13.77 & 1119 & 91.42 & 10 & 0.82 & 95
& 7.76 \\
3 & 852 & 9.59 & 805 & 94.48 & 2 & 0.23 & 45
& 5.28 \\
4 & 296 & 3.33 & 266 & 89.86 & 0 & 0.00 & 30
& 10.14 \\
5 & 99 & 1.11 & 86 & 86.87 & 0 & 0.00 & 13
& 13.13 \\
\hline
total & 8887 & 100.00 & 8557 & 96.29 & 147 & 1.65 & 183
& 2.06 \\
\hline
\end{tabular}\end{center}
\caption{Error statistics of the TreeTagger with an extended lexicon}
\label{ExtendedLexResults}
\end{table*}
The recognition of the remaining unknown words is very low (66.06\%),
but this does not influence the result much since only 1.23\% of all
tokens are left unknown. Also the rate of disambiguation errors
increases from 1.49\% to 2.06\%. But at the same time the rate of
lexical error drops from 3.24\% to 1.65\%, which accounts for the
noticeable increase in overall accuracy.
\section{The best of both worlds?}
In the previous sections we observed that the statistical tagger and the
rule-based tagger show complementary strengths. Therefore we
experimented with combining the statistical and the rule-based tagger in
order to find out whether a combination would yield a result superior to
any single tagger.
First, we tried to employ the TreeTagger and the Brill-Tagger in this
order. Tagging the test corpus now works in two steps. In step one, we
tag the test corpus with the TreeTagger. We then add all unknown word
forms to the Brill-Tagger's lexicon with the tags assigned by the
TreeTagger. In step two, we tag the test corpus with the Brill-Tagger.
In this way we can increase the Brill-Tagger's accuracy to 95.13\%. But
the desired effect of combining the strengths of both taggers in order
to build one tagger that is better than either of the taggers alone was
not achieved. The reason is that the wrong tags of the TreeTagger were
carried over to the Brill-Tagger (together with the correct tags) and
all of the new lexical entries were at ambiguity level one or two,
so that the Brill-Tagger could not show its strength in disambiguation.
In a second round we reduced the export of wrong tags from the
TreeTagger to the Brill-Tagger. We made sure that on export all
digit-sequence ordinal and cardinal numbers were assigned the correct
tags. We used a regular expression to check each word form. In addition,
we checked for all other unknown word forms if the tag assigned by the
TreeTagger was permitted by Gertwol (i.e.\ if the TreeTagger tag was one
of Gertwol's tags). If so, the TreeTagger tag was exported. If the
TreeTagger tag was not allowed by Gertwol, we checked how many tags
Gertwol proposes. If Gertwol proposes exactly one tag this tag was
exported, in all other cases no tag was exported. In this way we
exported 1171 types to the Brill-Tagger's lexicon and we obtained a
tagging accuracy of 95.90\%. The algorithm for selecting TreeTagger tags
was further modified in one little respect. If Gertwol did not analyse a
word form and the TreeTagger identified it as a proper noun (NE), then
the tag was exported. We then export 1212 types and we obtain a tagging
accuracy of 96.03\%, which is still slightly worse than the TreeTagger
with the external lexicon.
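The export filter described above amounts to a small decision
procedure. The sketch below simplifies matters: the digit-sequence
pattern covers cardinal numbers only (ordinals would be handled
analogously), and it is an illustrative assumption rather than the
exact rule set used in our experiments:

```python
import re

# Crude cardinal-number pattern -- an illustrative assumption.
CARDINAL_RE = re.compile(r"\d+([.,]\d+)*")

def select_export_tag(word, tt_tag, gertwol_tags):
    """Decide which tag, if any, to export to the Brill-Tagger lexicon."""
    if CARDINAL_RE.fullmatch(word):
        return "CARDNUM"               # force the correct number tag
    if not gertwol_tags:               # Gertwol could not analyse the form
        return tt_tag if tt_tag == "NE" else None
    if tt_tag in gertwol_tags:         # TreeTagger tag licensed by Gertwol
        return tt_tag
    if len(gertwol_tags) == 1:         # Gertwol is unambiguous: trust it
        return gertwol_tags[0]
    return None                        # conflicting evidence: export nothing
```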
Second, we tried to employ the taggers in the reverse order:
Brill-Tagger first, and then the TreeTagger, using the Brill-Tagger's
output. In this test we extended the TreeTagger's lexicon with the tags
assigned by the Brill-Tagger and we extended the training corpus with
the test corpus tagged by the Brill-Tagger. We retrained the TreeTagger
with the extended lexicon and the extended corpus. We then used the
TreeTagger to tag the test corpus, which resulted in 95.05\% accuracy.
This means that the combination of the taggers results in a worse result
than the TreeTagger by itself (95.27\%).
From these tests we conclude that it is not possible to improve the
tagging result by simply sequentialising the taggers. In order to
exploit their respective strengths a more elaborate intertwining of
their tagging strategies will be necessary.
\section{Text type evaluation}\label{TextTypeEval}
So far, all our tests were performed over the same test corpus. We
checked whether the general tendency will also carry over to other test
corpora. Besides the corpus used for the above evaluation we have a
second manually tagged corpus consisting of texts about the
administration at the University of Zurich (the university's annual
report; guidelines for student registration etc.). This corpus currently
consists of 38'007 tokens. We have applied the taggers, trained as above
on $7/8$ of the `Frankfurter Rundschau' corpus, to this corpus and
compared the results. In this way we have a much larger test corpus but
we have a higher rate of unknown words (10'646 tokens, 28.01\%, are
unknown). The TreeTagger resulted in an accuracy rate of 92.37\%,
whereas the Brill-Tagger showed an accuracy rate of 91.65\%. These
results correspond very well with the above findings. The figures are
close to each other with a small advantage for the TreeTagger. It should
be noted that the much lower accuracy rates compared to the test corpus
are in part due to inconsistencies in tagging decisions. E.g.\ the word
`Management' was tagged as a regular noun (NN) in the training corpus
but as foreign material (FM) in the University of Zurich test corpus.
\section{Conclusions}
We have compared a statistical and a rule-based tagger for German. It
turned out that both taggers perform on the same general level, but the
statistical tagger has an advantage of about 0.5\% to 1\%. A detailed
analysis shows that the statistical tagger is better in dealing with
unknown words than the rule-based tagger. It is also more robust in
using an external lexicon, which resulted in the top tagging accuracy of
96.29\%. The rule-based tagger is superior to the statistical tagger in
disambiguating tokens that are many-ways ambiguous. But such tokens do
not occur frequently enough for it to fully catch up with the statistical
tagger. A sequential combination of both taggers in either order did not
show any improvements in tagging accuracy.
The statistical tagger is easier to handle in that its training time is
three orders of magnitude shorter than that of the rule-based tagger (minutes vs.\ days). But
it has to be retrained after lexicon extension, which is not necessary
with the rule-based tagger. The rule-based tagger has the additional
advantage that rules (i.e.\ lexical and contextual rules) can be
manually modified. As a side result our experiments show that a
rule-based tagger that learns its rules like the Brill-Tagger does not
match the results of the constraint grammar tagger (a manually built
rule-based tagger) described in \cite{Samu97}. That tagger is described
as performing with an error rate of less than 2\%. Constraint grammar
rules are much more powerful than the rules used in the Brill-Tagger.
\section{Introduction}
Many extensions to the Standard Model incorporating non-zero neutrino
mass predict the existence of neutral heavy leptons
(NHL). See Refs.~[\ref{bib:grl}]~and~[\ref{bib:pdg}] for discussions
and references concerning massive neutrinos.
The model considered in this paper is that of Ref.~[\ref{bib:grl}] in
which the NHL is an iso-singlet particle that mixes with the Standard
Model neutrino. Figure~\ref{feynman} shows the Feynman diagrams for
the production and decay of such an NHL.
The upgraded NuTeV detector includes a Decay Channel designed
specifically to search for NHL's and provides a significant
increase in sensitivity over previous searches.
\begin{figure}[hbt]
\centerline{\psfig{figure=nhl_prod.eps,width=3.0in,bbllx=66pt,bblly=262pt,bburx=410pt,bbury=443pt}}
\centerline{\psfig{figure=nhl_w.eps,width=3.0in,bbllx=25pt,bblly=200pt,bburx=508pt,bbury=442pt}}
\caption[]{Feynman diagrams showing the production (from meson decay) and
decay of neutral heavy leptons (L$_\mu$). Decay via the
Z$^{0}$ boson is also allowed, but not shown.}
\label{feynman}
\end{figure}
\section{The Experiment}
The NuTeV calorimeter is described elsewhere~\cite{detector}; only
the features essential to this analysis are described here. The
calorimeter consists of 84 layers of 10~cm steel plates and
scintillating oil counters. A multi-wire gas drift chamber is positioned
at every 20~cm of iron for particle tracking and shower location.
The decay channel is an instrumented decay space
upstream of the calorimeter. The channel is 30~m long and filled with
helium using 4.6~m diameter plastic bags. The helium was used to
reduce the number of neutrino interactions in the channel. Drift
chambers are positioned at three stations in the decay channel
to track the NHL decay products.
Figure~\ref{fig:dkchannel} shows a schematic diagram of the decay
channel. A 7~m $\times$ 7~m scintillating plastic ``veto wall'' was
constructed upstream of the decay channel in order to veto on any
charged particles entering the experiment.
\begin{figure*}[hbt]
\centerline{\psfig{figure=channel.eps,width=6.8in}}
\caption[]{A schematic diagram of the NuTeV decay channel. The beam
enters from the left, and at the far right is the NuTeV neutrino
target. An example of an NHL decay to $\mu \pi$ is also
shown. The event appears as two tracks in the decay channel,
a long muon track in the calorimeter and a hadronic shower.}
\label{fig:dkchannel}
\end{figure*}
\section{Event Selection}
Figure~\ref{fig:dkchannel} also shows an example of the event signature
for which we are searching. The characteristics of an NHL event are
a neutral particle entering the channel and decaying in the helium region to
two charged (and possibly an additional neutral) particles. The
charged particles must project to the calorimeter and at least one
must be identified as a muon.
To select events for this analysis we triggered on energy deposits of
at least 2.0~GeV in the calorimeter and required no veto wall signal.
We then require that there be
two well-reconstructed tracks in the decay channel that form a vertex
in the helium, well away from the edges of the channel and the tracking
chambers. The event vertex was required to be at least 3$\sigma$
away from the fiducial volume edges, where $\sigma$ is the resolution
of the vertex position measurement. By requiring two tracks and
separation from the tracking chambers we greatly reduce the number
of background events from neutrinos interacting in the decay channel materials.
For all the cuts a vertex constrained fit is used in which the two tracks
are required to come from a single point in space. The vertex resolution
depends on the opening angle of the tracks, but it is typically 25~cm
along the beam axis and 2.5~cm transverse.
The two decay tracks are required to project to the calorimeter and to
match (in position) with particles identified in the calorimeter.
At least one of the two particles must be identified as a muon, because
for this analysis we only consider decay modes
with at least one muon. In order to ensure good particle
identification and energy measurement, we require all muons in the
event to have energy greater than 2.0~GeV and all electrons or hadrons
to have energy greater than 10.0~GeV. These energy cuts also reduce
backgrounds from cosmic rays and neutrino interactions.
To further reduce acceptance for background events, additional kinematic
cuts are applied. NHL decays are expected to have a small opening
angle; therefore, the decay particles are required to have slopes $p_x
/ p_z$ and $p_y / p_z$ less than 0.1 ($p_z$ is the momentum component
along the direction of the incoming beam, $p_x$ and $p_y$ are the
transverse components). We are only considering NHL's produced by
kaon and charmed meson decays in this analysis; therefore, NHL's with
mass above 2.0~GeV are not considered. We require the transverse
mass\footnote{The transverse mass is $p_T + \sqrt{p_T^2 + m_V^2}$,
where $p_T$ is the component of the total momentum of the two charged tracks
perpendicular to the beam direction
(i.e. the ``missing transverse momentum''), and $m_V$ is the invariant mass
of the two charged tracks.}
of the event to be less than 5.0~GeV in order to restrict ourselves
to this lower mass region. Finally, in order to reduce neutrino-induced
events even further we form the quantities $x_{\rm eff}$ and $W_{\rm eff}$
by assuming that: i) the event is a neutrino charged current interaction
($\nu N \rightarrow \mu N' X$), ii) that the highest energy muon comes
from the neutrino-W vertex, and iii) the missing transverse momentum
in the event is carried by the final state nucleon. We require
$x_{\rm eff} < 0.1$ and $W_{\rm eff} > 2.0$~GeV.
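As an illustration, the kinematic quantities entering these cuts can be
computed from the two track momenta $(p_x, p_y, p_z)$, given in GeV, as
in the following sketch (track masses are neglected for simplicity, an
assumption made here for illustration only):

```python
import math

def invariant_mass(p1, p2):
    """Invariant mass m_V of two massless tracks, momenta in GeV."""
    e1 = math.sqrt(sum(c * c for c in p1))
    e2 = math.sqrt(sum(c * c for c in p2))
    px, py, pz = (a + b for a, b in zip(p1, p2))
    return math.sqrt(max((e1 + e2) ** 2 - px * px - py * py - pz * pz, 0.0))

def transverse_mass(p1, p2):
    """m_T = p_T + sqrt(p_T^2 + m_V^2), with p_T the missing transverse
    momentum of the two-track system (the footnote's definition)."""
    pt = math.hypot(p1[0] + p2[0], p1[1] + p2[1])
    mv = invariant_mass(p1, p2)
    return pt + math.sqrt(pt * pt + mv * mv)

def passes_slope_cut(p, max_slope=0.1):
    """Require |p_x/p_z| and |p_y/p_z| below 0.1."""
    return abs(p[0] / p[2]) < max_slope and abs(p[1] / p[2]) < max_slope
```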
\section{NHL Monte Carlo}
Figure~\ref{fig:beamline} shows a schematic of the NuTeV beamline.
The experiment took $2.5 \times 10^{18}$ 800~GeV protons from the
Fermilab Tevatron on a BeO target. Secondaries produced from the
target are focused in the decay pipe with a central momentum of
250~GeV. The decay pipe is 0.5~km long, and the center of the
decay pipe is 1.5~km from the center of the decay channel.
Non-interacting protons, wrong-sign and neutral secondaries are
dumped into beam dumps just beyond the BeO target.
NHL's would be produced in decays of kaons and pions in the decay
pipe, as well as from charmed hadron decays in the primary proton
beam dumps. Pion decays do not contribute significantly to this
analysis, as they cannot produce NHL's in the mass range of our
search.
\begin{figure}[hbt]
\centerline{\psfig{figure=my_beamline.eps,width=3.4in}}
\caption[]{A schematic diagram of the NuTeV beamline. The 800~GeV proton
beam from the Fermilab Tevatron enters from the left.
NHL's are produced from the decays of kaons and pions in the
decay pipe and from the decays of charm hadrons in the beam
dumps.}
\label{fig:beamline}
\end{figure}
The production of kaons is simulated using the
Decay Turtle~\cite{turtle} program. The simulation of kaon decays
to NHL's includes the effects of mass both in decay phase space and
in helicity suppression. The production of charmed hadrons in the
beam dump is simulated using a Monte Carlo based on the production
cross sections reported in Ref.~[\ref{bib:charm}]. For this analysis
we only generate muon flavored NHL's. Figure~\ref{fig:nhlp} shows
examples of the momentum distribution of NHL's produced by the NuTeV
beamline. For a 1.45~GeV mass NHL, the average momentum is
$\sim$140~GeV. For a 0.35~GeV mass NHL the average momentum is
$\sim$100~GeV.
\begin{figure}[hbt]
\centerline{\psfig{figure=nhl_p.eps,width=3.4in}}
\caption[]{The upper plot shows the energy distributions for Monte Carlo
NHL's with mass 1.45~GeV and 0.35~GeV. The lower plot shows
the energy of the decay products of the NHL.}
\label{fig:nhlp}
\end{figure}
The simulation of NHL decays uses the model of Ref.~[\ref{bib:tim}]. The
polarization of the NHL is also included in the decay matrix
element~\cite{joe}.
The decay products of the NHL are run through a full Geant detector
simulation to produce simulated raw data which is then run through our
analysis software.
\section{Results}
We observe no events which pass our event selection cuts.
The number of expected background events is approximately 0.5. The
largest background is 0.4 events expected from neutrino interactions
in the decay channel helium. This estimate was made using the Lund
Monte Carlo~\cite{lund} to simulate neutrino--nucleon interactions.
In order to present a conservative
limit, we assume an expected background of zero events (this is only
a small change in the resulting limits).
In order to demonstrate the acceptance and reconstruction efficiency
of the experiment, we loosened several cuts in order to
examine the neutrino interactions in the decay channel material. We
removed the cuts on the event vertex position (allowing events at the
positions of the chambers), and allow events with more than 2 tracks.
No calorimeter cuts (matching to particles, or energy cuts) were
applied, and no $x_{\rm eff}$ or $W_{\rm eff}$ cuts were applied.
Figure~\ref{fig:zvert} shows the distribution of the event vertex
along the beam axis. The peaks correspond to the positions of the
tracking chambers. The plot also shows the neutrino
interactions in the helium gas between the chambers. The number
of events seen is consistent with expectations. This study demonstrates
that the channel and our tracking reconstruction are working well.
\begin{figure}[hbt]
\centerline{\psfig{figure=zvert.eps,width=3.4in}}
\caption[]{The Z vertex distribution for neutrino interaction events in
the NuTeV decay channel. The points are data and the lines are
Monte Carlo. The peaks correspond to the positions of the
drift chambers.}
\label{fig:zvert}
\end{figure}
Figure~\ref{fig:limits} shows our limits on the NHL--neutrino coupling,
$U_{2\mu}^2$, as a function of the mass of the NHL. The results of previous
experiments~\cite{prev_ccfr,prev_bebc,prev_charm,prev_kek,prev_lbl}
are shown for comparison. Our
result is a significant increase in sensitivity in the range from 0.3~GeV
to 2.0~GeV. These limits are for muon flavored NHL's and only
include their decay modes containing a muon. The limits do not yet
include the effects of systematic uncertainties.
\begin{figure}[hbt]
\centerline{\psfig{figure=nhl_limits.ps,width=3.4in}}
\caption[]{Preliminary limits from NuTeV on the coupling, U$^2_{2\mu}$,
of neutral heavy leptons (NHL) to the Standard Model left-handed
muon neutrino as a function of NHL mass. Only the $\mu$X decay
modes of the NHL are included in this first search. The limits
are 90\% confidence and are based on zero observed events with
zero expected background events. The limits do not yet
                include effects from systematic uncertainties.}
\label{fig:limits}
\end{figure}
\section{Conclusions}
We have shown new preliminary limits from a search for muon flavored
neutral heavy leptons from the NuTeV experiment at Fermilab. In
the future we plan to expand our search to include masses greater than
2.0~GeV as well as masses less than 0.3~GeV (perhaps to a final
range of $\sim 0.020$~GeV to $\sim 10.0$~GeV). We will also expand
our search to include electron flavored NHL's and all NHL decay modes
($\mu\mu \nu$, $\mu e \nu$, $\mu \pi$, $e \pi$, and $e e \nu$).
\section*{Acknowledgements}
This research was supported by the U.S. Department of Energy and the
National Science Foundation. We would also like to thank the staff
of Fermilab for their substantial contributions to the construction
and support of this experiment during the 1996--97 fixed target run.
\section*{References}
\section{Introduction}
The data used to make a quantitative
estimate of
the global star-formation history typically
consist of optical multi-color imaging combined with
comprehensive spectroscopy (Lilly et al. 1996).
These data are then used to infer the distances
of individual galaxies and (with the aid
of stellar synthesis models) determine
their intrinsic spectral energy distributions.
In this manner active star-formation rates
can be calculated (Madau et al. 1996, Connelly et al. 1997).
The primary limitation of this technique
is its sensitivity to dust obscuration, the effects of
which in evolving galaxy populations
are both uncertain and controversial (Madau et al. 1996, Calzetti
1997, Meuer et al. 1997).
An alternative method
for the study of star-forming galaxies
is to observe directly the reprocessed UV light
emitted by the dust. Depending on the
dust temperature (30 K - 60 K), this emission peaks
between 50-100 $\mu$m (restframe) and in many galaxies
comprises the bulk of the bolometric luminosity. Additionally,
the thermal far infrared radiation (FIR)
is not subject to further obscuration, so uncertain
extinction corrections are avoided. At high
redshifts, the FIR emission will be
shifted into the sub-mm band.
A consequence of the steep Rayleigh-Jeans
tail of the thermal dust emission coupled with a
positive dust emissivity index, $\alpha$ (where the emissivity,
$\epsilon \propto \nu ^{\alpha}$)
is to bring higher
flux into the sub-mm band so the cosmological
dimming effect is offset. Thus a starburst galaxy of a given
luminosity should have essentially the same observed flux
density between $z$ = 1 and 10 (Blain et al. 1993).
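This flatness can be verified with a short numerical sketch. We assume
an Einstein-de Sitter cosmology ($h$ = 0.5, $q_o$ = 0.5), for which the
luminosity distance is analytic, and an illustrative greybody dust
spectrum with $T$ = 40 K and $\alpha$ = 1.5; the normalisation is
arbitrary since only flux ratios matter:

```python
import math

H0 = 50.0             # km/s/Mpc (h = 0.5)
C_KMS = 2.998e5       # speed of light, km/s
H_OVER_K = 4.799e-11  # Planck constant over Boltzmann constant, s K

def lum_dist_eds(z):
    """Einstein-de Sitter (q0 = 0.5) luminosity distance in Mpc:
    D_L = (2c/H0)(1+z)(1 - 1/sqrt(1+z))."""
    return 2.0 * C_KMS / H0 * (1.0 + z) * (1.0 - 1.0 / math.sqrt(1.0 + z))

def greybody(nu, T=40.0, alpha=1.5):
    """Relative dust SED nu^alpha * B_nu(T), arbitrary normalisation."""
    return nu ** (3.0 + alpha) / math.expm1(H_OVER_K * nu / T)

def observed_flux(z, lam_obs_um=850.0):
    """Relative observed flux density of a fixed-luminosity dust source:
    S ~ (1+z) L_nu((1+z) nu_obs) / D_L^2."""
    nu_obs = C_KMS * 1e3 / (lam_obs_um * 1e-6)   # observing frequency, Hz
    return (1.0 + z) * greybody((1.0 + z) * nu_obs) / lum_dist_eds(z) ** 2

s1 = observed_flux(1.0)
for z in (1.0, 2.0, 5.0, 10.0):
    print(f"z = {z:4.1f}: S(850um) / S(z=1) = {observed_flux(z) / s1:.2f}")
```

With these parameters the 850 $\mu$m flux ratios stay within a factor
of a few of unity out to $z$ = 10, reproducing the behaviour noted by
Blain et al. (1993).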
Closely related to the FIR emission
in starburst galaxies is the radio continuum.
In normal galaxies (i.e., without a powerful AGN),
the centimeter radio luminosity is dominated
by diffuse synchrotron emission believed to be
produced by relativistic electrons accelerated
in supernovae remnants.
At shorter wavelengths free-free
emission from HII regions may contribute
substantially. Although the radio emission
is linked to active star-formation by
different physical mechanisms than that
of the FIR, there is a tight correlation
between the FIR and radio luminosity of
a starburst (Helou et al. 1985, Condon et al. 1991).
Radio observations are only sensitive to
recent starburst activity in a galaxy
(and the formation of its O and B
stellar populations) since the thermal and
synchrotron radiation dissipate on physical
time scales of $10^7-10^8$ yr. In
this sense the radio luminosity of a
starburst is a true measure of the
instantaneous rate of star-formation
in a galaxy, uncontaminated by older stellar
populations. Because galaxies and the inter-galactic
medium are transparent at centimeter wavelengths,
radio emission is a sensitive measure of
star-formation in distant galaxies.
The current deep Very
Large Array (VLA) radio surveys
with sensitivites of a few microjansky are
capable of detecting star-forming galaxies
to $z\sim$1.5 (Richards et al. 1998).
An 850 $\mu$m survey of the HDF with the SCUBA detector
on the James Clerk
Maxwell Telescope (JCMT) detected five sources
in a confusion limited image (Hughes et al. 1998)
with 15\arcsec ~resolution.
Tentative optical identifications
are all with putative starbursts at $z$ {$_ >\atop{^\sim}$} 1
and with star-formation rates (SFR)
of 400-1200{\ $M_{\odot}$} yr$^{-1}$ (we assume $h$ = 0.5, $q_o$ = 0.5).
In this letter we
compare our deep radio images of the Hubble
Deep Field with the SCUBA images and suggest alternate
optical counterparts to the sub-mm sources.
\section{Radio Observations}
A 5.4\arcmin ~(FWHM) field containing the HDF
has been imaged at 8.5 GHz with the VLA
with an rms sensitivity of 1.8 $\mu$Jy. The observing
technique and data reduction are discussed in Richards
et al. (1998). We collected 40 additional hours
of data at 8.5 GHz in June 1997. The new combined image
has a rms sensitivity near the field center of about
1.6 $\mu$Jy with a resolution of 3.5\arcsec .
During November 1996, we obtained
42 hours of VLA data in its A-array on the HDF
at 1.4 GHz. The subsequent 1.4
GHz image of the HDF has an effective resolution
of 1.9\arcsec ~ and an rms sensitivity
of 7.5 $\mu$Jy with a firm detection limit of
40 $\mu$Jy. A total of 381 radio sources at 1.4 GHz
have been catalogued within 20\arcmin ~of the HDF and
are reported on elsewhere (Richards 1998).
Within the Hubble Deep Field, there are
nine radio sources detected in complete samples
at 1.4 GHz and/or 8.5 GHz, while an additional
seven 8.5 GHz sources comprise a supplementary sample
as described by Richards et al. (1998). The 8.5 and
1.4 GHz images of the HDF are available at the
world-wide web site: www.cv.nrao.edu/$\sim$jkempner/vla-hdf/.
\section{Association of Sub-mm and Radio Sources}
We inspected our radio images to determine
if any of the SCUBA sources have possible radio
counterparts. Our first step was to align the
radio and SCUBA position frames.
The VLA coordinate grid
is within 0.1\arcsec ~of the J2000/FK5 reference frame
at both 8.5 and 1.4 GHz (Richards et al. 1998).
In order to tie the JCMT coordinate grid to
this system, we have assumed
that the radio and sub-mm sources are
associated with the same galaxy, as discussed
in the introduction.
The relative rms positional errors
for the sub-mm sources should be of order 1-2\arcsec ~
(based on the 15\arcsec ~SCUBA beam size
and the signal to noise ratio of individual
detections); however, the uncertain effects of
source confusion in the sub-mm images likely make
this an underestimate. In addition, the
absolute registration of the SCUBA image
is unknown a priori, although Hughes et al.
(1998) quote a value of 0.5\arcsec ,
while Smail et al. (1998) report typical values
of 3\arcsec ~for their SCUBA images.
Thus we chose to search for any radio object either
in the 1.4 GHz complete sample or in the 8.5 GHz
catalog of Richards et al. (1998) within a
10\arcsec ~error circle around each of the
sub-mm source positions.
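The search itself is a simple small-angle positional match. A minimal
sketch, with coordinates in degrees and an illustrative catalogue entry
(not the catalogued coordinates):

```python
import math

def separation_arcsec(ra1, dec1, ra2, dec2):
    """Small-angle separation of two positions (degrees), in arcseconds,
    with the usual cos(dec) compression of right ascension."""
    dra = (ra1 - ra2) * math.cos(math.radians(0.5 * (dec1 + dec2)))
    return math.hypot(dra, dec1 - dec2) * 3600.0

def match(submm_pos, radio_catalog, radius_arcsec=10.0):
    """Radio sources within the search radius of a sub-mm position."""
    ra, dec = submm_pos
    return [(name, separation_arcsec(ra, dec, r, d))
            for name, r, d in radio_catalog
            if separation_arcsec(ra, dec, r, d) < radius_arcsec]

# Illustrative positions only.
catalog = [("3651+1221", 189.20, 62.20)]
print(match((189.20, 62.20 + 5.0 / 3600.0), catalog))
```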
Possible radio associations
were apparent for HDF850.1 (3651+1226 and 3651+1221), HDF850.2
(3656+1207), and HDF850.4 (3649+1313). Two of these
radio sources were detected at both 1.4 and 8.5 GHz as
part of independent and complete samples (3651+1221 and
3649+1313). These two radio sources in particular
also have high-significance ISO 15 $\mu$m counterparts
(Aussel et al. 1998) indicating these systems
may contain substantial amounts of dust and hence
be luminous FIR galaxies. Based on the
association of 3651+1221 and 3649+1313
with HDF850.1 and HDF850.4, respectively, we suggest
a SCUBA coordinate frame shift of 4.8\arcsec
~west and 3.8\arcsec ~south.
The agreement, to within 1.2\arcsec , between the (VLA+ISO) and
shifted SCUBA positions for both HDF850.1 and HDF850.4 suggests that
the VLA/ISO/SCUBA source alignment is not accidental.
However, this large registration offset of $\sim$6\arcsec~
is much greater than the 0.5\arcsec ~registration accuracy
quoted in Hughes et al. (1998). Either the radio/ISO emission
is not associated with the same galaxies as the SCUBA sources,
or the SCUBA observations have a large registration error.
\section{Optical Identification of Radio/Sub-mm Sources}
Since the radio source positions are much
more accurate (0.1-0.2\arcsec ) than the sub-mm positions
and as the HDF contains a high surface density of optical
objects (typically 20 per SCUBA beam), we now use the radio
data to make the secure identifications with optical
counterparts.
Table 1 presents plausible radio counterparts
to the sub-mm sources of Hughes et al. (1998).
The first line gives the position of the SCUBA source
after translation to the radio coordinate frame
with plausible radio counterparts and their
suggested optical identification given in following
lines.
{\bf HDF850.1 :} We present in Figure 1, the 1.4 GHz
overlay of the Hubble Deep Field centered on the
shifted SCUBA position.
The precise 1.4 GHz radio position
suggests the optical identification is with the
faint low-surface brightness
object to the immediate north of
the brighter foreground disk system. Richards et al.
(1998) suggested that this optical feature might be
associated with the $z = 0.299$ galaxy 3-659.1.
However, we note the presence of a separate low
surface brightness galaxy at $z$ = 1.72
(3-633.1; Fernandez-Soto et al. 1998) located approximately
2\arcsec ~to the northwest.
If radio source 3651+1221 is the most obscured part of
a larger galaxy 3-633.1 at $z$ = 1.72 (Fernandez-Soto et al.
1998),
the implied radio luminosity (L$_{1.4}$ = 10$^{25}$ W/Hz)
is substantially higher than that of the most extreme
local starbursts (e.g., ARP 220) and suggests that
this object may contain an AGN.
{\bf HDF850.2 :} Based on the optical/radio positional
coincidence, this 3.5 $\sigma$ radio source
has a 90\% reliability of being associated
with an I=23.7 distorted galaxy (Barger et al. 1998),
according to
the analysis of Richards et al. (1998).
We identify the 850 $\mu$m detection with this
galaxy of unknown redshift. The $UGR$ band
photometry (Hogg 1997) on this galaxy suggests that it is likely
at $z < 3$. Figure 2 shows the 1.4 GHz overlay of the optical
field.
{\bf HDF850.3 :} The radio data does little to
clarify the identification of the sub-mm source
in this field. Figure 3 shows a 4 $\sigma$ radio
source located
4\arcsec ~ from the position of the 850 $\mu$m
detection. However, before the shift the
SCUBA source position is in good agreement with
the position of the bright disk system 1-34.0,
which is also included in the supplemental ISO catalog of
Aussel et al. (1998). This galaxy also has a weak
3$\sigma$ radio detection. The 0.485 redshift of
this object implies a star-formation rate of 80 {\ $M_{\odot}$} yr$^{-1}$
from the radio luminosity (Salpeter initial mass function
integrated over 0.1-100{\ $M_{\odot}$} ), although the presence of an
AGN cannot be ruled out.
At present the data cannot discriminate between these
two possible radio/sub-mm associations.
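To illustrate how a star-formation rate of this order follows from a weak radio detection, the sketch below converts an assumed $\sim$23 $\mu$Jy flux density at $z = 0.485$ into a 1.4 GHz luminosity and then an SFR. The flux density, cosmology ($q_0 = 1/2$, $H_0$ = 65), spectral index, and calibration constant are illustrative assumptions rather than the authors' adopted values, so the result agrees with the quoted 80 {\ $M_{\odot}$} yr$^{-1}$ only to within a factor of a few.

```python
import math

# --- illustrative assumptions (not the authors' adopted values) ---
S_nu = 23e-6 * 1e-26      # ~3 sigma flux density, 23 uJy, in W m^-2 Hz^-1
z = 0.485                 # spectroscopic redshift of 1-34.0 (from the text)
H0, c = 65.0, 2.998e5     # km/s/Mpc and km/s; q0 = 1/2 assumed below
alpha = 0.8               # assumed synchrotron spectral index, S ~ nu^-alpha

# Mattig luminosity distance for q0 = 1/2, converted to metres
D_L = (2 * c / H0) * (1 + z - math.sqrt(1 + z)) * 3.086e22

# rest-frame 1.4 GHz luminosity, with a power-law K-correction
L_14 = 4 * math.pi * D_L**2 * S_nu * (1 + z) ** (alpha - 1)

# assumed calibration: SFR(0.1-100 Msun) ~ L_1.4 / 7.3e20 W/Hz
sfr = L_14 / 7.3e20       # ~20 Msun/yr with these particular assumptions
```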
{\bf HDF850.4 :} The radio source 3649+1313 is
associated with the spiral galaxy 2-264.1
at a redshift of 0.475. ISO sources from the
complete catalogs of Aussel et al. (1998)
and Rowan-Robinson et al. (1997) have been
associated with this radio source.
This galaxy is
likely part of the larger structure at 0.475
which contains 16 galaxies with spectroscopic
redshifts (Cohen et al. 1996). At least one of
these galaxies (2-264.2) lies at a small
projected distance (less than 30 kpc) and
suggests dynamic interactions may be triggering
the radio/sub-mm activity. Although the SCUBA
detection may be a blend of emission from
several galaxies in this crowded field (see Figure 4), the radio
emission is clearly confined to the central galaxy
2-264.1. Richards et al. (1998) estimate a
SFR = 150{\ $M_{\odot}$} yr$^{-1}$.
Can the SCUBA source HDF850.4
plausibly be associated instead
with the HDF optical galaxy 2-399.0 as claimed by Hughes
{\em et al.}? If we take those authors' estimate of the FIR luminosity
log$_{10}$L$_{60 \mu m}$ = 12.47 {$L_{\odot}$} for this galaxy, the
FIR-radio relation (Condon et al. 1991) predicts an observed 1.4 GHz
flux density of about 300 $\mu$Jy, clearly in excess
of our upper limit of 23 $\mu$Jy (3 $\sigma$).
We find the identification of SCUBA source HDF850.4
with HDF galaxy 2-399 to be dubious and instead
identify HDF850.4 with 2-264.1.
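The luminosity form of the FIR-radio correlation makes this argument quantitative. In the sketch below the $q$ value, spectral index, and cosmology are assumptions chosen for illustration; the point, as in the text, is that the predicted flux density exceeds the 23 $\mu$Jy limit for any plausible redshift of the optical galaxy (the precise 300 $\mu$Jy figure depends on the redshift and calibration adopted by the authors).

```python
import math

L_sun = 3.83e26                    # W
L_fir = 10**12.47 * L_sun          # FIR luminosity quoted for 2-399.0, in W

# --- assumptions: local FIR-radio q value, cosmology, spectral index ---
q = 2.3                            # log10[(L_FIR / 3.75e12 Hz) / L_1.4GHz]
L_14 = (L_fir / 3.75e12) / 10**q   # implied 1.4 GHz luminosity, W/Hz
H0, c, alpha = 65.0, 2.998e5, 0.8

def S_14(z):
    """Predicted observed 1.4 GHz flux density (Jy) at redshift z (q0 = 1/2)."""
    D_L = (2 * c / H0) * (1 + z - math.sqrt(1 + z)) * 3.086e22   # metres
    return L_14 * (1 + z) ** (1 - alpha) / (4 * math.pi * D_L**2) / 1e-26

# predicted fluxes at trial redshifts, all well above the 23 uJy upper limit
fluxes = {z: S_14(z) for z in (0.5, 1.0, 2.0, 3.0)}
```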
{\bf HDF850.5 :} There is no 1.4 GHz radio emission apparent to
the 2 $\sigma$ limit of 15 $\mu$Jy in this
field. Optically there are only two plausible
identifications for the 850 $\mu$m source, HDF galaxies 2-395.0
and 2-349.0 (see Figure 5). The redshift of 2-395.0 is 0.72 (Fernandez-Soto
et al. 1998).
The radio flux
limit on this galaxy can exclude a SFR $\geq$ 60 {\ $M_{\odot}$} /yr.
The other possible identification (2-349.0) is at an
unknown redshift.
The non-detection of this sub-mm
source at radio wavelengths and the lack of a secure
optical identification, coupled
with the fact that this is the weakest source in the
Hughes et al. (1998) catalog, suggest that
this source may be spurious. We also note that
this sub-mm source is located only 12$\arcsec$ ~from
sub-mm detection HDF850.4
and hence may suffer from confusion.
\section{Conclusions}
Of the five 850 $\mu$m sources in the HDF,
two are solidly detected at radio wavelengths,
while two are probable detections. Two of these
identifications are possibly with $z \sim$ 0.5
starbursts (HDF850.3 and HDF850.4). The other
two detections (HDF850.1 and HDF850.2) must have
redshifts less than 1.5 or be contaminated by
AGN based on radio luminosity arguments.
This radio analysis suggests that the
claim, based on sub-mm observations alone,
that the optical surveys underestimate the
$z>2$ global star-formation rate is premature.
On the other hand, the $z < 1$ star-formation
history may have been underestimated if a significant
fraction of the sub-mm population lies at relatively
low redshift.
In the absence of high resolution
sub-mm imaging capability, it is necessary
to rely on plausible radio counterparts
of sub-mm sources in order to provide the
astrometric accuracy needed to make the
proper optical identifications.
Only complete redshift samples of the sub-mm
population coupled with diagnostic spectroscopy
and high resolution radio data will allow
for calculation of the epoch dependent
sub-mm luminosity function and its implication
for the star-formation history of the universe.
\acknowledgements
We thank our collaborators K. Kellermann, E. Fomalont,
B. Partridge, R. Windhorst, and D. Haarsma
for a critical reading of an earlier version of this work.
We appreciate useful conversations with J. Condon
and A. Barger.
Support for part of
this work was provided by NASA through grant AR-6337.*-96A from the
Space Telescope Science Institute, which is operated by the Association
of Universities for Research in Astronomy, Inc., under NASA contract
NAS5-2655, and by NSF grant AST 93-20049.
\newpage
\noindent\parshape 2 0.0 truein 06.5 truein 0.4 truein 06.1 truein Aussel, H. et al. A\& A, in press, 1998
\noindent\parshape 2 0.0 truein 06.5 truein 0.4 truein 06.1 truein Barger et al. 1998, AJ, submitted
\noindent\parshape 2 0.0 truein 06.5 truein 0.4 truein 06.1 truein Blain, A. W. \& Longair, M. S. 1993, MNRAS, 264, 509
\noindent\parshape 2 0.0 truein 06.5 truein 0.4 truein 06.1 truein Calzetti, D. 1997, AJ, 113
\noindent\parshape 2 0.0 truein 06.5 truein 0.4 truein 06.1 truein Cohen, J. G. et al. 1996, ApJL, 471, 5
\noindent\parshape 2 0.0 truein 06.5 truein 0.4 truein 06.1 truein Condon, J. J., Anderson, M. L. \& Helou, G. 1991,
ApJ, 376, 95
\noindent\parshape 2 0.0 truein 06.5 truein 0.4 truein 06.1 truein Connolly, A. J., Szalay, A. S., Dickinson, M., SubbaRao, M. V.,
\& Brunner, R. J. 1997, ApJL, 486, 11
\noindent\parshape 2 0.0 truein 06.5 truein 0.4 truein 06.1 truein Fernandez-Soto, A., Lanzetta, K. M., \& Yahil, A. 1998,
AJ, submitted
\noindent\parshape 2 0.0 truein 06.5 truein 0.4 truein 06.1 truein Helou, G., Soifer, B. T.,\& Rowan-Robinson, M. 1985,
ApJL, 298, 11
\noindent\parshape 2 0.0 truein 06.5 truein 0.4 truein 06.1 truein Hogg, D. 1998, Ph.D. thesis (Caltech)
\noindent\parshape 2 0.0 truein 06.5 truein 0.4 truein 06.1 truein Hughes et al. 1998, Nature, 393, 241
\noindent\parshape 2 0.0 truein 06.5 truein 0.4 truein 06.1 truein Lilly, S. J., Le Fevre,O., Hammer, F., Crampton, D.
1996, ApJL, 460, 1
\noindent\parshape 2 0.0 truein 06.5 truein 0.4 truein 06.1 truein Lowenthal, J., et al. 1997, ApJ, 481, 673.
\noindent\parshape 2 0.0 truein 06.5 truein 0.4 truein 06.1 truein Madau, P. et al. 1996, MNRAS, 283, 1388
\noindent\parshape 2 0.0 truein 06.5 truein 0.4 truein 06.1 truein Meurer, G. R., Heckman, T., Lehnert, M. D., Leitherer, C.
\& Lowenthal, J. 1997, AJ, 114, 54
\noindent\parshape 2 0.0 truein 06.5 truein 0.4 truein 06.1 truein Richards, E. A. 1998, in preparation
\noindent\parshape 2 0.0 truein 06.5 truein 0.4 truein 06.1 truein Richards, E. A., Kellermann, K. I., Fomalont, E. B., Windhorst, R. A.
\& Partridge, R. B. 1998, AJ, 116, 1039
\noindent\parshape 2 0.0 truein 06.5 truein 0.4 truein 06.1 truein Rowan-Robinson, M. et al. 1997, MNRAS, 289, 490
\noindent\parshape 2 0.0 truein 06.5 truein 0.4 truein 06.1 truein Williams et al. 1996, AJ, 112, 1335
\newpage
\section*{Figure Captions}
\figcaption{The greyscale shows a 20\arcsec $\times$ 20\arcsec ~HDF
I-band image (Williams et al. 1996) containing the
SCUBA detection HDF850.1. The contours correspond to
1.4 GHz emission at the -2, 2, 4 and 6 $\sigma$ level ($\sigma$ = 7.5 $\mu$Jy).
The three sigma position error circle for HDF850.1 is shown
after shifting to the VLA coordinate frame. The original
position of HDF850.1 taken from Hughes et al. (1998) is
denoted by the diamond. The ISO detection is marked with
a cross with three sigma position errors (Aussel et al. 1998).
The radio emission is clearly confined to the
optical feature north of the bright spiral.
Radio source 3651+1221 may be the most obscured part of
a larger galaxy 3-633.1 at $z$ = 1.72 (Fernandez-Soto et al.
1998).}
\figcaption{The greyscale shows a 20\arcsec $\times$ 20\arcsec ~
ground-based I-band image taken from Barger et al. (1998)
centered on the position of SCUBA source HDF850.2. The 1.4 GHz radio
contours are drawn at -2, 2, 3 and 5 $\sigma$.
The symbols are the
same as for Figure 1. We identify HDF850.2 with the 3.5 $\sigma$
radio source 3657+1159.}
\figcaption{The greyscale corresponds to optical I-band
emission in the field of SCUBA source HDF850.3 (20\arcsec ~on a side)
as taken from the HDF. Radio contours at 1.4 GHz are drawn at the
-2, 2 and 4 $\sigma$ level. Intriguingly, there is a 4.2 $\sigma$
radio `source' located 4\arcsec ~from HDF850.3. The probability
of this being a chance coincidence is 20\% based on the surface
density of 4$\sigma$ radio sources in the field. If
HDF850.3 is associated with the radio source then this is
a blank field object to I$_{AB}$ = 27 (object lies in the less
sensitive PC). However, an ISO source from the supplemental
catalog of Aussel et al. (1998) is also in the field
and associated with the bright disk galaxy 1-34.0 (Williams
et al. 1996). It is difficult to discriminate between these
two possible sub-mm identifications with the present data.
The symbols are the
same as for Figure 1.
}
\figcaption{Radio 1.4 GHz contours drawn at the
-2, 2, 4 and 6 $\sigma$ level are overlaid on the
HDF I-band image centered on the position of
HDF850.4 (20\arcsec ~on a side). A 15 $\mu$m detection
from the complete catalog of
Aussel et al. (1998) has been associated with this
radio source and suggests likely starburst
activity in the disk galaxy. The symbols are the
same as for Figure 1.
}
\figcaption{Radio 1.4 GHz contours drawn at the
-2 and 2 $\sigma$ level are overlaid on the
HDF I-band image in the field of HDF850.5 (20\arcsec
~on a side). This is the one sub-mm source in the HDF
which has no plausible radio counterpart. The symbols
are the same as Figure 1.}
\end{document}
\def\partial{\partial}
\def\frac{1}{2}{\frac{1}{2}}
\defA^{+}_0{A^{+}_0}
\def\psi_+{\psi_+}
\def\psi_-{\psi_-}
\def\psi^{\dagger}_+{\psi^{\dagger}_+}
\def\psi^{\dagger}_-{\psi^{\dagger}_-}
\def\overline{\psi}{\overline{\psi}}
\def\psi^{\dag}{\psi^{\dag}}
\def\chi^{\dag}{\chi^{\dag}}
\def\sla#1{#1\!\!\!/}
\defx^{+}{x^{+}}
\defx^{-}{x^{-}}
\defy^{-}{y^{-}}
\newcommand{\newcommand}{\newcommand}
\newcommand{\intgl}{\int\limits_{-L}^{+L}\!{{dx^-}\over\!2}}
\newcommand{\intgly}{\int\limits_{-L}^{+L}\!{{dy^-}\over\!2}}
\newcommand{\zmint}{\int\limits_{-L}^{+L}\!{{dx^-}\over{\!2L}}}
\def\begin{equation}{\begin{equation}}
\def\end{equation}{\end{equation}}
\def\begin{eqnarray}{\begin{eqnarray}}
\def\end{eqnarray}{\end{eqnarray}}
\begin{document}
\title{ Large Gauge Transformations and the Light-Front Vacuum Structure}
\medskip
\author{{\sl L$\!\!$'ubom\'{\i}r Martinovi\v c} \\
Institute of Physics, Slovak Academy of Sciences \\
D\'ubravsk\'a cesta 9, 842 28 Bratislava, Slovakia \thanks{permanent address}\\
and\\
International Institute of Theoretical and Applied Physics\\
Iowa State University, Ames, Iowa 50011, USA}
\date{}
\maketitle
\begin{abstract}
A residual gauge symmetry, exhibited by light-front gauge theories
quantized in a finite volume, is analyzed at the quantum level. Unitary
operators, which implement the symmetry, transform the trivial Fock vacuum
into an infinite set of degenerate coherent-state vacua. A fermionic component
of the vacuum emerges naturally without the need to introduce a Dirac sea. The
vacuum degeneracy along with the derivation of the theta-vacuum is discussed
within the massive Schwinger model. A possible generalization
of the approach to more realistic gauge field theories is suggested.
\end{abstract}
\section{Introduction}
Hamiltonian quantum field theory formulated in the light front (space-time and
field) variables \cite{Dir,Sus,Leutw,LKS,Rohr} has often been considered as a
conceptually very attractive theoretical scheme. Vacuum aspects of the dynamics
seem to simplify remarkably (Fock vacuum is to a very good approximation an
eigenstate of the full Hamiltonian) at the same time causing problems with
understanding chiral properties, vacuum degeneracy and symmetry breaking
phenomena. For example, it is not clear how one could reproduce the axial
anomaly and the chiral condensate \cite{Schw,LSw} in the light-front Schwinger
model. These and related difficulties \cite{Dave} have been usually explained
by the ``peculiarities'' of the quantization on the characteristic
surface $x^+=0$ \cite{McC,Rgf}.
In the present work, one of the so far missing components of the light-front
(LF) gauge field theory, namely the non-trivial vacuum structure, is found to
be directly related to a residual ``large" gauge symmetry present in the
finite-box formulation \cite{Mant} of the theory. The general idea is of
course not new. Gauge transformations with non-trivial topological properties
have been shown to be responsible for the vacuum degeneracy, e.g., in \cite{tH,CDG,JR,
RotheS,Stroc,IsoM,Adam}. Their role has been studied also in the
light-front literature \cite{Franke,Rgb,Rgf,KPP,Alex}.
The novel feature in our approach is the quantum-mechanical implementation of
large gauge transformations by unitary operators in the context of the
``trivial'' non-perturbative Fock vacuum of the LF field theory. The unitary
operators act on the fields as well as on states in Hilbert space. As a
consequence, the ``trivial" LF Fock vacuum transforms into an infinite set of
non-trivial vacua. They are basically coherent states of both the dynamical
gauge-field zero mode and an effective boson field bilinear in dynamical fermi
field operators. The multiple vacua can be superimposed to form a unique gauge
invariant vacuum. This will be shown with the example of the (massive)
Schwinger model, which is known to exhibit in a tractable form many of
non-perturbative features expected in QCD. We will argue however that the
mechanism could in principle work also for more complicated gauge theories.
\section{LF Quantization of the Massive Schwinger Model}
Due to specific light-front constraints, it is necessary to adopt the
Dirac-Bergmann (DB) \cite{DB} or a similar method to properly quantize
the LF massive Schwinger model \cite{Hara,LMlong}. Here we quote only those
results of the DB analysis which are relevant for our approach to the vacuum
problem.
In terms of the LF variables, the Lagrangian density of the two-dimensional
spinor field $\Psi$ of mass $m$ interacting with the gauge field $A^\mu$ takes
the form
\begin{eqnarray}
{\cal L}_{LF} = i\psi^{\dagger}_+\stackrel {\leftrightarrow} {\partial_+}\psi_+ + i\psi^{\dagger}_-\stackrel {\leftrightarrow} {\partial_-}\psi_- +
\frac{1}{2}(\partial_+ A^{+} - \partial_- A^{-})^2 - \nonumber \\- m(\psi^{\dagger}_+\psi_- +
\psi^{\dagger}_-\psi_+) - {e \over 2}j^{+}A^{-} - {e \over 2}j^{-}A^{+} .
\label{lflagr}
\end{eqnarray}
We choose $x^+ = x^0 + x^1$ and $x^{-} = x^0 - x^1$ as the LF time and space
variable, correspondingly. The dynamical ($\psi_+$) and dependent ($\psi_-$)
projections of the fermi field are defined as $\psi_{\pm} = \Lambda_{\pm}\Psi$,
where $\Lambda_{\pm}=\frac{1}{2}\gamma^0\gamma^{\pm}, \gamma^{\pm}=\gamma^0 \pm \gamma^1, \gamma^0 =\sigma^1,
\gamma^1 = i\sigma^2$ and $\sigma^1, \sigma^2$ are the Pauli matrices. At the quantum
level, the vector current will be represented by normal-ordered product of the
fermi operators, $j^{\pm}=2:\psi^{\dag}_{\pm}\psi_{\pm}:$.
A suitable finite-interval formulation of the model is achieved by imposing
the restriction $-L \le x^- \le L$ and by prescribing antiperiodic boundary
conditions for the fermion field and periodic ones for the gauge field. The
latter imply a decomposition of the gauge field into the zero-mode (ZM) part
$A_0^{\mu}$ and the part $A^{\mu}_{n}$ containing only normal Fourier modes.
We will work in the usual gauge $A^+_n=0, A^-_0=0$, which completely
eliminates gauge freedom with respect to {\it small} gauge transformations.
In a finite volume with periodic gauge field, the ZM $A^{+}_0$ becomes a physical
variable \cite{Franke,Mant,HH,Lenz91,Rgb} since it cannot be gauged away.
In quantum theory, it satisfies the commutation relation
\begin{equation}
\left[A^{+}_0(x^{+}),\Pi_{A^{+}_0}(x^{+})\right] = {i\over{L}},
\label{zmcr}
\end{equation}
where $\Pi_{A^{+}_0}=\partial_+A^{+}_0$ is the momentum conjugate to $A^{+}_0$. The DB
procedure yields the anticommutator for the independent fermi field component
\begin{equation}
\{\psi_+(x^{-},x^{+}),\psi^{\dagger}_+(y^{-},x^{+})\} = \frac{1}{2} \Lambda^+ \delta_{a}(x^{-}-y^{-})
\label{acr}
\end{equation}
with the antiperiodic delta function $\delta_a(x^{-}-y^{-})$ \cite{AC} being
regularized by a LF momentum cutoff $N$. The fermi-field Fock operators are
defined by
\begin{equation}
\psi_+(x^{-}) = {1 \over{\sqrt{2L}}} \left(\matrix{0 \cr 1}\right)
\!\sum_{n=\frac{1}{2}}^{N} \! \left(
b_ne^{-{i \over 2}k^+_nx^{-}} + d_n^{\dagger}e^{{i \over 2}k^+_nx^{-}}\right),
\label{fermexp}
\end{equation}
\begin{equation}
\{b_n,b^{\dagger}_{n^{\prime}}\} = \{d_n,d^{\dagger}_{n^{\prime}}\} =
\delta_{n,n^{\prime}},\;\;n=\frac{1}{2},{3 \over 2},\dots,\;k_n^+ = {2\pi \over L}n.
\end{equation}
While the LF momentum operator $P^+$ depends only on $\psi_+$, the gauge
invariant (see below) LF Hamiltonian of the model is expressed
in terms of both unconstrained variables $\psi_+$ and $A^{+}_0$ as
\begin{eqnarray}
P^- = L\Pi^2_{A^{+}_0} - {e^2 \over 4}\intgl\intgly j^+(x^{-})
{\cal G}_2
(x^{-} - y^{-}) j^+(y^{-}) + \nonumber \\
+ m^2\intgl\intgly\left[\psi^{\dag}(x^{-}){\cal G}_a(x^{-} - y^{-};A^{+}_0)\psi(y^{-})
+ h.c. \right] .
\label{lfham}
\end{eqnarray}
The Green's functions
\begin{equation}
{\cal G}_2(x^{-} - y^{-}) = \!-{4\over{L}}\sum_{m=1}^{M}
{1 \over{{p^+_m}^2}}\left(e^{-{i\over {2}}p^+_m(x^{-} - y^{-})} +
e^{{i\over{2}}p^+_m(x^{-} - y^{-})}
\right),\;p_m^+ = {2\pi \over L}m,
\end{equation}
\begin{equation}
{\cal G}_a(x^{-}\!-y^{-};A^{+}_0) = {1 \over {4i}}\left[\epsilon(x^{-} \!- y^{-}) +
i \tan({{eL}\over{2}}A^{+}_0)\right]\!\exp{\left(-{ie\over 2}(x^{-} \!- y^{-})A^{+}_0
\right)}
\end{equation}
have been used to eliminate the constrained variables $A^{-}_n$ and $\psi_-$,
respectively, with $\epsilon(x^{-})$ being twice the sign function,
$\partial_- \epsilon(x^{-}) = 2\delta_a(x^{-})$.
The final consequence of the DB analysis is the condition (a first-class
constraint) of electric neutrality of the physical states,
$Q\vert phys \rangle = 0.$
\section{Large Gauge Transformations and Theta-Vacuum}
It is well known that gauge theories quantized in a finite volume exhibit
an extra symmetry not explicitly present in the continuum approach \cite{Mant,
Lenz91,Rgb,IsoM,Lenzqm,Lenzax}. In the LF formulation, the corresponding gauge
function is linear in $x^{-}$ with a coefficient, given by a specific
combination of constants. These simple properties follow from the requirement
to maintain boundary conditions for the gauge and matter field, respectively.
The above symmetry is the finite-box analogue \cite{Franke,Rgb} of topological
transformations familiar from the continuum formulation. Note that in the LF
theory they are restricted to the $+$ gauge field component even in $3+1$
dimensions. This simplifies their implementation at the quantum level.
For the considered $U(1)$ theory, the corresponding gauge function has the form
$\Lambda_\nu = {\pi\over L}\nu x^{-}$, is non-vanishing at $\pm L$ and defines a
winding number $\nu$:
\begin{equation}
\Lambda_\nu(L) - \Lambda_\nu(-L) = 2\pi\nu,\;\;\nu \in Z.
\end{equation}
Thus, the residual gauge symmetry of the Hamiltonian (\ref{lfham}) is
\cite{Hara}
\begin{equation}
A^{+}_0 \rightarrow A^{+}_0 - {2\pi\over{eL}}\nu,\;\;
\psi_+(x^{-}) \rightarrow e^{i{\pi \over L}\nu x^{-}}\psi_+
(x^{-}).
\label{zmshift}
\end{equation}
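Both statements are elementary to check. Assuming the conventions $\psi_+ \rightarrow e^{i\Lambda}\psi_+$ and $A^\mu \rightarrow A^\mu - \frac{1}{e}\partial^\mu\Lambda$ with $\partial^+ = 2\partial_-$ (a convention choice made explicit here for the sketch), the gauge function $\Lambda_\nu$ produces the shift (\ref{zmshift}) and preserves the antiperiodicity of $\psi_+$ precisely for integer $\nu$:

```latex
A^+ \;\rightarrow\; A^+ - \frac{1}{e}\,\partial^+ \Lambda_\nu
  \;=\; A^+ - \frac{2}{e}\,\partial_-\!\left(\frac{\pi}{L}\,\nu\, x^-\right)
  \;=\; A^+ - \frac{2\pi}{eL}\,\nu \, ,
\qquad
e^{i\frac{\pi}{L}\nu\,(x^- + 2L)} \;=\; e^{2\pi i\nu}\,
  e^{i\frac{\pi}{L}\nu\, x^-} \;=\; e^{i\frac{\pi}{L}\nu\, x^-}
\quad (\nu \in Z) \, .
```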
Let us discuss the ZM part of the symmetry first. At the quantum
level, it is convenient to work with the rescaled ZM operators \cite{AD}
$\hat{\zeta}$ and $\hat{\pi}_0$ :
\begin{equation}
A^{+}_0 = {2\pi \over{eL}}\hat{\zeta},\;\;\Pi_{A^{+}_0} = {e \over{2\pi}}\hat{\pi}_0,\;\;
\left[\hat{\zeta},\hat{\pi}_0\right] = i .
\label{qmcr}
\end{equation}
Note that the box length dropped out from the ZM commutator. The shift
transformation of $A^{+}_0$ is for $\nu=1$ implemented by the
unitary operator $\hat{Z}_1$:
\begin{equation}
\hat{\zeta} \rightarrow \hat{Z}_1 \hat{\zeta} \hat{Z}^{\dagger}_1 = \hat{\zeta} - 1,
\;\; \hat{Z}_1 = \exp(-i\hat{\pi}_0) .
\end{equation}
The transformation of the ZM operator $\hat{\zeta}$ is accompanied by
the corresponding transformation of the vacuum state (see e.g. \cite{IZ} for
a related example), which we define by $a_0\vert 0 \rangle = 0$.
$a_0(a_0^\dagger)$ is the usual annihilation (creation) operator of a boson
quantum:
\begin{equation}
a_0 = {1 \over{\sqrt{2}}}\left(\hat{\zeta} + i\hat{\pi}_0\right),\;\;
a_0^{\dagger} = {1 \over{\sqrt{2}}}\left(\hat{\zeta} - i\hat{\pi}_0\right),\;\;
\left[a_0,a^{\dagger}_0\right] = 1.
\end{equation}
$\hat{Z}_1$ is essentially the displacement operator $\hat{D}(\alpha)$
of the theory of coherent states \cite{Glau,Zhang}. In our case, the
complex parameter $\alpha$ is replaced by the integer $\nu$ and the operator
$\hat{Z}(\nu) = (\hat{Z}_1)^\nu$ takes the form
\begin{equation}
\hat{Z}(\nu) = \exp\left[{\nu\over{\sqrt{2}}}\left(a^{\dagger}_0 - a_0\right)
\right]
\end{equation}
with the properties
\begin{equation}
\hat{Z}(\nu)a_0\hat{Z}^{\dagger}(\nu) = a_0 - {\nu\over{\sqrt{2}}},\;\;\;
a_0\vert \nu;z \rangle = {\nu\over{\sqrt{2}}}\vert \nu;z \rangle,\;\;\;\vert
\nu;z \rangle \equiv \hat{Z}(\nu)\vert 0 \rangle .
\label{FCS}
\end{equation}
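These operator relations are easy to verify numerically in a truncated Fock basis. The following sketch (the truncation size and the libraries used are incidental choices) checks both the shift $\hat{Z}(\nu)\hat\zeta\hat{Z}^{\dagger}(\nu) = \hat\zeta - \nu$ and the coherent-state eigenvalue property of Eq.(\ref{FCS}):

```python
import numpy as np
from scipy.linalg import expm

N = 60                                        # Fock-space truncation
a = np.diag(np.sqrt(np.arange(1.0, N)), k=1)  # annihilation operator a_0
adag = a.T                                    # creation operator (real matrices)

nu = 1
Z = expm((nu / np.sqrt(2)) * (adag - a))      # shift operator Z(nu)

# Z zeta Z^dagger = zeta - nu, checked on the low-lying block away from the cutoff
zeta = (a + adag) / np.sqrt(2)
shift_err = np.abs(Z @ zeta @ Z.T - (zeta - nu * np.eye(N)))[:N // 2, :N // 2].max()

# the displaced vacuum is an a_0 eigenstate with eigenvalue nu/sqrt(2)
vac = np.zeros(N); vac[0] = 1.0
coh = Z @ vac
eig_err = np.abs(a @ coh - (nu / np.sqrt(2)) * coh)[:N // 2].max()
```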
The transformed (displaced) vacuum expressed in terms of the harmonic
oscillator Fock states $\vert n \rangle$ and the corresponding amplitudes
$C_n$ \cite{Glau,Zhang}
\begin{equation}
\vert \nu;z \rangle = \sum_{n=0}^{\infty}C_n(\nu)\vert n \rangle
\end{equation}
can be understood as describing a condensate of zero-mode gauge bosons.
Alternatively, one may consider the problem in quantum mechanical coordinate
representation, where $\hat{\pi}_0 = -i{d\over{d\zeta}}$ and the vacuum
wavefunction $\psi_0(\zeta)$ transforms as
\begin{equation}
\psi_0(\zeta) = \pi^{-{1\over{4}}}\exp(-{\frac{1}{2}}\zeta^2)
\rightarrow \psi_{\nu}(\zeta) = \exp(-\nu {d\over{d\zeta}})
\psi_0(\zeta)\ = \pi^{-{1\over{4}}}\exp(-{\frac{1}{2}}(\zeta-\nu)^2).
\end{equation}
The ZM kinetic energy term of the LF Hamiltonian (\ref{lfham}) is given by
\begin{equation}
P^-_{0} = -\frac{1}{2}{{e^2L}\over{2\pi^2}} {d^2\over{d\zeta^2}}.
\end{equation}
Usually, a Schr\"odinger equation with the above $P_0^-$ (or its equal-time
counterpart) is invoked to find
the vacuum energy and the corresponding wave functions subject to a
periodicity condition at the boundaries of the fundamental domain $0 \le \zeta
\le 1$ \cite{Mant,KPP,ADQED}. Here we are led by simple symmetry
arguments to consider instead of the lowest-energy eigenfunction of
$P^-_{0}\sim \hat{\pi}^2_0$ the eigenstates of $a_0$ with a non-vanishing
eigenvalue $\nu$ -- the ZM coherent states. The corresponding LF energy
\begin{equation}
E_0 = \int\limits_{-\infty}^{+\infty}d\zeta\psi_{\nu}(\zeta)P^-_{0}\psi_{\nu}
(\zeta) = {e^2L\over{8\pi^2}} \;,
\label{zmvacen}
\end{equation}
is independent of $\nu$, thus the infinite set of vacuum states
$\psi_\nu(\zeta),\;\nu \in Z$, is degenerate in the LF energy. In addition, they
are not invariant under $\hat{Z}_1$,
\begin{equation}
\hat{Z}_1\psi_{\nu}(\zeta) = \psi_{\nu + 1}(\zeta)
\label{psishift}
\end{equation}
and those $\psi_\nu(\zeta)$ which differ by unity in the value of $\nu$ have a
non-zero overlap. The latter property resembles tunnelling due to instantons
in the usual formulation. Note however that in our picture no appeal was
made to minima of the {\it classical} action. The lowest-energy states have
been obtained within the quantum mechanical treatment of the residual
symmetry, which consists of c-number shifts of an operator.
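The $\nu$-independence of the energy (\ref{zmvacen}) can also be confirmed by direct quadrature, using $\psi_\nu'' = \left((\zeta-\nu)^2 - 1\right)\psi_\nu$ (a numerical sketch in the illustrative units $e = L = 1$):

```python
import numpy as np

e, L = 1.0, 1.0                    # illustrative units
c = e**2 * L / (2 * np.pi**2)      # P^-_0 = -(c/2) d^2/dzeta^2

def E0(nu, ngrid=4001, span=12.0):
    """<psi_nu| P^-_0 |psi_nu> evaluated on a grid centred at zeta = nu."""
    z, dz = np.linspace(nu - span, nu + span, ngrid, retstep=True)
    psi = np.pi**-0.25 * np.exp(-0.5 * (z - nu)**2)
    psi2 = ((z - nu)**2 - 1.0) * psi            # second derivative of psi_nu
    return np.sum(psi * (-0.5 * c) * psi2) * dz

energies = [E0(nu) for nu in (0, 1, 5)]         # all equal e^2 L / (8 pi^2)
```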
Implementation of large gauge transformations for the dynamical fermion
field $\psi_+(x^{-})$ is based on the commutator
\begin{equation}
\left[\psi_+(x^{-}),j^+(y^{-})\right] = \psi_+(y^{-})\delta_a(x^{-} - y^{-})
\label{psijcr}
\end{equation}
which follows from the basic anticommutation relation (\ref{acr}). The unitary
operator $\hat{F}(\nu)=(\hat{F}_1)^{\nu}$ that implements the phase
transformation (\ref{zmshift}) is
\begin{equation}
\psi_+(x^{-}) \rightarrow \hat{F}(\nu)\psi_+(x^{-})\hat{F}^{\dagger}(\nu),\;\;\;
\hat{F}(\nu) = \exp{\left[-i{\pi \over L}\nu\intgl x^{-} j^+(x^{-})\right]}.
\end{equation}
The Hilbert space transforms correspondingly. But since physical states are
states with zero total charge and the pairs of operators $b_k^{\dagger}d_l^
{\dagger}$, which create these states, are gauge invariant, it is only the
vacuum state that transforms:
\begin{equation}
\vert 0 \rangle \rightarrow \hat{F}(\nu) \vert 0 \rangle = \exp{\left[
-\nu\sum_{m=1}^M
{(-1)^m \over m}(A^{\dagger}_m - A_m)\right]} \vert 0 \rangle \equiv \vert \nu;
f \rangle .
\label{fermivac}
\end{equation}
The boson Fock operators $A_m, A^{\dagger}_m$ \cite{EP}
\begin{eqnarray}
A_m & = & \sum_{k=\frac{1}{2}}^{m-\frac{1}{2}}d_{m-k}b_k + \sum_{k=\frac{1}{2}}^N
\left[b^{\dagger}_k b_{m+k} - d^{\dagger}_k d_{m+k}\right],
\nonumber \\
A^{\dagger}_m & = & \sum_{k=\frac{1}{2}}^{m-\frac{1}{2}}b^{\dagger}_{k}d^
{\dagger}_{m-k} + \sum_{k=\frac{1}{2}}^N
\left[b^{\dagger}_{m+k}b_k - d^{\dagger}_{m+k}d_k\right],
\end{eqnarray}
satisfying $\left[A_m,A_{m^\prime}^{\dagger}\right] = \sqrt{m m^{\prime}}
\delta_{m,m^{\prime}}$ emerge naturally after taking a Fourier transform of
$j^+(x^{-})$ expressed in terms of fermion modes. This yields
\begin{equation}
j^+(x^{-}) = {1 \over{L}}\sum_{m=1}^{M}\left[A_m e^{-{i\over{2}}p^+_m
x^{-}} + A^{\dagger}_m e^{{i\over{2}}p^+_mx^{-}}\right]
\end{equation}
as well as the exponential operator in Eq.(\ref{fermivac}). The states
$\vert \nu;f \rangle$ are not invariant under $\hat{F}_1$: $\vert \nu;f
\rangle \rightarrow \vert \nu + 1;f \rangle $, in analogy with the
Eq.(\ref{psishift}).
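That these fermion bilinears indeed act as bosonic modes on the Fock vacuum can be checked explicitly in a small Jordan-Wigner realization. The sketch below uses a cutoff of only three momentum modes per species; the full operator identity requires the cutoff to be removed, but the vacuum matrix elements come out exactly, verifying $[A_m, A^{\dagger}_m]\vert 0\rangle = m\vert 0\rangle$ and $[A_1, A^{\dagger}_2]\vert 0\rangle = 0$:

```python
import numpy as np

def jw(n):
    """Jordan-Wigner annihilation operators for n fermionic modes."""
    I2, Z = np.eye(2), np.diag([1.0, -1.0])
    s = np.array([[0.0, 1.0], [0.0, 0.0]])   # lowers a single occupied mode
    ops = []
    for i in range(n):
        M = np.eye(1)
        for f in [Z] * i + [s] + [I2] * (n - i - 1):
            M = np.kron(M, f)
        ops.append(M)
    return ops

K = 3                          # fermion momenta k = 1/2, 3/2, 5/2 (indices 0,1,2)
ops = jw(2 * K)
b, d = ops[:K], ops[K:]        # particle and antiparticle annihilation operators
dag = lambda M: M.T            # all matrices here are real

def A(m):
    """Effective boson annihilation operator A_m at finite cutoff."""
    pair = sum(d[m - 1 - k] @ b[k] for k in range(m))
    hop = sum(dag(b[k]) @ b[k + m] - dag(d[k]) @ d[k + m] for k in range(K - m))
    return pair + hop

vac = np.zeros(4**K); vac[0] = 1.0   # Fock vacuum |0>
```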
To construct the physical vacuum state of the massive Schwinger model, one
first defines the operator of the full large gauge transformations $\hat{T}_1$
as a product of commuting operators $\hat{Z}_1$ and $\hat{F}_1$. The
requirement of gauge invariance of the physical ground state then leads to the
$\theta$-vacuum, which is obtained by diagonalization, i.e. by summing the
degenerate vacuum states $\vert \nu \rangle = \vert \nu;z \rangle \vert \nu;f
\rangle$ with the appropriate phase factor:
\begin{equation}
\vert \theta \rangle = \sum_{\nu=-\infty}^{+\infty}\!\!e^{i\nu\theta}\vert\nu
\rangle = \sum_{\nu=-\infty}^{+\infty}\!\!e^{i\nu\theta}\left(
\hat{T}_1\right)^\nu \vert 0
\rangle,\;\;\hat{T}_1 \vert \theta \rangle = e^{-i\theta}\vert \theta \rangle,
\;\;\hat{T}_1 \equiv \hat{Z}_1\hat{F}_1,
\label{theta}
\end{equation}
$(\vert 0 \rangle$ here denotes both the fermion and gauge boson Fock vacuum).
Thus we see that the $\theta$-vacuum $\vert \theta \rangle$ is an eigenstate
of $\hat{T}_1$ with the eigenvalue $\exp(-i\theta)$. In other words, it is
invariant up to a phase, which is the usual result \cite{LSw,Stroc}.
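The eigenvalue property quoted in Eq.(\ref{theta}) follows from a simple relabeling of the summation index, using the unitarity of $\hat{T}_1$:

```latex
\hat{T}_1 \vert \theta \rangle
  = \sum_{\nu=-\infty}^{+\infty} e^{i\nu\theta}
    \left(\hat{T}_1\right)^{\nu+1} \vert 0 \rangle
  = \sum_{\nu'=-\infty}^{+\infty} e^{i(\nu'-1)\theta}
    \left(\hat{T}_1\right)^{\nu'} \vert 0 \rangle
  = e^{-i\theta} \vert \theta \rangle ,
  \qquad \nu' = \nu + 1 .
```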
The physical meaning of the vacuum angle $\theta$ as the constant background
electric field \cite{Colm} can be found by a straightforward calculation:
$\langle \theta \vert \Pi_{A^{+}_0} \vert \theta \rangle = {e\theta \over{2\pi}}$, where
the infinite normalization factor $\langle \theta \vert \theta \rangle$ has been
divided out.
The $\vert \nu \rangle$-vacuum expectation values of $P^-$ are degenerate
due to gauge invariance of the latter. Subtracting the value (\ref{zmvacen})
as well as another constant coming from the normal-ordering of the mass term
\cite{LMlong}, this vacuum expectation value can be set to zero. Then one has
$\langle \theta \vert P^- \vert \theta \rangle = 0$, while $\langle \theta \vert P^+
\vert \theta \rangle = 0$ and $Q\vert \theta \rangle = 0 $ automatically
\cite{LMlong}.
Finally, we would like to point out that the fermion component of the
theta-vacuum (\ref{theta}), described in terms of the exponential of the
effective boson operators $A_m, A^{\dagger}_m$, introduces a possibility of
obtaining a non-vanishing fermion condensate in the LF massive Schwinger model.
\section{LF Vacuum in Other Gauge Theories}
Let us consider briefly the application of the above ideas to more complicated
gauge theories. The first example is the two-dimensional $SU(2)$ Yang-Mills
theory with a massive colour fermion field $\Psi_i(x), i=1,2$, in the fundamental
representation. The gauge field is defined by means of the Pauli matrices
$\sigma^a, a=1,2,3$, as $A^{\mu}(x) = A^{\mu a}(x) {\sigma^a \over 2}$.
The gauge fixing in the model can be performed analogously to the massive
Schwinger model by setting $A^{+a}_n = 0, A^{-a}_0 = 0$. In the finite
volume, the residual gauge symmetry, represented by constant $SU(2)$ matrices,
permits one to diagonalize $A^+_0$. Consequently, there is only one dynamical
gauge field ZM for the $SU(2)$ theory, namely $A_0^{+3}={2\pi\over{gL}}\hat{\zeta
}$, where $g$ is the gauge coupling constant. The LF Hamiltonian, which is a
$SU(2)$ generalization of the expression
(\ref{lfham}), is invariant under residual large gauge transformations
\begin{equation}
A_0^+ \rightarrow A_0^+ - {2\pi\over{gL}}{\sigma^3 \over 2},\;\;\psi_+^i(x^{-})
\rightarrow e^{i{\pi\over L}{\sigma^3 \over 2}x^{-}}\psi_+^i(x^{-}),\;i=1,2.
\end{equation}
Their implementation in coordinate representation is analogous to the abelian
case with one important difference \cite{Lenzax,Alex}: in order to correctly
define the ZM momentum and kinetic energy operators, one has to take into
account the Jacobian $J(\zeta)$, which is induced by the curvature of
the $SU(2)$ group manifold:
\begin{equation}
P^-_{0} = -\frac{1}{2}{{g^2L}\over{2\pi^2}}{1 \over J}{d\over{d\zeta}}J{d \over{d\zeta}},
\;\hat{\Pi}_0 = -i{1 \over{\sqrt{J}}}{d \over{d\zeta}}\sqrt{J}=
-i{d \over {d\zeta}}-i\pi\cot\pi\zeta,\;J = \sin^2\pi\zeta.
\end{equation}
The presence of the Jacobian has a profound impact on the structure of the ZM
vacuum wave functions. Defining again the vacuum state as $(\hat{\zeta} + i\hat
{\Pi}_0)\Psi_0 = 0$, one finds
\begin{equation}
\Psi_0(\zeta) = \pi^{-{1 \over 4}}{e^{-\frac{1}{2}\zeta^2}\over{\vert \sin\pi\zeta \vert}}
\rightarrow \Psi_{\nu}(\zeta) = e^{-i\nu\hat{\Pi}_0}\Psi_0(\zeta) =
\pi^{-{1 \over 4}}{e^{-\frac{1}{2}(\zeta - \nu)^2}\over{\vert \sin\pi\zeta \vert}}.
\end{equation}
Thus, each wave function is divided into pieces separated by singular points
at integer values of $\zeta$, and individual states are just shifted copies
of $\Psi_0(\zeta)$ with no overlap. Consequently, the $\theta$-vacuum cannot be
constructed \cite{Wit,Engelh}. Further details will be given separately
\cite{LJSU2}.
It is rather striking that the generalization of the present approach to the
vacuum problem for the case of the LF QED(3+1), quantized in the (generalized)
LC gauge and in a finite volume $-L \le x^{-} \le L,\; -L_{\perp} \le x^j\le
L_{\perp},j=1,2$, appears to be straightforward. The crucial point is that
in spite of two extra space dimensions, there is still only one dynamical ZM,
namely $A^{+}_0$ (the subscript 0 indicates the $(x^{-},x^j)$-independent
component). Indeed, $A_0^-$ can be gauged away (see below) and $A_0^j$ are
constrained. Proper zero modes, i.e. the gauge field components $a^+, a^-, a^j$
that have $p^+=0,p^j \neq 0$, are not dynamically independent variables either
\cite{ADQED} in contrast with the situation in the equal-time
Hamiltonian approach \cite{Lenzqm}.
Residual gauge transformations, which remain a symmetry of the theory even after
all redundant gauge degrees of freedom have been completely eliminated by the
gauge-fixing conditions $A_n^+=0, A_0^-=0, \partial_+ a^+ + \partial_j a^j = 0$
\cite{ADQED}, are characterized by the same gauge function $\Lambda_\nu$ as in
the Schwinger model, since constant shifts of constrained $A_0^j$ in
$j$ directions are not allowed.
In this way, we are led to consider essentially the same unitary operators
implementing the residual symmetry as in the Schwinger model. For example,
defining the dimensionless quantities $\hat{\zeta}$ and $\hat{\pi}_0$ by
\begin{equation}
A^{+}_0 = {2 \pi \over{eL}}\hat{\zeta},\;\;\Pi_{A^{+}_0} = {1 \over{(2L_{\perp})^2}}
{e \over {2\pi}}\hat{\pi}_0,
\end{equation}
one again recovers the commutator (\ref{qmcr}), the shift operator $\hat{Z}
(\nu)$, etc.
Before being able to make conclusions about the $\theta$-vacuum of the
light-front QED(3+1) \cite{Rub}, one needs to better understand the role of
constrained zero modes. Let us emphasize only one point here: the fermion part
of the transformed vacuum state acquires again the simple form of
Eq.(\ref{fermivac}) with
generalized boson operators $\tilde{A}_m, \tilde{A}_m^\dagger$ ($\sigma$ is the
spin projection and $k_\perp \equiv k^j = \pm 1,\pm 2,\dots$)
\begin{eqnarray}
\tilde{A}_m^\dagger & = & \sum_{k=\frac{1}{2}}^M \sum_{k_\perp=-M_\perp}^{M_\perp}
\sum_{\sigma=
\pm \frac{1}{2}}\left[b^{\dagger}_{m+k,k_\perp,\sigma}b_{k,k_\perp,\sigma} - d^{\dagger}_
{m+k,k_\perp,\sigma}d_{k,k_\perp,\sigma}\right] \\
& + & \sum_{k=\frac{1}{2}}^{m-\frac{1}{2}} \sum_{k_\perp =
-M_\perp}^{M_\perp} \sum_{\sigma=\pm \frac{1}{2}}b^{\dagger}_{k,k_\perp,\sigma}d^{\dagger}_
{m-k,-k_\perp,-\sigma}.
\end{eqnarray}
The vacua $\vert \nu;f\rangle$ (\ref{fermivac}) with $\tilde{A}_m^\dagger,
\tilde{A}_m$ as given above satisfy $\langle \nu;f \vert P^+ \vert \nu;f
\rangle = 0$, $Q\vert \nu;f \rangle = 0$, as they should.
\section{Discussion}
The main result of the present work is the demonstration that, despite the
apparent ``triviality'' of the LF vacuum in the sector of normal modes, it is
possible to recover the necessary vacuum structure of light-front gauge
theories. The principal elements of the approach were the infrared
regularization achieved by quantizing in a finite volume and a systematic
implementation of the residual large gauge symmetry (specific to the
compactified formulation) in terms of unitary operators. An infinite set
of non-trivial non-perturbative vacuum states then emerges as the transformed
``trivial'' Fock vacuum. The requirement of gauge invariance
(as well as of the cluster property \cite{KSus}) of the ground state yields
the $\theta$-vacuum in the case of the massive Schwinger model.
Zero-mode aspects of the LF Schwinger model quantized at $x^+ = 0$ have been
discussed in the literature before \cite{Rgb,Rgf,AD,LM0}. The massive case has
been studied in \cite{Hara,LJ}. Fermionic aspects of the residual symmetry are
usually analyzed within a model (the rather ad hoc `N-vacua') of the LF
fermionic vacuum \cite{Rgf,Hara}. Our construction avoids the introduction
of the Dirac sea in a natural way.
The new insight is that fermion degrees of freedom are inevitably present in
the LF ground state -- though outside the usual Fock-state description -- as
a {\it consequence} of the residual symmetry under large gauge transformations.
It remains to be seen whether other non-perturbative features, such as the
fermion condensate and the axial anomaly, can be reproduced correctly
(at least in the continuum limit) in this approach,
which uses only fields initialized on one characteristic surface. Also, we
believe that the physics of the massless model will be recovered in the
$m \rightarrow 0$ limit of the massive theory.
Furthermore, a possible generalization of the latter to the LF $SU(2)$ gauge
theory in two dimensions has been suggested. The structure of the vacuum wave
functions, changed by the presence of the non-trivial Jacobian, indicates that
no $\theta$-vacuum can be formed in this case, in agreement with previous
conclusions \cite{Wit,Engelh}. Although the extension of our approach to the
vacuum problem of a realistic abelian gauge theory, namely QED(3+1),
appeared to be rather straightforward, difficulties related to the
renormalization and the presence of non-dynamical zero modes obeying the
complicated operator constraints \cite{ADQED} are to be expected. On the
other hand, a more general method
\cite{Lenzqm,Lenzax,JAR} of elimination of redundant gauge degrees
of freedom by unitary transformations may become a useful alternative to
the conventional gauge-fixed formulation of the light-front quantization.
\section{Acknowledgements}
I would like to thank A. Harindranath and J. Vary
for many profitable discussions and the
International Institute of Theoretical and Applied Physics at the
Iowa State University for support and hospitality.
This work has been supported by the Slovak Grant
Agency and by the NSF grant No. INT-9515511.
\section{Introduction}
One of the most important limits on particle properties is the limit on the
number of light particle degrees of freedom at the time of big bang
nucleosynthesis (BBN) \cite{ssg}. This is commonly computed as a limit on
the number of light neutrino flavors, $\nnu$. Recently, we \cite{oth2}
used a model-independent likelihood method (see also \cite{fo,fkot}) to
simultaneously constrain the value of the one true parameter in {\em
standard} BBN, the baryon-to-photon ratio $\eta$, together with $\nnu$.
For similar approaches, see \cite{cst2}. In that work \cite{oth2}, we based
our results on the best estimate of the observationally determined
abundance of \he4, $Y_p = 0.234 \pm 0.002 \pm 0.005$ from \cite{ostsk},
and of \li7, \li7/H $= (1.6 \pm 0.1) \times 10^{-10}$, from \cite{mol}.
While these determinations can still be considered good ones today, there
is often discussion of a higher abundance of \he4, as perhaps indicated by
the data of \cite{iz}, and of higher abundances of \li7 due to the effects of
stellar depletion (see e.g.
\cite{pinn}). Rather than be forced to continually update the limit on
$\nnu$ as the observational situation evolves, we generalize our previous
work here and compute the upper limit on $\nnu$ for a wide range of
possible observed abundances of \he4 and \li7. Because the
determinations of D/H in quasar absorption systems have not dramatically
improved, we can only comment on the implications of either the high or
low D/H measurements.
One of the major obstacles in testing BBN using the observed abundances of the
light element isotopes rests on our ability to infer from these observations a
primordial abundance. Because \he4, in extragalactic
HII regions, and \li7, in the atmospheres of old halo dwarf stars, are both
measured in very low metallicity systems (down to 1/50th solar for \he4 and 1/1000th
solar for \li7), very little modeling in the way of galactic chemical
evolution is required to extract a primordial abundance for these
isotopes. Of course systematic uncertainties, such as underlying stellar
absorption, in determining the \he4 abundance and the effects of stellar
depletion of \li7 lead to uncertainties in the primordial abundances of these
isotopes, and it is for that reason we are re-examining the limits to $\nnu$.
Nevertheless, the problems in extracting a primordial \he4 and \li7 abundance
pale in comparison with those for D and \he3, both of which are subject to
considerable uncertainties not only tied to the observations, but to galactic
chemical evolution. In fact, \he3 also suffers from serious uncertainties
concerning its fate in low mass stars \cite{orstv}. \he3 is both
produced and destroyed in stars making the connection to BBN very
difficult.
Deuterium is totally destroyed in the star formation process. As such, the
present or solar abundance of D/H is highly dependent on the details of a
chemical evolution model, and in particular the galactic star formation rate.
Unfortunately, it is very difficult at the present time to gain insight on the
primordial abundance of D/H from chemical evolution given present and solar
abundances since reasonably successful models of chemical evolution can be
constructed for primordial D/H values which differ by nearly an order of
magnitude\footnote{There may be some indication from studies of the
luminosity density at high redshift which implies a steeply decreasing
star formation rate \cite{cce}, and that at least on a cosmic scale,
significant amounts of deuterium have been destroyed \cite{cova}.}
\cite{scov}.
Of course much of the recent excitement surrounding deuterium concerns the
observation of D/H in quasar absorption systems
\cite{quas1}-\cite{quas4}. If a single value for the D/H abundance in
these systems could be established\footnote{It is not possible that all
disparate determinations of D/H represent an inhomogeneous primordial
abundance as the corresponding inhomogeneity in $\eta$ would lead to
anisotropies in the microwave background in excess of those observed
\cite{cos2}.}, then one could avoid all of the complications concerning
D/H and chemical evolution, and because of the steep monotonic dependence
of D/H on $\eta$, a good measurement of D/H would alone be sufficient to
determine the value of $\eta$ (since D/H is nearly independent of $\nnu$).
In this case, the data from \he4 and \li7 would be most valuable as a
consistency test on BBN and in the case of \he4, to set limits on
particle properties. In the analysis that follows, we will discuss the
consequences of the validity of either the high or low D/H determinations.
Using a likelihood analysis based on \he4 and \li7
\cite{fkot}, a probable range for the baryon-to-photon ratio, $\eta$, was
determined.
The \he4 likelihood distribution has a single
peak due to the monotonic dependence of \he4 on
$\eta$. However, because the dependence on $\eta$ is relatively flat,
particularly at higher values of $Y_p$, this peak may be very broad,
yielding little information on $\eta$ alone. On the other hand, because
\li7 is not monotonic in
$\eta$, the BBN prediction has a minimum at $\eta_{10}
\simeq 3$ ($
\eta_{10} = 10^{10} \eta$) and as a result, for an observationally determined value
of \li7 above the minimum, the \li7 likelihood distribution will show two peaks.
The total likelihood distribution
based on \he4 and \li7 is simply the product of the two individual
distributions.
In \cite{fkot}, the best fit value for
$\eta_{10}$ based on the quoted observational abundances was found to be
1.8 with a 95\% CL range
\begin{equation}
1.4 < \eta_{10} < 4.3
\label{etar}
\end{equation}
when restricting the analysis to the
standard model, including $\nnu = 3$.
In determining (\ref{etar}) systematic errors were treated as Gaussian
distributed.
When D/H from quasar absorption systems (those showing a
high value for D/H \cite{quas1,quas3}) is included in the analysis this
range is cut to $1.50 < \eta_{10} < 2.55$.
In \cite{oth2}, the maximum likelihood analysis of \cite{fo,fkot} which
utilized a likelihood function ${\cal L}(\eta)$ for fixed $\nnu = 3$ was
generalized to allow for variability in $\nnu$. There a more general likelihood
function ${\cal L}(\eta, \nnu)$ was applied to the current best estimates of
the primordial \he4 and \li7 abundances. Based on the analysis in \cite{ostsk},
we chose $\hbox{$Y_{\rm p}$} = 0.234 \pm 0.002 \rm (stat.) \pm 0.005 (syst.)$ as well as the
lower value $\hbox{$Y_{\rm p}$} = 0.230 \pm 0.003 \rm (stat.) \pm 0.005 (syst.)$ based on a
low metallicity subset of the data. Using these values of $\hbox{$Y_{\rm p}$}$ along with the
value (Li/H)$_p = (1.6 \pm 0.07) \times 10^{-10}$ from \cite{mol}, we found
peak likelihood values $\eta_{10} = 1.8$ and $\nnu = 3.0$ with a 95\% CL range
of $1.6\le\nnu\le4.0,
1.3\le\eta_{10}\le 5.0 $ for the higher \he4 value and similar results for
the lower one. More recent data from Izotov and Thuan
\cite{iz2} seems to indicate a still higher value for $\hbox{$Y_{\rm p}$}$, and for this
reason as well as wishing to be independent of the ``{\em
current}'' best estimate of the abundances, we derive our results for a
wide range of possible values for $Y_p$ and (Li/H)$_p$ which will account
for the possibility of stellar depletion for the latter \cite{pinn}.
Finally, in \cite{oth2}, we considered only the effect of the high D/H
value from quasar absorption systems. Since there was virtually no
overlap between the likelihood functions based on the low D/H value and
the other two elements, there was little point in using that value in our analysis.
Since then, the low D/H value has been raised somewhat, and that together
with our present consideration of higher $\hbox{$Y_{\rm p}$}$ and (Li/H)$_p$ values
makes the exercise worthwhile.
In this paper, we follow the approach of \cite{oth2} -- \cite{fkot} in
constraining the theory
on the basis of the \he4 and \li7 data and to a lesser extent D/H, by constructing
a likelihood function ${\cal L}(\eta,\nnu)$. We discuss the current status of
the data in section 2, and indicate what range of values for the
primordial abundances we consider. In section 3, we display the likelihood
functions we use. As this was discussed in more detail in \cite{oth2,fkot}, we
will be brief here. Our results are given in section 4, and we draw conclusions
in section 5.
\section{Observational Data}
Data pertinent to the primordial \he4 abundance is obtained from observations
of extragalactic HII regions. These regions have low
metallicities (as low as 1/50th solar), and thus are presumably more primitive than
similar regions in our own Galaxy. The data used to extract a primordial
\he4 abundance span roughly an order of magnitude in metallicity (e.g. O/H).
Furthermore, since there have been a considerable number of such systems observed
with metallicities significantly below solar, modeling plays a relatively
unimportant role in obtaining the primordial abundance of \he4 (see e.g.
\cite{fdo2}).
The \he4 data based on observations in \cite{p,iz} were discussed in
detail in \cite{ostsk}. There are over 70 such regions observed with
metallicities ranging from about 2--30\% of solar metallicity. This data led
to the determination of a primordial \he4 abundance of
$\hbox{$Y_{\rm p}$} = 0.234 \pm 0.002 \rm (stat.) \pm 0.005 (syst.)$
used in \cite{oth2}. That the statistical error is small is due to the large
number of regions observed and to the fact that the \he4 abundance in these
regions is found to be very well correlated to metallicity.
In fact, as can be understood from the remarks which follow, the primordial
\he4 abundance is dominated by systematic rather than statistical
uncertainties.
The compilation in \cite{ostsk} included the data of \cite{iz}. Although
this data is found to be consistent with other data on a point by point
basis, taken alone, it would imply a somewhat higher primordial \he4
abundance.
Furthermore, the resulting value of $\hbox{$Y_{\rm p}$}$ depends on the method of data
analysis. When only
\he4 data is used to self-consistently determine the \he4 abundance (as
opposed to using other data such as oxygen and sulphur to determine the
parameters which characterize the HII region and are needed to convert an
observation of a
\he4 line strength into an abundance), a value of $\hbox{$Y_{\rm p}$}$ as high as $0.244 \pm
0.002$ can be found\footnote{We note that this method has been criticized as it
relies on some \he4 data which is particularly uncertain, and these
uncertainties have not been carried over into the error budget in the \he4
abundance \cite{ostsk}.}
\cite{iz}.
The problem concerning \he4 has been accentuated recently with new data from
Izotov and Thuan \cite{iz2}. The enlarged data set from \cite{p,iz2} was
considered in \cite{fdo2}. The new resulting value for $\hbox{$Y_{\rm p}$}$ is
\begin{equation}
\hbox{$Y_{\rm p}$} = 0.238 \pm 0.002 \rm (stat.) \pm 0.005 (syst.)
\label{eq:he4}
\end{equation}
The new data taken alone gives $\hbox{$Y_{\rm p}$} = 0.2444 \pm 0.0015$
when using the method based on a set of 5 helium recombination lines
to determine all of the H II region
parameters. By more conventional methods, the same data gives $\hbox{$Y_{\rm p}$} =
0.239 \pm 0.002$. As one can see, the \he4 data is clearly dominated by
systematic uncertainties.
There has been considerably less variability in the \li7 data over the last
several years. The \li7 abundance is determined by the observation of Li
in the atmospheres of old halo dwarf stars as a function of metallicity (in
practice, the Fe abundance). The abundance used in \cite{oth2} from the work in
\cite{mol} continues to lead to the best estimate of the \li7 abundance in the
so called Spite plateau
\begin{equation}
y_7 \equiv \frac{\li7}{\rm H} = (1.6 \pm 0.07) \times 10^{-10}
\label{eq:li7}
\end{equation}
where the error is statistical, again due to the large number of stars observed.
If we employ the basic chemical evolution conclusion that metals
increase linearly with time, we may infer this value to be indicative of the
primordial Li abundance.
In \cite{oth2}, we noted that there are considerable systematic uncertainties
in the plateau abundance. It is often questioned as to whether the Pop II
stars have preserved their initial abundance of Li.
While the detection of the more fragile isotope \li6 in two of
these stars may argue against a strong depletion of \li7
\cite{sfosw,pinn}, it is difficult to exclude depletion of the order of a
factor of two. Therefore it seems appropriate to allow for a wider
range in \li7 abundances in our likelihood analysis than was done in
\cite{oth2}.
There has been some, albeit small, change in the D/H data from quasar absorption
systems. Although the re-observation of the high D/H in \cite{rh1} has been
withdrawn, the original measurements \cite{quas1} of this object still
stand at the high value. More recently, a different system at the relatively
low redshift of $z = 0.7$ was observed to yield a similar high value
\cite{quas3}
\begin{equation}
y_2 \equiv {\rm D/H} = (2 \pm 0.5) \times 10^{-4}.
\label{dhigh}
\end{equation}
The low
values of D/H in other such systems reported in \cite{quas2} have since been
refined to show slightly higher D/H values \cite{quas4}
\begin{equation}
y_2 \equiv {\rm D/H} = (3.4 \pm 0.3) \times 10^{-5}.
\label{dlow}
\end{equation}
Though this value is still significantly lower than the high D/H value
quoted above, the low value is now high enough that it has
sufficient overlap with the ranges of the other light elements considered
to warrant its inclusion in our analysis.
\section{Likelihood Functions}
Monte Carlo and likelihood analyses have been discussed at great length in the
context of BBN \cite{kr,skm,kk1,kk2,hata1,hata2,fo,fkot,oth2}.
Since our likelihood analysis follows that of \cite{fkot} and \cite{oth2},
we will be very brief here.
The likelihood function for \he4, $L_4(\nnu, \eta)$ is determined from a
convolution of a theory function
\begin{equation}
L_{\rm 4,Theory}(Y, \nnu, \eta) =
{1\over\sqrt{2\pi}\sigma_{Y}(\nnu,\eta)}
\exp{\left({-(Y-\hbox{$Y_{\rm p}$}(\nnu,\eta))^{2}\over2\sigma_{Y}^{2}(\nnu,\eta)}\right)}
\end{equation}
(where $\hbox{$Y_{\rm p}$}(\nnu,\eta)$ and $\sigma_{Y}(\nnu,\eta)$ represent the results
of the theoretical calculation) and an observational function
\begin{equation}
L_{\rm 4,Obs}(Y) =
{1\over\sqrt{2\pi}\sigma_{Y0}}
\exp{\left({-(Y-Y_0)^{2}\over2\sigma_{Y0}^{2}}\right)}
\end{equation}
where $Y_0$ and $\sigma_{Y0}$ characterize the observed
distribution and are taken from Eqs. (\ref{eq:he4}) and (\ref{eq:li7}).
The full likelihood function for \he4
is then given by
\begin{equation}
L_{4}(\nnu, \eta) =
\int dY\, L_{\rm 4,Theory}(Y, \nnu, \eta) L_{\rm 4, Obs}(Y)
\end{equation}
which can be integrated (assuming Gaussian errors as we have done) to give
\begin{equation}
L_{4}(\nnu, \eta) =
{1\over\sqrt{2\pi(\sigma_Y^2(\nnu,\eta)+\sigma_{Y0}^2)}}
\exp\left({-(\hbox{$Y_{\rm p}$}(\nnu,\eta)-Y_0)^2\over
2(\sigma_Y^2(\nnu,\eta)+\sigma_{Y0}^2)}\right)
\end{equation}
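The Gaussian integral above can be checked numerically: the convolution of the theory and observational Gaussians collapses to a single Gaussian in $\hbox{$Y_{\rm p}$}(\nnu,\eta) - Y_0$ with the variances added. A small sketch (our own; the observational width is the quadrature sum of the errors in Eq. (\ref{eq:he4}), while the theory mean and width are illustrative stand-ins for $\hbox{$Y_{\rm p}$}(\nnu,\eta)$ and $\sigma_Y(\nnu,\eta)$):

```python
import numpy as np

def gauss(x, mu, sigma):
    return np.exp(-(x - mu)**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)

# Illustrative theory prediction (stand-in for Y_p(N_nu, eta), sigma_Y(N_nu, eta))
Yp_th, sigY = 0.240, 0.003
# Observed value, statistical and systematic errors added in quadrature
Y0, sigY0 = 0.238, np.sqrt(0.002**2 + 0.005**2)

# L4 = int dY L_theory(Y) * L_obs(Y), evaluated on a fine grid
Y = np.linspace(0.20, 0.28, 200001)
L4_numeric = np.sum(gauss(Y, Yp_th, sigY) * gauss(Y, Y0, sigY0)) * (Y[1] - Y[0])

# Closed form: a single Gaussian in (Yp_th - Y0) with the variances added
L4_closed = gauss(Yp_th, Y0, np.sqrt(sigY**2 + sigY0**2))

print(L4_numeric, L4_closed)  # the two values agree to high accuracy
```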
The likelihood functions for \li7 and D are constructed in a similar
manner. The quantities of interest in constraining the $\nnu$---$\eta$ plane
are the combined likelihood functions
\begin{equation}
L_{47} = L_4\times L_7
\end{equation}
and
\begin{equation}
L_{247} = L_{2}\times L_{47}.
\end{equation}
Contours of constant $L_{47}$ (or $L_{247}$ when we include D in the analysis)
represent equally likely points in the
$\nnu$--$\eta$ plane. Calculating the contour containing 95\% of
the volume under the $L_{47}$ surface gives us the 95\% likelihood
region. From these contours we can then read off ranges of $\nnu$ and $\eta$.
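Operationally, ``the contour containing 95\% of the volume'' can be found on a discretized likelihood by sorting the grid values in decreasing order and accumulating until 95\% of the total is enclosed. The following sketch (the helper name and the Gaussian test surface are ours) illustrates the procedure:

```python
import numpy as np

def volume_level(L, frac=0.95):
    """Level lam such that grid points with L > lam enclose
    a fraction `frac` of the total volume under the surface."""
    flat = np.sort(L.ravel())[::-1]        # likelihood values, highest first
    cum = np.cumsum(flat) / flat.sum()     # enclosed volume fraction
    return flat[np.searchsorted(cum, frac)]

# Test surface: an uncorrelated 2D Gaussian standing in for L47(eta, N_nu)
x = np.linspace(-5.0, 5.0, 401)
X, Y = np.meshgrid(x, x)
L = np.exp(-(X**2 + Y**2) / 2)

lam = volume_level(L)
inside = L[L > lam].sum() / L.sum()
print(inside)  # ~0.95 by construction
print(lam)     # ~0.05: for a 2D Gaussian this level corresponds to delta-chi^2 ~ 5.99
```

Note that this volume-based definition reduces to the familiar $\Delta\chi^2 \simeq 5.99$ cut only for a Gaussian surface; for the double-peaked $L_{47}$ the two definitions differ.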
\section{Results}
Using the abundances in eqs (\ref{eq:he4},\ref{eq:li7}) and adding
the systematic errors to the statistical errors in quadrature we have
a maximum likelihood distribution, $L_{47}$, which is shown in
Figure 1a. This is very similar to our previous result based on
the slightly lower value of $\hbox{$Y_{\rm p}$}$. As one can see, $L_{47}$ is double peaked.
This is due to the minimum in the predicted lithium abundance as a function of
$\eta$, as was discussed earlier.
We also show in Figures 1b and 1c, the resulting likelihood distributions,
$L_{247}$, when
the high and low D/H values from Eqs. (\ref{dhigh}) and (\ref{dlow}) are
included.
The peaks of the distribution as well as the allowed ranges of
$\eta$ and
$\nnu$ are more easily discerned in the contour plots of Figure 2 which shows
the 50\%, 68\% and 95\% confidence level contours in $L_{47}$ and $L_{247}$.
The crosses show the location of the
peaks of the likelihood functions. Note that
$L_{47}$ peaks at $\nnu=3.2$ (up slightly from the case with $\hbox{$Y_{\rm p}$} =
0.234$) and $\eta_{10}=1.85$. The second peak of $L_{47}$ occurs at
$\nnu=2.6$, $\eta_{10}=3.6$. The 95\% confidence level allows the
following ranges in $\eta$ and $\nnu$
\begin{eqnarray}
1.7\le\nnu\le4.3 \nonumber \\
1.4\le\eta_{10}\le4.9
\end{eqnarray}
These results differ only slightly from those in \cite{oth2}.
Since $L_{2}$ picks out a small range of values
of $\eta$, largely independent of $\nnu$, its effect on $L_{247}$ is
to eliminate one of the two peaks in $L_{47}$.
With the high D/H value, $L_{247}$
peaks at the slightly higher value $\nnu=3.3$,
$\eta_{10}=1.85$. In this case the 95\% contour gives the ranges
\begin{eqnarray}
2.2\le\nnu\le4.4 \nonumber \\
1.4\le\eta_{10}\le 2.4
\end{eqnarray}
(Strictly speaking, $\eta_{10}$ can also be in the range
3.2---3.5, with $2.5\mathrel{\mathpalette\fun <}\nnu\mathrel{\mathpalette\fun <} 2.9$ as can be seen by the 95\% contour in
Figure 2a. However this ``peak'' is almost completely invisible in Figure
1b.) The 95\% CL ranges in $\nnu$ for both $L_{47}$ and $L_{247}$
include values below the canonical value $\nnu = 3$. Since one could
argue that $\nnu \ge 3$, we could use this condition as a Bayesian
prior. This was done in \cite{osb} and in the present context in
\cite{oth2}. In the latter, the effect on the limit to $\nnu$ was
minor, and we do not repeat this analysis here.
In the case of low D/H,
$L_{2}$ picks out a smaller value of
$\nnu = 2.4$ and a larger value of $\eta = 4.55$.
The 95\% CL upper limit is now $\nnu < 3.2$, and the range for
$\eta$ is $ 3.9 < \eta_{10} < 5.4$. It is important to stress that with
the increase in the determined value of D/H \cite{quas4} in the low D/H
systems, these abundances are now consistent with the standard model
value of $\nnu = 3$ at the 2 $\sigma$ level.
Although we feel that the above set of values represents the {\em current} best choices
for the observational parameters, our real goal in this paper is to generalize these
results for a wide range of possible primordial abundances.
To begin with, we will fix (Li/H)$_p$ from Eq. (\ref{eq:li7}), and allow
$\hbox{$Y_{\rm p}$}$ to vary from 0.225 -- 0.250. In Figure 3, the positions of the two peaks of the
likelihood function, $L_{47}$, are shown as functions of $\hbox{$Y_{\rm p}$}$. The low-$\eta$ peak is shown
by the dashed curve, while the high-$\eta$ peak is shown as dotted. The preferred
value of $\nnu = 3$, corresponds to a peak of the likelihood function either at $\hbox{$Y_{\rm p}$}
= 0.234$ at low $\eta_{10} = 1.8$ or at $\hbox{$Y_{\rm p}$} = 0.243$ at $\eta_{10} = 3.6$
(very close to the value of $\hbox{$Y_{\rm p}$}$ quoted in \cite{iz2}). Since the peaks
of the likelihood function are of comparable height, no useful
statistical information can be extracted concerning the relative
likelihood of the two peaks. The 95\% CL upper limit to
$\nnu$ as a function of $\hbox{$Y_{\rm p}$}$ is shown by the solid curve, and over the range in $\hbox{$Y_{\rm p}$}$
considered varies from 3.3 -- 5.3.
The fact that the peak value of $\nnu$ (and its upper limit) increases with $\hbox{$Y_{\rm p}$}$ is
easy to understand. The BBN production of \he4 increases with increasing $\nnu$.
Thus for fixed Li/H, or fixed $\eta$,
raising $\hbox{$Y_{\rm p}$}$ must be compensated for by raising $\nnu$ in order to avoid moving
the peak likelihood to higher values of $\eta$ and therefore off the \li7 peak.
In Figure 4, we show the corresponding results with (Li/H)$_p = 4.1 \times 10^{-10}$.
In this case, we must assume that lithium was depleted by a factor of $\sim
2.5$ or 0.4 dex, corresponding to the upper limit derived in \cite{pinn}.
The effect of assuming a higher value for the primordial abundance of Li/H is that
the two peaks in the likelihood function are split apart.
Now the value of $\nnu = 3$ occurs at $\hbox{$Y_{\rm p}$} = 0.227$ at $\eta_{10} = 1.1$ (a very low
value) and at $\hbox{$Y_{\rm p}$} = 0.248$ and $\eta_{10} = 5.7$. The 95\% CL upper
limit on $\nnu$ in this case
can even extend up to 6 at $\hbox{$Y_{\rm p}$} = 0.250$.
In Figure 5, we show a compilation of the 95\% CL upper limits to $\nnu$
for different values of (Li/H)$_p = 1.6, 2.0, 2.6, 3.2, {\rm and} ~4.1
\times 10^{-10}$. The upper limit to $\nnu$ can be approximated by a fit
to our results which can be expressed as
\begin{equation}
\nnu \mathrel{\mathpalette\fun <} 80 \hbox{$Y_{\rm p}$} + 2.5 \times 10^9 {\rm (Li/H)}_p - 15.15
\end{equation}
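As a sanity check of this fit (our own evaluation), the central abundances used above, $\hbox{$Y_{\rm p}$} = 0.238$ and (Li/H)$_p = 1.6 \times 10^{-10}$, reproduce the quoted 95\% CL limit $\nnu \le 4.3$, while the extreme case $\hbox{$Y_{\rm p}$} = 0.250$, (Li/H)$_p = 4.1 \times 10^{-10}$ gives a limit close to 6, as stated earlier:

```python
def nnu_upper_limit(Yp, Li_H):
    """Fitted 95% CL upper limit on N_nu from the equation above."""
    return 80.0 * Yp + 2.5e9 * Li_H - 15.15

print(f"{nnu_upper_limit(0.238, 1.6e-10):.2f}")  # -> 4.29, i.e. N_nu <~ 4.3
print(f"{nnu_upper_limit(0.250, 4.1e-10):.1f}")  # -> 5.9, close to 6
```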
Finally we turn to the cases when D/H from quasar absorption systems are
also considered in the analysis. For the high D/H given in Eq.
(\ref{dhigh}), though there is formally still a high-$\eta$ peak, the
value of the likelihood function
$L_{247}$ there is so low that it barely falls within the 95\% CL equal
likelihood contour (see Figures 1b and 2a). Therefore
we will ignore it here. In Figure 6, we show the peak value of $\nnu$
and its upper limit for the two cases of (Li/H)$_p = 1.6$ and $4.1 \times
10^{-10}$. These results differ only slightly from those shown in Figures
3 and 4. We note however, that overall the two values of Li/H do not
give an equally good fit. For fixed D/H, the high value prefers a value
of $\eta_{10} \simeq 1.8$ coinciding with the position of the low-$\eta$
peak for (Li/H)$_p = 1.6 \times 10^{-10}$. At higher Li/H, the
low-$\eta$ peak shifts to lower $\eta$ diminishing the overlap with D/H.
In fact at (Li/H)$_p \mathrel{\mathpalette\fun >} 3.8 \times 10^{-10}$, the likelihood function
$L_{247}$ takes peak values which would lie outside the 95\% CL contour
of the case (Li/H)$_p = 1.6 \times 10^{-10}$. The relative values of the
likelihood function $L_{247}$, on the low-$\eta$ peak, for the five values of Li/H
considered are shown in Figure 7. Contrary to our inability to statistically
distinguish between the two peaks of $L_{47}$, the large variability in the values
of
$L_{247}$ shown in Figure 7 is statistically relevant. Thus,
as claimed in \cite{fkot}, if the high D/H could be confirmed, one could
set a strong limit on the amount of \li7 depletion in halo dwarf stars.
Since the low D/H value has come up somewhat, and since here we are
considering the possibility for higher values of $\hbox{$Y_{\rm p}$}$ and (Li/H)$_p$,
the statistical treatment of the low D/H case is warranted. In Figure 8,
we show the peak value and 95\% CL upper limit from $L_{247}$ when the
low value of D/H is used from Eq. (\ref{dlow}) with (Li/H)$_p = 1.6
\times 10^{-10}$. The results are not significantly different in this
case for the other choices of (Li/H)$_p$. In order for the peak of the
likelihood function to occur at $\nnu = 3$, one needs \he4 abundances as
high as $\hbox{$Y_{\rm p}$} = 0.247$. However, for $Y_p > 0.234$, the
revised low value of D/H is compatible with \he4 and \li7 at the 95\%
CL. The likelihood functions $L_{247}$ are shown in Figure 9 for
completeness.
\section{Conclusions}
We have generalized the full two-dimensional (in $\eta$ and $\nnu$)
likelihood analysis based on big bang nucleosynthesis for a wide range of
possible primordial abundances of \he4 and \li7. Allowing for full freedom
in both the baryon-to-photon ratio,
$\eta$, and the number of light particle degrees of freedom as
characterized by the number of light, stable neutrinos, $\nnu$, we have
updated the allowed range in $\eta$ and $\nnu$ based on the higher value of
$Y_p = 0.238 \pm 0.002 \pm 0.005$ from \cite{fdo2} which includes the
recent data in \cite{iz2}. The likelihood analysis based on \he4 and
\li7 yields the 95\% CL upper limits: $\nnu \le 4.3$ and $\eta_{10} \le
4.9$. The result for $\nnu$ is only slightly altered, $\nnu \le 4.4$,
when the high values of D/H observed in certain quasar absorption systems
\cite{quas1,quas3} are included in the analysis. In this case, the upper limit to
$\eta_{10}$ is lowered to 2.4.
Since the low values of D/H have been revised upward somewhat
\cite{quas2}, they are now consistent with \he4 and \li7 and $\nnu = 3$
at the 95\% CL.
We have also shown how our results for the upper limit to $\nnu$ depend
on the specific choice for the primordial abundance of \he4 and \li7.
If we assume that the observational determination of \li7 in halo stars
is a true indicator of the primordial abundance of \li7, then the upper
limit to $\nnu$ varies from 3.3 -- 5.3 for $\hbox{$Y_{\rm p}$}$ in the range 0.225 --
0.250. If on the other hand, \li7 is depleted in halo stars by as much
as a factor of 2.5, then the upper limit to $\nnu$ could extend up to 6
at $\hbox{$Y_{\rm p}$} = 0.250$.
\bigskip
{\bf Acknowledgments}
We note that this work was begun in collaboration with David Schramm.
This work was supported in part by
DOE grant DE-FG02-94ER40823 at Minnesota.
\section{Introduction}
Let $X$ be a scheme of finite type over a field
$k$. According to Beilinson \cite{Be}, given any quasi-coherent
$\mcal{O}_{X}$-module $\mcal{M}$ and an integer $q$, there is a
flasque $\mcal{O}_{X}$-module
$\underline{\mbb{A}}_{\mrm{red}}^{q}(\mcal{M})$,
called the {\em sheaf of adeles}.
This is a generalization of the classical adeles of number theory
(cf.\ Example \ref{exa2.1}).
Moreover, there are homomorphisms
$\partial : \underline{\mbb{A}}_{\mrm{red}}^{q}(\mcal{M}) \rightarrow
\underline{\mbb{A}}_{\mrm{red}}^{q + 1}(\mcal{M})$
which make $\underline{\mbb{A}}_{\mrm{red}}^{{\textstyle \cdot}}(\mcal{M})$ into a
complex, and
$\mcal{M} \rightarrow \underline{\mbb{A}}_{\mrm{red}}^{{\textstyle \cdot}}(\mcal{M})$
is a quasi-isomorphism.
Now let $\Omega_{X / k}^{{\textstyle \cdot}}$ be the
algebra of K\"{a}hler differential forms on $X$. In \cite{HY}
we proved that the sheaf
\[ \mcal{A}^{{\textstyle \cdot}}_{X} =
\underline{\mbb{A}}_{\mrm{red}}^{{\textstyle \cdot}}(\Omega_{X / k}^{{\textstyle \cdot}}) =
\bigoplus_{p, q}
\underline{\mbb{A}}_{\mrm{red}}^{q}(\Omega^{p}_{X / k}) \]
is a resolution of $\Omega_{X / k}^{{\textstyle \cdot}}$ as differential graded
algebras (DGAs). Therefore when $X$ is smooth,
$\mcal{A}^{{\textstyle \cdot}}_{X}$ calculates the
algebraic De Rham cohomology:
$\mrm{H}^{p}_{\mrm{DR}}(X) \cong \mrm{H}^{p} \Gamma(X,
\mcal{A}^{{\textstyle \cdot}}_{X})$.
We see that there is an analogy between $\mcal{A}^{{\textstyle \cdot}}_{X}$
and the Dolbeault sheaves of smooth forms on a complex-analytic
manifold.
Carrying this analogy further, in this paper we show that when
$\operatorname{char} k = 0$, any vector bundle $E$ on $X$ admits an
adelic connection $\nabla$. Given such a connection one can
assign adelic Chern forms
$c_{i}(E, \nabla) \in \Gamma(X, \mcal{A}^{2i}_{X})$,
whose classes
$c_{i}(E) := [c_{i}(E, \nabla)] \in \mrm{H}^{2i}_{\mrm{DR}}(X)$
are the usual Chern classes. We include three applications of our
adelic Chern-Weil theory, to demonstrate its effectiveness and
potential.
The idea of using adeles for an algebraic Chern-Weil theory
goes back to Parshin, who constructed a Chern form
$c_{i}(E) \in \underline{\mbb{A}}^{i}(\Omega_{X / k}^{i})$
using an $i$-cocycle on $\mrm{Gl}(\underline{\mbb{A}}^{1}(\mcal{O}_{X}))$
(see \cite{Pa}). Unfortunately we found it quite difficult to
perform calculations with Parshin's forms. Indeed, there is an
inherent complication to any Chern-Weil theory based on
$\mcal{A}^{{\textstyle \cdot}}_{X}$.
The DGA $\mcal{A}^{{\textstyle \cdot}}_{X}$, with its
Alexander-Whitney product, is not (graded)
commutative. This means that even if one had some kind of
``curvature matrix'' $R$ with entries in $\mcal{A}^{2}_{X}$, one
could not simply evaluate invariant polynomials on $R$.
The problem of noncommutativity was encountered long ago in algebraic
topology, and was dubbed the ``commutative cochain problem''. The
solution, by Thom and Sullivan, was extended to the setup of
cosimplicial DGAs by Bousfield-Gugen\-heim and Hinich-Schechtman (see
\cite{BG}, \cite{HS1}, \cite{HS2}). In our framework this gives a
sheaf of commutative DGAs $\tilde{\mcal{A}}^{{\textstyle \cdot}}_{X}$ on $X$,
called the sheaf of {\em Thom-Sullivan adeles}, and a homomorphism of
complexes (``integration on the fiber'')
$\int_{\Delta} :
\tilde{\mcal{A}}^{{\textstyle \cdot}}_{X} \rightarrow \mcal{A}^{{\textstyle \cdot}}_{X}$.
This map induces an isomorphism of graded algebras
$\mrm{H}^{{\textstyle \cdot}}(\int_{\Delta}) :
\mrm{H}^{{\textstyle \cdot}} \Gamma(X, \tilde{\mcal{A}}^{{\textstyle \cdot}}_{X}) \rightarrow
\mrm{H}^{{\textstyle \cdot}} \Gamma(X, \mcal{A}^{{\textstyle \cdot}}_{X})$.
We should point out that $\int_{\Delta}$ involves denominators, so
it is necessary to work in characteristic $0$.
Bott discovered a way of gluing together connections defined
locally on a manifold (see \cite{Bo1}). This method was imported to
algebraic geometry by Zhou (in \cite{Zh}), who used \v{C}ech
cohomology. When we tried to write the formulas in terms of adeles, it
became evident that they gave a connection on the Thom-Sullivan adeles
$\tilde{\mcal{A}}^{{\textstyle \cdot}}_{X}$. Later we realized that a similar
construction was used by Dupont in the context of simplicial manifolds
(see \cite{Du}).
In the remainder of the Introduction we outline the main results of
our paper.
\paragraph{Adelic Connections}
Let $k$ be a field of characteristic $0$ and $X$ a finite type scheme
over it. The definition of Beilinson adeles on $X$ and their
properties will be reviewed in Section 2.
For now let us just note that the sheaf of adeles
$\tilde{\mcal{A}}^{{\textstyle \cdot}}_{X}$ is a commutative DGA, and
$\Omega^{{\textstyle \cdot}}_{X / k} \rightarrow \tilde{\mcal{A}}^{{\textstyle \cdot}}_{X}$
is a DGA quasi-isomorphism.
Let $\mcal{E}$ be the locally free $\mcal{O}_{X}$-module of rank $r$
associated to the vector bundle $E$.
An {\em adelic connection} on $\mcal{E}$ is by definition a
connection
\[ \nabla : \tilde{\mcal{A}}^{0}_{X} \otimes_{\mcal{O}_{X}}
\mcal{E} \rightarrow
\tilde{\mcal{A}}^{1}_{X} \otimes_{\mcal{O}_{X}} \mcal{E} \]
over the algebra $\tilde{\mcal{A}}^{0}_{X}$.
Such connections are abundant. One way to get an adelic connection is
by choosing, for every point $x$, a basis (or frame; we use these
terms interchangeably)
$\bsym{e}_{x} = (e_{x, 1}, \ldots, e_{x, r})$
for the $\mcal{O}_{X, x}$-module $\mcal{E}_{x}$. We then get a
Levi-Civita connection
\[ \nabla_{x} : \mcal{E}_{x} \rightarrow
\Omega^{1}_{X / k, x} \otimes_{\mcal{O}_{X, x}} \mcal{E}_{x} \]
over the $k$-algebra
$\mcal{O}_{X, x}$.
The Bott gluing mentioned above produces an adelic connection
$\nabla$ (see Proposition \ref{prop3.1}).
\paragraph{Adelic Chern-Weil Homomorphism}
Since $\tilde{\mcal{A}}^{{\textstyle \cdot}}_{X}$ is a (graded) commutative DGA,
an adelic connection $\nabla$ on $\mcal{E}$ gives a curvature form
\[ R := \nabla^{2} \in
\Gamma(X, \tilde{\mcal{A}}^{2}_{X} \otimes_{\mcal{O}_{X}}
\mcal{E}nd(\mcal{E})) . \]
Denote by
$\mrm{S}(\mrm{M}_{r}(k)^{*}) = \mcal{O}(\mrm{M}_{r} \times k)$
the algebra of polynomial functions on $r \times r$ matrices, and let
$I_{r}(k) :=\mrm{S}(\mrm{M}_{r}(k)^{*})^{\mrm{Gl}_{r}(k)}$
be the subalgebra of conjugation-invariant functions.
Denote by $P_{i}$ the $i$-th elementary invariant polynomial, so
$P_{1} = \operatorname{tr}, \ldots, P_{r} = \operatorname{det}$.
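Concretely, the $P_{i}$ are the coefficients in the expansion
$\operatorname{det}(I + t A) = 1 + P_{1}(A) t + \cdots + P_{r}(A) t^{r}$,
so $P_{1}$ is the trace and $P_{r}$ the determinant. The following small
script (an illustration of ours, not part of the theory) verifies this,
together with conjugation invariance, on a $2 \times 2$ example:

```python
import sympy as sp

def invariant_polys(A):
    """Return [P_1(A), ..., P_r(A)], the coefficients in the expansion
    det(I + t*A) = 1 + P_1(A) t + ... + P_r(A) t^r, so that
    P_1 = trace and P_r = determinant."""
    r = A.shape[0]
    t = sp.Symbol('t')
    p = sp.expand((sp.eye(r) + t * A).det())
    return [p.coeff(t, i) for i in range(1, r + 1)]

A = sp.Matrix([[1, 2], [3, 4]])
P = invariant_polys(A)           # [5, -2]: trace and determinant of A
G = sp.Matrix([[1, 1], [0, 1]])  # conjugation leaves each P_i unchanged
assert invariant_polys(G * A * G.inv()) == P
```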
For any $P \in I_{r}(k)$ the form
$P(R) \in \Gamma(X, \tilde{\mcal{A}}^{{\textstyle \cdot}}_{X})$
is closed. So there is a $k$-algebra homomorphism
$w_{\mcal{E}} :
I_{r}(k) \rightarrow
\mrm{H}^{{\textstyle \cdot}} \Gamma(X, \tilde{\mcal{A}}^{{\textstyle \cdot}}_{X})$,
$P \mapsto [P(R)]$,
called the {\em adelic Chern-Weil homomorphism}.
In Theorem \ref{thm3.2} we prove that $P(\mcal{E}) = w_{\mcal{E}}(P)$
is independent of the connection $\nabla$ (this is true even if $X$
is singular). Defining the $i$-th {\em adelic Chern form} to be
\[ c_{i}(\mcal{E}; \nabla) := \int_{\Delta} P_{i}(R) \in
\Gamma(X, \mcal{A}^{2i}_{X}) , \]
we show the three axioms of Chern classes are satisfied
(Theorem \ref{thm3.4}). Hence when $X$ is smooth over $k$,
\[ c_{i}(\mcal{E}) := [c_{i}(\mcal{E}; \nabla)] \in
\mrm{H}^{2i}_{\mrm{DR}}(X) \]
is the usual $i$-th Chern class.
\paragraph{Secondary Characteristic Classes}
Suppose now that $X$ is a smooth scheme over $k$, and let $\mcal{E}$
be a locally free sheaf of rank $r$ on $X$. Let $P \in I_{r}(k)$ be
an invariant polynomial function of degree $m \geq 2$.
In \cite{BE}, Bloch and Esnault showed that given
an algebraic connection
$\nabla : \mcal{E} \rightarrow \Omega^{1}_{X / k} \otimes_{\mcal{O}_{X}}
\mcal{E}$,
there is a Chern-Simons class
$\mrm{T} P(\mcal{E}, \nabla)$
satisfying
$\mrm{d} \mrm{T} P(\mcal{E}, \nabla) =
P(\mcal{E}) \in \mrm{H}^{2m}_{\mrm{DR}}(X)$.
(We are using the notation of \cite{Es}.)
Since adelic connections always exist, we can construct adelic
secondary characteristic classes on {\em any} smooth $k$-scheme
$X$ and {\em any} locally free sheaf $\mcal{E}$. Theorem \ref{thm4.1}
says that given an adelic connection $\nabla$ there is a class
\[ \mrm{T} P(\mcal{E}; \nabla) \in
\Gamma \left( X, \mcal{A}^{2m-1}_{X} / \mrm{D}(\mcal{A}^{2m-2}_{X})
\right) \]
satisfying
\[ \mrm{D} \mrm{T} P(\mcal{E}; \nabla) = P(\mcal{E}) \in
\mrm{H}^{2m}_{\mrm{DR}}(X) . \]
The existence of adelic secondary characteristic classes, combined
with the action of adeles on the residue complex (see below,
and Theorem \ref{thm6.1}), opens new possibilities for research
on vanishing of cohomology classes (cf.\ \cite{Es}).
\paragraph{Bott Residue Formula}
The adeles of differential forms can be integrated.
If $\operatorname{dim} X = n$, each maximal chain of points
$\xi = (x_{0}, \ldots, x_{n})$ determines a local integral
$\operatorname{Res}_{\xi} : \Gamma(X, \mcal{A}^{2n}_{X}) \rightarrow k$
(cf.\ \cite{Be}, \cite{Ye1}). If $X$ is smooth and proper then
the global map
\[ \int_{X} := \sum_{\xi} \operatorname{Res}_{\xi} :
\mrm{H}^{2n}_{\mrm{DR}}(X) =
\mrm{H}^{2n} \Gamma(X, \mcal{A}^{{\textstyle \cdot}}_{X}) \rightarrow k \]
coincides with the usual ``algebraic integral'' of, say, \cite{Ha1}.
Assume $X$ is a smooth projective variety of dimension $n$.
Let $P \in I_{r}(k)$ be a homogeneous polynomial of degree $n$,
so that $P = Q(P_{1}, \ldots, P_{r})$ for some polynomial $Q$ in $r$
variables. Let $v \in \Gamma(X, \mcal{T}_{X})$ be a vector field
with isolated zeroes, and assume $v$ acts on the locally free sheaf
$\mcal{E}$. For each zero $z$ of $v$ there is a local
invariant $P(v, \mcal{E}, z) \in k$, which has an explicit expression
in terms of local coordinates. Theorem \ref{thm5.1} says that
\[ \int_{X} Q(c_{1}(\mcal{E}), \ldots, c_{r}(\mcal{E})) =
\sum_{v(z) = 0} P(v, \mcal{E}, z) . \]
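As a sanity check of the shape of this formula, take
$X = \mbb{P}^{1}$, $\mcal{E} = \mcal{T}_{X}$ and
$v = z \, \partial / \partial z$, with nondegenerate zeros at
$z = 0, \infty$; at such a zero Bott's classical local invariant is
$P(A_{z}) / \operatorname{det}(L_{z})$, where $L_{z}$ is the
linearization of $v$ at $z$ and $A_{z}$ its induced action on the fiber
$\mcal{E}_{z}$ (cf.\ \cite{Bo2}). The computation below (our own sketch,
not the adelic proof) recovers $\int_{X} c_{1}(\mcal{T}_{X}) = 2$:

```python
import sympy as sp

def local_invariant(P, A, L):
    """Bott's local term at a nondegenerate zero: P(A) / det(L)."""
    return P(A) / L.det()

# X = P^1, E = T_X, v = z d/dz, P = P_1 = trace (degree n = 1).
# At z = 0:        v linearizes to L = (1),  acting on the fiber by A = (1).
# At z = infinity: in the chart w = 1/z, v = -w d/dw, so L = A = (-1).
tr = lambda M: M.trace()
contrib = [local_invariant(tr, sp.Matrix([[s]]), sp.Matrix([[s]]))
           for s in (1, -1)]
assert sum(contrib) == 2  # = deg c_1(T_{P^1}) = chi(P^1)
```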
The proof of the theorem follows the steps of Bott's original
proof in \cite{Bo2}, translated to adeles and algebraic residues.
Example \ref{exa5.1} provides an explicit illustration of the result
in the case of a nonreduced zero $z$.
We should of course mention the earlier algebraic proof of the Bott
Residue Formula for isolated zeroes, by Carrell-Lieberman \cite{CL},
which uses Grothendieck's global duality.
There is also a Bott Residue Formula for group actions, which is best
stated as a localization formula in equivariant cohomology (cf.\
\cite{AB}). Recently this formula was used in enumerative geometry,
see for instance \cite{ES} and \cite{Ko}.
Edidin-Graham \cite{EG} proved Bott's formula in the equivariant
intersection ring.
\paragraph{The Gauss-Bonnet Formula}
Let $k$ be a perfect field of any characteristic, and let $X$ be a
finite type $k$-scheme. The residue complex $\mcal{K}^{{\textstyle \cdot}}_{X}$ is
by definition the Cousin complex of $\pi^{!} k$, where
$\pi : X \rightarrow \operatorname{Spec} k$ is the structural morphism (cf.\
\cite{RD}). Each $\mcal{K}^{q}_{X}$ is a quasi-coherent sheaf. Let
\[ \mcal{F}^{{\textstyle \cdot}}_{X} := \mcal{H}om_{\mcal{O}_{X}}(
\Omega^{{\textstyle \cdot}}_{X / k}, \mcal{K}^{{\textstyle \cdot}}_{X}) \]
which is a graded sheaf in the obvious way.
According to \cite{EZ} or \cite{Ye3}, there is an operator
$\mrm{D} : \mcal{F}^{i}_{X} \rightarrow \mcal{F}^{i + 1}_{X}$
which makes $\mcal{F}^{{\textstyle \cdot}}_{X}$ into a DG
$\Omega^{{\textstyle \cdot}}_{X / k}$-module. $\mcal{F}^{{\textstyle \cdot}}_{X}$ is called the
{\em De Rham-residue complex}.
When $X$ is smooth,
$\mrm{H}_{i}^{\mrm{DR}}(X) =
\mrm{H}^{-i} \Gamma(X, \mcal{F}^{{\textstyle \cdot}}_{X})$.
In \cite{Ye5} we proved that there is a natural
structure of right DG
$\mcal{A}^{{\textstyle \cdot}}_{X}$-module on $\mcal{F}^{{\textstyle \cdot}}_{X}$, extending
the $\Omega^{{\textstyle \cdot}}_{X / k}$-module structure
(cf.\ Theorem \ref{thm6.1}).
The action is ``by taking residues.''
When $f : X \rightarrow Y$ is proper then
$\operatorname{Tr}_{f} : f_{*} \mcal{F}^{{\textstyle \cdot}}_{X} \rightarrow \mcal{F}^{{\textstyle \cdot}}_{Y}$
is a homomorphism of DG $\mcal{A}^{{\textstyle \cdot}}_{Y}$-modules.
If we view the adeles $\mcal{A}^{p, q}_{X}$ as an algebraic analog of
the smooth forms of type $(p, q)$ on a complex-analytic manifold,
then $\mcal{F}^{-p, -q}_{X}$ is the analog of the currents of
type $(p, q)$.
Suppose $\operatorname{char} k = 0$, $X$ is smooth irreducible of dimension
$n$, $\mcal{E}$ is a locally free $\mcal{O}_{X}$-module of rank $r$,
$v$ is a regular section of $\mcal{E}$ and $Z$ is its zero scheme.
Let $C_{X}, C_{Z} \in \Gamma(X, \mcal{F}^{{\textstyle \cdot}}_{X})$ be the
fundamental classes. In Theorem \ref{thm7.1} we prove the following
version of the Gauss-Bonnet Formula:
there is an adelic connection $\nabla$ on $\mcal{E}$ satisfying
\[ C_{X} \cdot c_{r}(\mcal{E}, \nabla) =
(-1)^{m} C_{Z} \in \mcal{F}_{X}^{-2(n - r)} \]
with $m = nr + \binom{r+1}{2}$.
Observe that this formula is on the level of differential forms.
Passing to (co)ho\-mo\-logy we recover the familiar formula
$c_{r}(\mcal{E}) \smile [X] = [Z] \in
\mrm{H}_{2(n - r)}^{\mrm{DR}}(X)$
(cf.\ \cite{Fu} \S 14.1).
\paragraph{Acknowledgements}
The authors wish to thank S.\ Kleiman who suggested studying the
adelic approach to the Bott Residue Formula, J.\ Lipman for calling
our attention to Zhou's work, V.\ Hinich for explaining to us the
construction of the Thom-Sullivan cochains, D.\ Blanc for help with
cosimplicial groups, and P.\ Sastry for discussions. Special thanks
also go to P.\ Golginger and W.\ Krinner for collecting the available
material on algebraic connections (\cite{Go}, \cite{Kr}). Part of the
research was done while the first author was visiting the Weizmann
Institute. He wishes to express his gratitude towards this institution
for its hospitality during his stay there.
\section{Cosimplicial Algebras and their Normalizations}
In this section we review some well known facts about cosimplicial
objects, and also discuss the less known Thom-Sullivan normalization.
Our sources are \cite{Ma}, \cite{ML}, \cite{BG} and \cite{HS1}.
Denote by $\Delta$ the category whose objects are
the finite ordered sets $[n] := \{ 0, \ldots, n \}$, and whose
morphisms are the monotone nondecreasing functions $[m] \rightarrow [n]$. Let
$\partial^{i} : [n-1] \rightarrow [n]$ stand for the $i$-th coface map, and
let $s^{i} : [n+1] \rightarrow [n]$ stand for the $i$-th codegeneracy map.
These maps generate by composition all morphisms in $\Delta$.
Let $\Delta^{\circ}$ denote the opposite category.
By definition a {\em simplicial object} in a category $\msf{C}$ is a
functor $S : \Delta^{\circ} \rightarrow \msf{C}$. Often one writes $S_{n}$
instead of the object $S[n] \in \msf{C}$. A {\em cosimplicial object}
is a functor $S : \Delta \rightarrow \msf{C}$.
Denote by $\Delta^{\circ} \msf{C}$ (resp.\ $\Delta \msf{C}$)
the category of simplicial (resp.\ cosimplicial) objects in $\msf{C}$.
\begin{exa}
Let $P$ be a partially ordered set.
A simplex (or chain) of length $n$ in $P$ is a sequence
$\sigma = (x_{0}, \ldots, x_{n})$, with $x_{i} \leq x_{i+1}$.
More generally, if $P$ is a category, then an $n$-simplex is a functor
$\sigma : [n] \rightarrow P$. Letting
$S(P)_{n}$ be the set of $n$-simplices in $P$, we see that
$S(P)$ is a simplicial set.
\end{exa}
\begin{exa} \label{exa1.5}
If we take $P = [n]$, then we get the standard $n$-simplex
$\Delta^{n} \in \Delta^{\circ} \msf{Sets}$.
As a functor $\Delta^{\circ} \rightarrow \msf{Sets}$ one has
$\Delta^{n} = \operatorname{Hom}_{\Delta}(-, [n])$.
Observe that
\[ \Delta_{m}^{n} = \operatorname{Hom}_{\Delta}([m], [n]) =
\{ (i_{0}, \ldots, i_{m})\ |\
0 \leq i_{0} \leq \cdots \leq i_{m} \leq n
\} . \]
\end{exa}
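In particular $\Delta^{n}_{m}$ is finite, of cardinality
$\binom{n + m + 1}{m + 1}$: monotone nondecreasing tuples are exactly
multisets of size $m + 1$ drawn from $n + 1$ symbols. A quick
enumeration (our own sketch, not part of the paper):

```python
from itertools import combinations_with_replacement
from math import comb

def delta(n, m):
    """The m-simplices of Delta^n: monotone nondecreasing tuples
    (i_0, ..., i_m) with 0 <= i_j <= n."""
    return list(combinations_with_replacement(range(n + 1), m + 1))

# multisets of size m+1 from n+1 symbols: binom(n+m+1, m+1)
assert len(delta(3, 2)) == comb(3 + 2 + 1, 2 + 1)  # 20 simplices
```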
\begin{exa} \label{exa1.1}
Given a scheme $X$, specialization defines a partial ordering on its
underlying set of points: $x \leq y$ if $y \in \overline{\{ x \}}$.
We denote by $S(X)$ the resulting simplicial set.
\end{exa}
\begin{exa} \label{exa1.2}
Let $\Delta^{n}_{\mrm{top}}$ be the standard realization of
$\Delta^{n}$, i.e.\ the compact topological space
\[ \{ (t_{0}, \ldots, t_{n})\ |\ t_{i} \geq 0 \text{ and }
\sum t_{i} =1 \} \subset \mbb{R}^{n+1} . \]
Then $\Delta_{\mrm{top}} = \{ \Delta^{n}_{\mrm{top}} \}$
is a cosimplicial topological space.
\end{exa}
Let $k$ be any commutative ring. By a differential graded (DG)
$k$-module we mean a cochain complex, namely a graded module
$M = \bigoplus_{q \in \mbb{Z}} M^{q}$
with an endomorphism $\mrm{d}$ of degree $1$ satisfying
$\mrm{d}^{2} = 0$.
By a differential graded algebra (DGA) over $k$ we mean a DG module
$A$ with a DG homomorphism $A \otimes_{k} A \rightarrow A$. So $A$ is
not assumed to be commutative or associative.
\begin{exa} \label{exa1.3}
Let $t_{0}, t_{1}, \ldots$ be indeterminates (of degree $0$). Define
\[ R_{n} := k \sqbr{ t_{0}, \ldots, t_{n} } /
(t_{0} + \cdots + t_{n} -1) \]
and
$\Delta^{n}_{k} := \operatorname{Spec} R_{n}$.
Then as in the previous example, $\Delta_{k} = \{ \Delta^{n}_{k} \}$
is a cosimplicial scheme. Letting
$\Omega^{{\textstyle \cdot}}(\Delta^{n}_{k}) := \Omega^{{\textstyle \cdot}}_{R_{n} / k}$,
we see that
$\Omega^{{\textstyle \cdot}}(\Delta_{k}) := \{
\Omega^{{\textstyle \cdot}}(\Delta^{n}_{k}) \}$
is a simplicial DGA over $k$.
\end{exa}
Consider a cosimplicial $k$-module
$M = \{ M^{q} \} \in \Delta \msf{Mod}(k)$.
Its standard normalization is the DG module
$(\mrm{N} M, \partial)$ whose degree $q$ piece is
$\mrm{N}^{q} M := \bigcap \operatorname{Ker}(s^{i}) \subset M^{q}$,
and $\partial := \sum (-1)^{i} \partial^{i}$.
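The alternating signs in $\partial$ are what force $\partial^{2} = 0$,
via the cosimplicial identities. A toy model (our own illustration;
here cochains live on increasing tuples in a finite ordered set, and
$\partial$ takes the familiar \v{C}ech form):

```python
from itertools import combinations

POINTS = range(5)  # a totally ordered set; its chains are increasing tuples

def chains(q):
    return list(combinations(POINTS, q + 1))

def coboundary(f, q):
    """(df)(x_0,...,x_{q+1}) = sum_i (-1)^i f(x_0,...,omit x_i,...,x_{q+1});
    the alternating signs make consecutive coboundaries cancel."""
    return {c: sum((-1) ** i * f[c[:i] + c[i + 1:]] for i in range(q + 2))
            for c in chains(q + 1)}

f = {c: (c[0] + 2) * (c[1] + 1) for c in chains(1)}  # an arbitrary 1-cochain
ddf = coboundary(coboundary(f, 1), 2)
assert all(v == 0 for v in ddf.values())  # d(d(f)) = 0
```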
Now suppose
$M = \{ M^{{\textstyle \cdot}, q} \} \in \Delta \msf{DGMod}(k)$,
i.e.\ a cosimplicial DG $k$-module. Each
$M^{{\textstyle \cdot}, q} = \bigoplus_{p \in \mbb{Z}} M^{p, q}$
is a DG module with operator
$\mrm{d} : M^{p, q} \rightarrow M^{p+1, q}$, and each
$M^{p,{\textstyle \cdot}}$ is a cosimplicial $k$-module.
Define $\mrm{N}^{p,q} M := \mrm{N}^{q} M^{p,{\textstyle \cdot}}$,
$\mrm{N}^{i} M := \bigoplus_{p+q=i} \mrm{N}^{p,q} M$,
and
$\mrm{N} M := \bigoplus_{i} \mrm{N}^{i} M$.
Then $\mrm{N} M$ is a DG module with coboundary operator
$\mrm{D} := \mrm{D}' + \mrm{D}''$,
where
$\mrm{D}' := (-1)^{q} \mrm{d} : \mrm{N}^{p,q} M \rightarrow
\mrm{N}^{p+1,q} M$
and
$\mrm{D}'' := \partial : \mrm{N}^{p,q} M \rightarrow \mrm{N}^{p,q+1} M$.
Another way to visualize this is by defining for each $q$ a DG module
$\mrm{N}^{{\textstyle \cdot},q} M := \bigcap \operatorname{Ker}(s^{i})
\subset M^{{\textstyle \cdot},q}[-q]$
(the shift by $-q$), so the operator is indeed $\mrm{D}'$.
Then
$\mrm{D}'' = \partial : \mrm{N}^{{\textstyle \cdot},q} M \rightarrow
\mrm{N}^{{\textstyle \cdot},q+1} M$
has degree $1$ and
$\mrm{N} M = \bigoplus_{q} \mrm{N}^{{\textstyle \cdot},q} M$.
If $A$ is a cosimplicial DGA, that is $A \in \Delta \cat{DGA}(k)$,
then $\mrm{N} A$ is a DGA with
the Alexander-Whitney product. For any
$a \in \mrm{N}^{p, q} A$ and $b \in \mrm{N}^{p',q'} A$ one has
\begin{equation} \label{eqn1.2}
a \cdot b =
\partial^{-}(a) \cdot \partial^{+}(b) \in
\mrm{N}^{p+p', q+q'} A ,
\end{equation}
where
$\partial^{-} : [q] \rightarrow [q + q']$ is the simplex $(0,1, \ldots, q)$
and
$\partial^{+} : [q'] \rightarrow [q + q']$ is the simplex
$(q,q+1, \ldots, q+q')$ (cf.\ Example \ref{exa1.5}).
Note that if each algebra $A^{{\textstyle \cdot}, q}$ is associative, then so is
$\mrm{N} A$; however $\mrm{N} A$ is usually not commutative.
If $M$ is a cosimplicial DG left $A$-module then $\mrm{N} M$ is a DG
left $\mrm{N} A$-module.
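The noncommutativity of the Alexander-Whitney product is visible
already in the smallest examples. For cochains on a finite ordered set
(our own illustration, not from the paper), the front-face/back-face
product of formula (\ref{eqn1.2}) reads:

```python
from itertools import combinations

POINTS = range(3)

def chains(q):
    return list(combinations(POINTS, q + 1))

def aw_product(a, b, q, qp):
    """Alexander-Whitney product of a q-cochain a and a q'-cochain b:
    (a.b)(x_0,...,x_{q+q'}) = a(x_0,...,x_q) * b(x_q,...,x_{q+q'})."""
    return {c: a[c[:q + 1]] * b[c[q:]] for c in chains(q + qp)}

a = {c: c[0] + 1 for c in chains(1)}  # two 1-cochains
b = {c: c[1] + 2 for c in chains(1)}
ab = aw_product(a, b, 1, 1)[(0, 1, 2)]  # a(0,1) * b(1,2) = 1 * 4 = 4
ba = aw_product(b, a, 1, 1)[(0, 1, 2)]  # b(0,1) * a(1,2) = 3 * 2 = 6
assert ab != -ba  # graded commutativity fails
```

Since $a$ and $b$ both have degree $1$, graded commutativity would
require $a \cdot b = - b \cdot a$, which fails here.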
We shall need another normalization of cosimplicial objects.
The definition below is extracted from the work of Hinich-Schechtman,
cf.\ \cite{HS1}, \cite{HS2}. Fix a commutative
$\mbb{Q}$-algebra $k$.
\begin{dfn} \label{dfn1.1}
Suppose $M= \{ M^{q} \}$ is a cosimplicial $k$-module.
Let
\[ \tilde{\mrm{N}}^{q} M \subset
\prod_{l=0}^{\infty} \left( \Omega^{q}(\Delta^{l}_{\mbb{Q}})
\otimes_{\mbb{Q}} M^{l} \right) \]
be the submodule consisting of all elements
$\bsym{u} = (u_{0}, u_{1}, \ldots)$,
$u_{l} \in \Omega^{q}(\Delta^{l}_{\mbb{Q}}) \otimes_{\mbb{Q}} M^{l}$,
s.t.\
\begin{eqnarray} \label{eqn1.5}
(1 \otimes \partial^{i}) u_{l} & = & (\partial_{i} \otimes 1)
u_{l+1} \\
(s_{i} \otimes 1) u_{l} & = & (1 \otimes s^{i}) u_{l+1}
\end{eqnarray}
for all $0 \leq l$, $0 \leq i \leq l+1$.
Given a cosimplicial DG $k$-module $M = \{ M^{{\textstyle \cdot}, q} \}$, let
$\tilde{\mrm{N}}^{p,q} M := \tilde{\mrm{N}}^{q} M^{p,{\textstyle \cdot}}$,
$\tilde{\mrm{N}}^{i} M := \bigoplus_{p+q=i}
\tilde{\mrm{N}}^{p,q} M$
and
$\tilde{\mrm{N}} M := \bigoplus_{i} \tilde{\mrm{N}}^{i} M$.
Define
$\mrm{D}' := (-1)^{q} \otimes \mrm{d}$,
$\mrm{D}'' := \mrm{d} \otimes 1$ and
$\mrm{D} := \mrm{D}' + \mrm{D}''$.
The resulting complex $(\tilde{\mrm{N}} M, \mrm{D})$
is called the {\em Thom-Sullivan normalization} of $M$.
If $A$ is a cosimplicial $k$-DGA, then $\tilde{\mrm{N}} A$
inherits the component-wise multiplication from
$\prod_{l} \Omega^{q}(\Delta^{l}_{\mbb{Q}})
\otimes_{\mbb{Q}} A^{p, l}$,
so it is a DGA.
\end{dfn}
In the definition above the signs are in agreement with the usual
conventions; keep in mind that $M^{p,q}$ is in degree $p$
(cf.\ \cite{ML} Ch.\ VI \S 7). It is clear that if each DGA
$A^{{\textstyle \cdot}, q}$
is commutative (resp.\ associative), then so is $\tilde{\mrm{N}} A$.
Usual integration on the real simplex $\Delta^{l}_{\mrm{top}}$
yields a $\mbb{Q}$-linear map of degree $0$,
$\int_{\Delta^{l}} : \Omega^{{\textstyle \cdot}}(\Delta^{l}_{\mbb{Q}})
\rightarrow \mbb{Q}[-l]$,
such that
$\int_{\Delta^{l}}(\mrm{d} t_{1} \wedge \cdots \wedge
\mrm{d} t_{l}) = \frac{1}{l !}$.
By linearity, for any cosimplicial DG module $M$ this extends to
a degree $0$ homomorphism
\[ \int_{\Delta^{l}} : \Omega^{{\textstyle \cdot}}(\Delta^{l}_{\mbb{Q}})
\otimes_{\mbb{Q}} M^{{\textstyle \cdot},l} \rightarrow
\mbb{Q}[-l] \otimes_{\mbb{Q}} M^{{\textstyle \cdot},l} = M^{{\textstyle \cdot},l}[-l] . \]
Note that $\int_{\Delta^{l}}$ sends
$\mrm{D}' := (-1)^{q} \otimes \mrm{d}$ to
$\mrm{D}' := (-1)^{l} \mrm{d}$. Define
$\int_{\Delta} : \tilde{\mrm{N}} M \rightarrow \bigoplus_{q} M^{{\textstyle \cdot},q}[-q]$
by:
\begin{equation} \label{eqn1.3}
\int_{\Delta} (u_{0}, u_{1}, \ldots) :=
\sum_{l = 0}^{\infty} \int_{\Delta^{l}} u_{l} .
\end{equation}
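The normalization
$\int_{\Delta^{l}}(\mrm{d} t_{1} \wedge \cdots \wedge \mrm{d} t_{l})
= \frac{1}{l!}$
is simply the volume of the standard $l$-simplex, computed as an
iterated integral. A symbolic check (our own sketch):

```python
import sympy as sp

def simplex_integral(expr, ts):
    """Integrate expr over the standard simplex
    {t_i >= 0, t_1 + ... + t_l <= 1} as an iterated integral."""
    for i, t in enumerate(ts):
        expr = sp.integrate(expr, (t, 0, 1 - sum(ts[i + 1:])))
    return expr

l = 3
ts = sp.symbols(f't1:{l + 1}')   # (t1, t2, t3)
vol = simplex_integral(sp.Integer(1), ts)
assert vol == sp.Rational(1, 6)  # 1/l! for l = 3
```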
\begin{lem}
The image of $\int_{\Delta}$ lies inside $\mrm{N} M$, and
$\int_{\Delta} : \tilde{\mrm{N}} M \rightarrow \mrm{N} M$
is a $k$-linear homomorphism of complexes.
\end{lem}
\begin{proof} A direct verification, amounting to Stokes' Theorem on
$\Delta^{l}_{\mrm{top}}$.
\end{proof}
What we have is a natural transformation
$\int_{\Delta} : \tilde{\mrm{N}} \rightarrow \mrm{N}$
of functors
$\Delta \msf{DGMod}(k) \rightarrow \msf{DGMod}(k)$.
\begin{thm} \label{thm1.1} \textup{(Simplicial De Rham Theorem)}\
Let $M$ be a cosimplicial DG $\mbb{Q}$-module. Then
$\int_{\Delta} : \tilde{\mrm{N}} M \rightarrow \mrm{N} M$
is a quasi-isomorphism.
\end{thm}
\begin{thm} \label{thm1.2}
Let $k$ be a commutative $\mbb{Q}$-algebra and $A$ a cosimplicial DG
$k$-algebra. Then
$\mrm{H}(\int_{\Delta}) : \mrm{H} \tilde{\mrm{N}} A \rightarrow
\mrm{H} \mrm{N} A$
is an isomorphism of graded $k$-algebras. If $M$ is a cosimplicial
DG $A$-module, then
$\mrm{H}(\int_{\Delta}) : \mrm{H} \tilde{\mrm{N}} M \rightarrow
\mrm{H} \mrm{N} M$
is an isomorphism of graded $\mrm{H} \tilde{\mrm{N}} A$-modules.
\end{thm}
The proofs are essentially contained in \cite{BG} and \cite{HS1}.
For the sake of completeness we include proofs in Appendix A
of this paper.
\section{Adeles of Differential Forms}
In this section we apply the constructions of Section 1 to the
cosimplicial DGA $\underline{\mbb{A}}(\Omega^{{\textstyle \cdot}}_{X/k})$ on a scheme $X$.
This will give two DGAs, $\mcal{A}^{{\textstyle \cdot}}_{X}$ and
$\tilde{\mcal{A}}^{{\textstyle \cdot}}_{X}$, which are resolutions of
$\Omega^{{\textstyle \cdot}}_{X/k}$.
Let us begin with a review of {\em Beilinson adeles} on a noetherian
scheme $X$ of finite dimension. A chain of points in $X$ is a sequence
$\xi = (x_{0}, \ldots, x_{q})$ of points with
$x_{i+1} \in \overline{ \{ x_{i} \} }$. Denote by $S(X)_{q}$ the set
of length $q$ chains, so $\{ S(X)_{q} \}_{q \geq 0}$ is a simplicial
set.
For $T \subset S(X)_{q}$ and $x \in X$ let
\[ \hat{x} T := \{ (x_{1}, \ldots, x_{q}) \mid
(x, x_{1}, \ldots, x_{q}) \in T \} . \]
According to \cite{Be} there is a unique collection of functors
$\mbb{A}(T, -) : \msf{Qco}(X) \rightarrow \msf{Ab}$,
indexed by $T \subset S(X)_{q}$, each commuting with direct
limits and satisfying
\[ \mbb{A}(T, \mcal{M}) = \begin{cases}
\prod_{x \in X} \lim_{\leftarrow n} \mcal{M}_{x} / \mfrak{m}_{x}^{n}
\mcal{M}_{x} & \text{if } q = 0 \\[1mm]
\prod_{x \in X} \lim_{\leftarrow n}
\mbb{A}(\hat{x} T, \mcal{M}_{x} / \mfrak{m}_{x}^{n} \mcal{M}_{x})
& \text{if } q > 0
\end{cases} \]
for $\mcal{M}$ coherent.
Here $\mfrak{m}_{x} \subset \mcal{O}_{X, x}$ is the maximal ideal
and $\mcal{M}_{x} / \mfrak{m}_{x}^{n} \mcal{M}_{x}$ is treated as a
quasi-coherent sheaf with support $\overline{ \{ x \} }$.
Furthermore each $\mbb{A}(T, -)$ is exact.
For a single chain $\xi$ one also writes
$\mcal{M}_{\xi} := \mbb{A}(\{ \xi \}, \mcal{M})$,
and this is the {\em Beilinson completion} of $\mcal{M}$ along $\xi$.
Then
\begin{equation} \label{eqn1.6}
\mbb{A}(T, \mcal{M}) \subset \prod_{\xi \in T} \mcal{M}_{\xi}
\end{equation}
which permits us to consider the adeles as a ``restricted product.''
For $q = 0$ and $\mcal{M}$ coherent we have
$\mcal{M}_{(x)} = \widehat{\mcal{M}}_{x}$, the $\mfrak{m}_{x}$-adic
completion, and (\ref{eqn1.6}) is an equality.
In view of this we shall say that $\mbb{A}(T, \mcal{M})$ is the
group of adeles combinatorially supported on $T$ and with values in
$\mcal{M}$.
Define a presheaf
$\underline{\mbb{A}}(T, \mcal{M})$ by
\begin{equation}
\Gamma(U, \underline{\mbb{A}}(T, \mcal{M})) :=
\mbb{A}(T \cap S(U)_{q}, \mcal{M})
\end{equation}
for $U \subset X$ open. Then $\underline{\mbb{A}}(T, \mcal{M})$ is a flasque
sheaf. Also $\underline{\mbb{A}}(T, \mcal{O}_{X})$ is a flat
$\mcal{O}_{X}$-algebra, and
$\underline{\mbb{A}}(T, \mcal{M}) \cong \underline{\mbb{A}}(T, \mcal{O}_{X})
\otimes_{\mcal{O}_{X}} \mcal{M}$.
For every $q$ define the sheaf of degree $q$ {\em Beilinson adeles}
\[ \underline{\mbb{A}}^{q}(\mcal{M}) :=
\underline{\mbb{A}}(S(X)_{q}, \mcal{M}) . \]
Then
$\underline{\mbb{A}}(\mcal{M}) =
\{ \underline{\mbb{A}}^{q}(\mcal{M}) \}_{q \in \mbb{N}}$
is a cosimplicial sheaf.
The standard normalization
$\mrm{N}^{q} \underline{\mbb{A}}(\mcal{M})$
is canonically isomorphic to the sheaf
$\underline{\mbb{A}}^{q}_{\mrm{red}}(\mcal{M}) :=
\underline{\mbb{A}}(S(X)_{q}^{\mrm{red}}, \mcal{M})$,
where $S(X)_{q}^{\mrm{red}}$ is the set of nondegenerate chains.
Note that
$\underline{\mbb{A}}^{q}_{\mrm{red}}(\mcal{M}) = 0$
for all $q > \operatorname{dim} X$.
A fundamental theorem of Beilinson says that the canonical
homomorphism
$\mcal{M} \rightarrow \underline{\mbb{A}}^{{\textstyle \cdot}}_{\mrm{red}}(\mcal{M})$
is a quasi-isomorphism. We see that
$\mrm{H}^{q} \Gamma(X, \underline{\mbb{A}}^{{\textstyle \cdot}}_{\mrm{red}}(\mcal{M}))
= \mrm{H}^{q}(X, \mcal{M})$.
The complex
$\underline{\mbb{A}}^{{\textstyle \cdot}}_{\mrm{red}}(\mcal{O}_{X})$
is a DGA, with the Alexander-Whitney product. For local sections
$a \in \underline{\mbb{A}}^{q}_{\mrm{red}}(\mcal{O}_{X})$
and
$b \in \underline{\mbb{A}}^{q'}_{\mrm{red}}(\mcal{O}_{X})$
the product is
$a \cdot b = \partial^{-}(a) \cdot \partial^{+}(b) \in
\underline{\mbb{A}}^{q+q'}_{\mrm{red}}(\mcal{O}_{X})$,
where $\partial^{-}$ and $\partial^{+}$ correspond respectively to the
initial and final segments of $(0, \ldots, q, \ldots, q+q')$.
This algebra is not (graded) commutative.
For proofs and more details turn to \cite{Hr}, \cite{Ye1} Chapter 3
and \cite{HY} Section 1.
\begin{exa} \label{exa2.1}
Suppose $X$ is a nonsingular curve. The relation to the classical
ring of adeles $\mbb{A}(X)$ of Chevalley and Weil is
$\mbb{A}(X) =
\Gamma(X, \underline{\mbb{A}}^{1}_{\mathrm{red}}(\mcal{O}_{X}))$.
\end{exa}
Now assume $X$ is a finite type scheme over the noetherian ring $k$.
In \cite{HY} it was shown that given any differential operator
(DO) $D : \mcal{M} \rightarrow \mcal{N}$ there is an induced DO
$D : \underline{\mbb{A}}^{q}(\mcal{M}) \rightarrow
\underline{\mbb{A}}^{q}(\mcal{N})$.
Applying this to the De Rham complex $\Omega^{{\textstyle \cdot}}_{X/k}$
we get a cosimplicial DGA
$\underline{\mbb{A}}(\Omega^{{\textstyle \cdot}}_{X/k})$. The {\em De Rham adele complex}
is the DGA
\[ \mcal{A}^{{\textstyle \cdot}}_{X} := \mrm{N}
\underline{\mbb{A}}(\Omega^{{\textstyle \cdot}}_{X/k}) . \]
Since
\[ \mcal{A}^{p, q}_{X}
\cong \underline{\mbb{A}}^{q}_{\mrm{red}}(\mcal{O}_{X})
\otimes_{\mcal{O}_{X}} \Omega^{p}_{X/k} \]
we see that $\mcal{A}^{{\textstyle \cdot}}_{X}$ is bounded.
By a standard double complex spectral sequence argument (see
\cite{HY} Proposition 2.1) we get:
\begin{prop} \label{prop2.1}
The natural DGA map
$\Omega_{X / k}^{{\textstyle \cdot}} \rightarrow \mcal{A}_{X}^{{\textstyle \cdot}}$
is a quasi-iso\-morph\-ism of sheaves. Hence
$\mrm{H}^{{\textstyle \cdot}}(X, \Omega_{X / k}^{{\textstyle \cdot}}) \cong
\mrm{H}^{{\textstyle \cdot}} \Gamma(X, \mcal{A}_{X}^{{\textstyle \cdot}})$.
\end{prop}
Let us examine the DGA $\mcal{A}^{{\textstyle \cdot}}_{X}$ a little closer.
The operators are
$\mrm{D}' = (-1)^{q} \mrm{d} : \mcal{A}^{p, q}_{X} \rightarrow
\mcal{A}^{p+1, q}_{X}$,
$\mrm{D}'' = \partial : \mcal{A}^{p, q}_{X} \rightarrow
\mcal{A}^{p, q+1}_{X}$
and
$\mrm{D} = \mrm{D}' + \mrm{D}''$.
As for the multiplication, consider local sections
$a \in \underline{\mbb{A}}_{\mrm{red}}^{q}(\mcal{O}_{X})$,
$b \in \underline{\mbb{A}}_{\mrm{red}}^{q'}(\mcal{O}_{X})$,
$\alpha \in \Omega^{p}_{X/k}$ and
$\beta \in \Omega^{p'}_{X/k}$. Then
\begin{equation} \label{eqn2.3}
(a \otimes \alpha) \cdot (b \otimes \beta) =
(-1)^{q'p} \partial^{-}(a) \cdot \partial^{+}(b)
\otimes \alpha \wedge \beta
\in \mcal{A}_{X}^{p+p',q+q'}
\end{equation}
(cf.\ formula (\ref{eqn1.2})).
\begin{rem}
In the analogy to sheaves of smooth forms on a complex-analytic
manifold, our operators $\mrm{D}'$, $\mrm{D}''$ play the roles of
$\partial$, $\bar{\partial}$ respectively. Note however that here
$\mcal{A}_{X}^{p,q}$ is not a locally free $\mcal{A}_{X}^{0}$-module
for $0 < q \leq n$, even when $X$ is smooth. The
same is true also for the sheaves
$\tilde{\mcal{A}}_{X}^{p, q}$ defined below.
\end{rem}
If $k$ is a perfect field and $X$ is an integral scheme of dimension
$n$, then each maximal chain $\xi = (x_{0}, \ldots, x_{n})$ defines a
$k$-linear map
$\operatorname{Res}_{\xi} : \Omega^{n}_{X / k} \rightarrow k$ called the {\em Parshin
residue} (cf.\ \cite{Ye1} Definition 4.1.3). By (\ref{eqn1.6}) we
obtain $\operatorname{Res}_{\xi} : \Gamma(X, \mcal{A}^{2n}_{X}) \rightarrow k$.
\begin{prop} \label{prop2.4}
Suppose $k$ is a perfect field.
\begin{enumerate}
\item Given $\alpha \in \Gamma(X, \mcal{A}^{2n}_{X})$, one has
$\operatorname{Res}_{\xi} \alpha = 0$ for all but finitely many
$\xi$. Hence
$\int_{X} := \sum_{\xi} \operatorname{Res}_{\xi} :
\Gamma(X, \mcal{A}^{2n}_{X}) \rightarrow k$
is well-defined.
\item If $X$ is proper then $\int_{X} \mrm{D} \beta = 0$ for all
$\beta \in \Gamma(X, \mcal{A}^{2n - 1}_{X})$. Hence
$\int_{X} : \mrm{H}^{2n}(X, \Omega^{{\textstyle \cdot}}_{X / k}) \rightarrow k$
is well-defined.
\item If $X$ is smooth and proper then
$\int_{X} : \mrm{H}^{2n}_{\mrm{DR}}(X) \rightarrow k$
coincides with the nondegenerate map of \cite{Ha1}.
\end{enumerate}
\end{prop}
\begin{proof}
1.\ See \cite{HY} Proposition 3.4.\\
2.\ This follows from the Parshin-Lomadze Residue Theorem (\cite{Ye1}
Theorem 4.2.15).\\
3.\ By \cite{HY} Theorem 3.1 and \cite{Ye3} Corollary 3.8.
\end{proof}
Now assume $k$ is any $\mbb{Q}$-algebra.
The Thom-Sullivan normalization determines a sheaf
$\tilde{\mrm{N}}^{q} \underline{\mbb{A}}(\mcal{M})$, where
$\Gamma(U, \tilde{\mrm{N}}^{q} \underline{\mbb{A}}(\mcal{M})) =
\tilde{\mrm{N}}^{q} \Gamma(U, \underline{\mbb{A}}(\mcal{M}))$.
Applying this to the cosimplicial DGA
$\underline{\mbb{A}}(\Omega^{{\textstyle \cdot}}_{X/k})$
we obtain:
\begin{dfn}
The sheaf of {\em Thom-Sullivan adeles} is the sheaf of DGAs
\[ \tilde{\mcal{A}}^{{\textstyle \cdot}}_{X} := \tilde{\mrm{N}}
\underline{\mbb{A}}(\Omega^{{\textstyle \cdot}}_{X/k}) . \]
\end{dfn}
$(\tilde{\mcal{A}}^{{\textstyle \cdot}}_{X}, \mrm{D})$ is an associative,
commutative DGA. The natural map
$\Omega^{{\textstyle \cdot}}_{X/k} \rightarrow \tilde{\mcal{A}}^{{\textstyle \cdot}}_{X}$
is an injective DGA homomorphism.
The coboundary operator on $\tilde{\mcal{A}}^{{\textstyle \cdot}}_{X}$ is
$\mrm{D} = \mrm{D}' + \mrm{D}''$,
where
$\mrm{D}' : \tilde{\mcal{A}}^{p,q}_{X} \rightarrow
\tilde{\mcal{A}}^{p+1, q}_{X}$
and
$\mrm{D}'' : \tilde{\mcal{A}}^{p,q}_{X} \rightarrow
\tilde{\mcal{A}}^{p, q+1}_{X}$.
The ``integral on the fibers'' $\int_{\Delta}$
sheafifies to give a degree $0$ DG $\Omega^{{\textstyle \cdot}}_{X/k}$-module
homomorphism
$\int_{\Delta} : \tilde{\mcal{A}}^{{\textstyle \cdot}}_{X} \rightarrow
\mcal{A}^{{\textstyle \cdot}}_{X}$.
This is not an algebra homomorphism! However:
\begin{prop} \label{prop2.2}
For every open set $U \subset X$,
$\mrm{H}^{{\textstyle \cdot}}(\int_{\Delta}) :
\mrm{H}^{{\textstyle \cdot}} \Gamma(U, \tilde{\mcal{A}}^{{\textstyle \cdot}}_{X}) \rightarrow
\mrm{H}^{{\textstyle \cdot}} \Gamma(U, \mcal{A}^{{\textstyle \cdot}}_{X})$
is an isomorphism of graded $k$-algebras.
\end{prop}
\begin{proof}
Apply Theorems \ref{thm1.1} and \ref{thm1.2} to the cosimplicial DGA
$\Gamma(U, \underline{\mbb{A}}(\Omega^{{\textstyle \cdot}}_{X/k}))$.
\end{proof}
\begin{rem}
We do not know whether the sheaves $\tilde{\mcal{A}}^{p, q}_{X}$
are flasque. The DGA $\tilde{\mcal{A}}^{{\textstyle \cdot}}_{X}$ is not bounded;
however, setting
$n := \sup \{ p \mid \Omega^{p}_{X / k, x} \neq 0 \text{ for some }
x \in X \}$, one has $n < \infty$ and
$\tilde{\mcal{A}}^{p, q}_{X} = 0$ for all $p > n$.
\end{rem}
\begin{cor} \label{cor2.2}
If $X$ is smooth over $k$, then the homomorphisms
$\Omega^{{\textstyle \cdot}}_{X/k} \rightarrow \tilde{\mcal{A}}^{{\textstyle \cdot}}_{X} \rightarrow
\mcal{A}^{{\textstyle \cdot}}_{X}$
induce isomorphisms of graded $k$-algebras
\[ \mrm{H}^{{\textstyle \cdot}}_{\mrm{DR}}(X) =
\mrm{H}^{{\textstyle \cdot}}(X, \Omega^{{\textstyle \cdot}}_{X / k}) \cong
\mrm{H}^{{\textstyle \cdot}} \Gamma(X, \tilde{\mcal{A}}^{{\textstyle \cdot}}_{X}) \cong
\mrm{H}^{{\textstyle \cdot}} \Gamma(X, \mcal{A}^{{\textstyle \cdot}}_{X}) . \]
\end{cor}
Given a quasi-coherent sheaf $\mcal{M}$ set
\begin{equation} \label{eqn2.4}
\tilde{\mcal{A}}^{p,q}_{X}(\mcal{M}) :=
\tilde{\mrm{N}}^{q} \underline{\mbb{A}}(\Omega^{p}_{X/k}
\otimes_{\mcal{O}_{X}} \mcal{M}) .
\end{equation}
In particular we have
$\tilde{\mcal{A}}^{p,q}_{X} =
\tilde{\mcal{A}}^{0,q}_{X}(\Omega^{p}_{X/k})$.
\begin{lem} \label{lem2.1}
\begin{enumerate}
\item Let $\mcal{M}$ be a quasi-coherent sheaf. Then the complex
\[ 0 \rightarrow \mcal{M} \rightarrow \tilde{\mcal{A}}^{0,0}_{X}(\mcal{M})
\xrightarrow{\mrm{D}''} \tilde{\mcal{A}}^{0,1}_{X}(\mcal{M})
\xrightarrow{\mrm{D}''} \cdots \]
is exact.
\item If $\mcal{E}$ is locally free of finite rank, then
$\tilde{\mcal{A}}^{p,q}_{X}(\mcal{E}) \cong
\tilde{\mcal{A}}^{p,q}_{X} \otimes_{\mcal{O}_{X}} \mcal{E}$.
\item Suppose $\mrm{d} : \mcal{M} \rightarrow \mcal{N}$ is a $k$-linear DO.
Then $\mrm{d}$ extends to a DO
$\mrm{d} : \tilde{\mcal{A}}^{0,q}_{X}(\mcal{M}) \rightarrow
\tilde{\mcal{A}}^{0,q}_{X}(\mcal{N})$
which commutes with $\mrm{D}''$.
\end{enumerate}
\end{lem}
\begin{proof}
1.\ Use the quasi-isomorphism
\[ \int_{\Delta} : \tilde{\mcal{A}}^{0,{\textstyle \cdot}}_{X}(\mcal{M}) =
\tilde{\mrm{N}} \underline{\mbb{A}}(\mcal{M}) \rightarrow
\mrm{N} \underline{\mbb{A}}(\mcal{M}) =
\underline{\mbb{A}}^{{\textstyle \cdot}}_{\mrm{red}}(\mcal{M}) . \]
\noindent 2.\
Multiplication induces a homomorphism
$\tilde{\mcal{A}}^{{\textstyle \cdot}}_{X} \otimes_{\mcal{O}_{X}} \mcal{E}
\rightarrow \tilde{\mcal{A}}^{{\textstyle \cdot}}_{X}(\mcal{E})$.
Choose a {\em local algebraic frame}
$\bsym{f} = (f_{1}, \ldots, f_{r})^{\mrm{t}}$ for $\mcal{E}$ on
a small open set $U$; i.e.\ an isomorphism
$\bsym{f} : \mcal{O}_{U}^{r} \stackrel{\simeq}{\rightarrow} \mcal{E}|_{U}$.
Then we see that
$\tilde{\mcal{A}}^{{\textstyle \cdot}}_{X} \otimes_{\mcal{O}_{X}} \mcal{E}
\stackrel{\simeq}{\rightarrow} \tilde{\mcal{A}}^{{\textstyle \cdot}}_{X}(\mcal{E})$.
\noindent 3.\
The DO $\mrm{d} : \underline{\mbb{A}}(\mcal{M}) \rightarrow
\underline{\mbb{A}}(\mcal{N})$
respects the cosimplicial structure.
\end{proof}
In order to clarify the algebraic structure of
$\tilde{\mcal{A}}^{{\textstyle \cdot}}_{X}$
we introduce the following local objects. Given a chain
$\xi = (x_{0}, \ldots, x_{l})$ in $X$ let
\begin{equation} \label{eqn2.2}
\tilde{\mcal{A}}^{p,q}_{\xi} := \Omega^{q}(\Delta^{l}_{\mbb{Q}})
\otimes_{\mbb{Q}} \Omega^{p}_{X / k, \xi} .
\end{equation}
As usual we set
$\mrm{D}' := (-1)^{q} \otimes \mrm{d}$,
$\mrm{D}'' := \mrm{d} \otimes 1$ and
$\mrm{D} := \mrm{D}' + \mrm{D}''$.
The DGA $(\tilde{\mcal{A}}^{{\textstyle \cdot}}_{\xi}, \mrm{D})$
is generated (as a DGA) by
\begin{equation} \label{eqn2.5}
\tilde{\mcal{A}}^{0}_{\xi} =
\mcal{O}_{X, \xi}[t_{0}, \ldots, t_{l}] / (\sum t_{i} - 1) .
\end{equation}
When $X$ is smooth of dimension $n$ over $k$ near $x_{0}$ then
$\tilde{\mcal{A}}^{p,q}_{\xi}$ is free of
rank $\binom{n}{p} \binom{l}{q}$ over $\tilde{\mcal{A}}^{0}_{\xi}$.
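To make (\ref{eqn2.2}) explicit in the smallest nontrivial case,
consider a chain $\xi = (x_{0}, x_{1})$ of length $l = 1$. Writing
$t := t_{1}$, so that $t_{0} = 1 - t$, one has
$\Omega^{{\textstyle \cdot}}(\Delta^{1}_{\mbb{Q}}) = \mbb{Q}[t] \oplus \mbb{Q}[t] \, \mrm{d} t$,
and hence

```latex
\tilde{\mcal{A}}^{p,0}_{\xi} \cong \Omega^{p}_{X/k, \xi}[t] , \qquad
\tilde{\mcal{A}}^{p,1}_{\xi} \cong \Omega^{p}_{X/k, \xi}[t] \, \mrm{d} t , \qquad
\tilde{\mcal{A}}^{p,q}_{\xi} = 0 \quad (q \geq 2) ,
```

with $\mrm{D}''(t^{m} \otimes \omega) = m t^{m-1} \mrm{d} t \otimes \omega$,
in agreement with the rank count $\binom{n}{p} \binom{1}{q}$ above.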
Given a quasi-coherent $\mcal{O}_{X}$-module $\mcal{M}$ let
$\tilde{\mcal{A}}^{p,q}_{\xi}(\mcal{M}) :=
\tilde{\mcal{A}}^{p,q}_{\xi} \otimes_{\mcal{O}_{X}} \mcal{M}$.
\begin{lem} \label{lem2.2}
\begin{enumerate}
\item For any quasi-coherent $\mcal{O}_{X}$-module $\mcal{M}$ and
open set $U \subset X$ there are natural commutative diagrams
\[ \begin{CD}
\Gamma(U, \tilde{\mcal{A}}^{p,q}_{X}(\mcal{M})) @>{\int_{\Delta}}>>
\Gamma(U, \mcal{A}^{p,q}_{X}(\mcal{M})) \\
@V{\Phi^{p, q}_{\mcal{M}}}VV @VVV \\
\prod_{\xi \in S(U)} \tilde{\mcal{A}}^{p,q}_{\xi}(\mcal{M})
@>{\int_{\Delta}}>>
\prod_{\xi \in S(U)^{\mrm{red}}_{q}}
(\Omega^{p}_{X / k, \xi} \otimes_{\mcal{O}_{X}} \mcal{M})[-q]
\end{CD} \]
\item $\Phi_{\mcal{M}} := \sum \Phi^{p, q}_{\mcal{M}}$
is injective and commutes with $\mrm{D}'$ and $\mrm{D}''$.
\item $\Phi_{\mcal{O}_{X}}$ is a DGA homomorphism and
$\Phi_{\mcal{M}}$ is $\Gamma(U, \tilde{\mcal{A}}^{{\textstyle \cdot}}_{X})$-linear.
\end{enumerate}
\end{lem}
\begin{proof}
This is immediate from Definition \ref{dfn1.1} and formula
(\ref{eqn1.6}).
\end{proof}
\begin{lem} \label{lem2.3}
Let $\mcal{M}$ be a quasi-coherent sheaf. The natural homomorphism
$\mcal{M} \rightarrow \tilde{\mcal{A}}^{0}_{X}(\mcal{M})$
extends to an $\mcal{O}_{X}$-linear homomorphism
$\underline{\mbb{A}}^{0}(\mcal{M}) \rightarrow
\tilde{\mcal{A}}^{0}_{X}(\mcal{M})$.
\end{lem}
\begin{proof}
Consider the $i$-th covertex map
$\sigma_{i} : [0] \rightarrow [l]$, which is the simplex
$\sigma_{i} = (i) \in \Delta^{l}_{0} \cong
\operatorname{Hom}_{\Delta}([0], [l])$
(cf.\ Example \ref{exa1.5}). There is a corresponding homomorphism
$\sigma_{i} : \underline{\mbb{A}}^{0}(\mcal{M}) \rightarrow
\underline{\mbb{A}}^{l}(\mcal{M})$.
Given a local section $u \in \underline{\mbb{A}}^{0}(\mcal{M})$,
send it to
$(u_{0}, u_{1}, \ldots) \in
\tilde{\mrm{N}}^{0} \underline{\mbb{A}}(\mcal{M}) =
\tilde{\mcal{A}}^{0}_{X}(\mcal{M})$,
where
$u_{l} := \sum_{i=0}^{l} t_{i} \otimes \sigma_{i}(u)$.
\end{proof}
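As a quick check that this family does land in
$\tilde{\mrm{N}}^{0} \underline{\mbb{A}}(\mcal{M})$ (a verification, not part
of the proof): since $\sum_{i} t_{i} = 1$ on $\Delta^{l}$,

```latex
u_{0} = t_{0} \otimes \sigma_{0}(u) = 1 \otimes u , \qquad
u_{l} |_{\, t_{i} = 1, \; t_{j} = 0 \, (j \neq i)} = \sigma_{i}(u) ,
```

so restricting $u_{l}$ to the $i$-th vertex of $\Delta^{l}$ recovers the
component indexed by the covertex map $\sigma_{i}$.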
Because of the functoriality of our constructions we have:
\begin{prop} \label{prop2.3}
Let $f : X \rightarrow Y$ be a morphism of $k$-schemes. Then the pullback
homomorphism
$f^{*} : \Omega^{{\textstyle \cdot}}_{Y / k} \rightarrow f_{*} \Omega^{{\textstyle \cdot}}_{X / k}$
extends to DGA homomorphisms
$f^{*} : \tilde{\mcal{A}}^{{\textstyle \cdot}}_{Y} \rightarrow
f_{*} \tilde{\mcal{A}}^{{\textstyle \cdot}}_{X}$
and
$f^{*} : \mcal{A}^{{\textstyle \cdot}}_{Y} \rightarrow f_{*} \mcal{A}^{{\textstyle \cdot}}_{X}$
giving a commutative diagram
\[ \begin{CD}
\Omega^{{\textstyle \cdot}}_{Y / k} @>>> \tilde{\mcal{A}}^{{\textstyle \cdot}}_{Y}
@>{\int_{\Delta}}>> \mcal{A}^{{\textstyle \cdot}}_{Y} \\
@V{f^{*}}VV @V{f^{*}}VV @V{f^{*}}VV \\
f_{*} \Omega^{{\textstyle \cdot}}_{X / k} @>>> f_{*} \tilde{\mcal{A}}^{{\textstyle \cdot}}_{X}
@>{f_{*}(\int_{\Delta})}>> f_{*} \mcal{A}^{{\textstyle \cdot}}_{X} .
\end{CD} \]
\end{prop}
\begin{rem}
One can show that
$(\tilde{\mcal{A}}^{0}_{X})^{\times} = \mcal{O}_{X}^{\times}$
(invertible elements). We leave this as an exercise to the interested
reader.
\end{rem}
\section{Adelic Chern-Weil Theory}
Let us quickly review the notion of a connection on a module. For a
full account see \cite{GH} \S 0.5 and 3.3, \cite{KL} Appendix B,
\cite{Ka}, \cite{Go} or \cite{Kr}. In this section $k$ is a
field of characteristic $0$. Suppose
$A = A^{0} \oplus A^{1} \oplus \cdots$
is an associative, commutative DG $k$-algebra (i.e.\
$a b = (-1)^{ij} b a$ for $a \in A^{i}$, $b \in A^{j}$), with
operator $\mrm{d}$. Given an $A^{0}$-module $M$, a connection on
$M$ is a $k$-linear map
$\nabla : M \rightarrow A^{1} \otimes_{A^{0}} M$ satisfying the Leibniz rule
$\nabla(a m) = \mrm{d} a \otimes m + a \nabla m$, $a \in A^{0}$, $m \in M$.
$\nabla$ extends uniquely to an operator
$\nabla : A \otimes_{A^{0}} M \rightarrow A \otimes_{A^{0}} M$
of degree $1$ satisfying the graded Leibniz rule.
The curvature of $\nabla$ is the operator
$R := \nabla^{2} : M \rightarrow A^{2} \otimes_{A^{0}} M$, which is
$A^{0}$-linear. The connection is flat, or integrable, if $R = 0$.
If $B$ is another DGA and $A \rightarrow B$ is a DGA
homomorphism, then by extension of scalars there is an induced
connection
$\nabla_{B} : B^{0} \otimes_{A^{0}} M \rightarrow B^{1} \otimes_{A^{0}} M$
over $B^{0}$.
If $M$ is free of rank $r$, choose a frame
$\bsym{e} = (e_{1}, \ldots, e_{r})^{\mrm{t}} : (A^{0})^{r} \stackrel{\simeq}{\rightarrow} M$.
Notice that we write $\bsym{e}$ as a column. This gives a
connection matrix $\bsym{\theta} = (\theta_{i, j})$,
$\theta_{i, j} \in A^{1}$, determined by
$\nabla \bsym{e} = \bsym{\theta} \otimes \bsym{e}$
(i.e.\
$\nabla e_{i} = \sum_{j} \theta_{i,j} \otimes e_{j}$).
In this case
$R \in A^{2} \otimes_{A^{0}} \operatorname{End}(M)$, so we get a curvature
matrix $\bsym{\Theta} = (\Theta_{i,j})$ satisfying
$R = \sum_{i,j} \Theta_{i,j} \otimes (e_{i} \otimes e^{\vee}_{j})$.
Here $\bsym{e}^{\vee} := (e_{1}^{\vee}, \ldots, e_{r}^{\vee})$ is
the dual basis and $\Theta_{i,j} \in A^{2}$.
One has
$\bsym{\Theta} = \mrm{d} \bsym{\theta} -
\bsym{\theta} \wedge \bsym{\theta}$.
If $\bsym{f}$ is another basis of $M$, with transition matrix
$\bsym{g} = (g_{i,j})$, $\bsym{e} = \bsym{g} \cdot \bsym{f}$,
then the matrix of
$\nabla$ w.r.t.\ $\bsym{f}$ is
$\bsym{g}^{-1} \bsym{\theta} \bsym{g} -
\bsym{g}^{-1} \mrm{d} \bsym{g}$,
and the curvature matrix is
$\bsym{g}^{-1} \bsym{\Theta} \bsym{g}$.
\begin{exa}
The Levi-Civita connection on $M$ determined by $\bsym{e}$, namely
$\nabla = (\mrm{d}, \ldots, \mrm{d})$, has matrix $\bsym{\theta} = 0$
and so is integrable. In terms of another basis
$\bsym{f}$, with $\bsym{e} = \bsym{g} \cdot \bsym{f}$,
the matrix will be $- \bsym{g}^{-1} \mrm{d} \bsym{g}$.
\end{exa}
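The computation behind this example is a one-line application of the
Leibniz rule: if $\bsym{e} = \bsym{g} \cdot \bsym{f}$ and
$\nabla \bsym{e} = 0$, then

```latex
0 = \nabla \bsym{e} = \mrm{d} \bsym{g} \otimes \bsym{f}
+ \bsym{g} \, \nabla \bsym{f}
\;\Longrightarrow\;
\nabla \bsym{f} = - \bsym{g}^{-1} \mrm{d} \bsym{g} \otimes \bsym{f} ,
```

in agreement with the general change-of-frame formula
$\bsym{g}^{-1} \bsym{\theta} \bsym{g} - \bsym{g}^{-1} \mrm{d} \bsym{g}$
at $\bsym{\theta} = 0$.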
Denote by $\mrm{M}_{r}(k)$ the algebra of $r \times r$ matrices over
the field $k$ and
$\mrm{M}_{r}(k)^{*} :=$ \linebreak
$\operatorname{Hom}_{k}(\mrm{M}_{r}(k), k)$.
Then the symmetric algebra
$\mrm{S}(\mrm{M}_{r}(k)^{*})$
is the algebra of polynomial functions on $\mrm{M}_{r}(k)$.
The algebra
$I_{r}(k) := \mrm{S}(\mrm{M}_{r}(k)^{*})^{\mrm{Gl}_{r}(k)}$
of conjugation-invariant functions is generated by the
elementary invariant polynomials
$P_{1} = \operatorname{tr}, \ldots, P_{r} = \operatorname{det}$, with $P_{i}$
homogeneous of degree $i$.
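For example, for $r = 2$ the elementary invariant polynomials are the
coefficients of the characteristic polynomial: for
$M \in \mrm{M}_{2}(k)$,

```latex
\operatorname{det}(I + t M)
= 1 + (\operatorname{tr} M) \, t + (\operatorname{det} M) \, t^{2}
= 1 + P_{1}(M) \, t + P_{2}(M) \, t^{2} ,
```

and similarly for general $r$, with $P_{i}(M)$ the $i$-th elementary
symmetric function of the eigenvalues of $M$.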
\begin{lem} \label{lem3.1}
Assume that $A^{1} = A^{0} \cdot \mrm{d} A^{0}$.
Given any matrix
$\bsym{\theta} \in \mrm{M}_{r}(A^{1})$
let
$\bsym{\Theta} := \mrm{d} \bsym{\theta} -
\bsym{\theta} \cdot \bsym{\theta}$.
Then for any $P \in I_{r}(k)$
one has $\mrm{d} P(\bsym{\Theta}) = 0$.
\end{lem}
\begin{proof}
By assumption we can write
$\theta_{i, j} = \sum_{l} b_{i, j, l} \mrm{d} a_{l}$
for suitable $a_{l}, b_{i, j, l} \in A^{0}$.
Let $A_{\mrm{u}}$ be the universal algebra for this problem:
$A_{\mrm{u}}^{0}$ is the polynomial algebra
$k[ \bsym{a}, \bsym{b}]$,
where
$\bsym{a} = \{a_{l}\}$
and
$\bsym{b} = \{b_{i, j, l}\}$
are finite sets of indeterminates;
$A_{\mrm{u}} = \Omega^{{\textstyle \cdot}}_{A_{\mrm{u}}^{0} / k}$;
and
$\bsym{\theta}_{\mrm{u}} \in \mrm{M}_{r}(A_{\mrm{u}}^{1})$
is the obvious connection matrix.
The DG $k$-algebra homomorphism
$A_{\mrm{u}} \rightarrow A$ sends
$\bsym{\theta}_{\mrm{u}} \mapsto \bsym{\theta}$,
and hence it suffices to prove the case
$A = A_{\mrm{u}}$.
Write $X := \operatorname{Spec} A^{0}$, which is nothing but
affine space $\mbf{A}^{N}_{k}$ for some $N$. We want to show
that the form
$\mrm{d} P(\bsym{\Theta}) \in
\Gamma(X, \Omega^{{\textstyle \cdot}}_{X / k})$ vanishes.
For a closed point $x \in X$ the residue field $k(x)$ is a
finite separable extension of $k$. This implies that the unique
$k$-algebra lifting
$k(x) \rightarrow \widehat{\mcal{O}}_{X, x} = \mcal{O}_{X, (x)}$
has the property that
$\mrm{d} : \mcal{O}_{X, (x)} \rightarrow \Omega^{1}_{X / k, (x)}$
is $k(x)$-linear. Since $X$ is smooth we have
$\mcal{O}_{X, (x)} \cong k(x)[[ f_{1}, \ldots, f_{N} ]]$.
We see that the differential equation on page 401 of \cite{GH} can
be solved formally in $\mrm{M}_{r}(\mcal{O}_{X, (x)})$.
Then the proof of the lemma on page 403 of \cite{GH} shows that
$\mrm{d} P(\bsym{\Theta})_{(x)} \in
\mfrak{m}_{x} \cdot \Omega^{{\textstyle \cdot}}_{X / k, (x)}$.
Since this is true for all closed points $x \in X$ and
$\Omega^{{\textstyle \cdot}}_{X / k}$ is a free $\mcal{O}_{X}$-module,
it follows that
$\mrm{d} P(\bsym{\Theta}) = 0$.
\end{proof}
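As a minimal sanity check of the lemma, take $r = 1$ and
$\bsym{\theta} = b \, \mrm{d} a$; then
$\bsym{\theta} \cdot \bsym{\theta} = 0$ (an odd-degree element squares
to zero in a commutative DGA over $\mbb{Q}$), so

```latex
\bsym{\Theta} = \mrm{d}(b \, \mrm{d} a) = \mrm{d} b \wedge \mrm{d} a ,
\qquad
\mrm{d} P_{1}(\bsym{\Theta}) = \mrm{d}(\mrm{d} b \wedge \mrm{d} a) = 0 ,
```

since $P_{1} = \operatorname{tr}$ is the identity on $1 \times 1$ matrices.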
Let us now pass to schemes. Assume $X$ is a finite type $k$-scheme
(not necessarily smooth), and let $\mcal{E}$ be a locally free
$\mcal{O}_{X}$-module of rank $r$.
We shall be interested in the sheaf of commutative
DGAs $\tilde{\mcal{A}}^{{\textstyle \cdot}}_{X}$ and the locally free
$\tilde{\mcal{A}}^{0}_{X}$-module
$\tilde{\mcal{A}}^{0}_{X}(\mcal{E}) \cong
\tilde{\mcal{A}}^{0}_{X} \otimes_{\mcal{O}_{X}} \mcal{E}$.
\begin{dfn} \label{dfn3.2}
An {\em adelic connection} on $\mcal{E}$ is a connection
\[ \nabla : \tilde{\mcal{A}}^{0}_{X}(\mcal{E}) \rightarrow
\tilde{\mcal{A}}^{1}_{X}(\mcal{E}) \]
over the algebra $\tilde{\mcal{A}}^{0}_{X}$.
\end{dfn}
\begin{dfn} \label{dfn3.3}
The {\em adelic curvature form} associated to an adelic connection
$\nabla$ on $\mcal{E}$ is
\[ R := \nabla^{2} \in
\operatorname{Hom}_{\tilde{\mcal{A}}_{X}^{0}}
\left( \tilde{\mcal{A}}_{X}^{0}(\mcal{E}),
\tilde{\mcal{A}}_{X}^{2}(\mcal{E}) \right)
\cong \Gamma \left( X, \tilde{\mcal{A}}_{X}^{2} \otimes_{\mcal{O}_{X}}
\mcal{E}nd_{\mcal{O}_{X}} (\mcal{E}) \right) . \]
\end{dfn}
Suppose $\nabla$ is an adelic connection on $\mcal{E}$
and $P \in I_{r}(k)$. Since
$P : \mcal{E}nd(\mcal{E}) \rightarrow \mcal{O}_{X}$
is well defined, we get an induced sheaf homomorphism
$P : \tilde{\mcal{A}}^{{\textstyle \cdot}}_{X} \otimes_{\mcal{O}_{X}}
\mcal{E}nd(\mcal{E}) \rightarrow \tilde{\mcal{A}}^{{\textstyle \cdot}}_{X}$.
In particular we have
$P(R) \in \Gamma(X, \tilde{\mcal{A}}^{{\textstyle \cdot}}_{X})$.
\begin{lem}
$P(R)$ is closed, i.e.\ $\mrm{D} P(R) = 0$.
\end{lem}
\begin{proof}
This can be checked locally on $X$, so let $U$ be an open set on
which $\mcal{E}$ admits an algebraic frame
$\bsym{f}$. This frame induces isomorphisms of sheaves
$\bsym{f} : (\tilde{\mcal{A}}^{p, q}_{U})^{r} \stackrel{\simeq}{\rightarrow}
\tilde{\mcal{A}}^{p, q}_{X}(\mcal{E})|_{U}$
for all $p, q$. If
$\bsym{\theta} \in \mrm{M}_{r}(\Gamma(U,
\tilde{\mcal{A}}^{1}_{X}))$
is the matrix of the connection
$\nabla : \Gamma(U, \tilde{\mcal{A}}^{0}_{X}(\mcal{E})) \rightarrow
\Gamma(U, \tilde{\mcal{A}}^{1}_{X}(\mcal{E}))$
then
$\bsym{\Theta} = \mrm{D} \bsym{\theta} - \bsym{\theta} \cdot
\bsym{\theta} \in
\mrm{M}_{r}(\Gamma(U, \tilde{\mcal{A}}^{2}_{X}))$
is the matrix of $R$, and we must show that
$\mrm{D} P(\bsym{\Theta}) = 0$.
According to Lemma \ref{lem2.2},
$\Phi : \Gamma(U, \tilde{\mcal{A}}^{{\textstyle \cdot}}_{X}) \rightarrow
\prod_{\xi \in S(U)} \tilde{\mcal{A}}^{{\textstyle \cdot}}_{\xi}$
is an injective DGA homomorphism. Thus letting
$\bsym{\Theta}_{\xi}$ be the $\xi$-component of $\bsym{\Theta}$, it
suffices to show that $\mrm{D} P(\bsym{\Theta}_{\xi}) = 0$
for all $\xi$. Since
$\tilde{\mcal{A}}^{1}_{\xi} = \tilde{\mcal{A}}^{0}_{\xi} \cdot
\mrm{D} \tilde{\mcal{A}}^{0}_{\xi}$
we are done by Lemma \ref{lem3.1}.
\end{proof}
Recall that given a morphism of schemes $f : X \rightarrow Y$ there is a
natural homomorphism of DGAs
$f^{*} : \tilde{\mcal{A}}^{{\textstyle \cdot}}_{Y} \rightarrow
f_{*} \tilde{\mcal{A}}^{{\textstyle \cdot}}_{X}$.
\begin{prop} \label{prop3.2}
Suppose $f : X \rightarrow Y$ is a morphism of schemes, $\mcal{E}$ a
locally free $\mcal{O}_{Y}$-module and $\nabla$ an adelic connection
on $\mcal{E}$. Then there is an induced adelic connection
$f^{*}(\nabla)$ on $f^{*} \mcal{E}$, and
\[ f^{*}(P(R_{\nabla})) = P(R_{f^{*} (\nabla)}) \in
\Gamma(X, \tilde{\mcal{A}}^{{\textstyle \cdot}}_{X}) . \]
\end{prop}
\begin{proof}
By adjunction there are homomorphisms
\[ f^{-1} \mcal{E} \xrightarrow{f^{-1}(\nabla)}
f^{-1} \tilde{\mcal{A}}^{1}_{Y}(\mcal{E}) \rightarrow
\tilde{\mcal{A}}^{1}_{X}(f^{*} \mcal{E}) \]
of sheaves on $X$. Now
$\tilde{\mcal{A}}^{0}_{X}(f^{*} \mcal{E}) =
\tilde{\mcal{A}}^{0}_{X} \otimes_{f^{-1} \mcal{O}_{Y}}
f^{-1} \mcal{E}$,
so by the Leibniz rule we get $f^{*}(\nabla)$.
\end{proof}
\begin{thm} \label{thm3.2}
Let $X$ be a finite type
$k$-scheme and $\mcal{E}$ be a locally free $\mcal{O}_{X}$-module.
Choose an adelic connection $\nabla$ on $\mcal{E}$ and let $R$ be the
adelic curvature form. Then the $k$-algebra homomorphism
\tup{(}doubling degrees\tup{)}
\[ \begin{aligned}
w_{\mcal{E}} :
I_{r}(k) & \rightarrow \mrm{H}^{{\textstyle \cdot}} \Gamma(X, \tilde{\mcal{A}}^{{\textstyle \cdot}}_{X})
\cong \mrm{H}^{{\textstyle \cdot}}(X, \Omega^{{\textstyle \cdot}}_{X / k}) \\
P & \mapsto [P(R)]
\end{aligned} \]
is independent of the connection $\nabla$.
\end{thm}
We call $w_{\mcal{E}}$ the {\em adelic Chern-Weil homomorphism},
and we also write
$P(\mcal{E}) := w_{\mcal{E}}(P)$.
\begin{proof}
Suppose $\nabla'$ is another adelic connection, with curvature form
$R'$. We need to prove that
$[P(R)] = [P(R')] \in
\mrm{H}^{{\textstyle \cdot}} \Gamma(X, \tilde{\mcal{A}}^{{\textstyle \cdot}}_{X})$.
Consider the scheme
$Y := X \times \Delta^{1}_{\mbb{Q}}$, with projection morphisms
$s = s^{0} : Y \rightarrow X$ and two sections
$\partial^{0}, \partial^{1} : X \rightarrow Y$
(cf.\ Example \ref{exa1.3} for the notation).
Since
$s_{*} \Omega^{{\textstyle \cdot}}_{Y / k} \cong
\Omega^{{\textstyle \cdot}}_{X / k} \otimes_{\mbb{Q}}
\Omega^{{\textstyle \cdot}}(\Delta^{1}_{\mbb{Q}})$
and
$\mbb{Q} \rightarrow \Omega^{{\textstyle \cdot}}(\Delta^{1}_{\mbb{Q}})$
is a quasi-isomorphism (Poincar\'{e} Lemma), we see that
$s^{*} : \Omega^{{\textstyle \cdot}}_{X / k} \rightarrow
s_{*} \Omega^{{\textstyle \cdot}}_{Y / k}$
is a quasi-isomorphism of sheaves on $X$. Because $s$ is an
affine morphism the sheaves $\Omega^{p}_{Y / k}$ are acyclic for
$s_{*}$, and it follows that
$s_{*} \Omega^{{\textstyle \cdot}}_{Y / k} \rightarrow s_{*} \mcal{A}^{{\textstyle \cdot}}_{Y}$
is a quasi-isomorphism. We conclude (cf.\ Proposition \ref{prop2.3})
that
$s^{*} : \mcal{A}^{{\textstyle \cdot}}_{X} \rightarrow s_{*} \mcal{A}^{{\textstyle \cdot}}_{Y}$
is a quasi-isomorphism. Passing to global cohomology we also get
$\mrm{H}^{{\textstyle \cdot}}(s^{*}) :
\mrm{H}^{{\textstyle \cdot}} \Gamma(X, \mcal{A}^{{\textstyle \cdot}}_{X}) \stackrel{\simeq}{\rightarrow}
\mrm{H}^{{\textstyle \cdot}} \Gamma(Y, \mcal{A}^{{\textstyle \cdot}}_{Y})$.
Therefore
\begin{equation} \label{eqn3.6}
\mrm{H}^{{\textstyle \cdot}}(\partial^{0 *}) =
\mrm{H}^{{\textstyle \cdot}}(\partial^{1 *}) :
\mrm{H}^{{\textstyle \cdot}} \Gamma(Y, \tilde{\mcal{A}}^{{\textstyle \cdot}}_{Y}) \stackrel{\simeq}{\rightarrow}
\mrm{H}^{{\textstyle \cdot}} \Gamma(X, \tilde{\mcal{A}}^{{\textstyle \cdot}}_{X})
\end{equation}
with inverse $\mrm{H}^{{\textstyle \cdot}}(s^{*})$.
Let $\mcal{E}_{Y} := s^{*} \mcal{E}$, with two induced adelic
connections $s^{*} \nabla$ and $s^{*} \nabla'$.
Define the mixed adelic connection
\[ \nabla_{Y} := t_{0} s^{*} \nabla + t_{1} s^{*} \nabla' \]
on $\mcal{E}_{Y}$, with curvature $R_{Y}$. Now
$\partial^{0 *}(t_{0}) = 0$, so
\[ \partial^{0 *} \nabla_{Y} = \partial^{0 *}(t_{0} s^{*} \nabla)
+ \partial^{0 *}(t_{1} s^{*} \nabla') = \nabla' \]
as connections on $\mcal{E}$.
Therefore
$\partial^{0 *}(P(R_{Y})) = P(R')$ and likewise
$\partial^{1 *}(P(R_{Y})) = P(R)$.
Finally use (\ref{eqn3.6}).
\end{proof}
Next we show how to construct adelic connections.
Recall that to every chain $\xi = (x_{0}, \ldots, x_{l})$
of length $l$ there is attached a DGA
\[ \tilde{\mcal{A}}^{{\textstyle \cdot}}_{\xi} =
\Omega^{{\textstyle \cdot}}(\Delta^{l}_{\mbb{Q}}) \otimes_{\mbb{Q}}
\Omega^{{\textstyle \cdot}}_{X/k, \xi} \]
(cf.\ formula (\ref{eqn2.2})).
Set
$\tilde{\mcal{A}}^{i}_{\xi}(\mcal{E}) :=
\tilde{\mcal{A}}^{i}_{\xi} \otimes_{\mcal{O}_{X}} \mcal{E}$,
so $\tilde{\mcal{A}}^{0}_{\xi}(\mcal{E})$ is a free
$\tilde{\mcal{A}}^{0}_{\xi}$-module of rank $r$.
If $l = 0$ and $\xi = (x)$ then
$\tilde{\mcal{A}}^{{\textstyle \cdot}}_{(x)} = \Omega^{{\textstyle \cdot}}_{X/k, (x)}$
and
$\tilde{\mcal{A}}^{0}_{(x)} = \mcal{O}_{X, (x)} =
\widehat{\mcal{O}}_{X,x}$,
the complete local ring. For $0 \leq i \leq l$ there is a DGA
homomorphism
\[ \tilde{\mcal{A}}^{{\textstyle \cdot}}_{(x_{i})} =
\Omega^{{\textstyle \cdot}}_{X/k, (x_{i})} \xrightarrow{\sigma_{i}}
\Omega^{{\textstyle \cdot}}_{X/k, \xi} \subset \tilde{\mcal{A}}^{{\textstyle \cdot}}_{\xi} \]
(cf.\ proof of Lemma \ref{lem2.3}).
Suppose we are given a set
$\{ \nabla_{(x)} \}_{x \in X}$
where for each point $x$
\begin{equation} \label{eqn3.9}
\nabla_{(x)} : \mcal{E}_{(x)} \rightarrow \Omega^{1}_{X / k, (x)}
\otimes_{\mcal{O}_{X, (x)}} \mcal{E}_{(x)}
\end{equation}
is a connection over $\mcal{O}_{X, (x)}$. Since
$\tilde{\mcal{A}}^{i}_{\xi}(\mcal{E}) \cong
\tilde{\mcal{A}}^{i}_{\xi} \otimes_{\mcal{O}_{X, (x)}}
\mcal{E}_{(x)}$,
each connection $\nabla_{(x_{i})}$ induces, by extension of scalars,
a connection
\[ \nabla_{\xi,i} :
\tilde{\mcal{A}}^{0}_{\xi}(\mcal{E}) \rightarrow
\tilde{\mcal{A}}^{1}_{\xi}(\mcal{E}) \]
over the algebra $\tilde{\mcal{A}}^{0}_{\xi}$.
Define the ``mixed'' connection
\begin{equation} \label{eqn3.2}
\nabla_{\xi} := \sum_{i= 0}^{l} t_{i} \nabla_{\xi,i}:
\tilde{\mcal{A}}^{0}_{\xi}(\mcal{E}) \rightarrow
\tilde{\mcal{A}}^{1}_{\xi}(\mcal{E}) .
\end{equation}
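Note that $\sum_{i} t_{i} = 1$ in $\tilde{\mcal{A}}^{0}_{\xi}$ is
precisely what makes the affine combination (\ref{eqn3.2}) a connection
again: for local sections $a \in \tilde{\mcal{A}}^{0}_{\xi}$ and
$m \in \tilde{\mcal{A}}^{0}_{\xi}(\mcal{E})$,

```latex
\nabla_{\xi}(a m) = \sum_{i=0}^{l} t_{i}
\left( \mrm{D} a \otimes m + a \nabla_{\xi, i} m \right)
= \mrm{D} a \otimes m + a \nabla_{\xi} m .
```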
\begin{prop} \label{prop3.1}
Given a set of connections $\{ \nabla_{(x)} \}_{x \in X}$ as above,
there is a unique adelic connection
$\nabla$ on $\mcal{E}$, such that under the embedding
\[ \Phi_{\mcal{E}} :
\Gamma(U, \tilde{\mcal{A}}^{{\textstyle \cdot}}_{X}(\mcal{E})) \subset
\prod_{\xi \in S(U)} \tilde{\mcal{A}}^{{\textstyle \cdot}}_{\xi}(\mcal{E}) \]
of Lemma \tup{\ref{lem2.2}}, one has
$\nabla e = (\nabla_{\xi} e_{\xi})$
for every local section
$e = (e_{\xi}) \in$ \linebreak
$\Gamma(U, \tilde{\mcal{A}}^{0}_{X}(\mcal{E}))$.
Moreover,
$\nabla (\mcal{E}) \subset \tilde{\mcal{A}}^{1,0}_{X}(\mcal{E})$.
\end{prop}
\begin{proof}
The product
\[ \nabla := \prod_{\xi} \nabla_{\xi} :
\prod_{\xi \in S(X)} \tilde{\mcal{A}}^{0}_{\xi}(\mcal{E}) \rightarrow
\prod_{\xi \in S(X)} \tilde{\mcal{A}}^{1}_{\xi}(\mcal{E}) \]
is a connection over the algebra
$\prod_{\xi} \tilde{\mcal{A}}^{0}_{\xi}$.
Since $\Phi_{\mcal{E}}$ is injective and $\Phi$ is a DGA homomorphism,
it suffices to show that
$\nabla e \in \tilde{\mcal{A}}^{1}_{X}(\mcal{E})$
for every local section
$e \in \tilde{\mcal{A}}^{0}_{X}(\mcal{E})$.
First consider a local section $e \in \mcal{E}$. For every point $x$,
\[ \nabla_{(x)} e \in \left( \Omega^{1}_{X/k}
\otimes_{\mcal{O}_{X}} \mcal{E} \right)_{(x)} . \]
Therefore, writing
$\nabla_{l} := \prod_{\xi \in S(X)_{l}} \nabla_{\xi}$,
we see that
\[ \nabla_{0} e \in
\underline{\mbb{A}}^{0}(\mcal{O}_{X}) \otimes_{\mcal{O}_{X}}
\Omega^{1}_{X/k} \otimes_{\mcal{O}_{X}} \mcal{E} \cong
\underline{\mbb{A}}^{0}(\Omega^{1}_{X/k}
\otimes_{\mcal{O}_{X}} \mcal{E}) . \]
According to Lemma \ref{lem2.3} we get a section
\[ \bsym{\alpha} = (\alpha_{0} , \alpha_{1} , \ldots) \in
\tilde{\mcal{A}}^{0}_{X}(\Omega^{1}_{X/k}
\otimes_{\mcal{O}_{X}} \mcal{E}) \cong
\tilde{\mcal{A}}^{1,0}_{X}(\mcal{E}) \]
with
\[ \alpha_{l} =
\sum_{i=0}^{l} t_{i} \otimes \sigma_{i}(\nabla_{0} e) =
\nabla_{l} e , \]
so $\bsym{\alpha} = \nabla e$.
Finally, any section of
$\tilde{\mcal{A}}^{0}_{X}(\mcal{E}) \cong
\tilde{\mcal{A}}^{0}_{X} \otimes_{\mcal{O}_{X}} \mcal{E}$
is locally a sum of tensors
$a \otimes e$ with $a \in \tilde{\mcal{A}}^{0}_{X}$ and
$e \in \mcal{E}$, so by the Leibniz rule
\[ \nabla(a \otimes e) = \mrm{D} a \otimes e + a \nabla e \in
\tilde{\mcal{A}}^{1}_{X}(\mcal{E}) . \]
\end{proof}
Observe that relative to a local algebraic frame
$\bsym{f}$ for $\mcal{E}$,
the matrix of a connection $\nabla$ as in the proposition
has entries in $\tilde{\mcal{A}}^{1, 0}_{X}$.
A {\em global adelic frame} for $\mcal{E}$ is a family
$\bsym{e} = \{ \bsym{e}_{(x)} \}_{x \in X}$,
where for each $x \in X$,
$\bsym{e}_{(x)} : \mcal{O}_{X, (x)}^{r} \stackrel{\simeq}{\rightarrow} \mcal{E}_{(x)}$
is a frame. In other words this is an isomorphism
$\bsym{e} : \underline{\mbb{A}}^{0}(\mcal{O}_{X})^{r} \stackrel{\simeq}{\rightarrow}
\underline{\mbb{A}}^{0}(\mcal{E})$
of $\underline{\mbb{A}}^{0}(\mcal{O}_{X})$-modules.
The next corollary is inspired by the work of Parshin \cite{Pa}.
\begin{cor} \label{cor3.2}
A global adelic frame $\bsym{e}$ of $\mcal{E}$ determines an
adelic connection $\nabla$.
\end{cor}
\begin{proof}
The frame $\bsym{e}_{(x)}$ determines a Levi-Civita connection
$\nabla_{(x)}$ on $\mcal{E}_{(x)}$. Now use Proposition
\ref{prop3.1}.
\end{proof}
We call such a connection {\em pointwise trivial}. In Sections
5 and 7 we shall only work with pointwise trivial
connections.
Given a local section
$\alpha \in \tilde{\mcal{A}}_{X}^{{\textstyle \cdot}}(\mcal{M})$
we write
$\alpha = \sum \alpha^{p, q}$
with
$\alpha^{p, q} \in \tilde{\mcal{A}}_{X}^{p, q}(\mcal{M})$.
For a chain $\xi$ we write $\alpha_{\xi}$ for the $\xi$ component of
$\Phi_{\mcal{M}}(\alpha)$ (see Lemma \ref{lem2.2}).
\begin{lem} \label{lem3.9}
Let $\nabla$ be the pointwise trivial connection on $\mcal{E}$
determined by an adelic frame $\bsym{e}$. Let
$\xi = (x_{0}, \ldots, x_{l})$ be a chain, and let $\bsym{f}$ be any
frame of $\mcal{E}_{\xi}$. Write
$\bsym{e}_{(x_{i})} = \bsym{g}_{i} \cdot \bsym{f}$
for matrices
$\bsym{g}_{i} \in \operatorname{Gl}_{r}(\mcal{O}_{X, \xi})$, $0 \leq i \leq l$.
Then:
\begin{enumerate}
\item The connection matrix of $\nabla_{\xi}$ w.r.t.\ the frame
$\bsym{f}$ is
$\bsym{\theta} = - \sum t_{i} \bsym{g}_{i}^{-1}
\mrm{d} \bsym{g}_{i}$.
\item Let $\bsym{\Theta}^{1,1}$ be the matrix of the curvature form
$R^{1, 1}_{\xi}$ w.r.t.\ the frame
$\bsym{f} \otimes \bsym{f}^{\vee}$.
Then
\[ \bsym{\Theta}^{1,1} = - \sum \mrm{d} t_{i}
\wedge \bsym{g}_{i}^{-1} \mrm{d} \bsym{g}_{i} . \]
\end{enumerate}
\end{lem}
\begin{proof}
Direct calculation.
\end{proof}
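To spell out the calculation: with respect to the frame $\bsym{f}$, the
trivializing connection $\nabla_{(x_{i})}$ has matrix
$- \bsym{g}_{i}^{-1} \mrm{d} \bsym{g}_{i}$, so mixing as in
(\ref{eqn3.2}) gives part 1. For part 2, $\bsym{\theta}$ has bidegree
$(1, 0)$, hence $\bsym{\theta} \wedge \bsym{\theta}$ has bidegree
$(2, 0)$ and

```latex
\bsym{\Theta}^{1,1}
= (\mrm{D} \bsym{\theta} - \bsym{\theta} \wedge \bsym{\theta})^{1,1}
= \mrm{D}'' \bsym{\theta}
= - \sum_{i=0}^{l} \mrm{d} t_{i} \wedge
\bsym{g}_{i}^{-1} \mrm{d} \bsym{g}_{i} .
```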
\begin{dfn} \label{dfn3.1}
The $i$-th Chern forms of $\mcal{E}$ with respect to the
adelic connection $\nabla$ are
\[ \begin{aligned}
\tilde{c}_{i}(\mcal{E}, \nabla) & :=
P_{i}(R) \in \Gamma(X, \tilde{\mcal{A}}_{X}^{2i}) \\
c_{i}(\mcal{E}, \nabla) & :=
\int_{\Delta} P_{i}(R) \in \Gamma(X, \mcal{A}_{X}^{2i}) .
\end{aligned} \]
\end{dfn}
Let $t$ be an indeterminate, set $P_{0} := 1$, and define
$P_{t} := \sum_{i = 0}^{r} P_{i} t^{i} \in I_{r}(k)[t]$.
\begin{prop}[Whitney Sum Formula] \label{prop3.5}
Let $X$ be a finite type $k$-scheme. \linebreak
Suppose
$0 \rightarrow \mcal{E}' \rightarrow \mcal{E} \rightarrow \mcal{E}'' \rightarrow 0$
is a short exact sequence of locally free $\mcal{O}_{X}$-modules. Then
there exist adelic connections $\nabla', \nabla, \nabla''$ on
$\mcal{E}', \mcal{E}, \mcal{E}''$ respectively, with
corresponding curvature forms $R', R, R''$, s.t.\
\[ P_{t}(R) = P_{t}(R') \cdot P_{t}(R'') \in
\Gamma(X, \tilde{\mcal{A}}^{{\textstyle \cdot}}_{X})[t] . \]
\end{prop}
\begin{proof}
For any point $x \in X$ choose a splitting
$\sigma_{(x)} : \mcal{E}_{(x)}'' \rightarrow \mcal{E}_{(x)}$
of the sequence, as modules over $\mcal{O}_{X, (x)}$. Also choose
frames $\bsym{e}_{(x)}'$, $\bsym{e}_{(x)}''$ for
$\mcal{E}_{(x)}'$, $\mcal{E}_{(x)}''$ respectively. Let
$\bsym{e}_{(x)} := (\bsym{e}_{(x)}', \sigma_{(x)}(\bsym{e}_{(x)}''))$
be the resulting frame of $\mcal{E}_{(x)}$. Use the global adelic
frame $\bsym{e} = \{ \bsym{e}_{(x)} \}$ to define an adelic
connection $\nabla$ on $\mcal{E}$; and likewise define $\nabla'$ and
$\nabla''$.
In order to check that $P_{t}(R) = P_{t}(R') \cdot P_{t}(R'')$
it suffices, according to Lemma \ref{lem2.2}, to look separately
at each chain $\xi = (x_{0}, \ldots, x_{q})$.
Let
$\bsym{g}_{i} \in \mrm{Gl}_{r}(\mcal{O}_{X, \xi})$
be the transition matrix
$\bsym{e}_{(x_{i})} = \bsym{g}_{i} \cdot \bsym{e}_{(x_{q})}$.
Because of our special choice of frames the initial segment of each
frame $\bsym{e}_{(x_{i})}$ is a frame for the submodule
$\mcal{E}_{\xi}' \subset \mcal{E}_{\xi}$,
which implies that
$\bsym{g}_{i} = \left( \begin{smallmatrix}
\bsym{g}_{i}' & * \\ 0 & \bsym{g}_{i}''
\end{smallmatrix} \right)$,
where $\bsym{g}_{i}', \bsym{g}_{i}''$ are the obvious transition
matrices.
Now with respect to the frame $\bsym{e}_{(x_{q})}$ of
$\mcal{E}_{\xi}$, the connection matrix of $\nabla_{(x_{i})}$ is
$\bsym{\theta}_{i} = - \bsym{g}_{i}^{-1} \mrm{d} \bsym{g}_{i}$,
so the matrices of
$\nabla_{\xi}$ and $R_{\xi}$ are
\begin{eqnarray*}
\bsym{\theta} & = & - (t_{0} \bsym{g}_{0}^{-1} \mrm{d} \bsym{g}_{0}
+ \cdots + t_{q-1} \bsym{g}_{q-1}^{-1} \mrm{d} \bsym{g}_{q-1}) \\
\bsym{\Theta} & = & \mrm{D} \bsym{\theta} -
\bsym{\theta} \wedge \bsym{\theta} .
\end{eqnarray*}
It follows that
$\bsym{\Theta} = \left( \begin{smallmatrix}
\bsym{\Theta}' & * \\ 0 & \bsym{\Theta}''
\end{smallmatrix} \right)$.
By linear algebra we conclude that
$P_{t}(\bsym{\Theta}) = P_{t}(\bsym{\Theta}') \cdot
P_{t}(\bsym{\Theta}'')$.
\end{proof}
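The linear algebra fact invoked in the last step is the
multiplicativity of the characteristic polynomial on block upper
triangular matrices (the entries of $\bsym{\Theta}$ have even degree,
so commute):

```latex
\operatorname{det} \left( I + t \left( \begin{smallmatrix}
\bsym{\Theta}' & * \\ 0 & \bsym{\Theta}''
\end{smallmatrix} \right) \right)
= \operatorname{det}(I + t \bsym{\Theta}') \cdot
\operatorname{det}(I + t \bsym{\Theta}'') ,
```

which upon comparing coefficients of $t^{i}$ gives
$P_{i}(\bsym{\Theta}) = \sum_{a + b = i}
P_{a}(\bsym{\Theta}') P_{b}(\bsym{\Theta}'')$, with $P_{0} := 1$.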
\begin{prop} \label{prop3.6}
If $\mcal{E}$ is an invertible $\mcal{O}_{X}$-module
and $\nabla$ is an adelic connection on it, then the differential
logarithm
\[ \operatorname{dlog} : \operatorname{Pic} X = \mrm{H}^{1}(X, \mcal{O}^{*}_{X})
\rightarrow \mrm{H}^{2}(X, \Omega^{{\textstyle \cdot}}_{X / k}) \cong
\mrm{H}^{2}(X, \mcal{A}^{{\textstyle \cdot}}_{X}) \]
sends
\[ \operatorname{dlog}([\mcal{E}]) = [c_{1}(\mcal{E}; \nabla)] . \]
\end{prop}
\begin{proof}
(Cf.\ \cite{HY} Proposition 2.6.) Suppose $\{ U_{i} \}$
is a finite open cover of $X$ s.t.\ $\mcal{E}|_{U_{i}}$ is trivial
with frame $e_{i}$. Let
$g_{i, j} \in \Gamma(U_{i} \cap U_{j}, \mcal{O}^{*}_{X})$
satisfy $e_{i} = g_{i, j} e_{j}$. Then the \v{C}ech cocycle
$\{ g_{i, j} \} \in C^{1}(\{ U_{i} \}; \mcal{O}^{*}_{X})$
represents $[\mcal{E}]$.
Choose a global adelic frame $\{ e_{(x)} \}$ for $\mcal{E}$ and let
$\nabla$ be the connection it determines. For a chain $(x, y)$ let
$g_{(x, y)} \in \mcal{O}^{*}_{X, (x, y)}$ satisfy
$e_{(x)} = g_{(x, y)} e_{(y)}$. By Lemma \ref{lem3.9} we see that
$c_{1}(\mcal{E}; \nabla) = \{ \operatorname{dlog} g_{(x, y)} \} \in
\Gamma(X, \mcal{A}^{1,1}_{X})$.
For any point $x \in U_{i}$ define
$g_{i, (x)} \in \mcal{O}^{*}_{X, (x)}$ in the obvious way. Then
$\{ \operatorname{dlog} g_{i, (x)} \}$ \linebreak
$\in C^{0}(\{ U_{i} \}; \mcal{A}^{1, 0}_{X})$,
and
\[ \mrm{D} \{ \operatorname{dlog} g_{i, (x)} \} =
\{ \operatorname{dlog} g_{i, j} \} - \{ \operatorname{dlog} g_{(x, y)} \} . \]
Since
$\Gamma(X, \mcal{A}^{{\textstyle \cdot}}_{X}) \rightarrow
C^{{\textstyle \cdot}}(\{ U_{i} \}; \mcal{A}^{{\textstyle \cdot}}_{X})$
is a quasi-isomorphism we are done.
\end{proof}
\begin{thm} \label{thm3.4}
Suppose $X$ is smooth over $k$, so that
$\mrm{H}^{{\textstyle \cdot}} \Gamma(X, \tilde{\mcal{A}}_{X}^{{\textstyle \cdot}}) =
\mrm{H}^{{\textstyle \cdot}}_{\mrm{DR}}(X)$. Then the Chern classes
\[ c_{i}(\mcal{E}) := [ \tilde{c}_{i}(\mcal{E}, \nabla) ] \in
\mrm{H}^{2i}_{\mrm{DR}}(X) \]
coincide with the usual ones.
\end{thm}
\begin{proof}
By Theorem \ref{thm3.2} and Propositions \ref{prop3.2},
\ref{prop3.5} and \ref{prop3.6} we see that the axioms of
Chern classes (cf.\ \cite{Ha2} Appendix A) are satisfied.
\end{proof}
\begin{exa}
Consider the projective line $\mbf{P} = \mbf{P}^{1}_{k}$
and the sheaf $\mcal{O}_{\mbf{P}}(1)$.
Let $v \in \Gamma(\mbf{P}, \mcal{O}_{\mbf{P}}(1))$
have a zero at the point $z$. Define a global adelic frame
$\{ e_{(x)} \}$ by
$e_{(x)} = v$ if $x \neq z$, and
$e_{(z)} = w$, any basis of $\mcal{O}_{\mbf{P}}(1)_{(z)}$.
So $v = a w$ for some regular parameter
$a \in \mcal{O}_{\mbf{P}, (z)}$.
The local components of the Chern form
$c_{1}(\mcal{O}_{\mbf{P}}(1); \nabla)$ are $0$
unless $\xi = (z_{0}, z)$ ($z_{0}$ is the generic point),
where we get
$c_{1}(\mcal{O}_{\mbf{P}}(1); \nabla)_{\xi} = a^{-1} \mrm{d} a$.
\end{exa}
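As a consistency check (not needed for the example): since $a$ is a
regular parameter at $z$, the residue of this single nonzero component
along the chain $(z_{0}, z)$ is

```latex
\operatorname{Res}_{(z_{0}, z)} \left( a^{-1} \mrm{d} a \right)
= \operatorname{ord}_{z}(a) = 1 = \operatorname{deg} \mcal{O}_{\mbf{P}}(1) ,
```

as one expects for the first Chern class of $\mcal{O}_{\mbf{P}}(1)$.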
An algebraic connection on $\mcal{E}$ is a connection
$\nabla : \mcal{E} \rightarrow \mcal{E} \otimes_{\mcal{O}_{X}}
\Omega^{1}_{X / k}$. The connection $\nabla$ is trivial if
$(\mcal{E}, \nabla) \cong (\mcal{O}_{X}, \mrm{d})^{r}$.
$\nabla$ is generically trivial if it is trivial on a dense open set.
The next proposition explores the relation between adelic and
algebraic connections.
\begin{prop} \label{prop3.10}
Assume $X$ is smooth irreducible and $k$ is algebraically
closed.
Let $\nabla$ be an integrable adelic connection on $\mcal{E}$.
\begin{enumerate}
\item If
$\nabla(\mcal{E}) \subset \tilde{\mcal{A}}^{1, 0}_{X}(\mcal{E})$
then $\nabla$ is algebraic.
\item If $\nabla$ is algebraic and generically trivial
then it is trivial.
\end{enumerate}
\end{prop}
\begin{proof}
1.\ By Lemma 2.15 part 1, with
$\mcal{M} = \Omega^{1}_{X / k} \otimes_{\mcal{O}_{X}} \mcal{E}$,
it suffices to prove that
$\mrm{D}'' \nabla(\mcal{E}) = 0$, which is
a local statement. So choose a local algebraic frame $\bsym{f}$ of
$\mcal{E}$ on some open set. Then we have a connection matrix
$\bsym{\theta}$ which is homogeneous of bidegree $(1, 0)$,
and by assumption the curvature matrix
$\bsym{\Theta} = \mrm{D} \bsym{\theta}
- \bsym{\theta} \cdot \bsym{\theta}$
is zero. But since
$\bsym{\Theta}^{1, 1} = \mrm{D}'' \bsym{\theta}$ we are done.
\medskip \noindent
2.\
The algebraic connection $\nabla$ extends uniquely to an adelic
connection with the same name (by Proposition \ref{prop3.1}).
Let $x_{0}$ be the generic point, so by assumption we have a frame
$\bsym{e}_{(x_{0})}$ for $\mcal{E}_{(x_{0})}$ which trivializes
$\nabla$.
Now take any closed point $x_{1}$, so
$\mcal{O}_{X, (x_{1})} \cong k[[t_{1}, \ldots, t_{n}]]$.
It is well known that there is a frame
$\bsym{e}_{(x_{1})}$ for $\mcal{E}_{(x_{1})}$ which trivializes
$\nabla$ (cf.\ \cite{Ka}).
Consider the chain $\xi = (x_{0}, x_{1})$. W.r.t.\ the frame
$\bsym{e}_{(x_{0})}$, the connection matrix of $\nabla_{\xi}$ is
$\bsym{\theta}_{\xi} = -t_{1} \bsym{g}^{-1} \mrm{d} \bsym{g}$,
where
$\bsym{e}_{(x_{1})} = \bsym{g} \cdot \bsym{e}_{(x_{0})}$
and
$\bsym{g} \in \mrm{Gl}_{r}(\mcal{O}_{X, \xi})$.
Since $\nabla$ is integrable we get
\[ -\mrm{d} t_{1} \cdot \bsym{g}^{-1} \mrm{d} \bsym{g} =
\mrm{D}'' \bsym{\theta}_{\xi} = \bsym{\Theta}^{1, 1} = 0 . \]
We conclude that
\[ \mrm{d} \bsym{g} = 0 \in \mrm{M}_{r}(\Omega^{1}_{X / k, \xi}) . \]
But because $X$ is smooth and $k$ is algebraically closed, it follows
that
$\mrm{H}^{0} \Omega^{{\textstyle \cdot}}_{X / k, \xi} \subset
\mrm{H}^{0} \Omega^{{\textstyle \cdot}}_{X / k, \eta} = k$,
where $\eta$ is any maximal chain containing $\xi$. So in fact
$\bsym{g} \in \mrm{Gl}_{r}(k)$, and by faithful flatness we get
\[ \bsym{e}_{(x_{0})} =
\bsym{g}^{-1} \cdot \bsym{e}_{(x_{1})} \in
\mcal{E}_{(x_{1})} \cap \mcal{E}_{(x_{0})} = \mcal{E}_{x_{1}} . \]
By going over all closed points $x_{1}$ we see that
$\bsym{e}_{(x_{0})} \in \Gamma(X, \mcal{E})$,
which trivializes $\nabla$.
\end{proof}
There do however exist integrable adelic connections which are not
algebraic.
\begin{exa} \label{exa3.10}
Let $X$ be any scheme of positive dimension, and let
$\tilde{a} \in \Gamma(X, \tilde{\mcal{A}}^{0}_{X})$
be any element satisfying
$\mrm{D}'' \mrm{D} \tilde{a} \neq 0$.
For instance take a fixed point $x_{0}$ and an element
$a_{(x_{0})} \in \mcal{O}_{X, (x_{0})}$
s.t.\ $\mrm{d} a_{(x_{0})} \neq 0$. For any $x \neq x_{0}$ set
$a_{(x)} := 0 \in \mcal{O}_{X, (x)}$.
Then
$\{ a_{(x)} \} \in \Gamma(X, \underline{\mbb{A}}^{0}(\mcal{O}_{X}))$,
and by Lemma \ref{lem2.3} we get
$\tilde{a} \in \Gamma(X, \tilde{\mcal{A}}^{0}_{X})$.
Now
$\mrm{D}'' \mrm{D} \tilde{a} = \mrm{D}'' \mrm{D}' \tilde{a}$,
and clearly $\mrm{D}' \tilde{a}$ is not algebraic.
Take $\mcal{E} = \mcal{O}_{X}^{2}$.
The matrix
\[ \bsym{e} =
\begin{pmatrix}
1 & \tilde{a} \\ 0 & 1 \end{pmatrix}
\in \mrm{M}_{2}(\Gamma(X, \tilde{\mcal{A}}^{0}_{X})) \]
is invertible, and we consider $\bsym{e}$ as a frame for
$\tilde{\mcal{A}}^{0}_{X}(\mcal{E})$.
Define $\nabla$ to be the Levi-Civita connection for $\bsym{e}$.
If we now consider the algebraic frame
$\bsym{f} = \left( \begin{smallmatrix}
1 & 0 \\ 0 & 1 \end{smallmatrix} \right)$
of $\mcal{E}$, then the connection matrix w.r.t.\ $\bsym{f}$ is
\[ -\bsym{e}^{-1} \cdot \mrm{D} \bsym{e} =
- \begin{pmatrix}
0 & \mrm{D} \tilde{a} \\ 0 & 0 \end{pmatrix} .\]
So $\mrm{D}'' \nabla(\mcal{E}) \neq 0$ and hence
$\nabla$ is not algebraic.
\end{exa}
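Note that $\nabla$ is indeed integrable, as required: w.r.t.\ the
algebraic frame $\bsym{f}$ the connection matrix $\bsym{\theta}$ is the
strictly upper triangular matrix displayed above, so
$\bsym{\theta} \cdot \bsym{\theta} = 0$ and
\[ \bsym{\Theta} = \mrm{D} \bsym{\theta} -
\bsym{\theta} \cdot \bsym{\theta} =
- \begin{pmatrix}
0 & \mrm{D}^{2} \tilde{a} \\ 0 & 0 \end{pmatrix} - 0 = 0 \]
since $\mrm{D}^{2} = 0$.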
\begin{question} \label{que3.1}
Does there exist an adelic connection
$\nabla$ with curvature form $R$ homogeneous of bidegree $(1,1)$?
\end{question}
\begin{rem}
Theorem \ref{thm3.2} works just as well for a relative situation:
$Y$ is a finite type $k$-scheme and $f : X \rightarrow Y$ is a finite
type morphism. Then we can define
$\mcal{A}^{{\textstyle \cdot}}_{X / Y} :=
\mrm{N} \underline{\mbb{A}}(\Omega^{{\textstyle \cdot}}_{X / Y})$
and likewise $\tilde{\mcal{A}}^{{\textstyle \cdot}}_{X / Y}$.
There are relative adelic connections on any locally free
$\mcal{O}_{X}$-module $\mcal{E}$, and there is a Chern-Weil
homomorphism
\[ w_{\mcal{E}} : I_{r}(k) \rightarrow
\mrm{H}^{{\textstyle \cdot}} f_{*} \mcal{A}^{{\textstyle \cdot}}_{X / Y} \cong
\mrm{H}^{{\textstyle \cdot}} \mrm{R} f_{*} \Omega^{{\textstyle \cdot}}_{X / Y} . \]
\end{rem}
\begin{rem}
In \cite{Du} a very similar construction is carried out to calculate
characteristic classes of principal $G$-bundles, for a Lie group $G$.
These classes are in the cohomology of the classifying space $BG$,
which coincides with the cohomology of the simplicial manifold $NG$.
\end{rem}
\begin{rem}
Suppose $X$ is any finite type scheme over $k$. Then
$R \in \Gamma(X, \tilde{\mcal{A}}_{X}^{{\textstyle \cdot}})$
is nilpotent and we may define
\[ \tilde{\mrm{ch}}(\mcal{E}; \nabla) := \operatorname{tr} \operatorname{exp} R \in
\Gamma(X, \tilde{\mcal{A}}_{X}^{{\textstyle \cdot}}) . \]
Using the idea of the proof of Proposition \ref{prop3.5} one can
show that given a bounded complex
$\mcal{E}_{{\textstyle \cdot}}$ of locally free sheaves, which is acyclic on an
open set $U$, it is possible to find
connections $\nabla_{i}$ on $\mcal{E}_{i}$ s.t.\
$\sum_{i} (-1)^{i} \tilde{\mrm{ch}}(\mcal{E}_{i}; \nabla_{i}) = 0$
on $U$. In particular when $U = X$ we get a ring homomorphism
\[ \mrm{ch} : \mrm{K}^{0}(X) \rightarrow
\mrm{H}^{{\textstyle \cdot}} \Gamma(X, \tilde{\mcal{A}}_{X}^{{\textstyle \cdot}}) \cong
\mrm{H}^{{\textstyle \cdot}}(X, \Omega^{{\textstyle \cdot}}_{X/k}) , \]
the {\em Chern character}. When $X$ is smooth this
is the usual Chern character into
$\mrm{H}^{{\textstyle \cdot}}_{\operatorname{DR}}(X)$.
\end{rem}
\section{Secondary Characteristic Classes}
Let $k$ be a field of characteristic $0$.
In \cite{BE}, Bloch and Esnault show that given a locally free
sheaf $\mcal{E}$ on a smooth $k$-scheme $X$, an algebraic connection
$\nabla : \mcal{E} \rightarrow \Omega^{1}_{X / k}
\otimes_{\mcal{O}_{X}} \mcal{E}$
and an invariant polynomial $P \in I_{r}(k)$ of degree $m \geq 2$,
there is a Chern-Simons class
\[ \mrm{T} P(\mcal{E}; \nabla) \in
\Gamma \left(X, \Omega^{2m -1}_{X / k} /
\mrm{d}(\Omega^{2m - 2}_{X / k}) \right) \]
satisfying
\[ \mrm{d} \mrm{T} P(\mcal{E}; \nabla) =
P(\mcal{E}) \in \mrm{H}^{2m}_{\mrm{DR}}(X) . \]
$\mrm{T} P(\mcal{E}; \nabla)$ is called the secondary, or
Chern-Simons, characteristic class. The notation we use is taken from
\cite{Es}; the original notation in \cite{BE} is
$w_{m}(\mcal{E}, \nabla, P)$.
Such algebraic connections exist when $X$ is affine. However
the authors of \cite{BE} point out that any quasi-projective scheme
$X$ admits a vector bundle whose total space $X'$ is affine, and then
$\mrm{H}^{{\textstyle \cdot}}_{\mrm{DR}}(X) \rightarrow \mrm{H}^{{\textstyle \cdot}}_{\mrm{DR}}(X')$
is an isomorphism.
In Section 3 we proved that adelic connections always exist.
In this section we define adelic Chern-Simons classes, which are
global sections of sheaves on $X$ itself:
\begin{thm} \label{thm4.1}
Suppose $X$ is a smooth $k$-scheme, $\mcal{E}$ a locally free
$\mcal{O}_{X}$-module, $\nabla$ an adelic connection on $\mcal{E}$
and $P \in I_{r}(k)$ homogeneous of degree $m \geq 2$.
Then there is a class
\[ \mrm{T} P(\mcal{E}; \nabla) \in
\Gamma \left( X, \mcal{A}^{2m-1}_{X} /
\mrm{D}(\mcal{A}^{2m-2}_{X}) \right) \]
satisfying
\[ \mrm{D} [\mrm{T} P(\mcal{E}; \nabla)] = P(\mcal{E}) \in
\mrm{H}^{2m}_{\mrm{DR}}(X) \]
and which is compatible with pullbacks along morphisms of schemes
$X' \rightarrow X$.
\end{thm}
The proof is later in this section, after some preparation.
According to \cite{BE} Theorem 2.2.1, for any commutative
$k$-algebra $B$, invariant polynomial
$P \in I_{r}(k)$ homogeneous of degree $m$ and matrix
$\bsym{\theta} \in \mrm{M}_{r}(\Omega^{1}_{B / k})$,
there is a differential form
$\mrm{T} P(\bsym{\theta}) \in \Omega^{2m-1}_{B / k}$.
$\mrm{T} P(\bsym{\theta})$ is functorial in the $k$-algebra $B$, and
satisfies
\begin{equation} \label{eqn4.1}
\mrm{d} \mrm{T} P(\bsym{\theta}) = P(\bsym{\Theta}) \in
\Omega^{2m}_{B / k} ,
\end{equation}
where
$\bsym{\Theta} = \mrm{d} \bsym{\theta} -
\bsym{\theta} \cdot \bsym{\theta}$.
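For orientation we recall the transgression formula of Chern--Simons
type underlying this construction (stated as background only; we have
not checked that it matches the normalization of \cite{BE}): setting
$\bsym{\theta}_{t} := t \bsym{\theta}$ and
$\bsym{\Theta}_{t} := \mrm{d} \bsym{\theta}_{t} -
\bsym{\theta}_{t} \cdot \bsym{\theta}_{t}$,
one may take
\[ \mrm{T} P(\bsym{\theta}) = m \int_{0}^{1}
\tilde{P}(\bsym{\theta}, \bsym{\Theta}_{t}, \ldots, \bsym{\Theta}_{t})
\, \mrm{d} t \in \Omega^{2m-1}_{B / k} , \]
where $\tilde{P}$ is the multilinear polarization of $P$.
Differentiating under the integral sign and using the Bianchi identity
$\mrm{d} \bsym{\Theta}_{t} =
\bsym{\theta}_{t} \cdot \bsym{\Theta}_{t} -
\bsym{\Theta}_{t} \cdot \bsym{\theta}_{t}$
one recovers (\ref{eqn4.1}).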
We shall need a slight generalization of \cite{BE}
Proposition 2.2.2. Consider $\operatorname{M}_{r}$ and $\operatorname{GL}_{r}$
as schemes over $k$. There is a universal invertible matrix
\[ \bsym{g} = \bsym{g}_{\mrm{u}} \in
\Gamma (\mrm{GL}_{r}, \mrm{GL}_{r}(\mcal{O}_{\mrm{GL}_{r}})) . \]
For an integer $N$ there is a universal connection matrix
\[ \bsym{\theta} = \bsym{\theta}_{\mrm{u}}
\in \Gamma(Y, \mrm{M}_{r}(\Omega^{1}_{Y / k})) , \]
where
$Y := \mrm{M}_{r}^{N} \times \mbf{A}^{N} =
\operatorname{Spec} k[\bsym{a}_{\mrm{u}}, \bsym{b}_{\mrm{u}}]$
for a collection of indeterminates
$\bsym{a}_{\mrm{u}} = \{ a_{p} \}$
and
$\bsym{b}_{\mrm{u}} = \{ b_{i, j, p} \}$,
$1\leq p \leq N$, $1 \leq i, j \leq r$.
The matrix is of course
$\bsym{\theta}_{\mrm{u}} = (\theta_{i, j})$
with
$\theta_{i, j} = \sum_{p} b_{i, j, p} \mrm{d} a_{p}$.
Then we get by pullback matrices $\bsym{g}$ and $\bsym{\theta}$
on $\mrm{GL}_{r} \times Y$.
\begin{lem}
Given an invariant polynomial $P$ there is an open cover
$\mrm{GL}_{r} = \bigcup U_{i}$
and forms
$\beta_{i} = \beta_{\mrm{u}, i} \in \Gamma(U_{i} \times Y,
\Omega^{2m-2}_{U_{i} \times Y / k})$
s.t.\
\[ \left( \mrm{T} P(\bsym{\theta}) -
\mrm{T} P(\mrm{d} \bsym{g} \cdot \bsym{g}^{-1} +
\bsym{g} \cdot \bsym{\theta} \cdot \bsym{g}^{-1} ) \right)|_{
U_{i} \times Y} = \mrm{d} \beta_{i} . \]
\end{lem}
\begin{proof}
Write
\[ \alpha = \alpha_{\mrm{u}} := \mrm{T}P(\bsym{\theta}) -
\mrm{T}P(\mrm{d} \bsym{g} \cdot \bsym{g}^{-1} +
\bsym{g} \cdot \bsym{\theta} \cdot \bsym{g}^{-1})
\in \Gamma(\mrm{GL}_{r} \times Y,
\Omega^{2m - 1}_{\mrm{GL}_{r} \times Y / k}) . \]
It is known that $\mrm{d} \alpha = 0$. Let
$s : \mrm{GL}_{r} \rightarrow \mrm{GL}_{r} \times Y$
correspond to any $k$-rational point of $Y$. Choose a point
$x \in \mrm{GL}_{r}$. By \cite{BE} Proposition 2.2.2 there is an
affine open neighborhood $V$ of $s(x)$ in $\mrm{GL}_{r} \times Y$
and a form
$\beta' \in \Gamma(V, \Omega^{2m-2}_{\mrm{GL}_{r} \times Y / k})$,
s.t.\
$\alpha|_{V} = \mrm{d} \beta'$.
Define $U := s^{-1}(V)$, so
$s^{*}(\alpha)|_{U} = \mrm{d}\, s^{*}(\beta') \in
\Gamma(U, \Omega^{2m-1}_{\mrm{GL}_{r} / k})$.
Since
$\mrm{H}(s^{*}) :
\mrm{H}^{{\textstyle \cdot}}_{\mrm{DR}}(U \times Y) \rightarrow
\mrm{H}^{{\textstyle \cdot}}_{\mrm{DR}}(U)$
is an isomorphism, it follows that there is some
$\beta \in \Gamma(U \times Y,
\Omega^{2m-2}_{U \times Y / k})$
with
$\alpha|_{U \times Y} = \mrm{d} \beta$.
\end{proof}
\begin{proof}[Proof of the Theorem]
Say $\operatorname{dim} X = n$.
Let $U$ be a sufficiently small affine open set of $X$ s.t.\
$\mrm{d} a_{1}, \ldots, \mrm{d} a_{n}$ is an algebraic frame of
$\Omega^{1}_{X / k}$, for some
$a_{1}, \ldots, a_{n} \in \Gamma(U, \mcal{O}_{X})$;
and there is a local algebraic frame $\bsym{f}$ for $\mcal{E}$ on $U$.
We get an induced isomorphism
$\bsym{f} : (\tilde{\mcal{A}}_{U}^{0})^{r} \stackrel{\simeq}{\rightarrow}
\tilde{\mcal{A}}^{0}_{X}(\mcal{E})|_{U}$.
Let
$\bsym{\theta} = (\theta_{i,j}) \in
\mrm{M}_{r}(\Gamma(U, \tilde{\mcal{A}}^{1}_{U}))$
be the connection matrix of $\nabla$ with respect to $\bsym{f}$.
Define the commutative DGAs
\[ A_{l} := \Omega^{{\textstyle \cdot}}(\Delta^{l}_{\mbb{Q}}) \otimes_{\mbb{Q}}
\Gamma(U, \underline{\mbb{A}}^{l}(\Omega^{{\textstyle \cdot}}_{X / k})) . \]
Then by Definition \ref{dfn1.1},
$\bsym{\theta} = (\bsym{\theta}_{1}, \bsym{\theta}_{2}, \ldots)$
where
$\bsym{\theta}_{l} \in \mrm{M}_{r}(A_{l}^{1})$,
and the various matrices $\bsym{\theta}_{l}$ have to satisfy
certain simplicial compatibility conditions.
Fix an index $l$. We have
$A_{l}^{0} = \mcal{O}(\Delta^{l}_{\mbb{Q}}) \otimes_{\mbb{Q}}
\Gamma(U, \underline{\mbb{A}}^{l}(\mcal{O}_{X}))$
which contains $\Gamma(U, \mcal{O}_{X})$.
Thus we may uniquely write
$(\theta_{i,j})_{l} = \sum_{p = 1}^{n} b_{i,j,p} \mrm{d} a_{p} +
\sum_{p = 1}^{l} b_{i,j,p + n} \mrm{d} t_{p}$,
with $b_{i,j,p} \in A_{l}^{0}$.
It follows that for $N = n + l$
and $Y = \mrm{M}_{r}^{N} \times \mbf{A}^{N}$
there is a unique $k$-algebra homomorphism
$\phi_{l} : \Gamma(Y, \mcal{O}_{Y}) \rightarrow A^{0}_{l}$,
with
$\phi_{l}(\bsym{a}_{\mrm{u}}, \bsym{b}_{\mrm{u}}) =
(\bsym{a}, \bsym{t}, \bsym{b})$.
This extends to a DGA homomorphism
$\phi_{l} : \Gamma(Y, \Omega^{{\textstyle \cdot}}_{Y / k}) \rightarrow A_{l}$,
and sends the universal connection $\bsym{\theta}_{\mrm{u}}$ to
$\bsym{\theta}_{l}$. Define
$\mrm{T}P(\bsym{\theta}_{l}) :=
\phi_{l}(\mrm{T}P(\bsym{\theta}_{\mrm{u}})) \in A_{l}$.
Because the homomorphisms $\phi_{l}$ are completely determined by the
matrices $\bsym{\theta}_{l}$ it follows that the forms
$\mrm{T}P(\bsym{\theta}_{l})$ satisfy the simplicial compatibilities.
So there is an adelic form
$\mrm{T}P(\bsym{\theta}) \in
\Gamma(U, \tilde{\mcal{A}}^{2m -1 }_{X})$.
Now let $\bsym{f}' = \bsym{g} \cdot \bsym{f}$ be another local
algebraic frame for $\mcal{E}$ on $U$, with
$\bsym{g} \in \Gamma(U, \mrm{GL}_{r}(\mcal{O}_{X}))$.
Fix $l$ as before and write
\[ \alpha_{l} := \mrm{T} P(\bsym{\theta}_{l}) -
\mrm{T} P(\mrm{d} \bsym{g} \cdot \bsym{g}^{-1} +
\bsym{g} \cdot \bsym{\theta}_{l} \cdot \bsym{g}^{-1})
\in \Gamma(U, \tilde{\mcal{A}}^{2m - 1}_{X}) . \]
Let $h : U \rightarrow \mrm{GL}_{r}$ be the scheme morphism s.t.\
$h^{*} (\bsym{g}_{\mrm{u}}) = \bsym{g}$.
The $k$-algebra homomorphism
\[ \psi_{l} = h^{*} \otimes \phi_{l} :
\Gamma(\mrm{GL}_{r} \times Y, \mcal{O}_{\mrm{GL}_{r} \times Y})
\rightarrow A^{0}_{l} \]
extends to a DGA homomorphism and
$\alpha_{l} = \psi_{l}(\alpha_{\mrm{u}})$,
where $\alpha_{\mrm{u}}$ is the obvious universal form.
By the lemma, for every $i$ there is a form
\[ \beta_{i, l} := \psi_{l}(\beta_{\mrm{u}, i}) \in
\Omega^{{\textstyle \cdot}}(\Delta^{l}_{\mbb{Q}}) \otimes_{\mbb{Q}}
\Gamma(h^{-1}(U_{i}), \underline{\mbb{A}}^{l}(\Omega^{{\textstyle \cdot}}_{X / k})) \]
of degree $2m -2$.
Since no choices are made in defining
$\beta_{i, l}$, it follows that the simplicial compatibilities hold,
and so we obtain an adele
$\beta_{i} \in \Gamma(h^{-1}(U_{i}), \tilde{\mcal{A}}^{2m-2}_{X})$,
which evidently satisfies
\[ \alpha|_{h^{-1}(U_{i})} = \mrm{D} \beta_{i}
\in \Gamma(h^{-1}(U_{i}), \tilde{\mcal{A}}^{2m-1}_{X}) . \]
This means that the element
\[ \mrm{T} \tilde{P}(\mcal{E}; \nabla) := \mrm{T} P(\bsym{\theta})
\in \Gamma \left( U, \tilde{\mcal{A}}^{2m-1}_{X} /
\mrm{D}(\tilde{\mcal{A}}^{2m-2}_{X})
\right) \]
is independent of the local algebraic frame $\bsym{f}$,
and therefore glues to a global section on $X$.
Finally set
$\mrm{T} P(\mcal{E}; \nabla) := \int_{\triangle}
\mrm{T} \tilde{P}(\mcal{E}; \nabla)$.
\end{proof}
Some of the deeper results of \cite{BE} deal with integrable
algebraic connections. Denote by
$\mcal{H}^{i}_{\mrm{DR}}$ the sheafified De Rham cohomology, i.e.\
the sheaf associated to the presheaf
$U \mapsto \mrm{H}^{i}_{\mrm{DR}}(U)$.
Then
\[ \begin{aligned}
\mcal{H}^{i}_{\mrm{DR}} & = \mrm{H}^{i} \Omega^{{\textstyle \cdot}}_{X / k} =
\operatorname{Ker} \left(
\frac{\Omega^{i}_{X / k} }{ \mrm{d}(\Omega^{i - 1}_{X / k}) }
\xrightarrow{\mrm{d}} \Omega^{i + 1}_{X / k} \right) \\
& \cong \mrm{H}^{i} \mcal{A}^{{\textstyle \cdot}}_{X} =
\operatorname{Ker} \left(
\frac{ \mcal{A}^{i}_{X} }{ \mrm{D}( \mcal{A}^{i- 1}_{X}) }
\xrightarrow{\mrm{D}} \mcal{A}^{i + 1}_{X} \right) .
\end{aligned} \]
Because of formula (\ref{eqn4.1}), we get an adelic generalization of
\cite{BE} Proposition 2.3.2:
\begin{prop} \label{prop4.1}
If the adelic connection $\nabla$ is integrable then
$\mrm{T} P(\mcal{E}; \nabla) \in$ \linebreak
$\Gamma(X, \mcal{H}^{2m - 1}_{\mrm{DR}})$.
\end{prop}
The next question is an extension of Basic Question 0.3.1
of \cite{BE}.
\begin{question}
Are the classes $\mrm{T} P (\mcal{E}; \nabla)$ all zero for an
integrable adelic connection $\nabla$?
\end{question}
\section{The Bott Residue Formula}
Let $X$ be a smooth $n$-dimensional projective variety over the field
$k$ ($\operatorname{char} k = 0 $). Suppose $\mcal{E}$ is a locally free
$\mcal{O}_{X}$-module of rank $r$ and
$P \in I_{r}(k)$ is a homogeneous polynomial of degree $n$.
The problem is to calculate the Chern number
$\int_{X} P(\mcal{E}) \in k$,
where
$\int_{X} : \mrm{H}^{2n}_{\mrm{DR}}(X) \rightarrow k$
is the nondegenerate map defined e.g.\ in \cite{Ha1}.
Assume $v \in \Gamma(X, \mcal{T}_{X})$ is a vector field which acts
on $\mcal{E}$. By this we mean there is a differential operator (DO)
$\Lambda : \mcal{E} \rightarrow \mcal{E}$
satisfying
$\Lambda(a e) = v(a) e + a \Lambda(e)$
for local sections $a \in \mcal{O}_{X}$ and $e \in \mcal{E}$.
Furthermore assume the zero scheme $Z$ of $v$ is finite
(but not necessarily reduced). Then we
shall define a local invariant $P(v, \mcal{E}, z) \in k$ for
every zero $z \in Z$, explicitly in terms of local coordinates,
in equation (\ref{eqn5.1}). Our result is:
\begin{thm}[Bott Residue Formula] \label{thm5.1}
\[ \int_{X} P(\mcal{E}) = \sum_{z \in Z} P(v, \mcal{E}, z) . \]
\end{thm}
The proof appears later in this section, after some preparation.
It follows the original proof of Bott \cite{Bo2}, but
using algebraic residues and adeles instead of complex-analytic
methods. This is made possible by Proposition \ref{prop2.4} and
Theorem \ref{thm3.4}.
We show that a good choice of adelic connection $\nabla$ on
$\mcal{E}$ enables one to localize the integral to the zero locus
$Z$. This is quite distinct from the proof of the Bott Residue
Formula in \cite{CL}, where classes in Hodge cohomology
$\mrm{H}^{q}(X, \Omega^{p}_{X / k})$ are considered, and integration
is done using Grothendieck's global duality theory.
Let us first recall the local cohomology residue map
\[ \operatorname{Res}_{\mcal{O}_{X, (z)} / k} :
\mrm{H}^{n}_{z} (\Omega^{n}_{X / k}) \rightarrow k \]
of \cite{Li} and \cite{HK}.
Choose local coordinates $f_{1}, \ldots, f_{n}$ at $z$, so
$\mcal{O}_{X, (z)} \cong$ \linebreak
$k(z)[[ f_{1}, \ldots, f_{n}]]$.
Local cohomology classes are represented by generalized fractions.
Given
$a = \sum_{\bsym{i}} b_{\bsym{i}} \bsym{f}^{\bsym{i}} \in
\mcal{O}_{X, (z)}$
where $\bsym{i} = (i_{1}, \ldots, i_{n})$,
$b_{\bsym{i}} \in k(z)$ and
$\bsym{f}^{\bsym{i}} = f_{1}^{i_{1}} \cdots f_{n}^{i_{n}}$,
the residue is
\begin{equation} \label{eqn5.7}
\operatorname{Res}_{\mcal{O}_{X, (z)} / k}
\gfrac{a \cdot \mrm{d} f_{1} \wedge \cdots \wedge \mrm{d} f_{n}}{
f_{1}^{i_{1}}, \ldots, f_{n}^{i_{n}}} =
\operatorname{tr}_{k(z) / k}(b_{i_{1} - 1, \ldots, i_{n} - 1})
\in k .
\end{equation}
Let $a_{1}, \ldots, a_{n} \in \mcal{O}_{X}$ be the unique
local sections near $z$ satisfying
$v = \sum a_{i} \frac{\partial}{\partial f_{i}}$.
Then by definition
\[ \mcal{O}_{Z, z} \cong k(z)[[ f_{1}, \ldots, f_{n}]] /
(a_{1}, \ldots, a_{n}) . \]
The DO $\Lambda$ restricts to an $\mcal{O}_{Z}$-linear
endomorphism $\Lambda |_{Z}$ of $\mcal{O}_{Z}
\otimes_{\mcal{O}_{X}} \mcal{E}$,
giving an element
$P(\Lambda |_{Z}) \in \mcal{O}_{Z, z}$.
Choose any lifting $P'$ of $P(\Lambda |_{Z})$ to
$\mcal{O}_{X, (z)}$, and define
\begin{equation} \label{eqn5.1}
P(v, \mcal{E}, z) := (-1)^{\binom{n + 1}{2}}
\operatorname{Res}_{\mcal{O}_{X, (z)} / k}
\gfrac{P' \cdot \mrm{d} f_{1} \wedge \cdots \wedge
\mrm{d} f_{n}}{ a_{1}, \ldots, a_{n}} .
\end{equation}
The calculation of (\ref{eqn5.1}), given the $a_{i}$ and $P'$,
is quite easy: first express these elements as power series in
$\bsym{f}$. The rules for manipulating generalized fractions
are the same as for ordinary fractions, so the denominator can
be brought to the form $\bsym{f}^{\bsym{i}}$. Now use (\ref{eqn5.7}).
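To illustrate, here is a toy calculation with $n = 1$ (the data are
made up for the example): suppose the denominator is
$a_{1} = f - f^{2} = f (1 - f)$ and the numerator is $\mrm{d} f$.
Since $1 - f$ is a unit in $k(z)[[f]]$,
\[ \gfrac{\mrm{d} f}{f - f^{2}} =
\gfrac{(1 - f)^{-1}\, \mrm{d} f}{f} =
\gfrac{(1 + f + f^{2} + \cdots)\, \mrm{d} f}{f} , \]
and by (\ref{eqn5.7}) the residue is
$\operatorname{tr}_{k(z) / k}(1)$.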
\begin{exa} \label{exa5.1}
Let $X := \mbf{P}^{1}_{k}$ and $\mcal{E} := \mcal{O}_{X}(1)$. Let
$\operatorname{Spec} k[f] \subset X$ be the complement of
one point (infinity), and let $z$ be the origin (i.e.\ $f(z) = 0$).
We embed $\mcal{E}$ in the function field $k(X)$ as the
subsheaf of functions with at most a simple pole at $z$. Now
$v := f^{2} \frac{\partial}{\partial f} \in \Gamma(X, \mcal{T}_{X})$
is a global vector field, and we see that its action on $k(X)$
preserves $\mcal{E}$. So the theorem applies with $\Lambda = v$.
Here is the calculation:
the zero scheme of $v$ is $Z = \operatorname{Spec} k[f] / (a)$ with
$a = f^{2}$. Since
$\Lambda(f^{-1}) = f^{2} \frac{\partial f^{-1}}{\partial f} = -1$
we see that
$P(\Lambda) = -f$, and so
\[ P(v, \mcal{E}, z) := (-1)^{\binom{2}{2}}
\operatorname{Res}_{k[[f]] / k} \gfrac{-f \mrm{d} f}{f^{2}} = 1 , \]
as it should be.
\end{exa}
\begin{exa} \label{exa5.2}
If $z$ is a simple zero (that is to say $\mcal{O}_{Z, z} = k(z)$)
we recover the familiar formula of Bott.
Denote by $\mrm{ad}(v)$ the adjoint action of $v$
on $\mcal{T}_{X}$. Then $\mrm{ad}(v)|_{z}$ is an invertible
$k(z)$-linear endomorphism of $\mcal{T}_{X}|_{k(z)}$. Its matrix
w.r.t.\ the frame
$( \frac{\partial}{\partial f_{1}}, \ldots,
\frac{\partial}{\partial f_{n}})^{\mrm{t}}$
is
$- \left( \frac{\partial a_{i}}{\partial f_{j}} \right)^{\mrm{t}}$,
and (\ref{eqn5.1}) becomes
\[ P(v, \mcal{E}, z) = (-1)^{\binom{n}{2}}
\operatorname{tr}_{k(z) / k} \left( \frac{ P(\Lambda |_{k(z)})}{
\operatorname{det}(\operatorname{ad}(v) |_{k(z)})} \right) . \]
In the previous example we could have chosen
$v := f \frac{\partial}{\partial f}$. This has $2$ simple zeroes:
$z$, where $a = f$ and $P(\Lambda) = -1$, so
$P(v, \mcal{E}, z) = 1$; and infinity, where $P(\Lambda) = 0$.
\end{exa}
Let us start our proofs by showing that the local invariant is indeed
independent of choices.
\begin{lem} \label{lem5.1}
$P(v, \mcal{E}, z)$ is independent of the coordinate system
$f_{1}, \ldots, f_{n}$ and the lifting $P'$.
\end{lem}
\begin{proof}
Let $g_{1}, \ldots, g_{n}$ be another system of coordinates,
and write
$v = \sum b_{i} \frac{\partial}{\partial g_{i}}$.
Then we get
$a_{i} = \sum \frac{\partial f_{i}}{\partial g_{j}} b_{j}$.
The formulas for changing numerator and denominator in generalized
fractions imply that the value of (\ref{eqn5.1}) remains the same when
computed relative to $g_{1}, \ldots, g_{n}$.
\end{proof}
\begin{rem}
According to \cite{HK} Theorems 2.3 and 2.4 one has
$P(v, \mcal{E}, z) =$ \linebreak
$(-1)^{\binom{n}{2}}
\tau^{f}_{a} P(\Lambda|_{Z})$,
where
$\tau^{f}_{a} : \mcal{O}_{Z, z} \rightarrow k$
is the trace of Scheja-Storch \cite{SS}.
\end{rem}
\begin{lem} \label{lem5.4}
There exists an open subset $U \subset X$ containing $Z$, and sections
$f_{1}, \ldots, f_{n} \in \Gamma(U, \mcal{O}_{X})$,
such that the corresponding morphism
$U \rightarrow \mbf{A}^{n}_{k}$
is unramified \textup{(}and even \'{e}tale\textup{)}, and the fiber
over the origin is the reduced scheme $Z_{\mrm{red}}$. Thus
$\mcal{T}_{X}|_{U}$ has a frame
$( \frac{\partial}{\partial f_{1}}, \ldots,
\frac{\partial}{\partial f_{n}})^{\mrm{t}}$.
Moreover, we can choose $U$ s.t.\ there is a frame
$(e_{1}, \ldots, e_{r})^{\mrm{t}}$ for $\mcal{E}|_{U}$.
\end{lem}
\begin{proof}
As $X$ is projective and $Z$ finite we can certainly find an affine
open set $U = \operatorname{Spec} R$ containing $Z$.
For each point $z \in Z$ we can find sections
$f_{1, z}, \ldots, f_{n, z} \in R$
and
$e_{1, z}, \ldots, e_{r, z} \in \Gamma(U, \mcal{E})$
which satisfy the requirements at $z$.
Choose a ``partition of unity of $U$ to
order $1$ near $Z$'', i.e.\ a set of functions
$\{ \epsilon_{z} \}_{z \in Z} \subset R$ representing the
idempotents of $R / \sum_{z} \mfrak{m}_{z}^{2}$. Then define
$f_{i} := \sum_{z} \epsilon_{z} f_{i, z}$
and
$e_{i} := \sum_{z} \epsilon_{z} e_{i, z}$,
and shrink $U$ sufficiently.
\end{proof}
From here we continue along the lines of \cite{Bo2}, but of course
we use adeles instead of smooth functions.
The sheaf $\tilde{\mcal{A}}_{X}^{p, q}$ plays the role of the
sheaf of smooth $(p, q)$ forms on a complex manifold.
The operator $\mrm{D}''$ behaves like the anti-holomorphic
derivative $\bar{\partial}$; specifically $\mrm{D}'' \alpha = 0$ for
any $\alpha \in \Omega^{{\textstyle \cdot}}_{X / k}$.
Fix an open set $U$ and sections $f_{1}, \ldots, f_{n}$ as in Lemma
\ref{lem5.4}. Then we get an algebraic frame
$(\frac{\partial}{\partial f_{1}}, \ldots,
\frac{\partial}{\partial f_{n}})^{\mrm{t}}$
of $\mcal{T}_{X}|_{U}$, and we can write the
vector field
$v = \sum a_{i} \frac{\partial}{\partial f_{i}}$
with $a_{i} \in \Gamma(U, \mcal{O}_{X})$.
Choose a global adelic frame $\{ \bsym{e}_{(x)} \}_{x \in X}$
for $\mcal{E}$ as follows:
\begin{equation} \label{eqn5.2}
\begin{array}{ll}
\bsym{e}_{(x)} =
(e_{1}, \ldots, e_{r})^{\mrm{t}} & \text{ if } x \in U \\
\bsym{e}_{(x)} = \text{ arbitrary } & \text{ if } x \notin U .
\end{array}
\end{equation}
Then we get a family of connections
$\{ \nabla_{(x)} \}_{x \in X}$, and a global connection
$\nabla : \tilde{\mcal{A}}_{X}^{0}(\mcal{E}) \rightarrow
\tilde{\mcal{A}}_{X}^{1}(\mcal{E})$
over the algebra $\tilde{\mcal{A}}_{X}^{0}$.
The curvature form
$R \in \tilde{\mcal{A}}_{X}^{2}(\mcal{E}nd(\mcal{E}))$
decomposes into homogeneous parts
$R = R^{2,0} + R^{1,1}$. Since $\tilde{\mcal{A}}_{X}^{p, q} = 0$ for
$p > n$, we get $P(R) = P(R^{1, 1})$; we will work with $R^{1,1}$.
\begin{lem} \label{lem5.2}
Applying the $\mcal{O}_{X}$-linear homomorphism
$\mrm{D}'' : \tilde{\mcal{A}}_{X}^{p, q}(
\mcal{E}nd (\mcal{E})) \rightarrow$ \linebreak
$\tilde{\mcal{A}}_{X}^{p, q+1}(\mcal{E}nd (\mcal{E}))$
one has
$\mrm{D}'' R^{1,1} = 0$.
\end{lem}
\begin{proof}
This is a local statement. Passing to matrices using a local algebraic
frame, it is enough to prove that
$\mrm{D}'' \bsym{\Theta}^{1,1} = 0$.
Now $\bsym{\Theta} = \mrm{D} \bsym{\theta} -
\bsym{\theta} \cdot \bsym{\theta}$,
and $\bsym{\theta} \in \mrm{M}_{r}(\tilde{\mcal{A}}_{X}^{1,0})$,
so
$\bsym{\Theta}^{1,1} = \mrm{D}'' \bsym{\theta}$. But
$(\mrm{D}'')^{2} = 0$.
\end{proof}
Denote the canonical pairing
$\mcal{T}_{X} \otimes_{\mcal{O}_{X}} \Omega^{1}_{X / k}
\rightarrow \mcal{O}_{X}$
by $\langle -, - \rangle$.
It extends to a bilinear pairing
$\tilde{\mcal{A}}_{X}^{0}(\mcal{T}_{X}) \otimes_{\mcal{O}_{X}}
\tilde{\mcal{A}}_{X}^{0}(\Omega^{1}_{X/k}) \rightarrow
\tilde{\mcal{A}}_{X}^{0}$.
For each point $x \in X$ we choose a form
$\pi_{(x)} \in \Omega^{1}_{X / k, (x)}$ as follows:
\begin{equation} \label{eqn5.4}
\parbox{10cm}{ \begin{enumerate}
\item If $x \in Z$ set $\pi_{(x)} := 0$.
\item If $x \in U - Z$, let $j$ be the first index s.t.\
$a_{j}(x) \neq 0$, and set
$\pi_{(x)} := a_{j}^{-1} \mrm{d} f_{j}$.
\item If $x \notin U$ take any form
$\pi_{(x)} \in \Omega^{1}_{X / k, (x)}$ satisfying
$\langle v, \pi_{(x)} \rangle = 1$.
\end{enumerate} }
\end{equation}
Together we get a global section
$\pi = \{ \pi_{(x)} \}_{x \in X} \in
\underline{\mbb{A}}^{0}(\Omega^{1}_{X / k})$, and
as indicated in Lemma \ref{lem2.3}, there is a corresponding global
section
$\pi \in \tilde{\mcal{A}}_{X}^{0}(\Omega^{1}_{X / k}) =
\tilde{\mcal{A}}_{X}^{1, 0}$.
\begin{lem}
Considering
$v \in \tilde{\mcal{A}}_{X}^{0}(\mcal{T}_{X})$, one has the identity
\begin{equation} \label{eqn5.5}
\langle v, \pi \rangle = 1 \in \tilde{\mcal{A}}_{X}^{0}
\text{ on } X - Z .
\end{equation}
\end{lem}
\begin{proof}
Use the embedding of Lemma \ref{lem2.2} to reduce formula
(\ref{eqn5.5}) to the local formula
$\langle v, \pi_{\xi} \rangle = \sum_{i} t_{i}
\langle v, \pi_{(x_{i})} \rangle = \sum_{i} t_{i} = 1
\in \tilde{\mcal{A}}_{\xi}^{0}$,
since $\langle v, \pi_{(x_{i})} \rangle = 1$ for every point
$x_{i} \notin Z$.
\end{proof}
In Bott's language (see \cite{Bo2}) $\pi$ is a projector for $v$.
Let
$\iota_{v} = \langle v, - \rangle: \Omega^{1}_{X / k}
\rightarrow \mcal{O}_{X}$
be the interior derivative, or contraction along $v$. It extends
to an $\mcal{O}_{X}$-linear operator of degree $-1$ on
$\Omega^{{\textstyle \cdot}}_{X / k}$,
and hence to an $\tilde{\mcal{A}}_{X}^{0}$-linear
operator of bidegree $(-1,0)$ on $\tilde{\mcal{A}}_{X}^{{\textstyle \cdot}}$,
which commutes (in the graded sense)
with $\mrm{D}''$ and satisfies $\iota_{v}^{2} = 0$.
\begin{lem} \label{lem5.6}
There exists a global section
$L \in \tilde{\mcal{A}}_{X}^{0}(\mcal{E}nd(\mcal{E}))$
satisfying
\[ \iota_{v} R^{1, 1} = \mrm{D}'' L \in
\tilde{\mcal{A}}_{X}^{0,1}(\mcal{E}nd(\mcal{E})) \]
and
\[ L|_{Z} = \Lambda|_{Z} \in \mcal{E}nd(\mcal{E}|_{Z}) . \]
\end{lem}
\begin{proof}
Using Lemma \ref{lem2.1}, define
\[ L := \Lambda - \iota_{v} \circ \nabla :
\tilde{\mcal{A}}_{X}^{0}(\mcal{E})
\rightarrow \tilde{\mcal{A}}_{X}^{0}(\mcal{E}) . \]
This is an $\tilde{\mcal{A}}_{X}^{0}$-linear homomorphism.
Let us distinguish between
$\mrm{D}'' L$, which is the image of $L$ under
$\mrm{D}'' :
\tilde{\mcal{A}}_{X}^{0}(\mcal{E}nd_{\mcal{O}_{X}}(\mcal{E}))
\rightarrow
\tilde{\mcal{A}}_{X}^{0, 1}(
\mcal{E}nd_{\mcal{O}_{X}}(\mcal{E}))$,
and
$\mrm{D}'' \circ L$, which is the composed operator
$\tilde{\mcal{A}}_{X}^{0}(\mcal{E}) \rightarrow
\tilde{\mcal{A}}_{X}^{0, 1}(\mcal{E})$.
Both $\iota_{v} R^{1, 1}$ and $\mrm{D}'' L$ can be thought of as
$\mcal{O}_{X}$-linear homomorphisms
$\mcal{E} \rightarrow \tilde{\mcal{A}}_{X}^{0,1}(\mcal{E})$.
Since $\mrm{D}'' (\mcal{E}) = 0$, one checks (using a local algebraic
frame) that
$\mrm{D}'' L = \mrm{D}'' \circ L$
on $\mcal{E}$. By the proof of Lemma \ref{lem5.2},
$\mrm{D}'' \circ \nabla = R^{1, 1}$
as operators
$\mcal{E} \rightarrow \tilde{\mcal{A}}_{X}^{1, 1}(\mcal{E})$.
Now
$\mrm{D}'' \circ \Lambda = \Lambda \circ \mrm{D}''$.
Therefore we get equalities
\[ \mrm{D}'' L = \mrm{D}'' \circ L - L \circ \mrm{D}'' =
- \mrm{D}'' \circ \iota_{v} \circ \nabla
= \iota_{v} \circ \mrm{D}'' \circ \nabla =
\iota_{v} R^{1, 1} \]
of maps $\mcal{E} \rightarrow \tilde{\mcal{A}}_{X}^{0,1}(\mcal{E})$.
Finally the equality
$L|_{Z} = \Lambda|_{Z}$ follows from the vanishing
of $\iota_{v}$ on $Z$.
\end{proof}
Let $t$ be an indeterminate, and define
\[ \begin{aligned}
\eta & := P(L + t R^{1, 1}) \cdot \pi \cdot
(1 - t \mrm{D}'' \pi)^{-1} \\[1mm]
& = P(L + t R^{1, 1}) \cdot \pi
\cdot (1 + t \mrm{D}'' \pi + (t \mrm{D}'' \pi)^{2} + \cdots)
\in \tilde{\mcal{A}}_{X}^{{\textstyle \cdot}} \sqbr{t}
\end{aligned} \]
(note that $(\mrm{D}'' \pi)^{n+1} = 0$, so this makes sense).
Writing
$\eta = \sum_{i} \eta_{i} t^{i}$ we see that
$\eta_{i} \in \tilde{\mcal{A}}_{X}^{i+1, i}$.
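The bidegree count is straightforward: the coefficient of $t^{j}$ in
$P(L + t R^{1, 1})$ lies in $\tilde{\mcal{A}}_{X}^{j, j}$, while
$\pi \in \tilde{\mcal{A}}_{X}^{1, 0}$ and
$\mrm{D}'' \pi \in \tilde{\mcal{A}}_{X}^{1, 1}$, so the coefficient of
$t^{i}$ in $\eta$ is a sum of terms lying in
\[ \tilde{\mcal{A}}_{X}^{j, j} \cdot \tilde{\mcal{A}}_{X}^{1, 0} \cdot
(\tilde{\mcal{A}}_{X}^{1, 1})^{k} \subset
\tilde{\mcal{A}}_{X}^{i + 1,\, i} , \quad j + k = i . \]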
\begin{lem} \label{lem5.5}
$\mrm{D}'' \eta_{n-1} + P(R^{1,1}) = 0$ on $X - Z$.
\end{lem}
\begin{proof}
Using the multilinear polarization $\tilde{P}$ of $P$, Lemma
\ref{lem5.6} and the fact that $\iota_{v} - t \mrm{D}''$ is
an odd derivation, one sees that
$(\iota_{v} - t \mrm{D}'') P(L + t R^{1, 1}) = 0$
(cf.\ \cite{Bo2}). Since
$\langle v, \pi \rangle = 1$ on $X - Z$ we get
$(\iota_{v} - t \mrm{D}'') \pi = (1 - t \mrm{D}'') \pi$,
$(\iota_{v} - t \mrm{D}'')(1 - t \mrm{D}'' \pi) = 0$,
and hence
$(\iota_{v} - t \mrm{D}'') \eta = P(L + t R^{1, 1})$
on $X - Z$. Finally compare the coefficients of $t^{n}$: since $P$ is
homogeneous of degree $n$, the right side gives $P(R^{1,1})$, while the
left side gives
$\iota_{v} \eta_{n} - \mrm{D}'' \eta_{n-1} = - \mrm{D}'' \eta_{n-1}$,
because $\eta_{n} = 0$, being a section of
$\tilde{\mcal{A}}_{X}^{n+1, n} = 0$.
\end{proof}
The proof of the next lemma is easy.
\begin{lem} \label{lem3.2}
Let $P(M_{1}, \ldots, M_{n})$ be a multilinear polynomial on
$\mrm{M}_{r}(k)$, invariant under permutations.
Let $A = \bigoplus A^{i}$ be a commutative DG $k$-algebra
and $A^{-} := \bigoplus A^{2i+1}$.
Let $\alpha_{1}, \ldots, \alpha_{n} \in A^{-}$ and
$M_{1}, \ldots, M_{n} \in \operatorname{M}_{r}(A^{-})$. Then
\begin{enumerate}
\item If $\alpha_{i} = \alpha_{j}$ or $M_{i} = M_{j}$ for two distinct
indices $i,j$ then
\[ P(\alpha_{1} M_{1}, \ldots, \alpha_{n} M_{n}) = 0 . \]
\item
\[ P \left( \sum_{i=1}^{n} \alpha_{i} M_{i}, \ldots,
\sum_{i=1}^{n} \alpha_{i} M_{i} \right) =
n! P(\alpha_{1} M_{1}, \ldots, \alpha_{n} M_{n}) . \]
\end{enumerate}
\end{lem}
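To illustrate part 2 in the smallest case $n = 2$: expanding by
multilinearity,
\[ P(\alpha_{1} M_{1} + \alpha_{2} M_{2},\, \alpha_{1} M_{1} + \alpha_{2} M_{2})
= \sum_{i, j = 1}^{2} P(\alpha_{i} M_{i}, \alpha_{j} M_{j}) =
2 P(\alpha_{1} M_{1}, \alpha_{2} M_{2}) , \]
since the two diagonal terms ($i = j$) vanish by part 1, and the two
off-diagonal terms agree by the permutation invariance of $P$.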
\begin{proof}[Proof of Theorem \ref{thm5.1}]
By definition
$c_{i}(\mcal{E}) = [ \int_{\Delta} \tilde{c}_{i}(\mcal{E}; \nabla) ]
\in \mrm{H}^{2i}_{\mrm{DR}}(X)$.
It is known that $\tilde{\mcal{A}}^{{\textstyle \cdot}}_{X}$ is a commutative DGA,
and that
$\mrm{H}(\int_{\Delta}) :
\mrm{H} \Gamma(X, \tilde{\mcal{A}}^{{\textstyle \cdot}}_{X}) \rightarrow
\mrm{H}^{{\textstyle \cdot}}_{\mrm{DR}}(X)$
is an isomorphism of graded algebras (see Corollary \ref{cor2.2}).
Hence
\[ Q(c_{1}(\mcal{E}), \ldots, c_{r}(\mcal{E})) =
[ \int_{\Delta} Q(\tilde{c}_{1}(\mcal{E}; \nabla), \ldots,
\tilde{c}_{r}(\mcal{E}; \nabla)) ] =
[ \int_{\Delta} P(R) ] . \]
As mentioned before,
$P(R) = P(R^{1,1}) \in \tilde{\mcal{A}}_{X}^{2n}$.
We must verify:
\[ \int_{X} \int_{\Delta} P(R^{1,1}) = \sum_{z \in Z}
P(v, \mcal{E}, z) . \]
Let
\[
\Xi := S(U)_{n}^{\mrm{red}} - S(U - Z)_{n}^{\mrm{red}} =
\{ (x_{0}, \ldots, x_{n}) \ |\ x_{n} \in Z \} . \]
We are given that $X$ is proper, so by \cite{Be} (or by the
Parshin-Lomadze Residue Theorem, \cite{Ye1} Theorem 4.2.15)
\[ \int_{X} \int_{\Delta} \mrm{D}'' \eta_{n-1} =
\int_{X} \mrm{D}'' \int_{\Delta} \eta_{n-1} = 0 . \]
Therefore by Lemma \ref{lem5.5}
\[ \begin{aligned}
\int_{X} \int_{\Delta} P(R^{1,1}) & =
\int_{X} \int_{\Delta} (P(R^{1,1}) + \mrm{D}'' \eta_{n-1}) \\[1mm]
& = \sum_{\xi \in \Xi} \operatorname{Res}_{\xi} \int_{\Delta}
(P(R^{1,1}) + \mrm{D}'' \eta_{n-1}) .
\end{aligned} \]
Let us look at what happens on the open set $U$. By construction the
connection $\nabla$ is integrable there
(it is a Levi-Civita connection w.r.t.\ the algebraic frame $\underline{e}$),
so $R = 0$; hence $P(R^{1,1}) = 0$ and
$\mrm{D}'' \eta_{n-1} = P(L) (\mrm{D}'' \pi)^{n}$.
According to Lemma \ref{lem5.6},
$\mrm{D}'' L = 0$, so by Lemma \ref{lem2.1} one has
$L \in \mcal{E}nd(\mcal{E})$. Therefore $P(L) \in \mcal{O}_{X}$.
Since $\int_{\Delta}$ is $\mcal{O}_{X}$-linear we get
$\int_{\Delta} \mrm{D}'' \eta_{n-1} =
P(L) \int_{\Delta} (\mrm{D}'' \pi)^{n}$.
All the above is on $U$. The conclusion is:
\begin{equation} \label{eqn5.6}
\int_{X} \int_{\Delta} P(R^{1,1}) =
\sum_{\xi \in \Xi} \operatorname{Res}_{\xi} \left( P(L) \int_{\Delta}
(\mrm{D}'' \pi)^{n} \right) .
\end{equation}
Using the embedding of Lemma \ref{lem2.2}, for each $\xi \in \Xi$
one has $\pi_{\xi} = \sum t_{i} \pi_{(x_{i})}$, and therefore
$\mrm{D}'' \pi_{\xi} = \sum \mrm{d} t_{i} \wedge \pi_{(x_{i})}$.
Let
\[ \Xi_{a} := \{ \xi = (x_{0}, \ldots, x_{n})\ |\
a_{1}(x_{i}) = \cdots = a_{i}(x_{i}) = 0 \text{ for }
i = 1, \ldots, n \} . \]
If $\xi \notin \Xi_{a}$ then for at least one index $0 \leq i < n$,
$\pi_{(x_{i})} = \pi_{(x_{i+1})}$. So by Lemma \ref{lem3.2} we get
$(\mrm{D}'' \pi_{\xi})^{n} = 0$.
It remains to consider only $\xi \in \Xi_{a}$. Since
$\pi_{(x_{i})} = a_{i+1}^{-1} \mrm{d} f_{i+1}$
for $0 \leq i < n$, and $\pi_{(x_{n})} = 0$, it follows from
Lemma \ref{lem3.2} that
\[ (\mrm{D}'' \pi_{\xi})^{n} = n! (-1)^{\binom{n}{2}}
\mrm{d} t_{0} \wedge \cdots \wedge \mrm{d} t_{n-1}
\wedge
\frac{\mrm{d} f_{1} \wedge \cdots \wedge
\mrm{d} f_{n}}{a_{1} \cdots a_{n}} , \]
so
\[ \int_{\Delta} (\mrm{D}'' \pi_{\xi})^{n} =
(-1)^{\binom{n+1}{2}}
\frac{\mrm{d} f_{1} \wedge \cdots \wedge
\mrm{d} f_{n}}{a_{1} \cdots a_{n}} \in \Omega^{n}_{X/k, \xi} . \]
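As a consistency check, take $n = 1$, so $\xi = (x_{0}, x_{1})$ with
$\pi_{(x_{1})} = 0$. Then
$\mrm{D}'' \pi_{\xi} = \mrm{d} t_{0} \wedge a_{1}^{-1} \mrm{d} f_{1}$,
and using $\int_{\Delta^{1}} \mrm{d} t_{0} = -1$
(cf.\ the end of the proof of Lemma \ref{lem7.2}) we recover
\[ \int_{\Delta} \mrm{D}'' \pi_{\xi} = - \frac{\mrm{d} f_{1}}{a_{1}} =
(-1)^{\binom{2}{2}} \frac{\mrm{d} f_{1}}{a_{1}} , \]
in agreement with the displayed formula.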
Finally, according to \cite{Hu2} Corollary 2.5 or \cite{SY}
Theorem 0.2.9, our Lemma \ref{lem5.1},
and the fact that $L|_{Z} = \Lambda|_{Z}$, we obtain:
\[ \begin{split}
\sum_{\xi \in \Xi_{a}} (-1)^{l} & \operatorname{Res}_{\xi}
\frac{ P(L) \mrm{d} f_{1} \wedge \cdots \wedge \mrm{d} f_{n}}{
a_{1} \cdots a_{n}} \\[1mm]
& = \sum_{z \in Z} (-1)^{l} \operatorname{Res}_{\mcal{O}_{X (z)}/k}
\gfrac{ P( L ) \mrm{d} f_{1} \wedge \cdots \wedge \mrm{d} f_{n} }{
a_{1}, \ldots, a_{n}} \\[1mm]
& = \sum_{z \in Z} P(v, \mcal{E}, z) .
\end{split} \]
\end{proof}
\begin{rem}
There is a sign error in \cite{Ye1} Section 2.4. Let $K$ and
$L =$ \linebreak
$K((t_{1}, \ldots, t_{n}))$ be topological local fields.
Since the residue map
$\operatorname{Res}_{L / K} : \Omega^{{\textstyle \cdot}, \mrm{sep}}_{L/k} \rightarrow
\Omega^{{\textstyle \cdot}, \mrm{sep}}_{K/k}$
is an $\Omega^{{\textstyle \cdot}, \mrm{sep}}_{K/k}$-linear map of degree $-n$,
it follows by transitivity that \linebreak
$\operatorname{Res}_{L / K}(t_{1}^{-1} \mrm{d} t_{1} \wedge \cdots \wedge
t_{n}^{-1} \mrm{d} t_{n})= 1$, not
$(-1)^{\binom{n}{2}}$. This error carried into \cite{SY}
Theorem 0.2.9.
\end{rem}
\begin{rem}
Consider an action of an algebraic torus $T$ on $X$ and $\mcal{E}$,
with positive dimensional fixed point locus.
A residue formula is known in this case (see \cite{Bo2} and
\cite{AB}), but we were unable to prove it using our adelic method.
The sticky part was finding an adelic
model for the equivariant cohomology $\mrm{H}^{{\textstyle \cdot}}_{T}(X)$.
An attempt to use an adelic version of the Cartan-De Rham complex did
not succeed. Another unsuccessful try was to consider the
classifying space $BT$ as a cosimplicial scheme, and compute its
cohomology via adeles.
Recently Edidin and Graham defined equivariant Chow groups, and
pro\-ved the Bott formula in that context (see \cite{EG}).
The basic idea there
is to approximate the classifying space $BT$ by finite type schemes,
``up to a given codimension''.
This approach is suited for global constructions, but
again it did not help as far as adeles were concerned.
\end{rem}
\section{The Gauss-Bonnet Formula}
Let $k$ be a perfect field (of any characteristic) and $X$ a finite
type scheme, with structural morphism $\pi : X \rightarrow \operatorname{Spec} k$.
According to Grothendieck Duality Theory \cite{RD} there is a functor
$\pi^{!} : \msf{D}^{+}_{\mrm{c}}(\msf{Mod}(k)) \rightarrow
\msf{D}^{+}_{\mrm{c}}(\msf{Mod}(X))$
between derived categories, called the twisted inverse image. The
object $\pi^{!} k$ is a dualizing complex on $X$, and it has a
canonical representative, namely the residue complex
$\mcal{K}^{{\textstyle \cdot}}_{X} := \mrm{E} \pi^{!} k$. Here $\mrm{E}$ is the
Cousin functor. As a graded sheaf,
$\mcal{K}^{-q}_{X} = \bigoplus_{\operatorname{dim} \overline{ \{ x \} } = q}
\mcal{K}_{X}(x)$,
where $\mcal{K}_{X}(x)$ is a quasi-coherent sheaf, constant with
support $\overline{ \{x\} }$, and as $\mcal{O}_{X, x}$-module it
is an injective hull of the residue field $k(x)$.
$\mcal{K}^{{\textstyle \cdot}}_{X}$ enjoys some remarkable properties,
which are deduced from corresponding properties of $\pi^{!}$.
If $X$ is pure of dimension $n$ then there is a canonical
homomorphism
$C_{X} : \Omega^{n}_{X / k} \rightarrow \mcal{K}^{-n}_{X}$,
which gives a quasi-isomorphism
$\Omega^{n}_{X / k}[n] \rightarrow \mcal{K}^{{\textstyle \cdot}}_{X}$
on the smooth locus of $X$. If $f : X \rightarrow Y$ is a morphism of schemes
then there is a map of graded sheaves
$\operatorname{Tr}_{f} : f_{*} \mcal{K}^{{\textstyle \cdot}}_{X} \rightarrow \mcal{K}^{{\textstyle \cdot}}_{Y}$,
which becomes a map of complexes when $f$ is proper.
For integers $p, q$ define
\[ \mcal{F}_{X}^{p, q} := \mcal{H}om_{X}
(\Omega^{-p}_{X / k}, \mcal{K}^{q}_{X}) . \]
Clearly $\mcal{F}^{{\textstyle \cdot}}_{X}$ is a graded (left and right)
$\Omega^{{\textstyle \cdot}}_{X / k}$-module. Moreover according to \cite{Ye3}
Theorem 4.1, or \cite{EZ}, $\mcal{F}^{{\textstyle \cdot}}_{X}$ is in fact a DG
$\Omega^{{\textstyle \cdot}}_{X / k}$-module. The difficult thing is to define the
dual operator
$\operatorname{Dual}(\mrm{d}) : \mcal{F}_{X}^{p, q} \rightarrow
\mcal{F}_{X}^{p + 1, q}$,
which is a differential operator of order $1$.
$\mcal{F}^{{\textstyle \cdot}}_{X}$ is called the {\em De Rham-residue complex}.
There is a special cocycle
$C_{X} \in \Gamma(X, \mcal{F}_{X}^{{\textstyle \cdot}})$
called the {\em fundamental class}. When $X$ is integral of
dimension $n$, then $C_{X} \in \mcal{F}_{X}^{-n, -n}$
is the natural map
$\Omega^{n}_{X/k} \rightarrow \mcal{K}^{-n}_{X} = k(X) \otimes_{\mcal{O}_{X}}
\Omega^{n}_{X/k}$.
For any closed subscheme $f: Z \rightarrow X$, the trace map
$\mrm{Tr}_{f} : f_{*} \mcal{F}_{Z}^{{\textstyle \cdot}} \rightarrow
\mcal{F}_{X}^{{\textstyle \cdot}}$
is injective, which allows us to write
$C_{Z} \in \mcal{F}_{X}^{{\textstyle \cdot}}$.
If $Z_{1}, \ldots, Z_{r}$ are the irreducible components of $Z$
(with reduced subscheme structures) and
$z_{1}, \ldots, z_{r}$ are the generic points,
one has
$C_{Z} = \sum_{i} (\operatorname{length} \mcal{O}_{Z, z_{i}}) C_{Z_{i}}$.
When $X$ is pure of dimension $n$,
$C_{X}$ induces a map of complexes
$C_{X} : \Omega^{{\textstyle \cdot}}_{X / k}[2n]$ \linebreak
$\rightarrow \mcal{F}^{{\textstyle \cdot}}_{X}$,
$\alpha \mapsto C_{X} \cdot \alpha$.
This is a quasi-isomorphism on the smooth locus of $X$ (\cite{Ye3}
Proposition 5.8). Hence when $X$ is smooth
$\mrm{H}^{-i} \Gamma(X, \mcal{F}^{{\textstyle \cdot}}_{X}) \cong
\mrm{H}_{i}^{\mrm{DR}}(X)$,
De Rham homology.
Given a morphism $f : X \rightarrow Y$, the trace
$\operatorname{Tr}_{f} : f_{*} \mcal{K}^{{\textstyle \cdot}}_{X}
\rightarrow \mcal{K}^{{\textstyle \cdot}}_{Y}$
and
$f^{*} : \Omega^{{\textstyle \cdot}}_{Y / k} \rightarrow f_{*} \Omega^{{\textstyle \cdot}}_{X / k}$
induce a map of graded sheaves
$\operatorname{Tr}_{f} : f_{*} \mcal{F}^{{\textstyle \cdot}}_{X}
\rightarrow \mcal{F}^{{\textstyle \cdot}}_{Y}$.
Given an \'{e}tale morphism $g : U \rightarrow X$ there is a homomorphism of
complexes
$\mrm{q}_{g} : \mcal{F}^{{\textstyle \cdot}}_{X} \rightarrow g_{*} \mcal{F}^{{\textstyle \cdot}}_{U}$,
and $\mrm{q}_{g}(C_{X}) = C_{U}$.
The next theorem summarizes a few theorems in \cite{Ye5} about
the action of $\mcal{A}^{{\textstyle \cdot}}_{X}$ on $\mcal{F}^{{\textstyle \cdot}}_{X}$.
\begin{thm} \label{thm6.1}
Let $X$ be a finite type scheme over a perfect field $k$.
\begin{enumerate}
\item $\mcal{F}_{X}^{{\textstyle \cdot}}$ is a right DG
$\mcal{A}^{{\textstyle \cdot}}_{X}$-module, and the
multiplication extends the $\Omega^{{\textstyle \cdot}}_{X / k}$-module structure.
\item If $f : X \rightarrow Y$ is proper then
$\operatorname{Tr}_{f} : f_{*} \mcal{F}^{{\textstyle \cdot}}_{X} \rightarrow \mcal{F}^{{\textstyle \cdot}}_{Y}$
is $\mcal{A}^{{\textstyle \cdot}}_{Y}$-linear.
\item If $g : U \rightarrow X$ is \'{e}tale then
$\mrm{q}_{g} : \mcal{F}^{{\textstyle \cdot}}_{X} \rightarrow g_{*} \mcal{F}^{{\textstyle \cdot}}_{U}$
is $\mcal{A}^{{\textstyle \cdot}}_{X}$-linear.
\end{enumerate}
\end{thm}
Note that from part 1 it follows that if $X$ is smooth of dimension
$n$ then
$\mcal{A}^{{\textstyle \cdot}}_{X}[2n] \rightarrow \mcal{F}^{{\textstyle \cdot}}_{X}$,
$\alpha \mapsto C_{X} \cdot \alpha$ is a quasi-isomorphism.
Let us say a few words about the multiplication
$\mcal{F}^{{\textstyle \cdot}}_{X} \otimes \mcal{A}^{{\textstyle \cdot}}_{X}
\rightarrow \mcal{F}^{{\textstyle \cdot}}_{X}$.
Since
$\mcal{A}^{{\textstyle \cdot}}_{X} \cong
\underline{\mbb{A}}_{\mrm{red}}^{{\textstyle \cdot}}(\mcal{O}_{X}) \otimes_{\mcal{O}_{X}}
\Omega^{{\textstyle \cdot}}_{X / k}$
and
$\mcal{F}^{{\textstyle \cdot}}_{X} \cong \mcal{H}om_{\mcal{O}_{X}}
(\Omega^{{\textstyle \cdot}}_{X / k}, \mcal{K}^{{\textstyle \cdot}}_{X})$,
it suffices to describe the product
$\mcal{K}^{{\textstyle \cdot}}_{X} \otimes
\underline{\mbb{A}}_{\mrm{red}}^{{\textstyle \cdot}}(\mcal{O}_{X}) \rightarrow
\mcal{K}^{{\textstyle \cdot}}_{X}$.
This requires the explicit construction
of $\mcal{K}_{X}^{{\textstyle \cdot}}$ which we gave in \cite{Ye3}, and which
we quickly review below.
The construction
starts with the theory of {\em Beilinson completion algebras}
(BCAs) developed in \cite{Ye2}. A BCA $A$ is a semilocal $k$-algebra
with a topology and with valuations on its residue fields. Each
local factor of $A$ is a quotient of the ring of formal power series
$L((s_{n})) \cdots ((s_{1}))[[t_{1}, \ldots, t_{m}]]$,
where $L$ is a finitely generated extension field of $k$,
and $L((s_{n})) \cdots ((s_{1}))$
is the field of iterated Laurent series.
One considers two kinds of homomorphisms between BCAs:
morphisms $f : A \rightarrow B$ and intensifications $u : A \rightarrow \widehat{A}$.
Each BCA $A$ has a {\em dual module} $\mcal{K}(A)$, which is
functorial w.r.t.\ these homomorphisms; namely there are maps
$\operatorname{Tr}_{f} : \mcal{K}(B) \rightarrow \mcal{K}(A)$
and
$\mrm{q}_{u} : \mcal{K}(A) \rightarrow \mcal{K}(\widehat{A})$.
If $A$ is local with maximal ideal $\mfrak{m}$ and residue field
$K = A / \mfrak{m}$, then a choice of coefficient field
$\sigma : K \rightarrow A$ determines an isomorphism
\[ \mcal{K}(A) \cong \operatorname{Hom}_{K}^{\mrm{cont}}(A,
\Omega^{n, \mrm{sep}}_{K / k}) , \]
where $\Omega^{{\textstyle \cdot} , \mrm{sep}}_{K / k}$ is the separated algebra
of differentials on $K$ and
$n = \operatorname{rank}_{K} \Omega^{1, \mrm{sep}}_{K / k}$.
In particular, algebraically $\mcal{K}(A)$ is an injective hull of
$K$.
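As a basic illustration (stated here only for orientation): for the BCA
$A = k[[t]]$ one has $K = k$ and $n = 0$, so the coefficient field
$\sigma : k \rightarrow k[[t]]$ gives
\[ \mcal{K}(A) \cong \operatorname{Hom}_{k}^{\mrm{cont}}(k[[t]], k) , \]
the continuous $k$-linear dual of $k[[t]]$, which is indeed an injective
hull of $k$ over $k[[t]]$, in accordance with Matlis duality.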
Suppose $\xi = (x, \ldots, y)$ is a saturated chain of points in
$X$ (i.e.\ immediate specializations). Then the Beilinson completion
$\mcal{O}_{X, \xi}$ is a BCA. The natural algebra homomorphisms
$\partial^{-} : \mcal{O}_{X, (x)} \rightarrow \mcal{O}_{X, \xi}$ and
$\partial^{+} : \mcal{O}_{X, (y)} \rightarrow \mcal{O}_{X, \xi}$
are an intensification and a morphism, respectively. So there are
homomorphisms on dual modules
$\mrm{q}_{\partial^{-}} : \mcal{K}(\mcal{O}_{X, (x)})
\rightarrow \mcal{K}(\mcal{O}_{X, \xi})$
and
$\operatorname{Tr}_{\partial^{+}} : \mcal{K}(\mcal{O}_{X, \xi}) \rightarrow
\mcal{K}(\mcal{O}_{X, (y)})$.
The composition
$\operatorname{Tr}_{\partial^{+}} \circ \operatorname{q}_{\partial^{-}}$ is denoted by
$\delta_{\xi}$. We regard
$\mcal{K}_{X}(x) := \mcal{K}(\mcal{O}_{X, (x)})$
as a quasi-coherent
$\mcal{O}_{X}$-module, constant on the closed set
$\overline{\{ x \}}$. Define
\begin{equation} \label{eqn1.1}
\mcal{K}_{X}^{q} := \bigoplus_{\operatorname{dim} \overline{\{x\}} = -q}
\mcal{K}_{X}(x)
\end{equation}
and
\begin{equation} \label{eqn6.2}
\delta = (-1)^{q+1} \sum_{(x,y)} \delta_{(x,y)} :
\mcal{K}_{X}^{q} \rightarrow \mcal{K}_{X}^{q + 1} .
\end{equation}
Then the pair $(\mcal{K}^{{\textstyle \cdot}}_{X}, \delta)$ is the residue
complex of $X$. That is to say, there is a canonical isomorphism
$\mcal{K}^{{\textstyle \cdot}}_{X} \cong \pi^{!} k$ in the derived
category $\msf{D}(\msf{Mod}(X))$ (see \cite{Ye3} Corollary 2.5).
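For example, when $X$ is a smooth integral curve over $k$ with generic
point $x_{0}$, this recovers the classical picture: the complex is
\[ \mcal{K}^{{\textstyle \cdot}}_{X} : \quad \Omega^{1}_{k(x_{0}) / k}
\xrightarrow{\ \delta\ } \bigoplus_{x \text{ closed}} \mcal{K}_{X}(x) \]
in degrees $-1, 0$, with $\delta$ given, up to the sign in
\textup{(\ref{eqn6.2})}, by the local residue maps $\delta_{(x_{0}, x)}$.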
Let $x$ be a point of dimension $q$ in $X$, and consider a local
section
$\phi_{x} \in \mcal{K}_{X}(x) \subset \mcal{K}_{X}^{-q}$.
Let $\xi = (x_{0}, \ldots, x_{q'})$ be any chain of length $q'$
in $X$, and let
$a_{\xi} \in \mcal{O}_{X, \xi}$.
Define
$\phi_{x} \cdot a_{\xi} \in \mcal{K}_{X}^{-q + q'}$
as follows.
If $x = x_{0}$ and $\xi$ is saturated, then
\[ \phi_{x} \cdot a_{\xi} :=
\operatorname{Tr}_{\partial^{+}}(a_{\xi} \cdot \mrm{q}_{\partial^{-}}
(\phi_{x})) \in \mcal{K}_{X}(x_{q'}) , \]
where the product
$a_{\xi} \cdot \mrm{q}_{\partial^{-}}(\phi_{x})$
is in $\mcal{K}(\mcal{O}_{X, \xi})$.
Otherwise set
$\phi_{x} \cdot a_{\xi} := 0$.
It turns out that for local sections
$\phi = (\phi_{x}) \in \mcal{K}^{-q}_{X}$
and
$a = (a_{\xi}) \in \underline{\mbb{A}}_{\mrm{red}}^{q'}(\mcal{O}_{X})$,
one has
$\phi_{x} \cdot a_{\xi} = 0$
for all but finitely many pairs $x, \xi$. Hence
\[ \phi \cdot a := \sum_{x, \xi} \phi_{x} \cdot a_{\xi} \in
\mcal{K}^{-q + q'}_{X} \]
is well defined, and this is the product we use.
\begin{exa} \label{exa6.1}
Suppose $X$ is integral of dimension $n$, $x_{0}$ is its generic
point and
$\phi \in \mcal{K}^{-n}_{X} = \mcal{K}_{X}(x_{0}) =
\Omega^{n}_{k(x_{0}) / k}$.
Consider a saturated chain
$\xi = (x_{0}, \ldots, x_{q})$
and an element
$a \in \mcal{O}_{X, \xi} = k(x_{0})_{\xi}$.
We want to see what is
$\psi := \phi \cdot a \in \mcal{K}_{X}(x_{q}) =
\mcal{K}(\mcal{O}_{X, (x_{q})})$.
Choose a coefficient field
$\sigma : k(x_{q}) \rightarrow \mcal{O}_{X, (x_{q})}$,
so that
$\mcal{K}(\mcal{O}_{X, (x_{q})}) \cong
\operatorname{Hom}^{\mrm{cont}}_{k(x_{q})}(\mcal{O}_{X, (x_{q})},
\Omega^{n - q}_{k(x_{q}) / k})$.
It is known that $k(x_{0})_{\xi} = \prod L_{i}$, a finite product of
topological local fields (TLFs), and $\sigma : k(x_{q}) \rightarrow L_{i}$
is a morphism of TLFs. Then for $b \in \mcal{O}_{X, (x_{q})}$
one has
\[ \psi(b) = \sum_{i} \operatorname{Res}_{L_{i} / k(x_{q})} (b a \phi) \in
\Omega^{n - q}_{k(x_{q}) / k} , \]
where $\operatorname{Res}_{L_{i} / k(x_{q})}$ is the residue map of \cite{Ye1}
Theorem 2.4.3, and the product
$b a \phi \in \Omega^{n, \mrm{sep}}_{k(x_{0}) / k}$.
\end{exa}
We can now state the main result of this section. From here to the
end of Section 6 we assume $\operatorname{char} k = 0$.
\begin{thm}[Gauss-Bonnet] \label{thm7.1}
Assume $\operatorname{char} k = 0$, and let $X$ be an integral,
$n$-dimensional, quasi-projective $k$-variety
\textup{(}not necessarily smooth\textup{)}.
Let $\mathcal{E}$ be a locally free $\mathcal{O}_{X}$-module of
rank $r$. Suppose $v \in \Gamma(X, \mathcal{E})$
is a regular section, with zero scheme $Z$.
Then there is an adelic connection $\nabla$ on $\mcal{E}$ satisfying
\[ C_{X} \cdot c_{r}(\mathcal{E}, \nabla) =
(-1)^{m} C_{Z} \in \mathcal{F}_{X}^{-2(n - r)} \]
with $m = nr + \binom{r+1}{2}$.
\end{thm}
Let $U$ be an open subset such that $\mcal{E}|_{U}$ is trivial
and $U$ meets each irreducible component of $Z$. Fix an
algebraic frame $(v_{1}, \ldots, v_{r})^{\mrm{t}}$ of
$\mcal{E}|_{U}$ and write
\begin{equation}
v = \sum_{i = 1}^{r} a_{i} v_{i},\ a_{i} \in \Gamma(U, \mcal{O}_{X}) .
\end{equation}
For each $x \in X$ choose a local frame $\bsym{e}_{(x)}$ of
$\mcal{E}_{(x)}$ as follows:
\begin{equation} \label{eqn7.2}
\parbox{10cm}{ \begin{enumerate}
\item If $x \notin Z \cup U$, take
$\bsym{e}_{(x)} = (v, *, \ldots, *)^{\mrm{t}}$.
\item If $x \in U - Z$, there is some $0 \leq i < r$ such that
$a_{1}(x) = \cdots = a_{i}(x) = 0$ but $a_{i+1}(x) \neq 0$. Then take\\
$\bsym{e}_{(x)} = (v, v_{1}, \ldots, v_{i}, v_{i+2}, \ldots,
v_{r})^{\mrm{t}}$.
\item If $x \in Z \cap U$, take
$\bsym{e}_{(x)} = (v_{1}, \ldots, v_{r})^{\mrm{t}}$.
\item If $x \in Z - U$, take $\bsym{e}_{(x)} $ arbitrary.
\end{enumerate} }
\end{equation}
Let $\nabla_{(x)}$ be the resulting Levi-Civita connection on
$\mcal{E}_{(x)}$, let
$\nabla : \tilde{\mcal{A}}_{X}^{0}(\mcal{E}) \rightarrow
\tilde{\mcal{A}}_{X}^{1}(\mcal{E})$ be the induced adelic connection,
and let
$R \in \tilde{\mcal{A}}_{X}^{2}(\mcal{E}nd(\mcal{E}))$
be the curvature. We get a top Chern form
$P_{r}(R) = \operatorname{det} R \in \tilde{\mcal{A}}_{X}^{2r}$.
Under the embedding of DGAs
$\Gamma(X, \tilde{\mcal{A}}_{X}^{p,q}) \subset \prod_{\xi \in S(X)}
\tilde{\mcal{A}}_{\xi}^{p,q}$
of Lemma \ref{lem2.2} we write $R = (R_{\xi})$.
\begin{lem} \label{lem7.1}
Suppose $\xi = (x_{0}, \ldots, x_{q})$ is a saturated chain of
length $q$,
with $x_{0}$ the generic point of $X$, and either:
\textup{(i)}\ $q < r$;
\textup{(ii)}\ $x_{q} \notin Z$; or
\textup{(iii)}\ $q = r$ and $\bsym{e}_{x_{i}} = \bsym{e}_{x_{i+1}}$
for some $i$. Then $\int_{\Delta^{q}} \operatorname{det} R_{\xi} = 0$.
\end{lem}
\begin{proof}
Let
$\bsym{g}_{i} \in \operatorname{GL}_{r}(\mcal{O}_{X, \xi})$ be the transition
matrix
$\bsym{e}_{x_{i}} = \bsym{g}_{i} \cdot \bsym{e}_{x_{q}}$,
let $\bsym{\theta}_{i}$ be the connection matrix of $\nabla_{x_{i}}$
w.r.t.\ the frame $\bsym{e}_{x_{q}}$. Then the matrices
$\bsym{\theta}$, $\bsym{\Theta}$ of $\nabla_{\xi}$, $R_{\xi}$
are
\[ \begin{aligned}
\bsym{\theta} & = - (t_{0} \bsym{g}_{0}^{-1} \mrm{d} \bsym{g}_{0}
+ \cdots + t_{q-1} \bsym{g}_{q-1}^{-1} \mrm{d} \bsym{g}_{q-1}) \\
\bsym{\Theta} & = \mrm{D} \bsym{\theta} -
\bsym{\theta} \wedge \bsym{\theta} .
\end{aligned} \]
In cases (i) and (ii), all $x_{i} \notin Z$, so
\[ \bsym{g}_{i} =
\left( \begin{smallmatrix}
1 & 0 & \dots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
* & * & \dots & * \\
* & * & \dots & *
\end{smallmatrix} \right)
\hspace{5mm}
\bsym{\theta}_{i} =
\left( \begin{smallmatrix}
0 & 0 & \dots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
* & * & \dots & * \\
* & * & \dots & *
\end{smallmatrix} \right)
. \]
Therefore $\bsym{\Theta}$ has a zero first row too and
$\operatorname{det} \bsym{\Theta} = 0$.
Now suppose $q = r$. Since
$\bsym{\Theta} \in \operatorname{M}_{r}(\tilde{\mcal{A}}_{\xi}^{1, 1}) \oplus
\operatorname{M}_{r}(\tilde{\mcal{A}}_{\xi}^{2, 0})$, from degree
considerations we conclude that
$\int_{\Delta^{r}} \operatorname{det} \bsym{\Theta} =
\int_{\Delta^{r}} \operatorname{det} (\bsym{\Theta}^{1,1})$.
Let $\tilde{P}_{r}$ be the polarization of $\operatorname{det}$.
One has
$\bsym{\Theta}^{1,1} = - \sum \mrm{d} t_{i} \wedge \bsym{\theta}_{i}$
(cf.\ Lemma \ref{lem3.9}), so by Lemma \ref{lem3.2} we have
\begin{equation} \label{eqn7.3}
\operatorname{det} (\bsym{\Theta}^{1,1}) =
\tilde{P}_{r}(- \mrm{d} t_{0} \wedge \bsym{\theta}_{0}, \ldots,
- \mrm{d} t_{r-1} \wedge \bsym{\theta}_{r-1}) .
\end{equation}
But in case (iii), using Lemma \ref{lem3.2} again, we get
$\operatorname{det} (\bsym{\Theta}^{1,1}) = 0$.
\end{proof}
\begin{lem} \label{lem7.2}
Suppose $\xi = (x_{0}, \ldots, x_{r})$ is a saturated chain in $U$
satisfying: $x_{0}$ is the generic point of $X$, and
$a_{1}(x_{i}) = \cdots = a_{i}(x_{i}) = 0$ for $0 \leq i \leq r$
\textup{(}so in particular $x_{r} \in Z$\textup{)}. Then
\[ \int_{\Delta^{r}} \operatorname{det} R_{\xi} =
(-1)^{\binom{r+1}{2}}
\frac{\mrm{d} a_{1} \wedge \cdots \wedge \mrm{d} a_{r}}{
a_{1} \cdots a_{r}} \in \Omega^{r}_{X / k, \xi} . \]
\end{lem}
\begin{proof}
Since the point $x_{r}$ falls into case 3 of (\ref{eqn7.2}), and for
every $i < r$, $x_{i}$ falls into case 2,
an easy linear algebra calculation shows that
$\bsym{\Theta}^{1, 1} = ( \mrm{d} t_{i-1} \wedge
a_{i}^{-1} \mrm{d} a_{j} )$
(i.e.\ $\mrm{d} t_{i-1} \wedge a_{i}^{-1} \mrm{d} a_{j}$ appears
in the $(i,j)$ position). By Lemma \ref{lem3.2},
\[ \begin{aligned}
\operatorname{det} \bsym{\Theta}^{1,1} & =
r! \mrm{d} t_{0} \wedge a_{1}^{-1} \mrm{d} a_{1}
\wedge \cdots \wedge \mrm{d} t_{r-1} \wedge
a_{r}^{-1} \mrm{d} a_{r} \\[1mm]
& = r! (-1)^{\binom{r}{2}}
\mrm{d} t_{0} \wedge \cdots \wedge \mrm{d} t_{r-1}
\wedge
\frac{\mrm{d} a_{1} \wedge \cdots \wedge \mrm{d} a_{r}}{
a_{1} \cdots a_{r}} .
\end{aligned} \]
Now use the fact that
$\int_{\Delta^{r}}
\mrm{d} t_{0} \wedge \cdots \wedge \mrm{d} t_{r-1} =
(-1)^{r} (r!)^{-1}$.
\end{proof}
\begin{lem} \label{lem7.3}
Let $\xi = (x_{0}, \ldots, x_{r} = z)$ be a saturated chain,
and let $\sigma : k(z) \rightarrow \mcal{O}_{X, (z)}$ a coefficient field.
Then the Parshin residue map
$\operatorname{Res}_{k(\xi) / k(z)} :
\Omega^{{\textstyle \cdot}, \mrm{sep}}_{k(\xi) / k} \rightarrow
\Omega^{{\textstyle \cdot}}_{k(z) / k}$
\textup{(}see \cite{Ye1} Definition \textup{4.1.3)} satisfies
\[ \operatorname{Res}_{k(\xi) / k(z)} ( \operatorname{dlog} a_{1} \wedge \cdots \wedge
\operatorname{dlog} a_{r} \wedge \alpha ) = 0 \]
for all $a_{1}, \ldots, a_{r} \in \mcal{O}_{X, (z)}$
and
$\alpha \in (\mfrak{m}_{z} + \mrm{d} \mfrak{m}_{z})
\Omega^{{\textstyle \cdot}}_{X/k, (z)}$.
\end{lem}
\begin{proof}
By induction on $r$. We start with $r=1$.
By the definition of residues, it suffices to prove that for any
local factor $L$ of $k(\xi)$,
$\operatorname{Res}_{L / k(z)}(\operatorname{dlog} a_{1} \wedge \alpha) = 0$.
Now $L \cong K((t))$, with $K$ a finite field extension of $k(z)$,
and the image of $\mcal{O}_{X, (z)} \rightarrow L$ lies in $K[\sqbr{t}]$
(we are using the fact that $\operatorname{char} k = 0$). Note that
$\Omega^{{\textstyle \cdot}}_{X/k, (z)} =
\Omega^{{\textstyle \cdot}, \mrm{sep}}_{\mcal{O}_{X, (z)} / k}$, so
$\alpha = t \beta + \mrm{d} t \wedge \gamma$ for some
$\beta, \gamma \in \Omega^{{\textstyle \cdot}, \mrm{sep}}_{K[\sqbr{t}] / k}$.
Also, $a_{1} = t^{e} u$ with $u \in K[\sqbr{t}]^{*}$ and
$e \in \mbb{Z}$,
so $\operatorname{dlog} a_{1} = e \operatorname{dlog} t + \operatorname{dlog} u$. But
\[ \operatorname{Res}_{L / K} ((e \operatorname{dlog} t + \operatorname{dlog} u)
(t \beta + \mrm{d} t \wedge \gamma)) = 0 . \]
Now assume $r > 1$, and set
$\partial_{r} \xi := (x_{0}, \ldots, x_{r-1})$ and $y := x_{r-1}$.
First take $a_{1}, \ldots, a_{r}, \alpha$ algebraic, i.e.\
$a_{i} \in \mcal{O}_{X, z}$ and
$\alpha \in (\mfrak{m}_{z} + \mrm{d} \mfrak{m}_{z})
\Omega^{{\textstyle \cdot}}_{X/k, z}$.
Let
$\tau: k(y) \rightarrow \mcal{O}_{X, (y)}$ be a lifting compatible with
$\sigma : k(z) \rightarrow \mcal{O}_{X, (z)}$ (cf.\ \cite{Ye1}
Definition 4.1.5; again we use $\operatorname{char} k = 0$), so by
\cite{Ye1} Corollary 4.1.16,
\[ \operatorname{Res}_{k(\xi) / k(z)} = \operatorname{Res}_{k((y,z)) / k(z)} \circ
\operatorname{Res}_{k(\partial_{r} \xi) / k(y)} :
\Omega^{{\textstyle \cdot}, \mrm{sep}}_{k(\xi) / k} \rightarrow \Omega^{{\textstyle \cdot}}_{k(z) / k} . \]
The lifting $\tau$ determines a decomposition
\[ \Omega^{{\textstyle \cdot}, \mrm{sep}}_{\mcal{O}_{X, (y)} / k} =
\Omega^{{\textstyle \cdot}}_{k(y) / k} \oplus (\mfrak{m}_{y} + \mrm{d}
\mfrak{m}_{y})
\Omega^{{\textstyle \cdot}, \mrm{sep}}_{\mcal{O}_{X, (y)} / k} , \]
and we decompose $\alpha = \alpha_{0} + \alpha_{1}$
and
$\operatorname{dlog} a_{r} = \beta_{0} + \beta_{1}$
(or rather their images in
$\Omega^{{\textstyle \cdot}, \mrm{sep}}_{\mcal{O}_{X, (y)} / k}$)
accordingly.
Using the $\Omega^{{\textstyle \cdot}}_{k(y) / k}$-linearity of
$\operatorname{Res}_{k(\partial_{r} \xi) / k(y)}$ and induction applied to
$\beta_{0} \wedge \alpha_{1}$,
$\beta_{1} \wedge \alpha_{0}$ and
$\beta_{1} \wedge \alpha_{1}$, we get
\[ \operatorname{Res}_{k(\partial_{r} \xi) / k(y)} (
\operatorname{dlog} a_{1} \wedge \cdots \wedge
\operatorname{dlog} a_{r-1} \wedge (\operatorname{dlog} a_{r} \wedge \alpha)) =
m \beta_{0} \wedge \alpha_{0} \]
with $m \in \mbb{Z}$. Since $\beta_{0}$, $\alpha_{0}$ are respectively
the images
of $\operatorname{dlog} a_{r}$, $\alpha$ in
$\Omega^{{\textstyle \cdot}, \mrm{sep}}_{k((y,z)) / k}$, again using induction
we have
\[ \operatorname{Res}_{k((y,z)) / k(z)} (\beta_{0} \wedge \alpha_{0}) = 0 . \]
Finally by the continuity of
$\operatorname{Res}_{k(\xi) / k(z)}$ the result holds for any
$a_{1}, \ldots, a_{r}, \alpha$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm7.1}]
By the definition of the product and by Lemma \ref{lem7.1},
for evaluating the product
$C_{X} \cdot c_{r}(\mcal{E}, \nabla)$
we need only consider the components
$c_{r}(\mcal{E}, \nabla)_{\xi} = \int_{\Delta} \operatorname{det} R_{\xi}$
of $c_{r}(\mcal{E}, \nabla)$ for saturated chains
$\xi = (x_{0}, \ldots, x_{r})$, where $x_{0}$ is
the generic point of $X$, $x_{r} = z$ is the generic point of some
irreducible component $Z'$ of $Z$, and $a_{i}(x_{j}) = 0$ for all
$i \leq j$. Fix one such component $Z'$, and
let $\Xi_{z}$ be the set of all such chains ending with $z$.
By definition of $C_{Z}$ we must show that the map
\begin{equation} \label{eqn7.4}
(C_{X} \cdot \int_{\Delta} \operatorname{det} R)_{z} :
\Omega^{n-r}_{X / k, z} \rightarrow \mcal{K}_{X}(z)
\end{equation}
factors through the maps
\[ \Omega^{n-r}_{X / k, z} \twoheadrightarrow \Omega^{n-r}_{k(z) / k}
\xrightarrow{(-1)^{m} l} \Omega^{n-r}_{k(z) / k} =
\mcal{K}_{Z'_{\mrm{red}}}(z)
\subset \mcal{K}_{X}(z) , \]
where $Z'_{\mrm{red}}$ is the reduced scheme, $l$ is the length of the
artinian ring
$\mcal{O}_{Z', z} = \mcal{O}_{X, (z)} / (a_{1}, \ldots, a_{r})$, and
$m = \binom{r+1}{2} + nr$.
Choose a coefficient field $\sigma : k(z) \rightarrow \mcal{O}_{X, (z)}$.
By Lemma \ref{lem7.2} and Example \ref{exa6.1} we have
for any $\alpha \in \Omega^{n-r}_{X / k, z}$:
\begin{equation} \label{eqn7.5}
\begin{split}
( C_{X} \cdot & c_{r}(\mcal{E}, \nabla) )_{z} (\alpha) \\[1mm]
& = (-1)^{m} \sum_{\xi \in \Xi_{z}}
( \mrm{d} a_{1} \wedge \cdots \wedge \mrm{d} a_{r}
\wedge \alpha)
\cdot \left(\frac{1}{a_{1} \cdots a_{r}} \right)_{\xi} \\[1mm]
& = (-1)^{m} \sum_{\xi \in \Xi_{z}}
\operatorname{Res}_{k(\xi) / k(z)} \left(
\frac{\mrm{d} a_{1} \wedge \cdots \wedge \mrm{d} a_{r} \wedge \alpha
}{a_{1} \cdots a_{r}} \right) .
\end{split}
\end{equation}
By Lemma \ref{lem7.3} we see that this expression vanishes for
$\alpha \in
\operatorname{Ker}(\Omega^{n-r}_{X / k, (z)} \twoheadrightarrow \Omega^{n-r}_{k(z) / k})$,
so that we can assume
$\alpha \in \operatorname{Im}(\sigma : \Omega^{n-r}_{k(z) / k} \rightarrow
\Omega^{n-r}_{X / k, (z)})$. Now
$\operatorname{Res}_{k(\xi) / k(z)}$ is a graded left
$\Omega^{{\textstyle \cdot}}_{k(z) / k}$-linear homomorphism of degree $-r$, so
$\alpha$ may be extracted. On the other hand, by
\cite{Hu2} Corollary 2.5 or \cite{SY} Theorem 0.2.5, and by
\cite{HK} Example 1.14.b, we get
\[ \sum_{\xi \in \Xi_{z}}
\operatorname{Res}_{k(\xi) / k(z)} \left(
\frac{\mrm{d} a_{1} \wedge \cdots \wedge \mrm{d} a_{r}
}{a_{1} \cdots a_{r}} \right) =
\operatorname{Res}_{\mcal{O}_{X, (z)} / k(z)}
\gfrac{ \mrm{d} a_{1} \wedge \cdots \wedge \mrm{d} a_{r} }{
a_{1}, \ldots, a_{r}}
= l . \]
This concludes the proof.
\end{proof}
\section{Introduction}
The transfer of population in a multi-level atomic system from an initial to
a target quantum state in a fast and effective way is currently a problem of
practical importance as well as of substantial theoretical interest. If
there is a dipole-allowed transition between an initial and a target state,
one
can achieve the desired transfer by using either a constant-frequency
$\pi$-pulse tuned to resonance, or an adiabatic process based on a swept
carrier-frequency. Since a dipole-allowed transition implies radiative
decay, one is however often interested in systems with two metastable states
without a direct electric-dipole coupling. Whereas an extension of the
two-state $\pi$-pulse approach to multistate excitation is possible, these
techniques require careful control of the pulse areas. Adiabatic processes
do not require such precise control, if the time-evolution is slow (meaning,
generally, large pulse areas). In a three-state Raman-transition system, for
example, it is possible to achieve adiabatic passage with the use of two
constant-frequency pulses suitably delayed (counterintuitive order)
\cite{Oreg}. The
process of this stimulated Raman adiabatic passage (STIRAP)
\cite{STIRAP,STIRAP2}
can be represented by a slow rotation of a decoupled eigenstate of the
Hamiltonian (dark state) \cite{Arimondo}.
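For later reference we recall the standard form of this dark state
(see e.g.\ \cite{STIRAP}; $P(t)$ and $S(t)$ denote the pump and Stokes
Rabi-frequencies, here assumed real):
\[ \Phi_{0}(t) = \cos \theta(t)\, \psi_{1} - \sin \theta(t)\, \psi_{3},
\qquad \tan \theta(t) = \frac{P(t)}{S(t)} . \]
Rotating $\theta$ slowly from $0$ to $\pi/2$ transfers the population
from $\psi_{1}$ to $\psi_{3}$ without populating $\psi_{2}$.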
The disadvantage of STIRAP is the requirement for large pulse areas: to
ensure adiabatic time evolution the effective average Rabi-frequency of the
pulses must be large compared to the radiative decay rates of the
intermediate level(s). Non-adiabatic corrections and the associated diabatic
losses \cite{Elk95,Stenholm96,Fleischhauer96} scale with $1/\Omega T$,
where $\hbar \Omega$ is a characteristic interaction energy and $T$ is the
effective time required for the transfer. In some potential applications, as
for example the transfer of information in form of coherences \cite
{Pellizzari95}, it is desirable to minimize these losses without the need
of intense pulses or long transfer times. Intense fields induce time-varying
ac-Stark shifts, which may be detrimental to the coherence transfer.
Short times are required to minimize the effect of
decoherence processes during the transfer \cite{cavity_decay}.
An approach which reduces non-adiabatic losses for pulses of moderate
fluence in a three-state system was recently introduced in Ref.\cite{loop}.
In addition to the pair of Raman pulses (``pump pulse'' and ``Stokes
pulse'') which couple the initial and target state via a common upper level,
a direct coupling (called ``detuning pulse'') between them is introduced.
This scheme of loop-STIRAP does not require the usual adiabaticity
conditions (of large pulse areas), nor is it of the $\pi$-pulse type
(requiring specific pulse areas). Nevertheless, the scheme can produce
complete population transfer.
In the present paper we show that the physical mechanism of loop-STIRAP is
not an adiabatic rotation of the dark state, but the rotation of a {\it
higher-order trapping state} in a generalized adiabatic basis. The concept
of generalized adiabatic basis sets makes it possible to rationalize many
examples of population transfer even when the adiabaticity condition is
poorly fulfilled. If pump and Stokes pulses fulfill certain conditions (they
are then called generalized matched pulses), a higher-order trapping state exists,
which is an exact constant of motion. In this case analytic solutions for
the atomic dynamics can be found which in contrast to the case of ordinary
matched pulses with identical pulse shape \cite{matched} also include the
possibility of population transfer. This can be exploited to design pulse
sequences which give maximum population transfer. In contrast to techniques
based on optimum control theory, which are used for such tasks, the
generalized-dark-state concept provides a physical interpretation of the
results. However, the design of pulses that in some cases can lead to
complete population transfer (i.e. without {\it any} diabatic losses)
must respect more restrictive requirements on specific pulse properties,
similar to $\pi$-pulse techniques.
Our paper is organized as follows. In Sec.II we discuss the loop-STIRAP and
propose a simple physical interpretation in terms of an adiabatic rotation
of a generalized trapping state. In Sec.III we define generalized trapping
states via an iterative partial diagonalization of the time-dependent
Hamiltonian. In Sec.IV we derive conditions under which a higher-order
trapping state is an exact constant of motion and thus allows for an
analytic solution of the atomic dynamics. Finally, various examples of
population and
coherence transfer based on generalized trapping states are discussed
in Sec.V.
\section{Loop-STIRAP}
To set the stage we consider in the present section a three-state system
driven by coherent fields in a loop configuration, as shown in Fig.~\ref
{loop_system}. The bare atomic states $\psi_1$ and $\psi_3$ are coupled by a
resonant Raman transition via the excited atomic state $\psi_2$ by a pump
pulse and a Stokes pulse, having Rabi-frequencies $P(t)$
and $S(t)$, respectively, which are in general complex. In addition there
is a direct coupling between
states $1$ and $3$ by a coherent detuning pulse described by the (complex)
Rabi-frequency $D(t)$. Before the application of the pulses the system is in
state $1$ and the goal is to transfer all population into the target state
$3$ by an appropriate sequence of pulses. For simplicity we assume that the
carrier frequencies of the pulses coincide with the atomic transition
frequencies and that the phases of the pulses are time-independent. Since
the phases of pump and Stokes fields can be included into the definition of
the bare atomic states $\psi_1$ and $\psi_3$, they can be set equal to zero
without loss of generality. The phase of the detuning pulse is relevant and
cannot be
eliminated. The time-dependent Schr\"odinger equation for
this system, in the usual rotating wave approximation, reads
\begin{equation}
\frac{d}{dt}{\bf C}(t) = -i\, {\sf W}(t) {\bf C}(t)
\end{equation}
where ${\bf C}(t)$ is the column vector of probability amplitudes
$C_n(t)=\langle n |\psi(t)\rangle$ ($|n\rangle\in\{\psi_1, \psi_2,
\psi_3\}$). The evolution matrix ${\sf W}(t)$ has the form
\begin{equation}
{\sf W}(t)=\frac{1}{2} \left[ \matrix{0& P(t) & D(t)\cr P(t)&0&S(t)\cr
D^*(t)&S(t)&0} \right] . \label{W_loop}
\end{equation}
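This equation of motion is easy to integrate numerically. The following sketch is ours, not part of the original analysis: it builds ${\sf W}(t)$ from the three (callable) Rabi frequencies and steps the amplitudes with midpoint short-time propagators; the function names and the grid strategy are our choices.

```python
import numpy as np
from scipy.linalg import expm  # matrix exponential for short-time propagators

def W_loop(P, S, D):
    """Evolution matrix of the loop system for given (complex) Rabi frequencies."""
    return 0.5 * np.array([[0.0,        P, D],
                           [P,        0.0, S],
                           [np.conj(D), S, 0.0]], dtype=complex)

def propagate(pulses, t, C0):
    """Integrate dC/dt = -i W(t) C by chaining midpoint short-time propagators."""
    P, S, D = pulses  # callables t -> Rabi frequency
    C = np.array(C0, dtype=complex)
    for t0, t1 in zip(t[:-1], t[1:]):
        tm = 0.5 * (t0 + t1)
        C = expm(-1j * W_loop(P(tm), S(tm), D(tm)) * (t1 - t0)) @ C
    return C
```

As a sanity check, with $P=S=0$ and a constant real $D$ states 1 and 3 form a resonant two-level system, so a detuning pulse of area $\int D\,dt=\pi$ inverts the population while state 2 stays empty.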
\begin{figure}[tbp]
\begin{center}
\leavevmode \epsfxsize=6 true cm
\epsffile{fig1.eps}
\end{center}
\caption{Three-state system with loop linkage. $P(t)$, $S(t)$, $D(t)$ denote
Rabi-frequencies of pump, Stokes and detuning pulse.}
\label{loop_system}
\end{figure}
It is well known that
the counterintuitive pulse sequence (Stokes pulse precedes pump pulse,
without a detuning pulse) leads to an almost complete
population transfer, if the adiabaticity condition $\Omega T \gg 1$ is
fulfilled. Here $T$ is the characteristic time for the transfer, given by
the interval where $S(t)$ and $P(t)$ overlap, and $\Omega$ the
effective total Rabi-frequency averaged over the interval $T$
\begin{equation}
\Omega = \frac{1}{T} \int_{-\infty}^{\infty}\!\! dt \sqrt{ P(t)^2 + S(t)^2 }.
\end{equation}
As shown in \cite{loop}, an almost perfect transfer is also possible, by
applying an additional detuning pulse, even when pump and Stokes alone do not
fulfill the adiabaticity condition. Fig. \ref{loop_pulses} illustrates an example
of ramped pump and Stokes pulses intersected by a hyperbolic-secant detuning
pulse,
\begin{eqnarray}
P(t) &=& A_P\, \sin\Bigl[\frac{1}{2}\arctan\bigl(t/T_P\bigr)+ \frac{\pi}{4}
\Bigr], \\
S(t) &=& A_S\, \cos\Bigl[\frac{1}{2}\arctan\bigl(t/T_S\bigr)+ \frac{\pi}{4}
\Bigr], \\
D(t) &=& A_D\, {\rm sech}\bigl[t/T_D\bigr].
\end{eqnarray}
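For equal amplitudes and widths ($A_P=A_S\equiv A$, $T_P=T_S\equiv T$) the two trigonometric envelopes are complementary, so $\sqrt{P^2+S^2}=A$ is constant and $\tan\theta_0=P/S$ gives exactly $\theta_0(t)=\frac12\arctan(t/T)+\pi/4$. A short sketch of these pulse shapes (ours; the numerical parameter values are those quoted in the figure caption below):

```python
import numpy as np

# Parameters of the example (A_P = A_S = 20, T_P = T_S = 0.1, A_D = -13.4i, T_D = 0.2)
A, T = 20.0, 0.1
theta0 = lambda t: 0.5 * np.arctan(t / T) + np.pi / 4   # runs from 0 to pi/2
P = lambda t: A * np.sin(theta0(t))                     # pump, rises
S = lambda t: A * np.cos(theta0(t))                     # Stokes, falls: counterintuitive order
D = lambda t: -13.4j / np.cosh(t / 0.2)                 # purely imaginary sech detuning pulse
```

Note that $|A_D|\,T_D=2.68$ while the full temporal area of the sech pulse is $|A_D|\,\pi T_D$.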
\begin{figure}[tbp]
\begin{center}
\leavevmode \epsfxsize=7 true cm
\epsffile{fig2.eps}
\end{center}
\caption{Pair of ramped pump (line) and Stokes (dotted) pulses with
$A_P=A_S=20$ and $T_P=T_S=0.1$ applied in counterintuitive order (Stokes
precedes pump) with additional hyperbolic secant detuning pulse (dashed)
with $A_D=-13.4 i$ and $T_D =0.2$. }
\label{loop_pulses}
\end{figure}
Fig. \ref{loop_populations} shows examples of population histories for these
pulses. When only the pump and Stokes pulses are present, the population
transfer is rather poor, since the pulse areas are small ($\Omega\, T\sim
|A_P|\, T_P = |A_S|\, T_S =2$). As can be seen from the upper part of Fig.
\ref{loop_populations}, only about 70\% of the initial population ends up in
state $3$.
\begin{figure}[tbp]
\begin{center}
\leavevmode \epsfxsize=8 true cm
\epsffile{fig3.eps}
\end{center}
\caption{Populations of states $\psi_1$ (line), $\psi_2$ (dotted) and $\psi_3
$ (dashed) for the pulse sequence of Fig.\ref{loop_pulses}. The upper picture
shows the populations when only pump and Stokes pulses are applied, and the
lower one when the detuning pulse is added. }
\label{loop_populations}
\end{figure}
The situation is remarkably different when a detuning pulse with
$|A_D|T_D\approx 2.7$ and a phase factor of ${\rm e}^{-i\pi/2}$ is applied; see the
lower part of Fig.\ref{loop_populations}. With a detuning pulse present all
the population is transferred from the initial to the target state. This
result is relatively insensitive to changes in the amplitude (or the shape
of the detuning pulse) if the phase is $-\pi/2$.
We note that in contrast to ordinary STIRAP there is (for a short time) a
substantial intermediate population of state 2. This indicates that the
transfer does not occur as adiabatic rotation of the dark state from $\psi_1$
to $\psi_3$.
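This comparison is easy to reproduce numerically. The sketch below is ours (midpoint short-time propagators; integration window and grid are our choices), using the pulse parameters of Fig.~\ref{loop_pulses}; for these values the bare Raman pulses leave roughly 70\% in state 3, while adding the detuning pulse drives the transfer close to completion.

```python
import numpy as np
from scipy.linalg import expm

A, T, A_D, T_D = 20.0, 0.1, -13.4j, 0.2        # parameters of the example
theta0 = lambda t: 0.5 * np.arctan(t / T) + np.pi / 4
P = lambda t: A * np.sin(theta0(t))
S = lambda t: A * np.cos(theta0(t))

def final_pop3(D, t):
    """Start in state 1, integrate dC/dt = -i W(t) C, return final population of state 3."""
    C = np.array([1.0, 0.0, 0.0], dtype=complex)
    for t0, t1 in zip(t[:-1], t[1:]):
        tm = 0.5 * (t0 + t1)
        W = 0.5 * np.array([[0.0, P(tm), D(tm)],
                            [P(tm), 0.0, S(tm)],
                            [np.conj(D(tm)), S(tm), 0.0]], dtype=complex)
        C = expm(-1j * W * (t1 - t0)) @ C
    return abs(C[2])**2

t = np.linspace(-10.0, 10.0, 20001)
p_bare = final_pop3(lambda t: 0.0, t)                     # pump and Stokes only
p_loop = final_pop3(lambda t: A_D / np.cosh(t / T_D), t)  # with detuning pulse
```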
For our present discussion it is useful to describe ordinary STIRAP in terms
of the following set of adiabatic superposition states
\begin{equation}
\left[ \matrix{\ket{\Phi_1(t)}\cr\ket{\Phi_2(t)}\cr\ket{\Phi_3(t)}} \right] =
{\sf U}(t)^* \left[ \matrix{\ket{\psi_1}\cr\ket{\psi_2}\cr\ket{\psi_3}}
\right] \label{dark}
\end{equation}
with the unitary matrix
\begin{equation}
{\sf U}(t)= \left[ \matrix{0 & 1 & 0 \cr \enspace \sin\theta_0(t) & 0 &
\quad \cos\theta_0(t) \cr i\cos\theta_0(t) & 0 & -i\sin\theta_0(t) } \right]
.
\end{equation}
The dynamical angle $\theta_0$ is defined by
\begin{eqnarray}
\tan\theta_0(t) = \frac{P(t)}{S(t)}. \label{theta0}
\end{eqnarray}
The vector of probability amplitudes in the bare atomic basis ${\bf C}(t)$
and a corresponding vector ${\bf B}(t)$ in the superposition basis (\ref
{dark}) are related through the transformation
\begin{equation}
{\bf B}(t) ={\sf U}(t){\bf C}(t).
\end{equation}
Since ${\sf U}(t)$ is time-dependent, the transformed evolution matrix has
the form
\begin{equation}
{\sf W}(t)\to \widetilde {{\sf W}}(t)={\sf U}(t){\sf W}(t){\sf U}(t)^{-1}
+i\, \dot {{\sf U}}(t){\sf U}(t)^{-1}.
\end{equation}
In the adiabatic limit, the second term can be disregarded and we are left
with the first one, which for ordinary STIRAP, i.e. without the detuning
pulse, reads
\begin{equation}
{\sf U}(t){\sf W}(t){\sf U}(t)^{-1} =\frac{1}{2} \left[ \matrix{0 &
\Omega(t) & 0 \cr \Omega(t) & 0 & 0\cr 0 & 0 & 0} \right] ,
\end{equation}
where $\Omega(t)=\sqrt{P(t)^2+S(t)^2}$. One recognizes that the superposition
state $\Phi_3(t)$ is decoupled from the coherent interaction in this limit.
Moreover, because $\Phi_3(t)$ does not contain the excited atomic state
$\psi_2$, it does not spontaneously radiate and is therefore called a dark
state \cite{Arimondo}. For a counterintuitive sequence of pulses the angle
$\theta_0(t)$ vanishes initially and approaches $\pi/2$ for $t\to\infty$.
Thus $\Phi_3(t) $ asymptotically coincides with the initial and target
states for $t\to \pm \infty$ respectively. Therefore ordinary STIRAP can be
understood as a rotation of the adiabatic dark state $\Phi_3(t)$ from the
initial to the target bare atomic state \cite{STIRAP2}.
Non-adiabatic corrections are
contained in the second contribution to $\widetilde {{\sf W}}(t)$
\begin{equation}
i\, \dot{{\sf U}}(t){\sf U}(t)^{-1} = \frac{1}{2} \left[ \matrix{0 & 0 &
0\cr 0 & 0 & 2\dot\theta_0(t) \cr 0 & 2\dot\theta_0(t) & 0} \right] .
\end{equation}
They give rise to a coupling between the dark state $\Phi_3(t)$ and the
so-called bright state $\Phi_2(t)$.
Let us now apply the same transformation to the loop-STIRAP system, i.e.
including the detuning pulse. We find:
\begin{equation}
\widetilde{{\sf W}}(t) =\frac{1}{2} \left[ \matrix{0 & \Omega(t) & 0 \cr
\Omega(t) & {\rm Re}\bigl[D(t)\bigr]\sin 2\theta_0(t) & 2{\dot\theta}_0(t)
+i\bigl[D(t)\sin^2\theta_0(t) - D^*(t)\cos^2\theta_0(t)\bigr] \cr 0 &
2{\dot\theta}_0(t) -i\bigl[D^*(t)\sin^2\theta_0(t) -
D(t)\cos^2\theta_0(t)\bigr] & -{\rm Re}\bigl[D(t)\bigr]\sin 2\theta_0(t)}
\right] .
\end{equation}
If $D(t)$ has a nonvanishing real part, there is a time-dependent energy shift of the superposition
states $\Phi_2(t)$ and $\Phi_3(t)$, and the detuning pulse adds an imaginary
part to the nonadiabatic coupling. If $D(t)$ is purely imaginary, as in the example
discussed above, there is no detuning but a {\it real} contribution to the
nonadiabatic coupling. Let us now assume an imaginary detuning pulse, i.e.
$D(t) = i \widetilde{D}(t)$, with $\widetilde{D}(t)$ being real. In this case
the transformed evolution matrix simplifies to
\begin{equation}
{\widetilde{{\sf W}}(t)} = \frac{1}{2} \left[ \matrix{0 & \Omega(t) & 0 \cr
\Omega(t) & 0 & 2 \dot\theta_0(t)-\widetilde{D}(t)\cr 0 &
2\dot\theta_0(t)-\widetilde{D}(t) & 0} \right] . \label{W_dressed}
\end{equation}
If the amplitude of the detuning pulse matches the non-adiabatic term, i.e.
if $\widetilde{D}(t)=2\dot\theta_0(t)$, the dark state $\Phi_3$ is exactly
decoupled even if the adiabaticity condition for pump and Stokes alone
($\Omega(t)$ being much larger than $\dot\theta_0(t)$) is not fulfilled.
However, since $\theta_0(t)$ rotates from $0$ to $\pi/2$, the detuning pulse
would have to be exactly a $\pi$-pulse in such a case:
\begin{equation}
\int_{-\infty}^\infty\!\! dt\, {\widetilde D}(t)=
\int_{-\infty}^\infty\!\! dt\, 2 {\dot\theta}_0(t) = 2
\theta_0(t)\Bigr\vert_{-\infty}^{+\infty} =\pi.
\end{equation}
Furthermore, no pump or Stokes pulses would be required for population
transfer in the first place, since at any time the entire population would be
kept in the dark state by the action of the detuning pulse, and pump and
Stokes would thus not interact with the atoms. This is consistent with the observation that an
exactly decoupled state $\Phi_3$ implies exactly vanishing (not only
adiabatically small!) probability amplitude of the excited bare state $\psi_2
$ for all times. Since the origin of population transfer in this case is the
well-known phenomenon of $\pi$-pulse coupling, which requires a careful
control of the area and the shape of the detuning pulse, the case
$\widetilde{D}(t)=2\dot\theta_0(t)$ is of no further interest here.
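For the ramped pulses of Fig.~\ref{loop_pulses} one has $2\dot\theta_0(t)=T/(t^2+T^2)$, a Lorentzian, so the $\pi$-area statement above can be checked in two lines (a trivial numerical sketch of ours; the value of $T$ is that of the example but the result is independent of it):

```python
import numpy as np
from scipy.integrate import quad

# theta_0(t) = arctan(t/T)/2 + pi/4  =>  2*d(theta_0)/dt = T/(t**2 + T**2),
# whose total area is pi for any width T.
T = 0.1
area, _ = quad(lambda t: T / (t**2 + T**2), -np.inf, np.inf)
```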
On the other hand, if $\widetilde{D}(t)$ is negative, as in the example of
Fig.\ref{loop_pulses}, the non-adiabatic coupling is effectively increased
by the detuning pulse (note that $d\theta_0(t)/dt >0$). Thus the success of
population transfer in Fig.\ref{loop_populations} cannot be understood as
dark-state rotation. This is illustrated in Fig.\ref{loop_dressed_pop},
which shows the populations of the superposition states $\Phi_1=\psi_2$,
$\Phi_2$, and $\Phi_3$ for the above example. One clearly sees that about
80\% of the population is driven out of the dark state during the
interaction.
\begin{figure}[tbp]
\begin{center}
\leavevmode \epsfxsize=7 true cm
\epsffile{fig4.eps}
\end{center}
\caption{Population of superposition states $\Phi_1$ (dashed), $\Phi_2$
(dotted), and the dark state $\Phi_3$ (line). Parameters are those of Fig.\ref
{loop_pulses}. }
\label{loop_dressed_pop}
\end{figure}
It is worth noting, however, that $\Phi_2$ remains almost unpopulated during
the interaction and all population exchange happens between states $\Phi_1$
and $\Phi_3$. This suggests an interpretation of the process as {\it
adiabatic population return between the superposition states} $\Phi_1$ and
$\Phi_3$. In fact, comparing the dressed-state evolution matrix
$\widetilde{{\sf W}}(t)$, Eq.(\ref{W_dressed}), with the bare-state evolution
matrix ${\sf W}(t)$, Eq.(\ref{W_loop}) (without detuning pulse), one recognizes
a formal agreement with the correspondence $P(t) \leftrightarrow \Omega(t)$
and $S(t) \leftrightarrow 2\dot\theta_0(t)-\widetilde {D}(t)$. That is, there
exists a {\it generalized trapping state} which is a superposition of the
states $\Phi_1$ and $\Phi_3$. Since here $\Omega(t)=$ const.
and $2\dot{\theta}_0(t)-\widetilde{D}(t)$ vanishes in the asymptotic limits
$t\to\pm\infty$, this generalized trapping state coincides with $\Phi_3$ for
$t\to\pm\infty$, which in turn coincides with $\psi_1$ and $\psi_3$ in the
respective limits.
To quantify this statement let us introduce a basis of {\it second-order
adiabatic states}. Using now the first-order states $\Phi_1$, $\Phi_2$, and
$\Phi_3$ as a basis set instead of the bare atomic states, we introduce in
analogy to Eq.(\ref{dark})
\begin{equation}
\left[ \matrix{\ket{\Phi_1^{(2)}(t)}\cr\ket{\Phi_2^{(2)}(t)}\cr
\ket{\Phi_3^{(2)}(t)}} \right] ={\sf U}_1(t)^* \left[ \matrix{\ket
{\Phi_1(t)}\cr\ket{\Phi_2(t)}\cr \ket{\Phi_3(t)}} \right] = {\sf U}
_1(t)^*\cdot{\sf U}(t)^* \left[ \matrix{\ket{\psi_1}\cr\ket{\psi_2}\cr
\ket{\psi_3}} \right] .
\end{equation}
The unitary transformation matrix is given by
\begin{eqnarray}
{\sf U}_1(t)= \left[ \matrix{0 & 1 & 0 \cr \enspace\sin\theta_1(t) & 0 &
\quad\cos\theta_1(t) \cr i\cos\theta_1(t) & 0 & -i\sin\theta_1(t) } \right] .
\end{eqnarray}
which has the same form as ${\sf U}(t)$, but here
the dynamical angle $\theta_1(t)$ is defined by
\begin{eqnarray}
\tan\theta_1(t) = \frac{\Omega(t)}{2\dot\theta_0(t)-\widetilde{D}(t)}.
\end{eqnarray}
Denoting the vector of probability amplitudes in these generalized adiabatic
states by ${\bf B}^{(2)}(t)$ we find the relation
\begin{equation}
{\bf B}^{(2)}(t) ={\sf U}_1(t) {\bf B}(t).
\end{equation}
One easily verifies that for the above example more than 95\% of the
population remains in the generalized trapping state $\Phi_3^{(2)}(t)
$. Thus the success of the population transfer in loop STIRAP can be
understood as a rotation of the second-order decoupled state $\Phi_3^{(2)}(t)
$ -- which is an approximate constant of motion -- from the initial to the
target bare atomic state.
\section{Generalized adiabatic basis and generalized trapping states for
STIRAP}
We now return to the case of ordinary STIRAP, i.e. without a detuning pulse
$D$. The formal equivalence of ${\sf W}(t)$ and $\widetilde{{\sf W}}(t)$
suggests an iteration of the procedure introduced in the last section. We
define an $n$th order generalized adiabatic basis by the iteration:
\begin{equation}
\left[ \matrix{\ket{\Phi_1^{(n)}(t)}\cr\ket{\Phi_2^{(n)}(t)}\cr\ket
{\Phi_3^{(n)}(t)}} \right] = {\sf U}_{n-1}(t)^* \left[ \matrix{\ket
{\Phi_1^{(n-1)}(t)}\cr \ket{\Phi_2^{(n-1)}(t)}\cr \ket{\Phi_3^{(n-1)}(t)}}
\right] ={\sf U}_{n-1}(t)^* \cdot{\sf U}_{n-2}(t)^*\cdots{\sf U}_0(t)^*
\left[ \matrix{\ket{\psi_1}\cr\ket{\psi_2}\cr\ket{\psi_3}} \right] .
\end{equation}
Correspondingly we obtain for the vector of probability amplitudes in the
$n$th order basis
\begin{equation}
{\bf B}^{(n)} = {\sf U}_{n-1} {\bf B}^{(n-1)} = {\sf U}_{n-1}\cdot {\sf U}
_{n-2} \cdots {\sf U}_0 {\bf C}\equiv {\sf V}_n {\bf C} \label{B-C}
\end{equation}
where we have dropped the time dependence. The $n$th order transformation
matrix is defined as
\begin{equation}
{\sf U}_n(t) \equiv \left[ \matrix{0 & 1 & 0 \cr \enspace\sin\theta_n(t) & 0
& \quad\cos\theta_n(t) \cr i\cos\theta_n(t) & 0 & -i\sin\theta_n(t) }
\right] ,
\end{equation}
with
\begin{eqnarray}
\sin\theta_0(t)&=&\frac{P(t)}{\Omega_0(t)},\qquad\quad\cos\theta_0(t)\;=
\frac{S(t)}{\Omega_0(t)},\qquad\quad\enspace \Omega_0(t)\; =\sqrt{
P(t)^2+S(t)^2}, \\
\sin\theta_n(t)&=&\frac{\Omega_{n-1}(t)}{\Omega_n(t)},\qquad\cos\theta_n(t)=
\frac{2{\dot\theta}_{n-1}(t)}{\Omega_n(t)},\qquad \Omega_n(t) =\sqrt{
\Omega_{n-1}(t)^2+4{\dot\theta}_{n-1}(t)^2}. \label{thetan}
\end{eqnarray}
The iteration is illustrated in Fig.\ref{iteration}.
\begin{figure}[tbp]
\begin{center}
\leavevmode \epsfxsize=7 true cm
\epsffile{fig5.eps}
\end{center}
\caption{Iterative definition of $n$th order adiabatic basis }
\label{iteration}
\end{figure}
In the $n$th-order basis, the equation of motion then has the form
\begin{equation}
\frac{d}{dt}{\bf B}^{(n)}(t) =-i{\sf W}_n(t) {\bf B}^{(n)}(t),
\end{equation}
with
\begin{eqnarray}
{\sf W}_n(t) &\equiv& \frac{1}{2} \left[ \matrix{ 0 & \Omega_n(t)
\sin\theta_n(t) & 0 \cr \Omega_n(t) \sin\theta_n(t) & 0 &
\Omega_n(t)\cos\theta_n(t) \cr 0 & \Omega_n(t)\cos\theta_n(t) & 0 } \right]
\\
&=& \frac{1}{2} \left[ \matrix{ 0 & \Omega_{n-1}(t) & 0 \cr \Omega_{n-1}(t)
& 0 & 2{\dot\theta}_{n-1}(t) \cr 0 & 2{\dot\theta}_{n-1}(t) & 0 } \right] .
\nonumber
\end{eqnarray}
If $\cos\theta_k(t)$ vanishes, which implies that $\theta_{k-1}$ is
time-independent, the state $\Phi_3^{(k)}$ decouples from the interaction.
In this case exact analytic solutions of the atomic dynamics can be found as
discussed in the next section. The analytic solutions also include cases of
population or coherence transfer. If $\cos\theta_k(t)$ does not vanish but
is small, the corresponding coupling in the evolution matrix can be treated
perturbatively. In such a situation we have a {\it generalized adiabatic
dynamics}.
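The recursion for the angles and generalized Rabi frequencies is straightforward to evaluate on a time grid. The sketch below is ours (finite differences for the time derivatives; function name and interface are our choices); for ordinary matched pulses with proportional envelopes, $\theta_0$ is constant and hence $\theta_1=\pi/2$, i.e. $\Phi_3^{(1)}$ decouples exactly.

```python
import numpy as np

def adiabatic_hierarchy(t, P, S, orders=2):
    """Dynamical angles theta_0 .. theta_orders of the iterated adiabatic basis."""
    theta = np.arctan2(P, S)       # tan(theta_0) = P/S
    Omega = np.hypot(P, S)         # Omega_0
    angles = [theta]
    for _ in range(orders):
        d = 2.0 * np.gradient(theta, t)              # 2*d(theta_{n-1})/dt
        theta, Omega = np.arctan2(Omega, d), np.hypot(Omega, d)
        angles.append(theta)
    return angles
```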
To conclude this section, it should be noted that the iterative
definition of a generalized adiabatic basis is conceptually very similar to
the superadiabatic approach of Berry \cite{super} introduced for two-level
systems.
\section{Generalized matched pulses and analytic solution of atomic dynamics}
If a dynamical angle $\theta_{n-1}$ is a constant, the time-dependent state
$\Phi_3^{(n)}(t)$ is decoupled from the interaction (constant of motion). In
this case the dynamical problem reduces to that of a two-state system
interacting via a {\it real} resonant coherent coupling plus a decoupled
state:
\begin{equation}
\frac{d}{dt} \left[ \matrix{B_1^{(n)}(t)\cr B_2^{(n)}(t)\cr B_3^{(n)}(t)}
\right] =-\frac{i}{2} \left[ \matrix{0 & \Omega_{n-1}(t) &0 \cr
\Omega_{n-1}(t) & 0 & 0 \cr 0 & 0 & 0} \right] \left[ \matrix{B_1^{(n)}(t)
\cr B_2^{(n)}(t)\cr B_3^{(n)}(t)} \right]
\end{equation}
This equation can immediately be solved:
\begin{eqnarray}
B_1^{(n)}(t) &=& B_1^{(n)}(0)\, \cos\phi(t) - i B_2^{(n)}(0)\, \sin\phi(t),
\label{match_sol1} \\
B_2^{(n)}(t) &=& B_2^{(n)}(0)\, \cos\phi(t) - i B_1^{(n)}(0)\, \sin\phi(t),
\label{match_sol2} \\
B_3^{(n)}(t) &=& B_3^{(n)}(0), \label{match_sol3}
\end{eqnarray}
where
\begin{equation}
\phi(t) = \frac{1}{2}\int_0^t\!\! d\tau\, \Omega_{n-1}(\tau).
\end{equation}
In particular, if the atom is initially in the trapping state, it will stay in
that state.
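The analytic two-state solution is easy to check against direct integration. In the sketch below (ours; the choice $\Omega_{n-1}(t)=\mathrm{sech}\,t$ is an arbitrary example) the pulse area is $\pi$, so $\phi(\infty)=\pi/2$ and the populations of the two coupled states are exactly swapped, with the predicted phases.

```python
import numpy as np
from scipy.linalg import expm

Omega = lambda t: 1.0 / np.cosh(t)      # pulse area pi  =>  phi(infinity) = pi/2
sx = np.array([[0.0, 1.0], [1.0, 0.0]])

t = np.linspace(-12.0, 12.0, 12001)
B = np.array([1.0, 0.0], dtype=complex)          # start in B_1^{(n)}
for t0, t1 in zip(t[:-1], t[1:]):
    B = expm(-0.5j * Omega(0.5 * (t0 + t1)) * (t1 - t0) * sx) @ B
```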
For example if $\theta_0$ does not depend on time, the usual dark state
$\Phi_3^{(1)}$ is an exact constant of motion. As can be seen from Eq.(\ref
{theta0}), for $\theta_0$ to be time-independent, Stokes and pump need to be
either cw fields or need to have the same envelope function, i.e. have to be
{\it matched pulses} \cite{matched},
\begin{eqnarray}
S(t) &=& \Omega_0(t) \,\cos\theta_0, \\
P(t) &=& \Omega_0(t) \, \sin\theta_0,
\end{eqnarray}
where $\Omega_0(t)$ can be an arbitrary function of time and
$\theta_0=$const. The
atomic dynamics is trivial in this case. Since
$\Phi_3^{(1)}$ is time-independent, the trapping state is a constant
superposition of the bare atomic states $1$ and $3$.
On the other hand, if some higher-order dynamical angle $\theta_n$ is
constant, the system remains in a generalized trapping state if initially
prepared in it. The projection of this state onto the bare atomic basis is
in general time-dependent, and one can have a substantial rearrangement of atomic level population including
population transfer. If a higher-order dynamical angle is constant we will
call pump and Stokes pulses {\it generalized matched pulses}.
To obtain an explicit condition for generalized matched pulses in terms of
$P(t)$ and $S(t)$ we successively integrate relations (\ref{thetan}). This
leads to the iteration
\begin{eqnarray}
\theta_{k-1}(t) &=& \frac{1}{2}\int_{-\infty}^t\!\!\! dt^\prime\,
\Omega_k(t^\prime)\, \cos\theta_k(t^\prime)+ \theta_k^0,\cr \Omega_{k-1}(t)
&=& \Omega_k(t)\, \sin\theta_k(t), \label{iterate}
\end{eqnarray}
starting with some $\theta_n(t)=\theta_n =$ const.~and $\Omega_n(t)$ as an
arbitrary function of time. Each iteration leads to one constant $\theta_k^0
$, which can be freely chosen. The application of generalized matched pulses
to coherent population transfer will be discussed in the next section.
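As a concrete sketch (ours; envelope and constants are arbitrary illustrative choices), second-order matched pulses follow from one backward step of the iteration: pick a constant $\theta_1$ and an arbitrary positive $\Omega_1(t)$, integrate to get $\theta_0(t)$, and assemble $P$ and $S$. By construction $\Omega_0/(2\dot\theta_0)=\tan\theta_1$ at all times.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

t = np.linspace(-5.0, 5.0, 5001)
theta1 = np.pi / 3                       # free constant
Omega1 = np.exp(-t**2)                   # arbitrary positive envelope
theta0 = 0.5 * np.cos(theta1) * cumulative_trapezoid(Omega1, t, initial=0.0)  # theta_0^0 = 0
Omega0 = Omega1 * np.sin(theta1)
P, S = Omega0 * np.sin(theta0), Omega0 * np.cos(theta0)
```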
As noted before there may be cases, where for some number $n$ the dynamical
angle $\theta_n(t)$ does depend on time but its time-derivative is much
smaller than the corresponding generalized Rabi-frequency $\Omega_n(t)$,
while the same is not true for all $k< n$. In this case the state
$\Phi_3^{(n)}(t)$ is an approximate constant of motion and we have an $n$th
order adiabatic process. The example of loop-STIRAP discussed in the last
section is a realization of a higher-order adiabatic process, which is
non-adiabatic in the first-order basis.
\section{Application of generalized matched pulses to population- and
coherence transfer}
In the following we discuss several examples for a coherent transfer of
population from one non-decaying
state to the other or to the excited state using
generalized matched pulses. We furthermore discuss the possibility to
transfer coherence, for example from the ground state transition to an
optical transition. Since in all cases there exists a generalized
trapping state
which is an exact constant of motion, we can obtain exact analytic results
for the atomic dynamics.
\subsection{Population and coherence transfer with second-order generalized
matched pulses}
\subsubsection{Complete transfer of coherence from a ground-state doublet to
an optical transition}
First we discuss the case when $\Phi_3^{(2)}$ is an exact constant
of motion, i.e. a trapping state, which requires $\theta_1=$const. Furthermore
we assume that the state vector $\Psi$ coincides with this trapping state at
$t=-\infty$. Then the system will remain in the trapping state at later
times. It is then clear from Fig.\ref{iteration} that $\Psi$ is a
time-independent superposition of states $\Phi^{(1)}_1$ and $\Phi^{(1)}_3$ and
thus has at all times a constant probability amplitude of the bare atomic state 2. In
fact from
\begin{equation}
{\bf C}={\sf V}_2^{-1} {\bf B}^{(2)} = {\sf V}_2^{-1} \left[ \matrix{0\cr
0\cr 1} \right]
\end{equation}
we find
\begin{eqnarray}
\left[ \matrix{C_1(t)\cr C_2(t)\cr C_3(t)} \right] =-i\cos\theta_1 \left[
\matrix{\enspace i\tan\theta_1\, \cos\theta_0(t) \cr 1\cr -i \tan\theta_1\,
\sin\theta_0(t) } \right] .
\end{eqnarray}
We now identify state 2 with a lower, i.e. non-decaying, level and state 3
with an excited state. The pump pulse $P(t)$ then couples two ground states,
which could be realized for example by a magnetic coupling. The Stokes
pulse, which couples states 2 and 3, is considered an optical pulse. Due to
the finite and constant admixture of state 2 to the trapping state,
second-order generalized matched pulses are best suited to transfer
coherence for example from the 1-2 transition to the 3-2 transition.
We now want to construct pulses that would lead to the desired complete
coherence transfer. To achieve this we have to satisfy the initial and final
conditions
\begin{eqnarray}
\theta _{0}(-\infty ) &=&0, \label{cond11} \\
\theta _{0}(+\infty ) &=&\pi /2. \label{cond12}
\end{eqnarray}
On the other hand, the iteration equation (\ref{iterate}) requires for
second-order matched pulses that
\begin{eqnarray}
\theta _{0}(t) &=&\frac{1}{2}\int_{-\infty }^{t}\!\!\!dt^{\prime }\,\Omega
_{1}(t^{\prime })\cos \theta _{1}+\theta _{0}^{0}, \label{theta0_cond} \\
\Omega _{0}(t) &=&\Omega _{1}(t)\sin \theta _{1},
\end{eqnarray}
where $\theta _{1}$ and $\theta _{0}^{0}$ are arbitrary constants and
$\Omega _{1}(t)$ an arbitrary positive function of time. To fulfill the
initial condition (\ref{cond11}) we set $\theta _{0}^{0}=0$. In order to
satisfy the final condition (\ref{cond12}) we then have to adjust the total
pulse area (see Eq.(\ref{theta0_cond}))
\begin{equation}
A_{0}=\int_{-\infty }^{\infty }\!\!\!dt\,\Omega _{0}(t)=\pi \tan \theta _{1}.
\label{area1}
\end{equation}
Thus pump and Stokes pulses have the form
\begin{eqnarray}
P(t) &=&\Omega _{0}(t)\sin \biggl[\frac{\pi A\left( t\right) }{2A_{0}}
\biggr], \\
S(t) &=&\Omega _{0}(t)\cos \biggl[\frac{\pi A\left( t\right) }{2A_{0}}
\biggr].
\end{eqnarray}
with
\begin{equation}
A\left( t\right) =\int_{-\infty }^{t}\!\!\!dt^{\prime }\,\Omega
_{0}(t^{\prime }).
\end{equation}
With this choice an initial coherent superposition of states 2 and 1
\begin{equation}
\Psi (-\infty )=-i\cos \theta _{1}\,\psi _{2}+\sin \theta _{1}\,\psi _{1}
\label{initial11}
\end{equation}
can be completely mapped into a coherent superposition of states 2 and 3
\begin{equation}
\Psi (+\infty )=-i\cos \theta _{1}\,\psi _{2}-\sin \theta _{1}\,\psi _{3}.
\label{final11}
\end{equation}
In order to transfer a given ground-state coherence to an optical transition
the pulse area $A_{0}$ should be chosen according to (\ref{area1}),
$A_{0}=\pi |C_{1}(-\infty )/C_{2}(-\infty )|$. The shape of $\Omega _0(t)$ is
otherwise arbitrary. It should be noted that Eq.(\ref{initial11}) requires a
certain fixed phase of the initial coherent superposition. The phase of the
pump pulse, which is included in the definition of $\psi _{1}$ (cf. Sec.II),
may need adjustment to satisfy this condition.
In Fig.\ref{second_order_matched} we have shown the populations of the bare
atomic states for the example $\Omega_0(t) =\sqrt{\pi} \exp(-t^2)$ ($A=\pi$)
and $\Psi(-\infty)=1/\sqrt{2} \bigl(\psi_1 -i \psi_2\bigr)$ from a numerical
solution of the Schr\"odinger equation. One clearly sees that all population
from state 1 is transferred to state 3. This transfer happens without
diabatic losses despite the fact that $A=\pi$ and thus the usual
adiabaticity condition is only poorly fulfilled.
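This example can be re-run in a few lines (our sketch; the grid is our choice). For $\Omega_0(t)=\sqrt{\pi}e^{-t^2}$ the total area is $A_0=\pi$, so $\tan\theta_1=1$ and $\theta_0(t)=\frac{\pi}{4}(1+\mathrm{erf}\,t)$ in closed form:

```python
import numpy as np
from scipy.linalg import expm
from scipy.special import erf

Omega0 = lambda t: np.sqrt(np.pi) * np.exp(-t**2)        # total area A_0 = pi
theta0 = lambda t: 0.25 * np.pi * (1.0 + erf(t))         # = pi*A(t)/(2*A_0)
P = lambda t: Omega0(t) * np.sin(theta0(t))
S = lambda t: Omega0(t) * np.cos(theta0(t))

t = np.linspace(-6.0, 6.0, 12001)
C = np.array([1.0, -1.0j, 0.0], dtype=complex) / np.sqrt(2.0)   # Psi(-infinity)
pop2 = []
for t0, t1 in zip(t[:-1], t[1:]):
    tm = 0.5 * (t0 + t1)
    W = 0.5 * np.array([[0.0, P(tm), 0.0],
                        [P(tm), 0.0, S(tm)],
                        [0.0, S(tm), 0.0]], dtype=complex)
    C = expm(-1j * W * (t1 - t0)) @ C
    pop2.append(abs(C[1])**2)
```

The population of state 2 should stay at $1/2$ throughout, while state 1 is emptied completely into state 3, without any diabatic losses.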
\begin{figure}[tbp]
\begin{center}
\leavevmode \epsfxsize=7 true cm
\epsffile{fig6.eps}
\end{center}
\caption{Example of complete coherence transfer from $1-2$ to $3-2$ with
second-order generalized matched pulses. Plotted are the populations of bare
atomic states for a pair of pulses as shown in the inset. One recognizes
constant population in state 2 and complete transfer of population in 1 to 3.
}
\label{second_order_matched}
\end{figure}
The process discussed here may have some interesting applications, since it
makes it possible to transfer coherence from a robust and long-lived ground-state
transition to an optically accessible transition.
The population transfer from 1 to 3 with finite constant state amplitude in
2 discussed here coincides with the solution found by Malinovsky and Tannor
\cite{Malinovsky} with numerical optimization techniques. Assuming a finite
constant amplitude in state 2, these authors numerically optimized the peak
Rabi-frequency (which in this case is the only remaining free parameter) to
achieve maximum population transfer. They found that in order to maximize
the final amount of population in state 3, the peak Rabi-frequency has to be
larger than a certain critical value. This can very easily be verified from
the generalized matched-pulse solutions (\ref{area1},\ref{final11}):
\begin{eqnarray}
|C_3(\infty)|^2 &=& \sin^2\theta_1=\frac{A^2}{\pi^2+A^2} \\
|C_2(\infty)|^2 &=& \cos^2\theta_1=\frac{\pi^2}{\pi^2+A^2}.
\end{eqnarray}
In the limit $\theta_1\to\pi/2$, which implies $A\to\infty$, the admixture
of level 2 vanishes and we essentially have population transfer from state 1
to state 3.
\subsubsection{Population transfer from 1 to 3 and non-exponential diabatic
losses}
We have seen in the last subsection that second-order matched pulses can be
used to effectively transfer population from state 1 to 3, if there is an
initial admixture of the excited state. The population of this admixture is
inversely proportional to the square of the pulse area $A$. Therefore one could expect
a good transfer for large $A$ also if all population is initially in state
1. In this case there is some finite amount of population which is not
trapped in the generalized dark state $\Phi _{3}^{(2)}$. Clearly in order to
achieve maximum population transfer, pump and Stokes pulses should be in
counterintuitive order and hence conditions (\ref{cond11}) and (\ref{cond12
}) should be fulfilled. Since the pulses are assumed to be second-order
matched pulses, the dynamical problem with the initial condition
\begin{equation}
\left[ \matrix{B_1^{(2)}(-\infty)\cr B_2^{(2)}(-\infty)\cr
B_3^{(2)}(-\infty)}\right] ={\sf U}_{1}\cdot {\sf U}_{0}\,\left[
\matrix{1\cr 0\cr 0}\right] =\left[ \matrix{0\cr i\cos\theta_1\cr
\sin\theta_1}\right]
\end{equation}
can easily be solved (see Eqs.(\ref{match_sol1}-\ref{match_sol3})). From Eq.
(\ref{thetan}) we find $\Omega _{1}(t)=\Omega _{0}(t)/\sin \theta _{1}$. Thus
\begin{eqnarray}
B_{1}^{(2)}(\infty ) &=&\frac{\pi }{\sqrt{\pi ^{2}+A^{2}}}\,\sin \left[
\frac{1}{2}\sqrt{\pi ^{2}+A^{2}}\right] , \\
B_{2}^{(2)}(\infty ) &=&i\frac{\pi }{\sqrt{\pi ^{2}+A^{2}}}\,\cos \left[
\frac{1}{2}\sqrt{\pi ^{2}+A^{2}}\right] , \\
B_{3}^{(2)}(\infty ) &=&\frac{A}{\sqrt{\pi ^{2}+A^{2}}},
\end{eqnarray}
where $A$ is the total pulse area defined in (\ref{area1}). From this we
find the asymptotic populations of the bare atomic states
\begin{eqnarray}
\Bigl|C_{1}(\infty )\Bigr|^{2} &=&\frac{\pi ^{2}}{\pi ^{2}+A^{2}}\,\sin
^{2}\left( \frac{1}{2}\sqrt{\pi ^{2}+A^{2}}\right) , \\
\Bigl|C_{2}(\infty )\Bigr|^{2} &=&\frac{4\pi ^{2}A^{2}}{\bigl(\pi ^{2}+A^{2
\bigr)^{2}}\,\sin ^{4}\left( \frac{1}{4}\sqrt{\pi ^{2}+A^{2}}\right) , \\
\Bigl|C_{3}(\infty )\Bigr|^{2} &=&\frac{1}{\bigl(\pi ^{2}+A^{2}\bigr)^{2}}
\left[ A^{2}+\pi ^{2}\cos \left( \frac{1}{2}\sqrt{\pi ^{2}+A^{2}}\right)
\right] ^{2}.
\end{eqnarray}
Thus the diabatic losses scale in general with $1/A^{2}$, i.e.
non-exponentially with $A$. Furthermore for
\begin{equation}
\frac{1}{2}\sqrt{\pi ^{2}+A^{2}}=2n\pi \qquad {\rm or}\qquad A=\pi \sqrt{
16n^{2}-1}
\end{equation}
with $n=1,2,\dots $ the population transfer is complete. We show in Fig. \ref
{second_order_success} the final population in state 3 as a function of
$A/\pi $.
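Both the closed-form final populations and the complete-transfer condition can be verified numerically. In the sketch below (ours) we use a Gaussian $\Omega_0$ scaled to total area $A$, for which $\theta_0(t)=\frac{\pi}{4}(1+\mathrm{erf}\,t)$ and $\tan\theta_1=A/\pi$; grid and tolerances are our choices.

```python
import numpy as np
from scipy.linalg import expm
from scipy.special import erf

def final_pop3(A, t):
    """Second-order matched pulses of total area A, all population initially in state 1."""
    Omega0 = lambda x: (A / np.sqrt(np.pi)) * np.exp(-x**2)
    theta0 = lambda x: 0.25 * np.pi * (1.0 + erf(x))     # runs from 0 to pi/2
    C = np.array([1.0, 0.0, 0.0], dtype=complex)
    for t0, t1 in zip(t[:-1], t[1:]):
        tm = 0.5 * (t0 + t1)
        P, S = Omega0(tm) * np.sin(theta0(tm)), Omega0(tm) * np.cos(theta0(tm))
        W = 0.5 * np.array([[0.0, P, 0.0], [P, 0.0, S], [0.0, S, 0.0]], dtype=complex)
        C = expm(-1j * W * (t1 - t0)) @ C
    return abs(C[2])**2

def predicted(A):
    """|C_3(inf)|^2 from the closed-form result of this section."""
    r2 = np.pi**2 + A**2
    return (A**2 + np.pi**2 * np.cos(0.5 * np.sqrt(r2)))**2 / r2**2

t = np.linspace(-6.0, 6.0, 8001)
```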
\begin{figure}[tbp]
\begin{center}
\leavevmode \epsfxsize=7 true cm
\epsffile{fig7.eps}
\end{center}
\caption{Final population in state 3 as a function of total pulse
area $A/\pi$ for population transfer from state 1 with second-order
matched pulses. For $A/\pi=\sqrt{16 n^2-1}$ the transfer is complete
(100.00\%)}
\label{second_order_success}
\end{figure}
A special case of the population transfer with second-order matched pulses
discussed in the present section is the analytical model discussed by
Vitanov and Stenholm in \cite{Vitanov}. These authors considered a pulse
sequence with
\begin{equation}
\Omega_0(t)=\frac{\alpha}{2 T}{\rm sech}^2\left(\frac{t}{T}\right),\qquad
\theta_0(t)=\frac{\pi}{4}\left[{\rm tanh}\left(\frac{t}{T}\right)+1\right]
\end{equation}
and thus $\tan\theta_1 = \Omega_0(t)/2{\dot\theta}_0(t) =\alpha/\pi=$ const.
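The constancy of $\theta_1$ for this pulse sequence can be confirmed by a quick finite-difference check (our sketch; the values of $\alpha$ and $T$ are arbitrary):

```python
import numpy as np

# Vitanov-Stenholm sequence: Omega_0 = (alpha/2T) sech^2(t/T),
# theta_0 = (pi/4)(tanh(t/T) + 1)  =>  Omega_0 / (2 d(theta_0)/dt) = alpha/pi.
alpha, T = 5.0, 1.0
t = np.linspace(-5.0, 5.0, 5001)
Omega0 = alpha / (2.0 * T) / np.cosh(t / T)**2
theta0 = 0.25 * np.pi * (np.tanh(t / T) + 1.0)
ratio = Omega0[1:-1] / (2.0 * np.gradient(theta0, t)[1:-1])
```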
\subsection{Population transfer via large-area third-order matched pulses}
Next we analyze the possibility of population transfer when $\Phi
_{3}^{\left( 3\right) }$ is exactly trapped. In order for $\Phi _{3}^{(3)}$
to be a constant of motion, or equivalently to have third-order matched
pulses, we require $\theta _{2}={\rm const}$. We assume again that the
system state vector $\Psi $ is initially in the trapping state, in which it
will remain for all
times. In order to realise population transfer from state 1 to state 2 or 3
in this case, we furthermore must satisfy the initial conditions
\begin{equation}
C_{1}\left( -\infty \right) =1,\quad C_{2}\left( -\infty \right) =0,\quad
C_{3}\left( -\infty \right) =0. \label{atominit}
\end{equation}
This can be translated into a condition for the initial values of the
dynamical phases $\theta _{0}$ and $\theta _{1}$ using Eq.(\ref{B-C}). In
fact from
\begin{equation}
{\bf C}={\sf V}_{3}^{-1}{\bf B}^{(3)}={\sf V}_{3}^{-1}\left[ \matrix{0\cr
0\cr 1}\right]
\end{equation}
we find
\begin{eqnarray}
C_{1}(t) &=&i\left( \cos \theta _{0}(t)\sin \theta _{1}(t)\sin \theta
_{2}-\sin \theta _{0}(t)\cos \theta _{2}\right) \label{evolut} \\
C_{2}(t) &=&-i\cos \theta _{1}(t)\sin \theta _{2} \\
C_{3}(t) &=&-i\left( \cos \theta _{0}(t)\cos \theta _{2}+\sin \theta
_{0}(t)\sin \theta _{1}(t)\sin \theta _{2}\right) .
\end{eqnarray}
The initial conditions are fulfilled when
\begin{eqnarray}
\cos \theta _{0}\left( -\infty \right) \sin \theta _{1}\left( -\infty
\right) \sin \theta _{2}-\cos \theta _{2}\sin \theta _{0}\left( -\infty
\right) &=&1, \\
\sin \theta _{2}\cos \theta _{1}\left( -\infty \right) &=&0, \\
\cos \theta _{0}\left( -\infty \right) \cos \theta _{2}+\sin \theta _{2}\sin
\theta _{0}\left( -\infty \right) \sin \theta _{1}\left( -\infty \right)
&=&0.
\end{eqnarray}
The result is
\begin{equation}
\theta _{1}\left( -\infty \right) =\frac{\pi }{2},\quad \theta _{0}\left(
-\infty \right) =\theta _{2}+\frac{\pi }{2}. \label{initcond}
\end{equation}
From Eq.(\ref{thetan}) we find the following differential equation
\begin{equation}
2\frac{d\theta _{1}(t)}{dt}=\alpha \,\Omega _{1}(t),\qquad {\rm where}\qquad
\alpha =(\tan \theta _{2})^{-1}={\rm const.}
\end{equation}
Introducing
\begin{equation}
x(t)=\tan \theta _{1}(t)
\end{equation}
we find furthermore
\begin{eqnarray}
\frac{2x\,\dot{x}}{\left( 1+x^{2}\right) ^{3/2}} &=&\alpha \Omega _{0}(t), \\
2\frac{d\theta _{0}(t)}{dt} &=&\frac{\Omega _{0}(t)}{x(t)}.
\end{eqnarray}
Integrating these equations and taking into account the initial conditions
(\ref{initcond}) yields
\begin{eqnarray}
\tan \theta _{1}(t) &=&x(t)=\sqrt{\displaystyle\frac{1}{f^{2}(t)}-1}, \\
\theta _{0}(t) &=&\theta _{2}+\frac{\pi }{2}+\frac{1}{\alpha }\Bigl[1-\sqrt{
1-f^{2}(t)}\Bigr],
\end{eqnarray}
where
\begin{equation}
f(t)=\frac{\alpha }{2}\int_{-\infty }^{t}dt^{\prime }\,\Omega _{0}\left(
t^{\prime }\right) .
\end{equation}
$\Omega _{0}(t)$ is an arbitrary smooth function which we assume to vanish
at infinity, $\Omega _{0}\left( \pm \infty \right) =0$. We still have one
free constant $\alpha $, which we can choose. As we will show now, we can
choose $\alpha $ such that the efficiency of the transfer from state $1$ to
states $3$ or $2$ approaches unity.
\subsubsection{Population transfer from ground state to state 3}
In order to transfer the initial population from state $1$ to the target
state $3$, it is necessary to satisfy the final conditions
\begin{equation}
\theta _{1}\left( +\infty \right) =\frac{\pi }{2},\quad \theta _{0}\left(
+\infty \right) =\theta _{2}
\end{equation}
which implies
\begin{eqnarray}
\tan \theta _{1}(\infty ) &=&\sqrt{\frac{4}{\alpha ^{2}A^{2}}-1}\to \infty ,
\\
\theta _{0}(\infty ) &=&\theta _{2}+\frac{\pi }{2}+\frac{1}{\alpha }\Bigl[1-
\sqrt{1-\alpha ^{2}A^{2}/4}\Bigr]=\theta _{2},
\end{eqnarray}
where
\begin{equation}
A=\int_{-\infty }^{\infty }\!\!dt\,\Omega _{0}(t)
\end{equation}
is the pulse area. From these conditions one finds the constraint
\begin{equation}
\alpha =-\frac{4\pi }{\pi ^{2}+A^{2}},\qquad A\gg \pi . \label{alfa1}
\end{equation}
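The constraint (\ref{alfa1}) can be checked numerically against the analytic solution for $\theta_0$ and $\theta_1$ given above. In the sketch below (the value $A=20\pi$ is chosen only for illustration) $\theta_0(\infty)=\theta_2$ holds to machine precision, while $\theta_1(\infty)$ approaches $\pi/2$ only in the large-area limit:

```python
import numpy as np

A = 20 * np.pi                              # large pulse area, A >> pi
alpha = -4 * np.pi / (np.pi**2 + A**2)      # constraint (alfa1)

f_inf = alpha * A / 2                       # f(+infinity) = alpha*A/2
# theta_0(infinity) - theta_2 from the analytic solution:
dtheta0 = np.pi / 2 + (1 / alpha) * (1 - np.sqrt(1 - f_inf**2))
print(dtheta0)                              # ~ 0: theta_0(inf) = theta_2

# theta_1(infinity) approaches pi/2 only for A >> pi:
theta1_inf = np.arctan(np.sqrt(4 / (alpha**2 * A**2) - 1))
print(np.pi / 2 - theta1_inf)               # small residual for large A
```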
The diabatic losses in the limit $A\gg 1$ are
\begin{equation}
1-\Bigl|C_{3}(\infty )\Bigr|^{2}\approx \frac{4\pi ^{2}}{2\pi ^{2}+A^{2}}
\end{equation}
and thus in the adiabatic limit we have essentially complete population
transfer from state $1$ to state $3$.
Fig.\ref{third_order_lower} shows an example of population transfer with
third-order matched pulses. Here $\Omega _{0}(t)=A/2\,{\rm sech}^{2}(t)$ and
$A=20\pi $. Pump and Stokes pulses are shown in the upper frame and the
population histories in the lower one. We see that the amplitudes of the
Stokes and pump pulses are unequal. As in ordinary STIRAP the population of
the state $2$ is small during the evolution.
\begin{figure}[tbp]
\begin{center}
\leavevmode \epsfxsize=7 true cm
\epsffile{fig8.eps}
\end{center}
\caption{Population transfer from $1$ to $3$ with third-order matched
pulses. Upper frame shows pulses, lower frame population dynamics. Here
$\Omega_0(t)=A/2\, {\rm sech}^2(t)$ and $A/\pi=20$.}
\label{third_order_lower}
\end{figure}
\subsubsection{Population transfer from ground state to the state 2}
In order to transfer the initial population from state $1$ to state $2$, it
is necessary to satisfy the conditions
\begin{equation} \label{cond2}
\theta _1\left( +\infty \right) =0 , \quad \theta _2=\frac{ \pi }{2}.
\end{equation}
In this case we have to fix $\alpha $ to be
\begin{equation}
\alpha =\frac{ 2}{A},\qquad A \gg 1.
\end{equation}
Fig.\ref{third_order_upper} shows the pulses $P(t)$ and $S(t)$ and the
evolution of the atomic populations. Here $\Omega _{0}(t)=A/2\,{\rm sech}
^{2}(t)$ and $A=16$. We see that the Stokes and pump pulses are in a
counterintuitive sequence. At first the atomic population oscillates between
state $1$ and $3$, but as the pulse sequence proceeds the whole population
is transferred into state $2$. In other words, during the full pulse sequence
there occur several STIRAP transitions, but due to the large nonadiabatic
coupling the population accumulates in state $2$.
\begin{figure}[tbp]
\begin{center}
\leavevmode \epsfxsize=7 true cm
\epsffile{fig9.eps}
\end{center}
\caption{Population transfer from $1$ to $2$ with third-order matched
pulses. Upper frame shows pulses, lower frame population dynamics. Here
$\Omega_0(t)=A/2\, {\rm sech}^2(t)$ and $A=16$.}
\label{third_order_upper}
\end{figure}
\section{Summary}
We have introduced the concept of generalized dressed states in order to
explain the success of population transfer in stimulated Raman adiabatic
passage with a loop coupling. If the interaction of a three-level system
with a pair of time-dependent pump and Stokes pulses is described in terms
of the so-called dark and bright states instead of the instantaneous
eigenstates of the Hamiltonian, the original three-state--two-field system
is transformed into a system of three states coupled by two effective
interactions \cite{Cohen_Tannoudji,Fleischhauer96}. This allows for an
iteration procedure leading to higher-order adiabatic basis sets \cite{super}.
We showed that in the case of loop-STIRAP there is a higher-order trapping
state, which is an approximate constant of motion even when the usual
adiabaticity condition is not fulfilled. This state adiabatically rotates
from the initial to the target quantum state of the atom and thus leads to
efficient population transfer, however, at the expense of placing some
population into the decaying atomic state.
The concept of generalized trapping states allows the construction of pulse
sequences which lead to an optimum population or coherence transfer even for
small pulse areas, and allows for analytic solutions of the atomic dynamics.
If pump
and Stokes pulses fulfill certain conditions (so-called generalized matched
pulses) the effective $3\times 3$ coupling matrix factorizes at a specific
point of the iteration. The trapping state of the corresponding $n$th-order
adiabatic basis is then an exact constant of motion. In this case the atomic
dynamics reduces to a two-level problem with a real coupling which can be
solved analytically.
For ordinary matched pulses, i.e. if pump and Stokes have the same shape,
the atomic dynamics is rather limited. The corresponding dark state is a
constant superposition of states $1$ and $3$. In the case of generalized
matched pulses, however, the trapping state has a time-dependent overlap
with the bare atomic states and thus population or coherence transfer is
possible. We have discussed, with specific examples, population transfer with
second- and third-order matched pulses. We found that for certain values of
the pulse areas complete population or coherence transfer is possible. In
the general case the diabatic losses scale non-exponentially with the
inverse pulse area.
\section*{Acknowledgements}
The work of RU is supported by the Alexander von Humboldt Foundation. BWS
thanks the Alexander von Humboldt Stiftung for a Research Award; his work is
supported in part under the auspices of the U.S. Department of Energy at
Lawrence Livermore National Laboratory under contract W-7405-Eng-48. Partial
support by the EU Network ERB-CHR-XCT-94-0603 is also acknowledged.
\frenchspacing
\section{INTRODUCTION}
\label{sec:intro}
Since first introduced by Shifman, Vainshtein and Zakharov~\cite{SVZ},
the QCD sum-rule method has been widely used to study the properties of
hadrons~\cite{qsr}.
A QCD sum rule is a framework which connects a physical parameter to the
parameters of QCD. In this framework, a correlation function is
introduced in terms of interpolating fields, which are
constructed from quark and gluon fields.
Then, the correlation function,
on the one hand, is calculated by Wilson's
operator product expansion (OPE) and, on the other hand, its phenomenological
``ansatz'' is constructed. A physical quantity of
interest is extracted by matching the two descriptions in the
deep Euclidean region ($q^2 \rightarrow - \infty$) via the dispersion relation.
The extracted value should therefore be independent of the chosen ansatz in
order to be physically meaningful.
The two-point correlation function with pion,
\begin{eqnarray}
\Pi (q, p) = i \int d^4 x e^{i q \cdot x} \langle 0 | T[J_N (x)
{\bar J}_N (0)]| \pi (p) \rangle \ ,
\label{two}
\end{eqnarray}
is often used to calculate the pion-nucleon coupling, $g_{\pi N}$, in QCD sum
rules~\cite{qsr,hat,krippa}.
Reinders, Rubinstein and Yazaki~\cite{qsr} calculated
$g_{\pi N}$ by retaining only the first nonperturbative term in
the OPE. Later Shiomi and Hatsuda (SH)~\cite{hat} improved
the calculation by including higher order terms in the OPE. SH considered
Eq.~(\ref{two})
and evaluated the OPE in the soft-pion limit ($p_\mu \rightarrow 0$).
More recently, Birse and Krippa (BK)~\cite{krippa} pointed out that
the use of the soft-pion limit does not constitute an independent sum
rule from the nucleon sum rule because in the limit the correlation function
is just a chiral rotation of the nucleon correlation function,
\begin{eqnarray}
\Pi (q) = i \int d^4 x e^{i q \cdot x} \langle 0 | T[J_N (x)
{\bar J}_N (0)] | 0 \rangle \ .
\label{ntwo}
\end{eqnarray}
Therefore, BK considered the sum rule beyond the soft-pion limit.
However, as we will discuss below,
there seem to be mistakes in their calculation
which can invalidate their conclusions.
Thus, it is important to redo their calculation.
In a recent letter~\cite{hung}, we have pointed out that
the previous calculations of the pion-nucleon coupling using
Eq.~(\ref{two}) have dependence on how
one models the phenomenological side; either using the
pseudoscalar (PS) or the
pseudovector (PV) coupling scheme. The two coupling schemes
are equivalent when the participating nucleons are on-shell, but
in general they are not when the nucleons are off-shell.
Since, in QCD sum rules, on-shell properties of a particle are
extracted from the far off-shell point, the extracted $g_{\pi N}$ therefore
could be coupling-scheme dependent.
Going beyond the
soft-pion limit is found to be also natural in obtaining
$g_{\pi N}$ independent of the PS and PV coupling schemes.
In fact, we have proposed that, beyond the
soft-pion limit, there are three
distinct Dirac structures,
(1) $i \gamma_5 \not\!p$,
(2) $i \gamma_5$, (3) $\gamma_5 \sigma_{\mu \nu} {q^\mu p^\nu}$,
each of which can in principle be used to calculate
$g_{\pi N}$. The third structure was found to have the common
double pole structure in the phenomenological
side, independent of the PS and PV
coupling schemes. By studying
this structure, we obtained the coupling close to its empirical
value and relatively stable against the uncertainties from QCD parameters.
Then we ask: can we get similarly stable results
from the sum rules constructed from the other Dirac structures?
If not, what are the
reasons for the differences?
In this work, we will try to answer these questions
by studying these three sum rules and
investigating the reliability of each sum rule.
QCD sum rules could depend on
a specific Dirac structure considered.
This aspect was suggested by Jin and Tang~\cite{jin2}
in their study of baryon sum rules.
They found that the chiral odd sum rule is more reliable due
to the partial cancellation of the positive and negative-parity
excited baryons in the continuum. Similarly here
we note that the structure (1) has different chirality from the other
two. Therefore it will be interesting to
look into these sum rules more closely and see if
similar cancellation occurs for certain sum rules.
The paper is organized as follows. In Section~\ref{sec:qcdsr},
we construct three sum rules from the three different Dirac structures.
The spectral density for the phenomenological side is constructed
from the double pole, the unknown single pole and the
continuum modeled by a step function.
We motivate this phenomenological spectral density
in Section~\ref{sec:effective}
by using
some effective Lagrangians for the
transitions, $N \rightarrow N^*$ and $N^* \rightarrow N^*$.
In Section~\ref{sec:analysis}, we analyze each sum rule and
try to understand the differences from the formalism constructed
in Section~\ref{sec:effective}. A summary is given in
Section~\ref{sec:summary}.
\section{QCD sum rules for the two-point correlation function}
\label{sec:qcdsr}
In this section, we formulate three different sum rules for the two-point
correlation function with pion beyond the soft-pion limit.
For technical simplicity, we consider the correlation function with
charged pion,
\begin{eqnarray}
\Pi (q,p) = i \int d^4 x e^{i q \cdot x} \langle 0 | T[J_p (x)
{\bar J}_n (0)]| \pi^+ (p) \rangle\ .
\label{two2}
\end{eqnarray}
Here $J_p$ is the proton interpolating field suggested by Ioffe~\cite{ioffe1},
\begin{eqnarray}
J_p = \epsilon_{abc} [ u_a^T C \gamma_\mu u_b ] \gamma_5 \gamma^\mu d_c
\end{eqnarray}
and the neutron interpolating field $J_n$ is obtained by replacing
$(u,d) \rightarrow (d,u)$.
In the OPE, we only keep the diquark component of the
pion wave function and use the vacuum saturation hypothesis
to factor out
higher dimensional operators in terms of the pion wave function and the
vacuum expectation value.
The calculation of the correlator, Eq.~(\ref{two2}), in the coordinate
space contains the following diquark component of the pion wave function,
\begin{eqnarray}
D^{\alpha\beta}_{a a'} \equiv
\langle 0 | u^\alpha_a (x) {\bar d}^\beta_{a'} (0) | \pi^+ (p) \rangle\ .
\end{eqnarray}
Here, $\alpha$ and $\beta$ are Dirac indices, $a$ and $a'$ are
color indices.
The other quarks are contracted to form quark propagators.
This diquark component can be written in terms of three
Dirac structures,
\begin{eqnarray}
D^{\alpha\beta}_{a a'} = && {\delta_{a a'} \over 12}
(\gamma^\mu \gamma_5)^{\alpha \beta}
\langle 0 |
{\bar d} (0) \gamma_\mu \gamma_5 u (x) | \pi^+ (p) \rangle\
+ {\delta_{a a'} \over 12 } (i \gamma_5)^{\alpha \beta}
\langle 0 |
{\bar d}(0) i \gamma_5 u (x) | \pi^+ (p) \rangle\nonumber \\
&& - {\delta_{a a'} \over 24} (\gamma_5 \sigma^{\mu\nu})^{\alpha\beta}
\langle 0 |
{\bar d}(0) \gamma_5 \sigma_{\mu\nu} u (x) | \pi^+ (p) \rangle\ .
\label{dd}
\end{eqnarray}
Each matrix element associated with each Dirac
structure can be written in terms of pion wave
function whose first few moments are relatively well known~\cite{bely}.
We will come back to the second matrix element later. For the other two
elements, we need only the normalization of the pion wave functions since
we are doing the calculation up to the first order in $p_\mu$.
In fact, to leading order in the pion momentum,
the first and third matrix elements are given as~\cite{bely},
\begin{eqnarray}
\langle 0 | {\bar d} (0) \gamma_\mu \gamma_5 u (x) | \pi^+ (p) \rangle\
&=& i \sqrt{2} f_\pi p_\mu + {\rm twist~4~term}\ , \label{d1} \\
\langle 0 |{\bar d}(0) \gamma_5 \sigma_{\mu\nu} u (x) | \pi^+ (p) \rangle\
&=&i \sqrt{2} (p_\mu x_\nu - p_\nu x_\mu)
{f_\pi m_\pi^2 \over 6 (m_u + m_d)}\ .
\label{d3}
\end{eqnarray}
Here we have suppressed terms of higher order in the pion momentum.
The factor $\sqrt{2}$ is just an isospin factor. The
twist-4 term in Eq.~(\ref{d1}) comes from the second derivative
term in the short-distance expansion of the LHS.
Note that in Eq.~(\ref{d3})
the factor $f_\pi m_\pi^2 / (m_u + m_d)$ can
be written as $-\langle {\bar q} q \rangle / f_\pi$ by making use of
Gell-Mann$-$Oakes$-$Renner relation.
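As a numerical aside (the parameter values below are typical textbook numbers, not taken from this paper), the Gell-Mann$-$Oakes$-$Renner rewriting used here holds at the few-percent level:

```python
# Sanity check of f_pi m_pi^2 / (m_u + m_d) = -<qbar q> / f_pi,
# using typical (assumed) values in MeV units:
f_pi = 93.0          # pion decay constant, f_pi ~ 93 MeV convention
m_pi = 138.0         # pion mass
m_u_plus_m_d = 11.0  # sum of current-quark masses
qbar_q = -245.0**3   # quark condensate, MeV^3

lhs = f_pi * m_pi**2 / m_u_plus_m_d
rhs = -qbar_q / f_pi
print(lhs / rhs)     # close to 1: the two forms agree to a few percent
```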
Although the operator looks gauge
dependent, it is understood that the fixed point gauge is used throughout
and the final result is gauge independent.
It is then interesting to note that
the LHS of Eq.~(\ref{d3}) can also be expanded in $x$ such
that the matrix element that contributes is effectively one with higher
dimension,
\begin{eqnarray}
\langle 0 |{\bar d}(0) \gamma_5 \sigma_{\mu\nu} D_\alpha
u (0) | \pi^+ (p) \rangle
=i \sqrt{2} (p_\mu g_{\alpha \nu} - p_\nu g_{\alpha \mu})
{f_\pi m_\pi^2 \over 6 (m_u + m_d)}\ .
\label{su}
\end{eqnarray}
It is now straightforward to calculate the OPE. For the
$i\gamma_5\not\!p$ structure, we obtain
\begin{eqnarray}
\sqrt{2} f_\pi \left [ {q^2 {\rm ln} (-q^2) \over 2 \pi^2 }+
{\delta^2 {\rm ln} (-q^2) \over 2 \pi^2} +
{\left \langle {\alpha_s \over \pi} {\cal G}^2
\right \rangle \over 12 q^2} +
{2 \langle {\bar q} q \rangle^2 \over 9 f_\pi^2 q^2} \right ]\ .
\label{bkope}
\end{eqnarray}
The first three terms are obtained by taking the second term in
Eq.~(\ref{dd})\footnote{Note that the second term in Eq.~(\ref{bkope}) has
slightly different coefficient from BK~\cite{krippa}.
Ref.~\cite{krippa} has the factor 5/9 instead of our factor 1/2. The
difference however is small.}, while
the fourth term is obtained by taking the third term in Eq.(\ref{dd})
and replacing one quark propagator with
the quark condensate. The fourth term was not taken into account
in the sum rule studied by BK~\cite{krippa} but
its magnitude is about 4 times larger than the third term. So there is
no reason to neglect the fourth term while keeping the third term.
The second term comes from the twist-4 element of pion wave function.
According to Novikov {\it et al.}~\cite{nov}, $\delta^2 \sim 0.2$ GeV$^2$.
The phenomenological side for the $i\gamma_5\not\!p$ structure
obtained by using the pseudoscalar Lagrangian takes the form,
\begin{eqnarray}
-{\sqrt{2} g_{\pi N} \lambda_N^2 m \over
(q^2 - m^2 +i \epsilon)[(q-p)^2 - m^2 + i \epsilon]} + \cdot \cdot \cdot\ .
\label{bkcor}
\end{eqnarray}
The dots include contributions from the continuum as well as from
the unknown
single pole terms. The latter consist of the
single pole coming from the $N \rightarrow N^*$ transition~\cite{ioffe2}.
When the pseudovector Lagrangian is used, there is an additional
single pole coming from $N\rightarrow N$ transition~\cite{hung}.
These single poles are not suppressed by the Borel transformation.
Therefore, interpretation of
the unknown single pole and possibly the continuum
contain some ambiguity due to the coupling scheme adopted.
In principle, $g_{\pi N}$ has $p^2$ dependence as it
contains the pion form factor. As one pion momentum is taken out
by the Dirac structure, $i\gamma_5\not\!p$, we take $p_\mu =0$
in the rest of the correlator as we did in the OPE side.
Then the $p^2$ dependence of
$g_{\pi N}$ can be neglected. Furthermore,
after taking out the factor, $i\gamma_5\not\!p$, the rest of the
correlator is a function of one variable,
$q^2$, and therefore
the single dispersion relation in $q^2$ can be invoked in constructing
the sum rule.
The spectral density can then be
written as
\begin{eqnarray}
\rho^{phen} (s) = -\sqrt{2} g_{\pi N} \lambda_N^2 m {d \over ds}
\delta(s-m^2) + A'~ \delta(s-m^2) + \rho^{ope}(s)~ \theta (s -S_\pi)\ .
\label{bkphen}
\end{eqnarray}
Here the second term comes from the single pole terms whose coefficients
are not known.
The continuum contribution is parameterized by a step function
which starts from the threshold, $S_\pi$.
The coefficient of the step function, $\rho^{ope}(s)$, is
determined by the duality of QCD. This is basically the imaginary part
of Eq.~(\ref{bkope}) but, because of the continuum threshold, only the first
two terms in Eq.~(\ref{bkope}) contribute to the coefficient.
The parameterization of the continuum with a step function
is usually adopted in the baryon mass sum rules. This is because
each higher resonance has a single pole structure with a finite width.
Spectral density obtained by adding up all those single poles
can be effectively represented by a step
function starting from a threshold.
But in our case of the correlation function with pion,
this parameterization for the continuum
could be questionable. Therefore, it will be useful to construct
the spectral density explicitly for higher resonances by employing some
effective models for $N^*$ and see if the
parameterization does make sense.
This will be done in the next section. This will
eventually help us to understand how each sum rule
based on a different Dirac structure leads to different results.
To construct QCD sum rule for the $i \gamma_5 \not\!p$ structure,
we integrate $\rho^{ope} (s)$ and $\rho^{phen} (s)$
with the Borel weighting factor $e^{-s/M^2}$ and match both sides. More
specifically, the sum rule equation after the Borel transformation
is given by
\begin{eqnarray}
\int^\infty_0 ds e^{-s/M^2} [\rho^{ope}(s) - \rho^{phen} (s)] =0\ .
\end{eqnarray}
Using $\rho^{ope}(s)$ obtained from
Eq.~(\ref{bkope}) and $\rho^{phen} (s)$ in
Eq.~ (\ref{bkphen}), we obtain
\begin{eqnarray}
&&g_{\pi N} \lambda^2_N (1 + A M^2) \nonumber \\
&&= {f_\pi \over m} e^{m^2/M^2} \left [
{E_1 (x_\pi) \over 2 \pi^2} M^6 + {E_0 (x_\pi) \over 2 \pi^2} M^4 \delta^2
+ M^2 \left ( {1 \over 12}
\left \langle {\alpha_s \over \pi} {\cal G}^2
\right \rangle +
{2 \langle {\bar q} q \rangle^2 \over 9 f_\pi^2 }\right ) \right ]\ .
\label{bksum}
\end{eqnarray}
Here $A$ denotes the unknown single pole contribution, which should be
determined by the best fitting method.
Also $x_\pi = S_\pi/M^2$ and
$E_n (x) = 1 -(1+x+ \cdot \cdot \cdot + x^n/n!)~ e^{-x}$ .
This expression is crucially different from the corresponding expression
in Ref.~\cite{krippa} where the first, second and third terms contain
the factors, $E_2(x_\pi)$, $E_1(x_\pi)$ and
$E_0 (x_\pi)$ respectively. Even though we do not understand how such factors
can be obtained,
we nevertheless reproduce their figure by using their formula
in Ref.~\cite{krippa}
and it is shown in
Fig.~\ref{fig1}~(a)\footnote{In plotting Figs.~\ref{fig1},
we did not include the last term involving $\langle {\bar q} q \rangle^2$
in Eq.~(\ref{bksum}) as this term is new in our calculation.}.
But if Eq.~(\ref{bksum})
is used instead, we get Fig.~\ref{fig1}~(b) using the same parameter set
used in Ref.~\cite{krippa}. The variation scale
of $g_{\pi N}$ in this figure is clearly different from the one
in Fig.~\ref{fig1}~(a). Note that some of their parameters are quite
different from the ones used in our analysis in the later part of this work.
For example, $\delta^2=0.35 $ GeV$^2$ is used
in Ref.~\cite{krippa}, which is considerably larger than our value of 0.2 GeV$^2$.
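The behavior of the continuum factors $E_n(x)$ entering Eq.~(\ref{bksum}) can be illustrated with a few lines of code (a sketch based only on their definition):

```python
import math

def E(n, x):
    """Continuum factor E_n(x) = 1 - (1 + x + ... + x^n/n!) e^{-x}."""
    partial = sum(x**k / math.factorial(k) for k in range(n + 1))
    return 1.0 - partial * math.exp(-x)

# Vanishing threshold (x -> 0): the factors vanish.
print(E(0, 0.0), E(1, 0.0))      # -> 0.0 0.0
# Large threshold (x -> infinity): the factors approach 1,
# i.e. no continuum subtraction.
print(E(0, 50.0), E(1, 50.0))
# At fixed x, higher n means stronger continuum subtraction:
x = 2.0
print(E(1, x) < E(0, x) < 1.0)   # -> True
```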
QCD sum rule for the $\gamma_5 \sigma_{\mu\nu} q^\mu p^\nu$ structure can be
constructed similarly.
We have constructed the sum rule for this structure in Ref.~\cite{hung}
so here we simply write down the resulting expression,
\begin{eqnarray}
g_{\pi N} \lambda_N^2 ( 1+ B M^2 )= - {\langle {\bar q} q \rangle \over
f_\pi} e^{m^2/M^2} \left [ {M^4 E_0 (x_\pi) \over 12 \pi^2 }
+{4 \over 3 } f^2_\pi M^2 +
\left \langle {\alpha_s \over \pi} {\cal G}^2
\right \rangle
{1 \over 216 }
-{m_0^2 f^2_\pi \over 6 }
\right ]\ .
\label{hungsum}
\end{eqnarray}
Here $B$ denotes the contribution from the unknown single pole term.
Note that, since one power of the pion momentum is taken out
by the factor, $\gamma_5 \sigma_{\mu\nu} q^\mu p^\nu$,
we take the limit $p_\mu = 0$ in the rest of the correlator
as we did in the $i\gamma_5 \not\!p$ case.
In obtaining the first and third terms in the RHS, we have
used Eq.~(\ref{d3}) while
the second is obtained by taking the first
term in Eq.~(\ref{dd})
for the matrix element $D^{\alpha\beta}_{aa'}$ and replacing one
propagator with the quark condensate.
The fourth term is also obtained by taking the first term in
Eq.~(\ref{dd}) but in this case other quarks are used to form
the dimension five mixed condensate,
$\langle {\bar q} g_s \sigma \cdot {\cal G} q \rangle $,
which is usually parameterized in terms of the quark condensate,
$m_0^2 \langle {\bar q} q \rangle $.
We take $m_0^2 \sim 0.8$ GeV$^2$ as obtained from QCD sum rule
calculation~\cite{Ovc}.
Now we construct the QCD sum rule for the $i \gamma_5$ structure.
Constructing it beyond the soft-pion limit is more complicated,
as the correlator in the phenomenological side has a definite dependence
on the coupling scheme. To see this, we expand the correlator
for this structure in $p_\mu$ and write
\begin{equation}
\Pi_0 (q^2) + p\cdot q \Pi_1 (q^2)+p^2 \Pi_2 (q^2) + \cdot \cdot \cdot\ .
\label{expansion}
\end{equation}
Since $p_\mu$ is an external momentum, the correlation function at
each order of $p_\mu$ can be used to construct an independent sum rule.
Within the PS coupling scheme, the phenomenological correlator
up to $p^2$ order is
\begin{eqnarray}
\sqrt{2} g_{\pi N} \lambda_N^2 \left [
-{1 \over q^2 - m^2 }
-{ p \cdot q \over
(q^2 - m^2 )^2 }
+{ p^2 \over
(q^2 - m^2 )^2 } \right ]
-{\sqrt{2} \lambda_N^2 p^2 \over q^2 - m^2 } {d g_{\pi N} \over d p^2}(p^2 =0)
\cdot \cdot \cdot\ .
\label{shphen1}
\end{eqnarray}
The dots here represent not only the contribution from higher resonances
but also terms higher than $p^2$.
The last term is related to the slope of the pion form factor at $p^2 =0$.
Even though this can be absorbed into the unknown single pole term such
as $A$ or $B$ above, we specify it here since this
possibility is new.
This correlator can be compared with the corresponding expression
in the PV coupling scheme,
\begin{eqnarray}
\sqrt{2} g_{\pi N} \lambda_N^2 {p^2/2 \over
(q^2 - m^2 )^2} + \cdot \cdot \cdot\ .
\label{shphen2}
\end{eqnarray}
Note here that there are no terms corresponding to $\Pi_0$ and $\Pi_1$.
No such terms can be constructed from $N\rightarrow N^*$ or
$N^* \rightarrow N^*$ transitions within the PV scheme.
The single pole in Eq.(\ref{shphen1}) survives in the soft-pion limit,
which has been used by SH~\cite{hat} for their sum rule
calculation of $g_{\pi N}$. However, if the
phenomenological correlator in PV scheme is used,
such sum rule cannot be constructed.
Thus, going beyond
the soft-pion limit seems to be natural for the independent determination of
the coupling.
However, as in the $\Pi_0$ case, a sum rule cannot be
constructed for $\Pi_1$. For $\Pi_2$, a sum rule can be constructed
either in the PS or PV coupling scheme, but the residue of the
double pole in Eq.~(\ref{shphen2}) is a factor of two smaller
than the corresponding term in Eq.~(\ref{shphen1}).
So coupling-scheme independence cannot be achieved in any of
these sum rules.
This remains true at higher orders of $p_\mu$.
A sum rule, independent of the coupling schemes, can be constructed
by imposing the kinematical condition,
\begin{equation}
p^2 = 2 p \cdot q\ .
\label{cond}
\end{equation}
With this condition,
the two double pole terms in Eq.~(\ref{shphen1}) can be combined
to yield the same expression as in Eq.~(\ref{shphen2}),
thus providing a sum rule independent of the
coupling schemes.
This condition comes from the on-shell conditions
for the participating nucleons,
$q^2 = m^2 $ and $(q-p)^2 = m^2$, at which the physical $\pi NN$
coupling should be defined.
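The algebra behind this matching is elementary and can be confirmed symbolically; the sketch below (scalar symbols stand in for the invariants $p^2$, $p\cdot q$, $q^2$, $m^2$) checks both the condition itself and the resulting equality of the double-pole terms:

```python
import sympy as sp

p2, pq, q2, m2 = sp.symbols('p2 pq q2 m2')   # p^2, p.q, q^2, m^2

# On-shell conditions q^2 = m^2 and (q-p)^2 = q^2 - 2 p.q + p^2 = m^2
# fix p^2 in terms of p.q:
print(sp.solve(sp.Eq(m2 - 2 * pq + p2, m2), p2))  # -> [2*pq]

# Double-pole terms of the PS correlator (common prefactor stripped):
ps_terms = -pq / (q2 - m2)**2 + p2 / (q2 - m2)**2
# Double pole of the PV correlator:
pv_term = (p2 / 2) / (q2 - m2)**2

# With p^2 = 2 p.q the two coupling schemes coincide:
print(sp.simplify(ps_terms.subs(pq, p2 / 2) - pv_term))  # -> 0
```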
The sum rule constructed with the kinematical condition, Eq.~(\ref{cond}),
is equivalent to considering $\Pi_1(q^2)/2+\Pi_2 (q^2)$.
This sum rule seems fine in the PS coupling scheme as
there are nonzero terms corresponding to $\Pi_1$ and $\Pi_2$.
In the OPE, the diquark component contributing to $i \gamma_5$ structure is
the second element of Eq.~(\ref{dd}) which can be written in terms of twist-3
pion wave function as~\cite{bely}
\begin{eqnarray}
\langle 0 |
{\bar d}(0) i \gamma_5 u (x) | \pi^+ (p) \rangle =
{\sqrt{2} f_\pi m_\pi^2 \over m_u + m_d}
\int^1_0 du e^{-i u p\cdot x} \varphi_p (u)\ .
\label{psope}
\end{eqnarray}
The terms linear and quadratic in $p_\mu$ in the RHS
constitute the OPE correlator for
$\Pi_1$ and $\Pi_2$.
Therefore, within the PS scheme, $\Pi_1$ and $\Pi_2$ are well
defined in both sides.
The situation becomes subtle when the PV coupling scheme is employed.
Before the condition of Eq.~(\ref{cond})
is imposed, a sum rule can be
constructed only for $\Pi_2$ as there is no $\Pi_1$ part in the
phenomenological part.
But after the condition, the phenomenological side has only
$\Pi_2^{phen}$ which should be matched with
$\Pi_1^{ope}/2 + \Pi_2^{ope}$. This seems a little awkward.
Nevertheless, to achieve the independence of the coupling
schemes, we construct a QCD sum rule for $i\gamma_5$ within
the kinematical condition, Eq.~(\ref{cond}).
To be consistent with the expansion in the phenomenological side,
we take the terms up to the order $p^2$ in the expansion of Eq.~(\ref{psope}).
Using the parameterization for $\varphi_p (u)$
given in Ref.~\cite{bely},
we obtained up to $p^2$,
\begin{eqnarray}
\langle 0 |
{\bar d}(0) i \gamma_5 u (x) | \pi^+ (p) \rangle =
{\sqrt{2} f_\pi m_\pi^2 \over m_u + m_d}
\left ( 1 -
i {1 \over 2} p \cdot x -{0.343 \over 2} (p\cdot x)^2 \right )\ .
\label{psope1}
\end{eqnarray}
A different parameterization given in Ref.~\cite{bely}
changes the numerical factors only very slightly.
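The numerical coefficients in Eq.~(\ref{psope1}) are the moments $\langle u\rangle$ and $\langle u^2\rangle$ of $\varphi_p(u)$. As a rough cross-check (assuming, purely for illustration, the flat asymptotic form $\varphi_p(u)=1$ rather than the parameterization of Ref.~\cite{bely}):

```python
import sympy as sp

u = sp.symbols('u')
phi_p = sp.Integer(1)   # flat asymptotic wave function, assumed here

norm = sp.integrate(phi_p, (u, 0, 1))          # normalization
u1 = sp.integrate(u * phi_p, (u, 0, 1))        # coefficient of -i p.x
u2 = sp.integrate(u**2 * phi_p, (u, 0, 1))     # coefficient of -(p.x)^2/2
print(norm, u1, u2)  # -> 1 1/2 1/3
```

The flat form reproduces the first two coefficients of Eq.~(\ref{psope1}) exactly and gives $1/3\simeq 0.333$ in place of 0.343, consistent with the weak sensitivity to the detailed parameterization noted above.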
Using the diquark component of Eq.~(\ref{psope1}), the OPE side
for $\Pi_1/2 + \Pi_2$ is
calculated straightforwardly. By matching with its phenomenological
counterpart and taking the Borel transformation, we get
\begin{eqnarray}
g_{\pi N} \lambda^2_N (1 + C M^2) = {\langle {\bar q} q \rangle
\over f_\pi} e^{m^2/M^2} \left [
{0.0785 E_0 (x_\pi) \over \pi^2} M^4
- 0.314\times {1 \over 24}
\left \langle {\alpha_s \over \pi} {\cal G}^2
\right \rangle \right ]\ .
\label{shsum}
\end{eqnarray}
Here $C$ again denotes the unknown single pole term which is not
suppressed by the Borel transformation.
This sum rule is different from the other two sum rules as its first
term in the OPE is
negative. Each term contains very small
numerical factors due to the cancellation between $\Pi_1^{ope}$
and $\Pi_2^{ope}$.
Up to now, we have presented three different sum rules from Eq.~(\ref{two}).
All these sum rules, in principle, can be used to determine the
pion-nucleon coupling constant, $g_{\pi N}$. We will discuss the
reliability of each sum rule below. An alternative approach is
to consider the nucleon
correlation function in an external axial field as done in Ref~\cite{bely1}.
The nucleon axial charge, $g_A$, calculated in Ref.~\cite{bely1}, agrees
well with experiment. Subsequently, by using the Goldberger-Treiman relation,
$g_{\pi N}$ can be also well determined.
In the approach of Ref.~\cite{bely1}, a sum rule for $g_A -1$ is obtained
by replacing part of the OPE with the nucleon mass sum rule.
The connection between the sum rules using Eq.~(\ref{two}) and the ones
in Ref.~\cite{bely1}
is not clear at this moment.
An important observation made in Ref.~\cite{bely1} is that
some (dominant) terms of
the OPE correspond to a sum rule with $g_A=1$. This observation allows
the construction of sum rules for $g_A -1$. In our sum rules, this kind
of observation is not possible. Moreover, the OPE expression from
Eq.~(\ref{two}) is not simply related to the sum rules
with the external field. Therefore, Eq.~(\ref{two}) seems to
provide sum rules independent of those in Ref.~\cite{bely1}.
In the future, however, further study is necessary to clarify the connection
between these two sets
of independent sum rules, as it might reveal important aspects of the
nonperturbative nature of hadrons.
\section{Construction of the unknown single pole and the continuum}
\label{sec:effective}
In this section, we construct the unknown single-pole term and the continuum
by using effective models for the
higher resonances. This will provide a better understanding of the
parameterization for the continuum in Eq.~(\ref{bkphen})
and give further insight into the unknown single-pole term.
Later, this construction will help us to understand the
differences between the sum rules based on different Dirac structures.
There are two possible sources for the unknown single-pole term and
the continuum: the transition
$N\rightarrow N^*$ and the transition $N^* \rightarrow N^*$.
Of course, as we pointed out in Ref.~\cite{hung}, there could
be an additional nucleon single pole coming from $N \rightarrow N$,
which, however, at first order in the pion momentum,
appears only in the sum rule for the $i\gamma_5 \not\!p$
structure within the PV coupling
scheme. We first avoid this possibility by constructing
effective models within the PS coupling scheme;
later we will discuss
the case of the PV coupling scheme.
Moreover, we will discuss only the two Dirac structures
$i\gamma_5 \not\!p$ and $\gamma_5 \sigma_{\mu\nu} q^\mu p^\nu$.
The correlator for the $i\gamma_5$ structure with the kinematical
condition of Eq.~(\ref{cond}) takes almost the same form as
the one for the $\gamma_5 \sigma_{\mu\nu} q^\mu p^\nu$ structure.
A slight difference is the appearance of terms containing the
derivative of the pion form factor, as indicated in Eq.~(\ref{shphen1}).
Note that this difference is specific to the PS coupling scheme.
As the form factor is a smooth function of $p^2$ around $p^2 = 0$,
this difference is not expected to be crucial.
Within the PS coupling scheme, $N \rightarrow N^*$ contributions to
the correlator, Eq.~(\ref{two2}), can be constructed by
using the effective Lagrangians for the positive ($\psi_+$)
and negative ($\psi_-$) parity resonances,
\begin{eqnarray}
g_{\pi NN_+} {\bar \psi} i \gamma_5 {\mbox{\boldmath $\tau$}} \cdot
{\mbox {\boldmath $\pi$}} \psi_+
&+&
g_{\pi NN_+} {\bar \psi}_+ i \gamma_5 {\mbox{\boldmath $\tau$}} \cdot
{\mbox {\boldmath $\pi$}} \psi \nonumber\ ,\\
g_{\pi NN_-} {\bar \psi} i {\mbox{\boldmath $\tau$}} \cdot
{\mbox {\boldmath $\pi$}} \psi_-
&-&
g_{\pi NN_-} {\bar \psi}_- i {\mbox{\boldmath $\tau$}} \cdot
{\mbox {\boldmath $\pi$}} \psi\ .
\end{eqnarray}
The nucleon field is denoted by $\psi$ here.
These terms contribute to the correlator because
the nucleon interpolating field can couple to the positive and negative
parity resonances via,
\begin{eqnarray}
\langle 0 | J_N | N_+ (k, s) \rangle = \lambda_+ U(k,s)\;; \quad
\langle 0 | J_N | N_- (k, s) \rangle = \lambda_- \gamma_5 U(k,s)\ ,
\end{eqnarray}
where $U(k,s)$ denotes the baryon Dirac spinor and $\lambda_{\pm}$
indicates the coupling strength of the interpolating field to
each resonance with specified parity.
The $\gamma_5 \sigma_{\mu\nu} q^\mu p^\nu$ structure of the correlator
takes the form,
\begin{eqnarray}
{2 \lambda_N \lambda_- g_{\pi NN_-} \over
(q^2 - m^2)(q^2 - m_-^2)} +
{2 \lambda_N \lambda_+ g_{\pi NN_+} \over
(q^2 - m^2)(q^2 - m_+^2)}\ ,
\end{eqnarray}
which can be compared with the $i\gamma_5 \not\!p$ structure
\begin{eqnarray}
{2 \lambda_N \lambda_- g_{\pi NN_-} (m_- - m)\over
(q^2 - m^2)(q^2 - m_-^2)} -
{2 \lambda_N \lambda_+ g_{\pi NN_+} (m_+ + m )\over
(q^2 - m^2)(q^2 - m_+^2)}\ .
\end{eqnarray}
Using the separation
\begin{eqnarray}
{1 \over (q^2 - m^2)(q^2 - m_{\pm}^2)} \rightarrow
-{1 \over m_{\pm}^2 - m^2} \left [ {1 \over q^2 - m^2} -
{1 \over q^2 - m_{\pm}^2} \right ]
\label{sepa}
\end{eqnarray}
we can see that the transitions $N\rightarrow N^*$ involve two single
poles, one at the nucleon pole and the other at the
resonance pole. The former constitutes the unknown single pole, as it
involves the undetermined parameters $\lambda_{\pm}$ and $g_{\pi NN_{\pm}}$.
In the latter,
the finite width of the resonances can be incorporated
by replacing $m_{\pm} \rightarrow
m_{\pm} -i \Gamma_{\pm}/2$ in the denominator. When combined
with similar single poles from
higher resonances, it produces a spectral density that can be parameterized
by a step function, as written in Eq.~(\ref{bkphen}). This also implies
that the continuum threshold, $S_\pi$, need not be different from
the one appearing in the usual nucleon sum rule.
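As a side remark, the separation in Eq.~(\ref{sepa}) is an exact partial-fraction identity, which can be verified symbolically, e.g.:

```python
import sympy as sp

q2 = sp.symbols('q2')
m, m_pm = sp.symbols('m m_pm', positive=True)

# Double pole generated by an N -> N* transition ...
double_pole = 1 / ((q2 - m**2) * (q2 - m_pm**2))

# ... and its separation into the nucleon pole and the resonance pole
separated = -1 / (m_pm**2 - m**2) * (1 / (q2 - m**2) - 1 / (q2 - m_pm**2))

# The difference cancels identically
assert sp.simplify(double_pole - separated) == 0
```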
It is now easy to obtain the spectral density for the two Dirac structures
by incorporating the decay width of the resonances.
For the $i \gamma_5 \not\!p$ structure, we have
\begin{eqnarray}
\rho_S (s)&=& 2 \left (-{\lambda_+ g_{\pi NN_+} \over m_+ -m}
+ {\lambda_- g_{\pi NN_-} \over m_- +m} \right ) \lambda_N \delta (s-m^2)
\nonumber \\
&+&{2 \lambda_N \lambda_+ g_{\pi NN_+} \over m_+ - m} G(s,m_+)
-{2 \lambda_N \lambda_- g_{\pi NN_-} \over m_- + m} G(s,m_-)
\label{spec1}
\end{eqnarray}
where
\begin{eqnarray}
G(s,m_{\pm}) = {1 \over \pi}
{m_{\pm} \Gamma_{\pm} \over (s-m_{\pm}^2)^2 + m_{\pm}^2 \Gamma_{\pm}^2}\ .
\end{eqnarray}
Note that the contribution from the positive-parity resonance is enhanced
by the factor $1/(m_+ - m)$, while the one from the negative-parity
resonance is
suppressed by the factor $1/(m_- + m)$.
Similarly for the $\gamma_5 \sigma_{\mu\nu} q^\mu p^\nu$ structure,
we obtain
\begin{eqnarray}
\rho_S (s)&=& 2 \left ({\lambda_+ g_{\pi NN_+} \over m_+^2 -m^2}
+ {\lambda_- g_{\pi NN_-} \over m_-^2 -m^2} \right )
\lambda_N \delta (s-m^2)
\nonumber \\
&-&{2 \lambda_N \lambda_+ g_{\pi NN_+} \over m_+^2 - m^2} G(s,m_+)
-{2 \lambda_N \lambda_- g_{\pi NN_-} \over m_-^2 - m^2} G(s,m_-)\ .
\label{spec2}
\end{eqnarray}
Note that the superficial relative sign between
the positive- and negative-parity resonances
is opposite to that in Eq.~(\ref{spec1}). This means that,
depending on the relative sign between $\lambda_+ g_{\pi NN_+}$
and $\lambda_- g_{\pi NN_-}$, the two contributions add up
in one structure and cancel each other in the other.
In other words, we can say something about the coefficients
of $\delta (s - m^2)$ and $G(s,m_{\pm})$
by studying the sensitivity of the sum rules
to the continuum or to the single pole.
Additional contributions to the continuum may come from
$N^*\rightarrow N^*$ transitions. For the off-diagonal transitions
between two parities,
$N_+ \rightarrow N_- $ and $N_- \rightarrow N_+$,
we use the effective Lagrangians,
\begin{eqnarray}
g_{\pi N_+N_-} {\bar \psi}_+ i {\mbox{\boldmath $\tau$}} \cdot
{\mbox {\boldmath $\pi$}} \psi_-
&-&
g_{\pi N_+N_-} {\bar \psi}_- i {\mbox{\boldmath $\tau$}} \cdot
{\mbox {\boldmath $\pi$}} \psi_+\ ,
\end{eqnarray}
to construct the correlator.
These off-diagonal transitions lead to the spectral density of
\begin{eqnarray}
\rho_{OD} (s) \propto \lambda_- \lambda_+ [ G(s, m_+) - G(s, m_-)]\ ,
\end{eqnarray}
which is therefore suppressed by the cancellation between the two
resonances of opposite parity.
For the diagonal transitions,
$N_+ \rightarrow N_+ $ and $N_- \rightarrow N_-$,
we use
the effective Lagrangians,
\begin{eqnarray}
g_{\pi N_+N_+} {\bar \psi}_+ i \gamma_5 {\mbox{\boldmath $\tau$}} \cdot
{\mbox {\boldmath $\pi$}} \psi_+\;; \quad
g_{\pi N_-N_-} {\bar \psi}_- i \gamma_5 {\mbox{\boldmath $\tau$}} \cdot
{\mbox {\boldmath $\pi$}} \psi_- \ .
\end{eqnarray}
These diagonal transitions produce only the double pole for the correlator,
$1/(q^2 - m^2_{\pm} + i m_{\pm} \Gamma_{\pm} )^2$,
which is then translated into the spectral density,
\begin{eqnarray}
\rho_D (s) \sim \cases { -m_{\pm} g_{\pi N_{\pm} N_{\pm}}
\lambda_{\pm}^2 {d \over ds} G(s,m_{\pm})~~{\rm for}~~i\gamma_5 \not\!p \cr
\cr
\pm g_{\pi N_{\pm} N_{\pm}}
\lambda_{\pm}^2 {d \over ds} G(s,m_{\pm})~~{\rm for}~~\gamma_5
\sigma_{\mu\nu} q^\mu p^\nu\ .\cr }
\label{specd}
\end{eqnarray}
First note that, because of the derivative, each spectral density
has a node at $s=m^2_{\pm}$:
it is positive below the resonance and negative above it.
Under the integration over $s$,
the spectral
density from the double pole
is therefore partially canceled, leaving an attenuated contribution
coming from the $s$-dependent Borel weight.
Indeed, one can check numerically that, for the Roper resonance,
$\int ds\, e^{-s/M^2} G(s,m_+)$ is always larger
than $\int ds\, e^{-s/M^2}\, dG(s,m_+)/ds$ for $M^2 \ge 0.7 $ GeV$^2$, and
the cancellation becomes more effective as $M^2$ increases.
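This kind of check is easy to reproduce. A minimal pure-Python sketch is given below; the Roper parameters ($m_+ = 1.44$ GeV, $\Gamma_+ = 0.35$ GeV) and the integration cutoff are illustrative assumptions, not values fixed by the text:

```python
import math

# Breit-Wigner spectral function G(s, m_+) generated by the finite width,
# and its derivative dG/ds.  m_+ = 1.44 GeV and Gamma_+ = 0.35 GeV are
# illustrative Roper values; the cutoff s_max is likewise an assumption.
m_p, gamma_p = 1.44, 0.35

def G(s):
    c = m_p * gamma_p
    return (1.0 / math.pi) * c / ((s - m_p**2)**2 + c**2)

def dG_ds(s):
    c = m_p * gamma_p
    return -(2.0 / math.pi) * c * (s - m_p**2) / ((s - m_p**2)**2 + c**2)**2

def borel_integral(f, M2, s_max=10.0, n=20000):
    # midpoint rule for  int_0^{s_max} ds  exp(-s/M2) f(s)
    h = s_max / n
    return h * sum(math.exp(-(i + 0.5) * h / M2) * f((i + 0.5) * h)
                   for i in range(n))

for M2 in (1.0, 1.5, 2.0):
    single = borel_integral(G, M2)
    double = abs(borel_integral(dG_ds, M2))
    # the single-pole integral dominates, and increasingly so at larger M^2
    print(M2, single, double)
```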
In general, the continuum contributes more to a sum rule for larger $M^2$.
Hence the double pole is more suppressed than the single pole
in the region where the continuum is large.
Further suppression of the double-pole continuum can be observed,
for example, by comparing the first equation of Eq.~(\ref{specd}) with
Eq.~(\ref{spec1}).
Even if one assumes
\footnote{
The nucleon interpolating field, $J_N$, is constructed such that
it couples strongly to the nucleon but weakly to excited states.
Therefore, $\lambda_{+}$ is expected to be smaller than
$\lambda_N$.
This assumption may thus be regarded as assuming strong
coupling to the excited baryon.}
that $g_{\pi NN_+} \lambda_N \lambda_+
\sim g_{\pi N_+ N_+} \lambda_+^2$, Eq.~(\ref{spec1})
has the enhancement factor $1/(m_+ - m)$, while the first equation in
Eq.~(\ref{specd})
contains only $m_+$. Thus, the double-pole contribution is
much more suppressed than the single pole, which can
also be checked numerically. A similar suppression
can be expected for the second equation in Eq.~(\ref{specd}).
Therefore, we expect that the continuum comes mainly from the
single poles of
$1/(q^2 -m^2_{\pm} + i m_{\pm} \Gamma_{\pm})$, which are generated
only by the $N \rightarrow N^*$ transitions.
This justifies the ``step-like'' parameterization of the
continuum given in Eq.~(\ref{bkphen}).
Now we discuss the case with the PV coupling scheme.
We use the following Lagrangians
\begin{eqnarray}
&&{g_{\pi N_+N_+}\over 2 m_+}
{\bar \psi}_+ \gamma_5 \gamma_\mu {\mbox{\boldmath $\tau$}} \cdot
\partial^\mu {\mbox {\boldmath $\pi$}} \psi_+
\;; \quad
{g_{\pi N_-N_-}\over 2 m_-}
{\bar \psi}_- \gamma_5 \gamma_\mu {\mbox{\boldmath $\tau$}} \cdot
\partial^\mu {\mbox {\boldmath $\pi$}} \psi_-\ ,
\nonumber \\
&&{g_{\pi NN_+}\over m + m_+}
{\bar \psi}_+ \gamma_5 \gamma_\mu {\mbox{\boldmath $\tau$}} \cdot
\partial^\mu {\mbox {\boldmath $\pi$}} \psi + (H. C.)\ ,
\nonumber \\
&&{g_{\pi NN_-}\over m_- - m}
{\bar \psi}_- \gamma_5 \gamma_\mu {\mbox{\boldmath $\tau$}} \cdot
\partial^\mu {\mbox {\boldmath $\pi$}} \psi + (H. C.)\ ,
\nonumber \\
&&{g_{\pi N_+N_-}\over m_- - m_+}
{\bar \psi}_- \gamma_5 \gamma_\mu {\mbox{\boldmath $\tau$}} \cdot
\partial^\mu {\mbox {\boldmath $\pi$}} \psi_+ + (H. C.)\ .
\end{eqnarray}
These effective Lagrangians in the PV scheme
are constructed such that the action is the
same as in the PS case when the resonances are on-shell.
In this case, complications arise from the possible single-pole term
coming from the $N \rightarrow N$ contribution~\cite{hung}, which was
absent in the PS scheme. This also means that
there could be additional single poles coming from
$N \rightarrow N^*$ and $N^* \rightarrow N^*$ transitions.
Note that this kind of complication arises only in the $i\gamma_5\not\! p$
case; for the $\gamma_5 \sigma_{\mu\nu} q^\mu p^\nu$
case, we have the same spectral density as given in Eq.~(\ref{spec2}).
As we mentioned above, because the double pole type contribution,
$1/(q^2 - m^2_{\pm} + i m_{\pm} \Gamma_{\pm} )^2$,
to the continuum is suppressed, only single poles are important in
constructing the spectral density for the unknown single pole and
the ``step-like'' continuum.
To construct the single poles, we consider all possibilities,
$N\rightarrow N$, $N\rightarrow N^*$ and $N^*\rightarrow N^*$.
The coefficient of
$\lambda_N \delta(s -m^2)$, namely the unknown single pole term for
the $i\gamma_5\not\! p$ structure,
can be collected from $N\rightarrow N$ and $N\rightarrow N^*$
transitions,
\begin{eqnarray}
{g_{\pi N} \lambda_N \over 2 m} -
2m \left ({\lambda_+ g_{\pi NN_+} \over m_+^2 -m^2}
+ {\lambda_- g_{\pi NN_-} \over m_-^2 -m^2} \right )\ .
\label{vspec1}
\end{eqnarray}
Compared with the corresponding term in Eq.~(\ref{spec2}),
this expression differs by
the first term, associated with $N\rightarrow N$; the second
and third terms are the same except for the overall factor
$-2 m$.
Also the continuum contributions are collected from the terms
containing $1/(q^2 -m_{\pm}^2)$ in the correlator.
We thus obtain the spectral density for the continuum,
\begin{eqnarray}
&&\left ( g_{\pi N_+ N_+} {\lambda_+^2 \over 2 m_+} +
g_{\pi N N_+} {2 m_+ \lambda_N \lambda_+ \over m^2_+ - m^2} -
g_{\pi N_+ N_-} {2 m_+ \lambda_+ \lambda_- \over m^2_- - m^2_+}
\right )
G(s, m_+) \nonumber\ \\
&+&
\left ( g_{\pi N_- N_-} {\lambda_-^2 \over 2 m_-} -
g_{\pi N N_-} {2 m_- \lambda_N \lambda_- \over m^2_- - m^2} -
g_{\pi N_+ N_-} {2 m_- \lambda_+ \lambda_- \over m^2_- - m^2_+}
\right )
G(s, m_-)\ .
\label{vspec2}
\end{eqnarray}
\section{Reliability of QCD sum rules and possible interpretation}
\label{sec:analysis}
In Section~\ref{sec:qcdsr}, we constructed three sum rules beyond the
soft-pion limit, one for each of
the $i\gamma_5\not\!p$, $i\gamma_5$ and
$\gamma_5 \sigma_{\mu\nu} q^\mu p^\nu$
structures.
Ideally, all
three sum rules should yield
the same result for $g_{\pi N}$.
In reality, each sum rule has uncertainties due to the
truncation of the
OPE side or large contributions from the continuum. Therefore, depending
on the Dirac structure, there can be large or small
uncertainties in the determination of the physical parameter.
This can be checked by looking into the Borel curves
and seeing whether or not they are stable functions of the Borel mass. In
the QCD sum rules for baryon masses, the ratio of two different sum rules
is usually taken in extracting a physical mass
without explicitly checking the stability of each sum rule.
As pointed out by Jin and Tang~\cite{jin2}, this could
be dangerous. In this section, we will demonstrate this issue further by
considering three sum rules provided in section~\ref{sec:qcdsr}.
In Eqs.~(\ref{bksum}), (\ref{shsum}) and (\ref{hungsum}), the LHS can
be written in the form $c + bM^2$. The parameter $c$ denotes the same
quantity in each case, {\it i.e.} $g_{\pi N} \lambda_N^2$,
but $b$ could be different in
each sum rule. We can determine $c$ and $b$
by fitting the RHS with a straight line within an appropriately chosen
Borel window. Usually, the maximum Borel mass is determined by restricting
the continuum contribution to be less than, say, 30 $\sim$ 40 \% of the
first term of the OPE, and the minimum Borel mass is chosen by
restricting the highest-dimensional term of the OPE to be less than,
say, 10 $\sim$ 20 \% of the total OPE. These criteria lead to a
Borel window centered around $M^2 \sim 1 $ GeV$^2$.
Further note that $c$ determined in this way
does not depend on the choice between the PS and PV coupling schemes,
while the interpretation
of $b$ could be scheme-dependent.
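The fitting step just described amounts to a linear least-squares fit of the Borel curve inside the window. A schematic sketch follows; the Borel curve and its coefficients below are synthetic stand-ins for the actual Borel-transformed OPE, chosen for illustration only:

```python
import numpy as np

# Synthetic, nearly linear Borel curve standing in for the RHS of one of
# the sum rules; all coefficients here are made up for illustration.
M2 = np.linspace(0.8, 1.2, 41)               # Borel window in GeV^2
rhs = 0.0031 + 0.0012 * M2 + 1.0e-4 * M2**2  # small curvature on top of c + b*M^2

# Linear fit rhs ~ c + b*M^2; np.polyfit returns the highest power first,
# so the slope b comes before the intercept c.
b, c = np.polyfit(M2, rhs, 1)
print(f"c (intercept, ~ g_piN * lambda_N^2) = {c:.5f}")
print(f"b (slope, unknown single-pole term) = {b:.5f}")
```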
In the analysis below, we use the following standard
values for the QCD parameters,
\begin{eqnarray}
&&\langle {\bar q} q\rangle = -(0.23~{\rm GeV})^3\;; \quad
\left \langle {\alpha_s \over \pi} {\cal G}^2
\right \rangle = (0.33~{\rm GeV})^4 \nonumber \ ,\\
&&\delta^2 = 0.2~{\rm GeV}^2\;; \quad m_0^2 = 0.8~{\rm GeV}^2\ .
\end{eqnarray}
Uncertainties in these parameters do not significantly
change our discussion below.
For the nucleon mass $m$ and the pion decay constant $f_\pi$, we
use their physical values, $m=0.94$ GeV and $f_\pi = 0.093$ GeV.
In Figure~\ref{fig2}~(a), we plot the Borel curves obtained from
Eqs.~(\ref{bksum}), (\ref{shsum}) and (\ref{hungsum}).
The thick
solid line is from Eq.~(\ref{hungsum}), the thick dot-dashed line from
Eq.~(\ref{shsum}), and the thick dashed line from Eq.~(\ref{bksum}).
In all three curves, we use $S_\pi=2.07$ GeV$^2$, corresponding to
the mass squared of the Roper resonance. To check
the sensitivity to $S_\pi$, we have increased the continuum threshold
by 0.5 GeV$^2$ and plotted the results in the same figure as the
corresponding thin lines.
In extracting physical values, one has to fit the curves within
the appropriate Borel window using the function $c + b M^2$.
The unknown single-pole term $b$ is represented by the slope of
each Borel curve, and the intersection of the best-fit line with
the vertical axis gives the value of $c$. Figure~\ref{fig2}~(b)
shows the best-fit lines within the Borel window
$0.8 \le M^2 \le 1.2 $ GeV$^2$, chosen
following the criteria mentioned above. But as the Borel curves
are almost linear around $M^2 \sim 1 $ GeV$^2$, the qualitative aspect
of our results does not change significantly even if we use a slightly
different window.
The $\gamma_5 \sigma_{\mu\nu} q^\mu p^\nu$ sum rule yields
$c \sim 0.00308$ GeV$^6$.
To determine $g_{\pi N}$,
the unknown parameter $\lambda_N$ needs to be eliminated by
combining with the nucleon odd sum rule~\cite{hung}.
According to the analysis in Ref.~\cite{hung},
this sum rule yields $g_{\pi N} \sim 10 $, relatively close to
its empirical value.
As can be seen from the thin solid curve, which is almost indistinguishable
from the thick solid curve
in Fig.~\ref{fig2}~(b), this result
is not sensitive to the continuum threshold, $S_\pi$.
Also note from Table~\ref{tab} that the unknown single-pole term
represented by $b$ is relatively small in this sum rule.
The result from the $i\gamma_5$ sum rule is $c \sim -0.0003 $ GeV$^6$,
obtained by linearly fitting the thick dot-dashed curve
in Fig.~\ref{fig2}~(a).
Even though the thin dot-dashed curve is almost indistinguishable from the
thick dot-dashed curve, the best-fit value for $c$
with $S_\pi =2.57 $ GeV$^2$ is about 50 \% smaller than the one with
$S_\pi =2.07 $ GeV$^2$. This is because the total OPE strength of
this sum rule is very small.
The negative value of $c$ indicates that $g_{\pi N}$ is negative.
Moreover, its magnitude is about a factor of ten smaller than the
corresponding value from the $\gamma_5 \sigma_{\mu\nu} q^\mu p^\nu$ sum rule.
When this result is combined with the nucleon odd sum rule,
the extracted $\pi NN$ coupling would be much smaller than
its empirical value, and therefore it cannot be accepted as a reasonable
prediction.
As we discussed in Section~\ref{sec:qcdsr},
the problem might be due to the kinematical condition, Eq.~(\ref{cond}).
Though we introduced this condition in order to achieve
independence from the coupling scheme employed, it
inevitably combines the two independent sum rules $\Pi_1$ and $\Pi_2$
in Eq.~(\ref{expansion}), which reduces the OPE strength.
This reduction makes the $i\gamma_5$ sum rule less reliable because of the
cancellation of the main terms.
Nevertheless, this study shows that one can get a totally different
result depending on how the sum rule is constructed.
For the $i\gamma_5\not\!p$ sum rule,
the Borel curve around $M^2 \sim 1 $ GeV$^2$ is almost a linear
function of $M^2$. By linearly fitting the thick dashed curve
($S_\pi= 2.07$ GeV$^2$),
we get $c\sim -0.00022$ GeV$^6$.
But using $S_\pi= 2.57$ GeV$^2$, we obtain
$c\sim -0.0023$ GeV$^6$, a factor
of ten larger in magnitude.
Thus, there is a strong sensitivity to $S_\pi$, which changes
the result substantially. Again $c$ is
negative in this sum rule, indicating that $g_{\pi N}$ is negative.
The sign of this result, however, depends on the chosen Borel window:
restricting the window to smaller Borel masses, the extracted
$c$ becomes positive, though small in magnitude.
The slope of the Borel curve is also large, indicating a large
contribution from the undetermined single-pole terms.
The thin dashed curve (for $S_\pi = 2.57$ GeV$^2$)
in Fig.~\ref{fig2}~(b) is steeper than the
thick dashed curve (for $S_\pi = 2.07$ GeV$^2$).
In a sum rule, a larger continuum threshold usually suppresses the
continuum contribution further.
Since an even steeper curve results as the continuum is further suppressed,
this sum rule must contain very large unknown single-pole terms.
This raises a very important issue that should be properly addressed
in the construction of a sum rule:
{\it the unknown single-pole terms can be small or large
depending on the specific sum rule one considers.}
From the three results, we have shown that the extracted parameter, here $c$,
can be totally different depending on how the sum rule is constructed;
even its sign is not well fixed.
Certainly the
$\gamma_5 \sigma_{\mu\nu} q^\mu p^\nu$ sum rule has nice features,
such as small contributions from the continuum and from the unknown single
pole, and when it is combined with the nucleon odd sum rule, it provides
a $g_{\pi N}$ reasonably close to its empirical value~\cite{hung}.
But the other sum rules do not provide a reasonable or stable
result.
It is not clear whether this is due to the lack of convergence in the OPE or
to limitations of the sum rule method itself. To answer such
questions, it would be useful to analyze the OPE side further.
However, our analysis raises the issue of whether a sum rule
based on one specific Dirac structure is reliable.
Still, regarding the sensitivity to $S_\pi $ and the unknown single-pole
contribution, we can provide a reasonable explanation based on the
effective-model formalism developed in Section~\ref{sec:effective}.
Results from the two sum rules for the $i\gamma_5$ and
$\gamma_5 \sigma_{\mu\nu} q^\mu p^\nu$ structures
share similar properties.
As can be seen from Table~\ref{tab},
for the $i\gamma_5$ sum rule, the extracted $c$ is $-0.00033$ GeV$^6$
when $S_\pi =2.07$
GeV$^2$ is used, and $c =-0.00016$ GeV$^6$ for $S_\pi =2.57$ GeV$^2$.
The difference, 0.00017 GeV$^6$, is close to
the corresponding difference in the $\gamma_5 \sigma_{\mu\nu} q^\mu p^\nu$
case. Furthermore, the magnitude of $b$ is relatively close in the
two sum rules.
These common behaviors of the two sum rules
are expected because, as we briefly mentioned in
Section~\ref{sec:effective},
their phenomenological structures for the
continuum and the unknown single poles are almost the same, except for
the possible small term containing the derivative of the pion form factor.
[See Eq.~(\ref{shphen1}).]
The similar slope and the similar contribution from $S_\pi$ are
actually related, as can be seen from Eq.~(\ref{spec2}).
In Eq.~(\ref{spec2}), the terms corresponding to the unknown single
poles have the same relative sign between the positive- and
negative-parity resonances as the terms corresponding to
the continuum. If we assume that the sign of $\lambda_+ g_{\pi N N_+}$
is opposite to that of $\lambda_- g_{\pi N N_-}$, then there
is a cancellation between the two resonances.
Thus, with this sign assignment, we expect both the
unknown single pole and the ``step-like'' continuum
to contribute little to these sum rules. This is what Fig.~\ref{fig2}
indicates.
As Eq.~(\ref{spec2}) is independent of the coupling scheme,
this explanation is valid even in the PV case.
The sign assignment, within the PS coupling scheme, also explains
the large slope and the strong sensitivity to $S_\pi$ of
the $i\gamma_5\not\!p$ sum rule.
In Eq.~(\ref{spec1}) with this
sign assignment, the
negative- and positive-parity
resonances add up both for the undetermined single pole and for the
continuum, yielding large contributions to the two.
This explanation for the $i\gamma_5\not\!p$ sum rule changes
in the PV coupling scheme.
For the undetermined single pole,
as can be seen from Eq.~(\ref{vspec1}), resonances of different
parities cancel each other for the $i\gamma_5\not\!p$ case as well,
under the sign assignment introduced above.
However, there is an additional single pole coming from $N \rightarrow N$
which could explain the large slope. Its contribution to
$A$ in Eq.~(\ref{bksum}) can be calculated to be $-1/2m$.
In terms of magnitude,
it contributes 50\% of the LHS at $M^2\sim 1$ GeV$^2$, with the opposite sign
from the first term. Since $c$ is negative, as we showed in
Table~\ref{tab}, $g_{\pi N}$ is also negative. Since $b \sim g_{\pi N} A$,
the unknown single-pole term is positive, which can explain the
large and positive slope in this sum rule.
As for the continuum,
Eq.~(\ref{vspec2}) shows that
there are other contributions,
associated with $N^* \rightarrow N^*$, whose magnitudes
cannot be estimated.
Even though we cannot say that
the large continuum comes only from adding up the positive- and
negative-parity resonances, this does not contradict the
sign assignment for $g_{\pi NN_+} \lambda_+$ and
$g_{\pi NN_-} \lambda_-$.
Note, however, that the negative sign of $c$ is not firmly established in the
sum rule for the $i\gamma_5\not\!p$ structure, as $c$
can be positive for a different choice of Borel window.
In that case, the positive and large slope of the Borel curve cannot
be well explained within the effective model.
Nevertheless, our study in this work, though specific to
the two-point nucleon correlation function with a pion,
raises important issues in
applying QCD sum rules to the calculation of various physical quantities.
Most QCD sum rule calculations are performed
for a specific Dirac structure without justifying the use of
that structure. As we have shown in this work, a sum rule result
can depend strongly on the specific Dirac structure
one considers. This dependence is driven by the
way the sum rule is constructed and by differences in
the continuum contributions or the unknown single-pole terms:
the continuum and the unknown single-pole terms are large in
some cases and small in others.
\section{Summary}
\label{sec:summary}
In this work, we have presented three different sum rules for
the two-point correlation function with a pion,
$i\int d^4x e^{iq\cdot x} \langle 0| T J_N(x) {\bar J}_N(0)|\pi(p)\rangle$,
beyond the soft-pion limit. Independence from the PS and PV coupling schemes
has been imposed in the construction of the sum rules.
We have corrected an error in the previous sum rule of Ref.~\cite{krippa}
and found that this sum rule contains
large contributions from the unknown single pole, $b$,
and from the continuum.
On the other hand, the sum rules for the
$i\gamma_5$ and
$\gamma_5 \sigma_{\mu\nu} q^\mu p^\nu$ structures share similar
properties, namely, relatively similar contributions from the continuum
and the unknown single pole.
By making specific models for the higher resonances, we have explained
how the latter two sum rules differ from
the $i\gamma_5\not\!p$ sum rule.
Within the PS coupling scheme, the difference can be well explained by
the cancellation or addition of
the positive- and negative-parity resonances among the higher mass states.
Within the PV coupling scheme, the large slope of the Borel curve
in the $i\gamma_5\not\!p$ sum rule can be attributed to the single
pole coming from the $N\rightarrow N$ transition, although
this explanation is limited to the case of a negative $g_{\pi N}$.
The values of $c$ extracted from the $i\gamma_5$ and
$\gamma_5 \sigma_{\mu\nu} q^\mu p^\nu$ sum rules are different.
For the $i\gamma_5$ sum rule, in order to eliminate the coupling-scheme
dependence, we need to impose the on-mass-shell condition before
matching the OPE and phenomenological correlators. A significant
cancellation then occurs, which makes the $i \gamma_5 $ sum rule less
reliable.
We have stressed that care must be taken in the construction of a sum rule.
\acknowledgments
This work is supported in part by the
Grant-in-Aid for JSPS fellow, and
the Grant-in-Aid for scientific
research (C) (2) 08640356
of the Ministry of Education, Science, Sports and Culture of Japan.
The work of H. Kim is also supported by Research Fellowships of
the Japan Society for the Promotion of Science.
The work of S. H. Lee is supported by KOSEF through grant no. 971-0204-017-2
and 976-0200-002-2 and by the
Korean Ministry of Education through grant no. 98-015-D00061.
\section*{Introduction}
This paper grew out of attempts to understand better the
homological mirror symmetry for elliptic curves.
The general homological mirror conjecture formulated by
M.~Kontsevich in \cite{Kon} asserts that the derived category
of coherent sheaves on a complex variety is equivalent to (the
derived category of) the Fukaya category of the mirror dual symplectic
manifold.
This equivalence was proved in \cite{PZ} for the case of elliptic curves
and dual symplectic tori.
However, the proof presented in \cite{PZ} is rather computational and does not
give a conceptual construction of a functor between the two categories.
In the present paper we fill this gap by providing such a
construction. We also get a glimpse of what happens in the
higher-dimensional case.
The idea is to use a version of the Fourier transform for families of real tori
which generalizes the well-known correspondence between smooth functions
on a circle and rapidly decreasing sequences of numbers (each function
corresponds to its Fourier coefficients). On the other hand, this
transform can be considered as a $C^{\infty}$-version of the Fourier-Mukai
transform.
Roughly speaking, given a symplectic manifold $M$ with a fibration
by Lagrangian tori, one introduces a natural complex structure on the dual
fibration $M\dl$. We say that $M\dl$ is mirror dual to $M$.
Then our transform produces a holomorphic vector bundle on $M\dl$
starting from a Lagrangian submanifold $L$ of $M$ transversal to all fibers and
a local system on $L$. We prove that the Dolbeault complex of this
holomorphic vector bundle is isomorphic to some modification of
the de Rham complex of the local system on $L$.
In the case of an elliptic curve, we check that all holomorphic vector bundles
on $M\dl$ are obtained in this way. Also we construct a quasi-isomorphism
of our modified de Rham complex with the complex
that computes morphisms in the Fukaya category between $L$ and some
fixed Lagrangian submanifold (which corresponds to the trivial line
bundle on $M\dl$). One can construct a
similar quasi-isomorphism for an arbitrary pair of Lagrangian submanifolds in
$M$ (which
are transversal to all fibers). The most natural way to do this would
be to use tensor structures on our categories. The slight
problem is that we are really dealing with dg-categories rather than
with usual categories, and the axiomatics of tensor dg-categories
does not seem to be understood well enough.
Hence, we restrict ourselves to giving a brief sketch of how these
structures look in our case in Sections \ref{RemTen}, \ref{RemTenAbs}, and
\ref{RemTenRel}.
It seems that, to compare the Fukaya complex with our modified de Rham
complex in the higher-dimensional case, we need a generalization
of Morse theory for closed $1$-forms (cf. \cite{N}, \cite{Pa}) together
with a version of the result of Fukaya and Oh in \cite{FO} comparing
the Witten complex with the Floer complex.
The study of mirror symmetry via Lagrangian fibrations originates from
the conjecture of \cite{SYZ} that all mirror dual pairs of Calabi-Yau
manifolds are equipped with dual special Lagrangian torus fibrations.
The geometry of such fibrations and their compactifications is studied
in \cite{G1}, \cite{G2} and \cite{H}. In particular, the construction
of a complex structure on the dual fibration can be found in these papers.
On the other hand, K.~Fukaya explains in \cite{F-ab} how to construct
a complex structure (locally) on the moduli space of Lagrangian submanifolds
(equipped with rank $1$ local systems)
of a symplectic manifold $M$, where Lagrangian submanifolds are considered
up to Hamiltonian diffeomorphisms of $M$. Presumably these two constructions
are compatible, and one can hope that for some class of Lagrangian submanifolds
the speciality condition picks a unique representative in each orbit of
the group of Hamiltonian diffeomorphisms. Our point of view is closer to that
of Fukaya: we do not equip our symplectic manifold with a complex structure,
so we cannot consider special geometry.
However, we do not consider the problem of
compactifying the dual fibration and we do not know how to deal with
Lagrangian submanifolds which intersect some fibers non-transversally.
So it may well happen that special geometry will come up in relation
with one of these problems.
The simplest higher-dimensional case in which our construction
can be applied is that of a (homogeneous) symplectic torus
equipped with a Lagrangian fibration by
\select{affine} Lagrangian subtori. The corresponding construction of the
mirror complex torus and of holomorphic bundles associated with
affine Lagrangian subtori intersecting fibers transversally
coincides with the one given by Fukaya in \cite{F-ab}.
However, even in this case the homological mirror conjecture
still seems to be far from reach (for dimensions greater than $2$).
Note that the construction of the mirror dual complex torus to a
given (homogeneous) symplectic torus $T$ requires a choice of a
linear Lagrangian subtorus in $T$. For different choices we obtain
different complex tori. The homological mirror conjecture would
imply that the derived categories on all these complex tori
are equivalent (to be more precise, some of these categories should be
twisted by a class in $H^2(T\dl,\mathcal O^*)$). This is indeed the case
and follows from the main theorem of \cite{P}. The corresponding
equivalences are generalizations of the Fourier-Mukai transform.
While we were preparing this paper, N.~C.~Leung and E.~Zaslow informed us that
they had found the same construction of a holomorphic bundle coming
from a Lagrangian submanifold.
\subsection{Organization}
Section 1 contains the basic definitions and a sketch of the results of this
paper.
In Section 2, we deal with a single real torus. We define the Poincar\'e bundle, which
lives on the product of our torus
and the dual torus, and then use it to define a modified Fourier transform,
which in this simple case is just the
correspondence between sky-scraper sheaves on a torus and unitary local
systems on the dual torus.
Section 3 contains a generalization of these results to families of tori. We describe
the holomorphic sections of a vector bundle on the
complex side in terms of rapidly decreasing
sections of some bundle corresponding to its ``Fourier transform'' (notice that
not every holomorphic vector
bundle has such a ``Fourier transform''). Here we also analyze the case of an elliptic
curve.
Section 4 is devoted to interpreting the Floer cohomologies in our terms
(i.e., using the spaces of rapidly
decreasing sections of some bundles). This result is valid for elliptic curves
only.
\subsection{Notation}
We work in the category of real $C^\infty$-manifolds. The words ``a bundle on a
manifold $X$'' mean a
(finite-dimensional) $C^\infty$-vector bundle over $\C$ on $X$.
We usually identify a vector bundle with the corresponding sheaf of $C^\infty$-
sections.
For a manifold $X$, $T_X\to X$ (resp. $T\dl_X\to X$) is the real
tangent (resp. cotangent) bundle, $\Vect(X)$ (resp. $\Omega^1(X)$) is the space of
complex vector fields
(resp. complex differential forms). If $X$ carries a complex structure,
$T^{0,1}_X\subset T_X\otimes\C$ stands for the subbundle of anti-holomorphic vector
fields.
$\Diff(X)$ is the algebra of differential operators on $X$ with
$C^\infty(X)\otimes\C$-coefficients.
Let $F$ be a vector bundle on a manifold $X$, $\nabla_F:F\to
F\otimes\Omega^1(X)$ a connection.
We define the \select{curvature} $\curv\nabla_F\in\Omega^2(X)\otimes\BOX{End}(F)$ of
$\nabla_F$ by the usual formula:
$\langle(\curv\nabla_F),\tau_1\wedge\tau_2\rangle=
(\nabla_F)_{[\tau_1,\tau_2]}-[(\nabla_F)_{\tau_1},(\nabla_F)_{\tau_2}]$ for any
$\tau_1,\tau_2\in\Vect(X)$. Here
$\langle\al,\al\rangle$ stands for the
natural pairing $\bigwedge^2\Omega^1(X)\times\bigwedge^2\Vect(X)\to
C^\infty(X)$ defined by
$\langle\mu_1\wedge\mu_2,\tau_1\wedge\tau_2\rangle=
\langle\mu_1,\tau_2\rangle\langle\mu_2,\tau_1\rangle-
\langle\mu_1,\tau_1\rangle\langle\mu_2,\tau_2\rangle$.
A \select{local system} ${\cal L}$ on a manifold $X$ is a vector bundle $F_{\cal L}$
together with a
connection $\nabla_{\cal L}$ in $F_{\cal L}$ such that $\curv\nabla_{\cal L}=0$ (in other
words, $\nabla_{{\cal L}}$ is \select{flat}).
The fiber ${\cal L}_x$ of a local system ${\cal L}$ over $x\in X$ equals the fiber
$(F_{\cal L})_x$.
For any $x\in X$, a local system ${\cal L}$ defines the \select{monodromy}
$\mon({\cal L},x):\pi_1(X,x)\to GL({\cal L}_x)$.
We say that a local system is \select{unitary} if for any $x\in X$, there is
a Hermitian form on ${\cal L}_x$ such that $\mon({\cal L},x)(\gamma)$ are unitary
for all $\gamma\in\pi_1(X,x)$ (it is enough to check the condition for one
point on each connected component of $X$).
For a manifold $X$ and $\tau\in\Omega^1(X)$, we denote by $O_X(\tau)$ the
trivial line bundle together
with the
connection $\nabla=d+\tau$. In particular, $O_X:=O_X(0)$ stands for the trivial
local system on $X$.
\section{Main results}
\label{MainSec}
\subsection{}
Let $(M,\omega)$ be a symplectic manifold, $p:M\to B$ a surjective smooth map
with Lagrangian fibers.
Suppose that the fibers
of $M\to B$ are isomorphic to a torus $(\R/\Z)^n$. Fix a Lagrangian section
$0_M:B\to M$. We call such a collection
$(p:M\to B,\omega,0_M)$ (or, less formally, the map $p:M\to B$) a
\select{symplectic family of tori}.
The symplectic form induces a natural flat connection on $T_B$
(using the canonical isomorphism
$R^1p_*\R=T_B$) and an identification
$M=T\dl_B/\Gamma$, where $\Gamma$ is a horizontal lattice in $T\dl_B$ ($\Gamma$
is dual to
$\Gamma\dl:=R^1p_*\Z\subset R^1p_*\R=T_B$). This identification agrees with the
symplectic structure, so $\Gamma\subset T\dl_B$ is Lagrangian.
Hence the connection on $T_B$ is symmetric (in
the sense of \cite{Ml}). Recall that
a connection $\nabla$ on $T_B$ is called \select{symmetric} if
$\nabla_{\tau_1}(\tau_2)-\nabla_{\tau_2}(\tau_1)=
[\tau_1,\tau_2]$ for any $\tau_1,\tau_2\in\Vect(B)$.
\begin{Rem} In particular, we see that $M\to B$ is locally (on $B$)
isomorphic to $(V/\Gamma)\times U$ for some vector space $V$, a lattice $\Gamma\subset V$,
and an open subset $U\subset V\dl$.
Besides, we see that the connection on $T_B$ induces a natural flat connection
on $T_M$.
\end{Rem}
Consider the family of dual tori $M\dl:=T_B/\Gamma\dl$. The connection on $T_B$
yields a natural
isomorphism $T_{M\dl}=(p\dl)^*T_B\oplus (p\dl)^*T_B$ such that the differential
of $p\dl:M\dl\to B$ coincides with
the first projection. So one can define a complex structure on $M\dl$ using the
operator
$J:T_{M\dl}\to T_{M\dl}:(\xi_1,\xi_2)\mapsto(-\xi_2,\xi_1)$. The complex
manifold $M\dl$ is called
the \select{mirror dual} of $M$.
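To fix the conventions, let us record the simplest local example; everything in it is an immediate unwinding of the definitions above (up to the sign conventions chosen here).
\begin{Exm} Suppose that $B$ is an open subset of $\R^n$ with affine coordinates
$x_1,\dots,x_n$, $M=B\times(\R/\Z)^n$ with fiber coordinates $y_1,\dots,y_n$, and
$\omega=\sum_j dx_j\wedge dy_j$. Then $M\dl=B\times(\R/\Z)^n$ with the dual fiber
coordinates $y\dl_1,\dots,y\dl_n$, and the operator $J$ maps
$\partial_{x_j}\mapsto\partial_{y\dl_j}$, $\partial_{y\dl_j}\mapsto-\partial_{x_j}$.
Hence $T^{0,1}_{M\dl}$ is spanned by the vector fields
$\partial_{x_j}+\sqrt{-1}\,\partial_{y\dl_j}$, and
$z_j=x_j+\sqrt{-1}\,y\dl_j$ are local holomorphic coordinates on $M\dl$.
\end{Exm}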
For any torus $X=V/\Gamma$, the dual torus $X\dl=V\dl/\Gamma\dl$ can be
interpreted as a moduli space of
one-dimensional unitary local systems on $X$. So there is a natural universal
$X\dl$-family of local systems
on $X$. We can interpret this family as a bundle with a connection on $X\times
X\dl$ (see Section \ref{Poin} for details).
If we apply this construction to the fibers of $p$, we get a canonical bundle ${\cal P}$
on $M\times_BM\dl$ together with
a connection $\nabla_{\cal P}$ on ${\cal P}$ ($\nabla_{\cal P}$ is not flat).
Suppose we are given a Lagrangian submanifold $i:L\hookrightarrow M$ which is
transversal to fibers of $p$, and a local
system ${\cal L}$ on $L$. We also assume that $p|_L:L\to B$ is proper. Define the
\select{Fourier transform} of $(L,{\cal L})$ by
the formula
\begin{equation}
\Four(L,{\cal L}):=(p_{M\dl})_*
(((i\times id)^*{\cal P})\otimes((p_L)^*{\cal L}))
\label{relFour}
\end{equation}
Here $p_{M\dl}:L\times_B M\dl\to M\dl$, $(i\times id):L\times_BM\dl\to
M\times_BM\dl$, and
$p_L:L\times_B M\dl\to L$ are the natural maps. The map $p_{M\dl}$ is a proper
unramified covering, so
$\Four(L,{\cal L})$ is a bundle with connection on $M\dl$.
\begin{Th} (i) The $\overline\partial$-component of the connection on
$\Four(L,{\cal L})$ is flat
(so $\Four(L,{\cal L})$ can be considered as a holomorphic vector bundle on $M\dl$);
(ii) If $B\simeq(\R/\Z)$, any holomorphic vector bundle on $M\dl$ is isomorphic
to $\Four(L,{\cal L})$ for
some $(L,{\cal L})$.
\label{dflat}
\end{Th}
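The simplest illustration of part (i) of the theorem is the zero section itself (this unwinds the definition of $\Four$ together with the defining properties of the Poincar\'e bundle below).
\begin{Exm} Take $L=0_M(B)$ with ${\cal L}=O_L$. Then $p_{M\dl}:L\times_BM\dl\to M\dl$
is an isomorphism, and $\Four(L,O_L)$ is the restriction of $({\cal P},\nabla_{\cal P})$
to $0_M(B)\times_BM\dl$. On each fiber of $p\dl$ this connection is flat with trivial
monodromy, and one checks that $\Four(0_M(B),O_B)\simeq O_{M\dl}$ as a holomorphic
line bundle; this is the trivial line bundle mentioned in the introduction.
\end{Exm}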
\begin{Rem} There is an analogue of the above theorem for the case when
the fibration does not have a global Lagrangian section. In this case,
the dual complex manifold $M\dl$ carries a canonical cohomology class
$e\in H^2(M\dl,O_{M\dl}^*)$, hence one has the corresponding twisted category
of coherent sheaves (cf. \cite{G}). The analogue of $\Four(L,{\cal L})$ will
be an object in this twisted category. We will consider this generalization
in more detail elsewhere. Also it would be interesting to find an
analogue of our construction for Lagrangian foliations. In the case of a
torus, this should lead to the functor considered by Fukaya in \cite{F-nc}.
\end{Rem}
\subsection{}
Let $(L,{\cal L})$ be as before.
Consider the natural map $u:T\dl_B\to M$ (the ``fiberwise universal cover''). Set
${\widetilde L}:=u^{-1}(L)$.
Denote by $u_L^*{\cal L}$ the pull-back of ${\cal L}$ to ${\widetilde L}$ and by $\tau$ the restriction
of the canonical $1$-form from
$T\dl_B$ to ${\widetilde L}$. Since ${\widetilde L}\subset T\dl_B$ is Lagrangian, $\tau$ is
closed, so by adding
$-2\pi\tau$ to the connection on $u_L^*{\cal L}$ we get a new local system
$\tcL:=(u_L^*{\cal L})\otimes O_{{\widetilde L}}(-2\pi\tau)$.
Denote by $C^\infty(\tcL)$ the space of $C^\infty$-sections of $\tcL$.
Since ${\widetilde L}\to B$ is an unramified covering,
we have an embedding $\Diff(B)\to\Diff({\widetilde L})$. Set
\begin{equation}
{\cal S}(\tcL):=\{s\in C^\infty(\tcL)|Ds\mbox{ is rapidly decreasing for any
}D\in\Diff(B)\}
\end{equation}
Here a section $s$ of $\tcL$ is
called \select{rapidly decreasing} if $\lim_{||g||\to\infty,
g\in\Gamma_x}s((x,\tau+g))||g||^k=0$
for any $(x,\tau)\in L\times_M T\dl_B={\widetilde L}$ and $k>0$. Here $\Gamma_x$ stands for the
fiber of $\Gamma\subset T\dl_B$ over the image of $x\in L$ in $B$. Since $s((x,\tau+g))\in
\tcL_{(x,\tau+g)}={\cal L}_{(x,\tau)}$, the definition makes sense. Besides, it does not
depend on the choice of a norm
$||\al||$ on $T\dl B$. Clearly, ${\cal S}(\tcL)$ is a $\Diff(B)$-module.
\begin{Th} The de Rham complex $\DR(\tcL)$ of the $\Diff(B)$-module
${\cal S}(\tcL)$ is isomorphic
to the Dolbeault complex of $\Four(L,{\cal L})$.
\label{ThdR}
\end{Th}
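Here we use the standard notion of the de Rham complex of a $\Diff(B)$-module: for a $\Diff(B)$-module ${\cal S}$, it is the complex
\begin{equation}
\DR({\cal S}):\qquad {\cal S}\to{\cal S}\otimes_{C^\infty(B)}\Omega^1(B)\to
{\cal S}\otimes_{C^\infty(B)}\Omega^2(B)\to\dots
\end{equation}
with the differential induced by the action of vector fields. In particular, for $B\simeq\R/\Z$, the complex $\DR(\tcL)$ is the two-term complex ${\cal S}(\tcL)\to{\cal S}(\tcL)\otimes\Omega^1(B)$; complexes of this shape appear in Section \ref{FukSec}.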
\subsection{}
\label{FukSec}
Suppose $B\simeq\R/\Z$. Fix an orientation on $B$.
Let $L,{\cal L}$ be as before. Moreover, we suppose that ${\cal L}$ is \select{quasi-unitary},
that is, for any $x\in L$ all eigenvalues of $\mon({\cal L},x)$ are of absolute value $1$
(it follows from Lemma \ref{QU} that this condition
is not too restrictive). We also assume that $L$ meets the zero section
$0_M(B)\subset M$
transversally.
As before, ${\widetilde L}=u^{-1}(L)\subset T\dl_B$. Suppose $\tilde c\in{\widetilde L}$ lies on the zero section
$0_{T\dl_B}(B)\subset T\dl_B$. Then in a neighborhood of $\tilde c$,
${\widetilde L}\subset T\dl_B$ is the graph of some $\mu\in\Omega^1(B)$, $d\mu=0$. Denote by $b\in B$ the image
of $\tilde c\in{\widetilde L}$. In a neighborhood of $b$, $\mu=df$ for some $f\in C^\infty(B)$. We say that $\tilde c$
is \select{positive} (resp. \select{negative}) if $f$ has a local minimum (resp. maximum) at $b$.
Denote by $\{\tilde c_k^+\}\subset{\widetilde L}$ (resp. $\{\tilde c_l^-\}\subset{\widetilde L}$) the set of all positive (resp. negative)
points of intersection with the zero section.
Let $\gamma\subset{\widetilde L}$ be an arc with endpoints $\tilde c_k^+$ and $\tilde c_l^-$. We say that $\gamma$
is \select{simple} if it does not intersect the zero section.
Denote by $M(\gamma):\tcL_{\tilde c_k^+}\to\tcL_{\tilde c_l^-}$ the monodromy of $\tcL$ along
$\gamma$ (the monodromy is the product of the monodromy of $u_L^*{\cal L}$ and $\exp(2\pi A)$,
where $A$ is the oriented area of the domain bounded by $\gamma$ and the zero section).
Set $d(\gamma)=M(\gamma)$ if the direction from $\tilde c_k^+$ to $\tilde c_l^-$ along
$\gamma$ agrees with the orientation of $B$, and $d(\gamma)=-M(\gamma)$ otherwise.
Set $F^0:=\oplus_k\tcL_{\tilde c_k^+}$, $F^1:=\oplus_l\tcL_{\tilde c_l^-}$. Consider the operator
$d:F^0\to F^1$ whose ``matrix elements''
$d_{kl}:\tcL_{\tilde c_k^+}\to\tcL_{\tilde c_l^-}$ are given by $d_{kl}=\sum_\gamma d(\gamma)$. Here the sum is taken
over all simple arcs $\gamma$
with endpoints $\tilde c_k^+$, $\tilde c_l^-$ (there are at most two of them).
\begin{Rem} Since $L$ meets the fibers of $M\to B$ transversally, there is a
canonical choice of a lifting of
$L\to M$ to $L\to\widetilde{GrL}(T_M)$. Here $GrL(T_M)\to M$ is the fibration whose fiber
over $m\in M$ is the manifold
of Lagrangian subspaces in $T_M(m)$ (the \select{Lagrangian Grassmannian} of
$T_M(m)$), $\widetilde{GrL}(T_M)\to GrL(T_M)$ is its fiberwise universal cover. This implies that the corresponding
Floer cohomologies are
equipped with a natural $\Z$-grading.
Since $L$ is also transversal
to the zero section $0_M(B)\subset M$, we may compute the space (or, more precisely, the complex) of morphisms
for the pair $L$, $0_M(B)$ in the Fukaya category. It is easy to see
that the complex coincides with
${\cal F}({\cal L}):F^0\to F^1$.
\end{Rem}
\begin{Th}
The complex ${\cal F}({\cal L})$ is quasi-isomorphic to $\DR(\tcL)$.
\label{ThFuk}
\end{Th}
{\it Construction of a quasi-isomorphism ${\cal F}({\cal L})\to\DR(\tcL)$}.
Consider distributions with values in $\tcL$ that
are rapidly decreasing smooth sections of $\tcL$ outside some compact set.
Let ${\cal S}(\tcL)^D$ be the space of such distributions.
Denote by $\DR(\tcL)^D$ the de Rham
complex associated with the $\Diff(B)$-module ${\cal S}(\tcL)^D$. The inclusion
${\cal S}(\tcL)\hookrightarrow{\cal S}(\tcL)^D$
induces a quasi-isomorphism $\DR(\tcL)\to\DR(\tcL)^D$. Now let us define a
morphism ${\cal F}({\cal L})\to\DR(\tcL)^D$.
For $\tilde c_k^+$, denote by $C_k^+\subset{\widetilde L}$ the maximal (open) subinterval
such that
$\tilde c_k^+\in C_k^+$ and $\tilde c_l^-\notin C_k^+$ for any $l$ ($C_k^+$ may be
infinite).
The morphism $F^0\to{\cal S}(\tcL)^D$
sends $v\in\tcL_{\tilde c_k^+}$ to $f$ such that $f$ vanishes outside $C_k^+$,
$f$ is horizontal on $C_k^+$,
and $f(\tilde c_k^+)=v$. The morphism $F^1\to{\cal S}(\tcL)^D\otimes\Omega^1(B)$
sends $v\in\tcL_{\tilde c_l^-}$ to
$v\otimes\delta_{\tilde c_l^-}$. Here $\delta_{\tilde c_l^-}$ is the delta-function at $\tilde c_l^-$.
\begin{Rem} All this machinery works in a more general situation. Namely,
we can consider a
symplectic family of tori $M\to B$ together with a closed purely imaginary horizontal
$2$-form $\omega^{I}$. Then we can work with the category of submanifolds
$L\subset M$ together with a bundle ${\cal L}$ on $L$ and a connection $\nabla_{{\cal L}}$
such that $L\to B$ is a finite unramified covering and
$\curv\nabla_{\cal L}=2\pi(\omega+\omega^I)|_L$.
\end{Rem}
\subsection{}
\label{RemTen}
The pairs $(L,{\cal L})$ of the kind considered above form a category. One can define the (fiberwise)
convolution product in this
category using the group structure on the fibers. However, the support of the
convolution product does not
need to be a smooth Lagrangian submanifold, so to have a tensor category, we
have to consider a slightly
different kind of object (see Section \ref{ImLag}).
After these precautions, we have a tensor category $\Sky(M/B)$.
One easily sees that there is a canonical (i.e., functorial) choice
of the dual object $c\dl$ for any $c\in\Sky(M/B)$. For any $c\in\Sky(M/B)$, we
have the de Rham complex
$\DR(c)$ (defined in a way similar to what we do for $(L,{\cal L})$). Now we can use
these data
to define another ``category'' $\tSky(M/B)$:
we set $Ob(\tSky(M/B)):=Ob(\Sky(M/B))$,
$\BOX{Hom}_{\tSky(M/B)}(c_1,c_2):=\DR(c_2\star c_1\dl)$, where $\star$
stands for the convolution product. It is not a ``plain'' category, but a
``dg-category''. Similarly,
the category of holomorphic vector bundles on $M\dl$ has a structure of a tensor
dg-category
(the morphism complex from $L_1$ to $L_2$ is the Dolbeault complex of
$L_2\otimes L_1\dl$).
Then the isomorphism of Theorem \ref{ThdR} induces a fully
faithful tensor functor between tensor dg-categories.
\section{Fourier transform on tori}
\subsection{Poincar\'e bundle}
\label{Poin}
Let $X$ be a torus (that is, a compact commutative real Lie group).
Then $X=V/\Gamma$ for $V:=H_1(X,\R)$, $\Gamma:=H_1(X,\Z)$.
The \select{dual torus} is $X\dl:=V\dl/\Gamma\dl$
($V\dl:=\BOX{Hom}(V,\R)=H^1(X,\R)$, $\Gamma\dl:=\BOX{Hom}(\Gamma,\Z)=H^1(X,\Z)$).
\begin{Df} A \select{Poincar\'e bundle} for $X$ is a line bundle ${\cal P}$ on
$X\times X\dl$
together with a connection $\nabla_{\cal P}$ such that the following conditions are
satisfied:
$(i)$ $\nabla_{\cal P}$ is flat on $X\times\{x\dl\}$, and the monodromy is
$\pi_1(X)=H_1(X,\Z)\to U(1):
\gamma\mapsto \exp(2\pi \sqrt{-1}\langle x\dl,\gamma\rangle)$
(we denote by $\langle\al,\al\rangle$ not only the natural pairing
$V\dl\times V\to\R$, but also the induced pairings $\Gamma\dl\times
V/\Gamma\to\R/\Z$
and $V\dl/\Gamma\dl\times\Gamma\to\R/\Z$);
$(i\dl)$ $\nabla_{\cal P}$ is flat on $\{x\}\times X\dl$, and the monodromy is
$\pi_1(X\dl)=H^1(X,\Z)\to U(1):\gamma\dl\mapsto \exp(-2\pi \sqrt{-1}
\langle \gamma\dl,x\rangle)$;
$(ii)$ For any $(x,x\dl)\in X\times X\dl $, $\delta v\in V=T_xX$,
$\delta v\dl\in V\dl=T_{x\dl} X\dl$, we have
$\langle\curv(\nabla_{\cal P}),\delta v\wedge \delta v\dl\rangle=-2\pi\sqrt{-1}\langle\delta v\dl,\delta v\rangle$.
\end{Df}
Clearly, $({\cal P},\nabla_{\cal P})$ is defined up to an isomorphism by $(i)$, $(i\dl)$,
$(ii)$. Furthermore, we always fix an identification $\iota:{\cal P}_{(0,0)}{\widetilde\to}\C$,
so the collection $({\cal P},\nabla_{\cal P},\iota)$ is defined up to a canonical
isomorphism.
A Poincar\'e bundle allows us to identify $X\dl$ with the moduli space of
unitary local systems on $X$ (and vice versa).
\begin{Rem} Suppose $V$ carries a complex structure $J:V\to V$. Define the complex
structure on $V\dl$ using $-J\dl$. Then $X$, $X\dl$, and $X\times X\dl$ are complex manifolds.
Let ${\cal P}$ be a Poincar\'e bundle for $X$. It is easy to see that $\nabla_{\cal P}$ is
``flat in the $\overline\partial$-direction'' (i.e., $\curv\nabla_{{\cal P}}$ vanishes on
$\bigwedge^2T^{0,1}_{X\times X\dl}$). Hence ${\cal P}$ can be considered as a holomorphic line
bundle on $X\times X\dl$. Actually, ${\cal P}$ is in this case isomorphic to the ``complex'' Poincar\'e
bundle (i.e., the universal bundle that comes from the interpretation of $X\dl$ as a moduli space
of holomorphic line bundles on $X$).
\end{Rem}
The following lemma is straightforward.
\begin{Lm} Consider the local system
$F:=O_{V\times X\dl}(2\pi\sqrt{-1}\langle dx\dl,v\rangle)$. Here $dx\dl\in\Omega^1(X\dl)\otimes V\dl$
is the natural form with values in $V\dl$.
Lift the natural action of $\Gamma=H_1(X,\Z)$ on $V$ to $F$ by
$(g(f))(v,x\dl)=\exp(-2\pi\sqrt{-1}\langle x\dl,g\rangle)f(v-g,x\dl)$.
Then the corresponding line bundle with connection on $X\times X\dl$
is a Poincar\'e bundle.
\end{Lm}
Consider the natural projection
$u\times id:V\times X\dl\to X\times X\dl$.
Then $(u\times id)^*{\cal P}$
is identified with $F$.
We denote by $\ex{v}{x\dl}$ the section of $(u\times id)^*{\cal P}$ that corresponds
to
$1\in F$.
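For instance, in the one-dimensional case the lemma reads as follows (a direct unwinding of the formulas above).
\begin{Exm} Let $X=\R/\Z$, so that $V=\R$ with coordinate $v$ and $X\dl=\R/\Z$ with
coordinate $x\dl$. Then $\langle dx\dl,v\rangle=v\,dx\dl$, so $F$ is the trivial
line bundle on $\R\times X\dl$ with connection $d+2\pi\sqrt{-1}\,v\,dx\dl$, and
$g\in\Z$ acts by $(g(f))(v,x\dl)=\exp(-2\pi\sqrt{-1}gx\dl)f(v-g,x\dl)$.
Sections of $F$ horizontal in the $v$-direction are constant in $v$; comparing such
a section with its translate by $g=1$ shows that the monodromy of the resulting
bundle on $X\times X\dl$ along $X\times\{x\dl\}$ is multiplication by
$\exp(2\pi\sqrt{-1}x\dl)$, in agreement with condition $(i)$.
\end{Exm}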
\begin{Rem}
Let ${\cal P}$ be a Poincar\'e bundle for $X$, $\sigma':X\dl\times X\to
X\times X\dl:(x\dl,x)\mapsto(-x,x\dl)$. Then $(\sigma')^*{\cal P}$ is a Poincar\'e
bundle for $X\dl$.
\end{Rem}
\subsection{Sky-scraper sheaves}
Given a finite set $S\subset X$ and (finite-dimensional) $\C$-vector spaces
$F_s$ for all $s\in S$,
we can define the corresponding \select{(finite semisimple) sky-scraper sheaf}
$F$ on $X$ by
$F(U)=\oplus_{s\in S\cap U}F_s$ for $U\subset X$. Denote by $\Sky(X)$ the
category of sky-scraper sheaves on $X$
($\Sky(X)$ is a full subcategory of the category of sheaves of vector spaces on
$X$).
Any sky-scraper sheaf is naturally a
$C^\infty(X)$-module, and morphisms of sky-scraper sheaves agree with the
action of $C^\infty(X)$.
For $F\in\Sky(X)$, define the Fourier transform of $F$ by
\begin{equation}
\Four F:=(p_{X\dl})_*((p_X^* F)\otimes{\cal P})
\end{equation}
Here $p_X:X\times X\dl\to X$ and $p_{X\dl}:X\times X\dl\to X\dl$ are
the natural projections.
$\Four F$ is a locally free sheaf of rank $\dim H^0(X,F)$, so we interpret
$\Four F$ as a vector bundle
on $X\dl$. The connection $\nabla$ on ${\cal P}$ induces
a flat unitary connection on $\Four F$. So $\Four$ can be considered as a
functor
$\Sky(X)\to\LU(X\dl)$, where $\LU(X\dl)$ is the category of unitary local
systems on $X\dl$.
This functor is an equivalence of categories.
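To illustrate the equivalence, consider a sky-scraper sheaf supported at one point; the computation below only uses the definition of $\Four$ and condition $(i\dl)$.
\begin{Exm} Let $F$ be the sky-scraper sheaf at a single point $s\in X$ with
$F_s=\C$. Then $\Four F=F_s\otimes({\cal P},\nabla_{\cal P})|_{\{s\}\times X\dl}$, which by
condition $(i\dl)$ is the rank-one unitary local system on $X\dl$ with monodromy
$\gamma\dl\mapsto\exp(-2\pi\sqrt{-1}\langle\gamma\dl,s\rangle)$. In particular,
$s=0$ gives the trivial local system $O_{X\dl}$, and translating $s$ twists
$\Four F$ by the corresponding unitary character of $\pi_1(X\dl)$.
\end{Exm}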
\subsection{Rapidly decreasing sections}
For a sheaf $F\in\Sky(X)$, set ${\widetilde F}:=u^*F$, where $u:V\to X$ is the universal
cover.
The group $\Gamma:=H_1(X,\Z)$
acts on $V=H_1(X,\R)$ and ${\widetilde F}$ is $\Gamma$-equivariant. We say that a section
$s\in H^0(V,{\widetilde F})$ is
rapidly decreasing if $\lim_{||g||\to\infty,g\in\Gamma} s(x+g)||g||^{k}=0$ for any
$x\in V$, $k>0$ (the definition does not depend
on the choice of a norm $||\al||$ on $V$). Denote
by ${\cal S}({\widetilde F})$ the space of all rapidly decreasing sections of ${\widetilde F}$.
Take $F\in\Sky(X)$, $f\in{\cal S}({\widetilde F})$. Set
\begin{equation}
\Four_F f(x\dl)=\sum_{v\in V} f(v)\ex{v}{x\dl}
\label{sectFour}
\end{equation}
The following lemma is clear:
\begin{Lm} Let $F\in\Sky(X)$. Then $\Four_F:{\cal S}({\widetilde F})\to C^\infty(\Four(F))$
is an isomorphism. Here $C^\infty(\Four(F))$
is the space of $C^\infty$-sections of the local system $\Four(F)$.
\label{cSLm}
\end{Lm}
\subsection{Convolution}
\label{RemTenAbs}
For $F_1,F_2\in\Sky(X)$, one can define their \select{convolution product}
by $F_1\star F_2:=m_*((p_1^*F_1)\otimes(p_2^*F_2))$, where $m,p_1,p_2:X\times
X\to X$ are the group law,
the first projection, and the second projection respectively. This gives a
structure of a tensor category on
$\Sky(X)$ (the unit, dual element, and commutativity and associativity
isomorphisms are easily defined).
Then $\Four:\Sky(X)\to\LU(X\dl)$ is naturally a tensor functor (the tensor
structure on $\LU(X\dl)$ is the ``usual''
tensor product). Moreover, $\Four(F_1\star F_2)=\Four(F_1)\otimes\Four(F_2)$ for any $F_1,F_2\in\Sky(X)$.
Besides, it is easy to define the natural convolution product
${\cal S}(\widetilde{\star\vphantom{F}}):{\cal S}({\widetilde F}_1)\otimes{\cal S}({\widetilde F}_2)\to{\cal S}(\widetilde{(F_1\star
F_2)})$.
This makes ${\cal S}(\widetilde{\vphantom{F}\al})$ a tensor functor.
One can check that $\Four_\al:{\cal S}(\widetilde{\vphantom{F}\al})
\to C^\infty(\Four(\al))$ is actually an isomorphism
of tensor functors (i.e., for any $F_1,F_2\in\Sky(X)$ the diagram
\begin{equation}
\label{comm}
\begin{array}{ccc}
{\cal S}({\widetilde F}_1)\otimes{\cal S}({\widetilde F}_2)&{\widetilde\to}&C^\infty(\Four(F_1))\otimes
C^\infty(\Four(F_2))\\
\downarrow&\;&\downarrow\\
{\cal S}(\widetilde{F_1\star F_2})&{\widetilde\to}&C^\infty(\Four(F_1)\otimes\Four(F_2))
\end{array}
\end{equation}
commutes).
\begin{Exm} Let $F$ be the \select{unit object} in $\Sky(X)$ (i.e., $\BOX{supp}
F=\{0\}$ and $F_0=\C$). Then
${\widetilde F}$ is a trivial sheaf on $\Gamma=H_1(X,\Z)$. Clearly, $\Four(F)=O_{X\dl}$ is the
trivial local system on $X\dl$.
In this case, the isomorphism $(\Four_F)^{-1}:C^\infty(X\dl)\to{\cal S}({\widetilde F})$ maps
any $C^\infty$-function to its Fourier
coefficients. Since $F\star F=F$, the commutativity of (\ref{comm})
in this case is the
well-known formula for the Fourier coefficients of the product.
\end{Exm}
\section{Relative sky-scraper sheaves}
Let $p:M\to B$ be a symplectic family of tori.
In this section, we construct ``relative versions'' of the objects from the
previous section.
\subsection{}
\label{ImLag}
A \select{transversally immersed Lagrangian manifold} is a couple $(L,i)$,
where $i:L\to M$ is a morphism of $C^\infty$-manifolds such that
$p\circ i:L\to B$ is a proper finite unramified covering and $i^*(\omega)=0$.
Consider the category $\Sky(M/B)$, whose objects are
triples $(L,i,{\cal L})$, where $(L,i)$ is a transversally immersed
Lagrangian submanifold, and ${\cal L}$ is a local system on $L$.
\begin{Rem}
Take any $(L_1,i_1,{\cal L}_1),(L_2,i_2,{\cal L}_2)\in\Sky(M/B)$. Consider $L_{1\to2}':=
L_1\times_M L_2$. Denote by $L_{1\to2}\subset L_{1\to2}'$
the maximal closed submanifold whose images in $L_1$, $L_2$ are open
(if $L_1$ and $L_2$ are just ``usual'' Lagrangian submanifolds, $L_{1\to2}$ is the
union of common
connected components of $L_1$ and $L_2$). Let $p_1:L_{1\to2}\to L_1$,
$p_2:L_{1\to2}\to L_2$ be the
natural projections.
By definition, morphisms from $(L_1,i_1,{\cal L}_1)$ to $(L_2,i_2,{\cal L}_2)$ are
horizontal morphisms
$p_1^*{\cal L}_1\to p_2^*{\cal L}_2$. The composition is defined in the natural way.
\end{Rem}
\subsection{}
Let $p\dl:M\dl\to B$ be the mirror dual of $M\to B$. Take
$(L,i,{\cal L})\in\Sky(M/B)$.
One can easily define the (relative) Poincar\'e bundle ${\cal P}$ on $M\times_BM\dl$.
It carries a natural
connection $\nabla_{\cal P}$. Now define the (fiberwise) Fourier transform
$\Four(L,i,{\cal L})$ by the formula (\ref{relFour}).
\begin{proof}[Proof of Theorem \ref{dflat}(i)]
The natural map
$L\times_BM\dl\to M\dl$ is an unramified covering, so the complex structure on $M\dl$
induces a complex structure on $L\times_BM\dl$. Let $({\cal P}_M,\nabla_{{\cal P}_M})$ be the
relative Poincar\'e bundle on $M\times_BM\dl$. It is enough to prove that $\curv(\nabla_{{\cal P}_M})$ vanishes on
$\bigwedge^2T_{L\times_BM\dl}^{0,1}$.
The statement is local on $B$, so we may assume $M=X\times B$, $M\dl=X\dl\times B$ for
a torus $X$. Denote by $p_{X\times X\dl}:M\times_BM\dl\to X\times X\dl$ the natural projection,
and by $({\cal P}_X,\nabla_{{\cal P}_X})$ the Poincar\'e bundle of $X$.
We have $p_{X\times X\dl}^*({\cal P}_X,\nabla_{{\cal P}_X})=({\cal P}_M,\nabla_{{\cal P}_M})$, so
$\curv\nabla_{{\cal P}_M}=p_{X\times X\dl}^*\curv\nabla_{{\cal P}_X}$. Since $\curv\nabla_{{\cal P}_X}$ is a scalar multiple
of the natural symplectic form on $X\times X\dl$, it is enough to notice that
$p_{X\times X\dl}$ maps $T^{0,1}_{L\times_BM\dl}(x)$ to a Lagrangian subspace of
$T_{X\times X\dl}(p_{X\times X\dl}(x))\otimes\C$ for any $x\in L\times_BM\dl$.
\end{proof}
\subsection{Proof of Theorem \ref{ThdR}}
Consider the ``fiberwise universal cover'' $u:T\dl_B\to M$. For any
$(L,i,{\cal L})\in\Sky(M/B)$, set
${\widetilde L}:=L\times_MT\dl_B$.
Recall that $\tcL=u_L^*({\cal L})\otimes O_{\widetilde L}(-2\pi\tau)$, where $u_L:{\widetilde L}\to L$ is
the natural projection, and $\tau$
is the pull-back of the natural $1$-form on $T\dl_B$.
For any $D\in\Diff(B)$, we consider its pull-back
$\tilde p^*D\in\Diff({\widetilde L})$ (since $\tilde p:{\widetilde L}\to B$ is an unramified covering, the pull-back
is well defined).
Since $\tcL$ carries a canonical
flat connection, we can apply $\tilde p^*D$ to $s\in C^\infty(\tcL)$.
Denote by ${\cal S}(\tcL)$ the set of all sections $s\in C^\infty(\tcL)$ such that
$(\tilde p^*D)s$ is (fiberwise) rapidly decreasing for any $D\in\Diff(B)$.
${\cal S}(\tcL)$ is a $\Diff(B)$-module.
Just as in the ``absolute'' case (Lemma \ref{cSLm}), the Fourier transform
(formula (\ref{sectFour})) yields a canonical isomorphism
${\cal S}(\tcL){\widetilde\to} C^\infty(\Four(L,i,{\cal L}))$ for any $L,i,{\cal L}$.
The natural morphism
$(dp\dl)\otimes\C:T_{M\dl}\otimes\C\to(p\dl)^*T_B\otimes\C$ induces an
isomorphism
$T^{0,1}_{M\dl}{\widetilde\to}(p\dl)^*T_B\otimes\C$.
So we have an embedding of Lie algebras
$\Vect(B)\to C^\infty(T^{0,1}_{M\dl})\subset\Vect(M\dl)$.
The Lie algebra $C^\infty(T^{0,1}_{M\dl})$ of anti-holomorphic vector fields
acts on
$C^\infty(\Four(L,i,{\cal L}))$ (by Theorem \ref{dflat}(i)),
so $C^\infty(\Four(L,i,{\cal L}))$ has a natural structure of a $\Diff(B)$-module.
One easily checks that the de Rham
complex associated with this $\Diff(B)$-module is identified with the Dolbeault
complex of $\Four(L,i,{\cal L})$.
The following lemma implies Theorem \ref{ThdR}.
\begin{Lm} The isomorphism ${\cal S}(\tcL){\widetilde\to} C^\infty(\Four(L,i,{\cal L}))$ agrees with
the $\Diff(B)$-module structures.
\end{Lm}
\begin{proof}
Again, we may assume $M=B\times X$ for a torus $X=V/\Gamma$. Consider the natural maps
$p_{V\times X\dl}:{\widetilde L}\times_BM\dl\to T\dl_B\times_B M\dl\to V\times X\dl$,
$p_M:{\widetilde L}\times_BM\dl\to T\dl_B\to M$, and $p_{T\dl_B}:{\widetilde L}\times_BM\dl\to
T\dl_B\times_BM\dl\to T\dl_B$.
$\ex{v}{x\dl}$ can be considered as a horizontal section of $p_M^*({\cal P}_M)\otimes
p^*_{V\times X\dl}(O_{V\times X\dl}(-2\pi\sqrt{-1}\langle dx\dl,v\rangle))$. Now the statement follows
from the fact that $1$ is a holomorphic section of
$O_{{\widetilde L}\times_BM\dl}(-2\pi p_{T\dl_B}^*\tau-2\pi\sqrt{-1}p_{V\times X\dl}^*\langle dx\dl,v\rangle)$
(i.e., the $\overline\partial$ component of the connection vanishes on $1$). Here $\tau$ stands for the
natural $1$-form on $T\dl_B$, and the complex structure on ${\widetilde L}\times_BM\dl$ is that induced by
${\widetilde L}\times_BM\dl\to M\dl$.
\end{proof}
\subsection{Proof of Theorem \ref{dflat}(ii)}
This result is actually proved in \cite{PZ}. Our proof is slightly different in
that it makes use of connections.
Let $F$ be a holomorphic bundle on the elliptic curve $M\dl$.
It is enough to consider the case of indecomposable $F$.
The following statement is a reformulation of \cite[Proposition 1]{PZ} (which in
turn is a consequence of
M.~Atiyah's results \cite{At}).
\begin{Pp} An indecomposable bundle $F$ on $M\dl$ is isomorphic to
$\pi_{r,*}(L\otimes N)$,
where $\pi_r:M_r\dl\to M\dl$ is the isogeny corresponding to an (unramified)
cover $B_r\to B$,
$L$ is a line bundle on $M\dl_r$, and $N$ is a unipotent bundle on $M\dl_r$
(i.e., $N$ admits a filtration
with trivial factors).
\end{Pp}
$\Four$ agrees with passing to unramified
covers $B_r\to B$; besides, $\Four$ transforms the convolution product in $\Sky(M/B)$ into the
tensor product of holomorphic vector bundles (see Section \ref{RemTenRel} for the definition of the
convolution product). So it suffices to
consider the following cases:
{\it Case 1.} Let $F=l$ be a line bundle on $M\dl$. Our statement in this case
follows from the following easy
lemma:
\begin{Lm} $l$ carries a $C^\infty$-connection $\nabla_l$ such that the
following conditions are satisfied:
$i)$ $\nabla_l$ agrees with the holomorphic structure on $l$ (i.e., the
$\overline\partial$-component of
$\nabla_l$ coincides with the canonical $\overline\partial$-differential);
$ii)$ The curvature $\curv\nabla_l$ is a horizontal $(1,1)$-form on $M\dl$ (in
terms of the canonical connection);
$iii)$ The monodromies of $\nabla_l$ along the fibers of $M\dl\to B$ are
unitary.
\end{Lm}
{\it Case 2.} Let $F=N$ be a unipotent bundle on $M\dl$. To complete the proof,
it is enough to notice that $N$ carries
a flat connection $\nabla_N$ such that $\nabla_N$ agrees with the holomorphic
structure and
$\nabla_N$ is trivial along the fibers of $M\dl\to B$.
\subsection{Remarks on tensor dg-categories}
\label{RemTenRel}
For any $(L_1,i_1,{\cal L}_1),(L_2,i_2,{\cal L}_2)\in\Sky(M/B)$, set $L:=L_1\times_B L_2$,
${\cal L}:=p_1^*({\cal L}_1)\otimes p_2^*({\cal L}_2)$ (here $p_i:L\to L_i$ is the natural
projection).
Consider the composition $i:=m\circ(i_1\times i_2):L_1\times_B L_2\to M\times_B
M\to M$,
where $m:M\times_B M\to M$ is the group law $(x_1,x_2)\mapsto x_1+x_2$.
Clearly, $(L,i,{\cal L})\in\Sky(M/B)$.
$(L_1,i_1,{\cal L}_1)\star(L_2,i_2,{\cal L}_2):=(L,i,{\cal L})$ is the convolution product
of $(L_1,i_1,{\cal L}_1)$ and $(L_2,i_2,{\cal L}_2)$. The convolution product naturally
extends to a structure of tensor category
on $\Sky(M/B)$ (the unit object, dual objects, and associativity/commutativity
constraints are defined in a natural
way). Notice that there is a functorial choice of dual object.
Just as in Section \ref{RemTenAbs}, the convolution product induces a functorial
morphism of $\Diff(B)$-modules
${\cal S}(\tcL_1)\otimes{\cal S}(\tcL_2)\to{\cal S}(\tcL_3)$ for any
$(L_1,i_1,{\cal L}_1),(L_2,i_2,{\cal L}_2)\in\Sky(M/B)$,
$(L_3,i_3,{\cal L}_3):=(L_1,i_1,{\cal L}_1)\star(L_2,i_2,{\cal L}_2)$. So
${\cal S}(\widetilde{\vphantom{L}\al})$ is a tensor
functor from $\Sky(M/B)$ to the category of $\Diff(B)$-modules.
Just as in Section \ref{RemTen}, we define a tensor dg-category
$\tSky(M/B)$ by setting
$Ob(\tSky(M/B)):=Ob(\Sky(M/B))$, $\BOX{Hom}_{\tSky(M/B)}(c_1,c_2):=\DR(c_2\star
c_1\dl)$.
\section{Connection with the Fukaya category}
\subsection{Hamiltonian diffeomorphisms}
In this section, we prove some results about the tensor dg-category $\tSky(M/B)$. We
do not use these facts anywhere else,
so this part may be skipped. However, the results clarify the
connection between
$\tSky(M/B)$ and the original category considered by Fukaya \cite{Fuk}.
Fix $\mu\in\Omega^1(B)$ such that $d\mu=0$. $\mu$ can be considered as a section
of $T\dl_B$. Denote by
$i_\mu:B\to M$ the image of this section via the fiberwise universal cover
$T\dl_B\to M$.
Set $c_\mu:=(B,i_\mu,O_B(2\pi\mu))\in\tSky(M/B)$. The following
statement follows from the
definitions:
\begin{Pp}
$c_\mu\simeq 1_{\tSky(M/B)}$ in $\tSky(M/B)$.
\end{Pp}
Now let $A:M\to M$ be any symplectic diffeomorphism that preserves the fibration
$M\to B$. It is easy to see
that $A$ preserves the action of $T\dl_B$ on $M$, so $A$ corresponds to some
$\mu\in\Omega^1(B)$. Since
$A$ preserves the symplectic structure, $d\mu=0$. Now we can consider the
``automorphism'' $c\mapsto c_\mu\star c$
of $\tSky(M/B)$. Note that if $(L',i_{L'},{\cal L}')=(L,i_L,{\cal L})\star c_\mu$, then
$i_{L'}L'=A(i_LL)$.
In particular, if $A$ is Hamiltonian (that is, there is $f\in C^\infty(B)$ such
that $\mu=df$),
we get the following statement:
\begin{Co}
The map $(L,i_L,{\cal L})\mapsto(L,A\circ i_L,{\cal L})$ extends to an automorphism of
$\tSky(M/B)$.
\end{Co}
\begin{proof}
It is enough to notice that $O_B(2\pi\mu)$ is a trivial local system
if $\mu=df$,
so $c_\mu\simeq(B,i_\mu,O_B)$ and $(L,i_L,{\cal L})\star(B,i_\mu,O_B)=(L,A\circ
i_L,{\cal L})$ for any
$(L,i_L,{\cal L})\in\tSky(M/B)$.
\end{proof}
From now on, we suppose that $B$ is a torus.
Denote by $\tSky(M/B)^{QU}$ the full subcategory of $\tSky(M/B)$ formed by
triples
$(L,i,{\cal L})$ with quasi-unitary ${\cal L}$ (that is, all the eigenvalues of all the
monodromy operators
are of absolute value $1$).
\begin{Lm}
The natural embedding $\tSky(M/B)^{QU}\to\tSky(M/B)$ is an equivalence of
categories.
\label{QU}
\end{Lm}
\begin{proof}
We should prove that for any $(L,i,{\cal L})\in\tSky(M/B)$ there is
$(L',i',{\cal L}')\in\tSky(M/B)^{QU}$ such that
$(L,i,{\cal L})\simeq(L',i',{\cal L}')$ in $\tSky(M/B)$. It is enough to prove this
statement for indecomposable ${\cal L}$ and
connected $L$.
Choose a point $x\in L$. For $\gamma\in\pi_1(L)$,
we denote the monodromy along $\gamma$ by $\mon(\gamma)\in GL(L_x)$.
For any loop $\gamma\in\pi_1(L)$, all the eigenvalues of $\mon(\gamma)$ are of
the same absolute value
(otherwise ${\cal L}$ is decomposable).
Consider the homomorphism $\mu:\pi_1(L)\to\R_+:=\{a\in
\R\,|\,a>0\}:\gamma\mapsto|\det(\mon(\gamma))|^{-1/d}$.
Since $L\to B$
is a finite covering, $\pi_1(L)\subset H_1(B,\Z)\subset H_1(B,\R)$ is a lattice.
So $\mu$ induces $\log\mu\in\BOX{Hom}(\pi_1(L),\R)=H^1(B,\R)$.
Choose an invariant $1$-form $\tilde\mu$ on $B$ that represents $-\frac{\log\mu}{2\pi}\in H^1(B,\R)$. Clearly,
$(L,i,{\cal L})\star c_{\tilde\mu}\in\tSky(M/B)^{QU}$.
\end{proof}
\begin{Rem} Suppose $M$ and $B$ are tori (in particular, they have a Lie group structure), and $p:M\to B$
is a group homomorphism.
Assume also $\omega$ is translation invariant.
Let $i:L\to M$ be a transversally immersed Lagrangian submanifold. We say that $(L,i)$ is \select{linear}
if for any connected component $L_j\subset L$, one has $i(L_j)=m+L'$ for some $m\in M$ and some Lie subgroup
$L'\subset M$.
Consider the full subcategory $\tSky(M/B)^{LN}\subset\tSky(M/B)$ that consists of $(L,i,{\cal L})$ such that
$(L,i)$ is linear and ${\cal L}$ is quasi-unitary. It can be proved (in a way similar to the proof of Lemma \ref{QU})
that $\tSky(M/B)^{LN}\to\tSky(M/B)$ is an equivalence of categories.
\end{Rem}
\subsection{Proof of Theorem \ref{ThFuk}}
In this section we give a different construction of the quasi-isomorphism
between ${\cal F}({\cal L})$ and $\DR(\tcL)$.
Identification of this quasi-isomorphism with the one constructed in Section
\ref{FukSec} is left to the reader.
Consider the de Rham complex
$\DR({\cal L}):={\cal S}(\tcL)\toup{d}{\cal S}(\tcL)\otimes_{C^\infty(B)}\Omega^1(B)$.
Recall that $\{\tilde c_l^-\}\subset{\widetilde L}$ is the set of all ``negative''
points whose images lie on the zero section $0_{T\dl_B}(B)\subset T\dl_B$.
Denote by $F^{(0)}$ the set of all $f\in{\cal S}(\tcL)$ such that $f$ is
horizontal in a neighborhood of
$\{\tilde c_l^-\}$ and by $F^{(1)}$ the set of all
$\mu\in{\cal S}(\tcL)\otimes_{C^\infty(B)}\Omega^1(B)$ such that
$\mu$ vanishes in a neighborhood of $\{\tilde c_l^-\}$. Since
$d(F^{(0)})\subset F^{(1)}$,
we have a complex $\DR'({\cal L}):F^{(0)}\toup{d}F^{(1)}$. Moreover, the natural
map $\DR'({\cal L})\to\DR({\cal L})$ is a
quasi-isomorphism.
Now let ${\widetilde L}_j\subset{\widetilde L}$ be a connected component of ${\widetilde L}\setminus\{\tilde
c_l^-\}$. Set ${\cal S}(\tcL)_j:=
\{s\in{\cal S}(\tcL):\BOX{supp} s\subset{\widetilde L}_j\}$,
$F^{(1)}_j:={\cal S}(\tcL)_j\otimes_{C^\infty(B)}\Omega^1(B)$. Denote by
$F_j'$ the set of all sections $s\in C^\infty(\tcL|_{{\widetilde L}_j})$ such that
$\BOX{supp} s$ is contained in a compact set
$C\subset L$ and $s$ is horizontal in some neighborhood of $\{\tilde c_l^-\}$. Set
$F^{(0)}_j:={\cal S}(\tcL)_j+F_j'$. Clearly $d$ yields a morphism $d_j: F
_j^{(0)}\to F _j^{(1)}$, so we have
a complex $\DR({\cal L})_j: F_j^{(0)}\toup{d_j} F_j^{(1)}$.
The restriction map induces a morphism of complexes $\DR'({\cal L})\to\DR({\cal L})_j$. So
we have a map
$\DR'({\cal L})\to\oplus_j\DR({\cal L})_j$. Moreover, one can see that this map is
included into a short exact sequence
\begin{equation}
0\to \DR'({\cal L})\to\oplus_j\DR({\cal L})_j\to F^1\to 0
\label{exseq}
\end{equation}
(see Section \ref{FukSec} for the definition of $F^1$).
\begin{Pp} $i)$ The map $d_j: F_j^{(0)}\to F_j^{(1)}$ is surjective for any $j$;
$iia)$ If the image of ${\widetilde L}_j\subset{\widetilde L}$ does not intersect $0_{T\dl_B}(B)\subset T\dl_B$,
then $d_j$ is injective;
$iib)$ Suppose the image of $\tilde c_k^+\in{\widetilde L}_j$ lies on $0_{T\dl_B}(B)\subset T\dl_B$.
Set $F_j'':=\ker d_j$. Then the map
$F''_j\to\tcL_{\tilde c_k^+}:s\mapsto s(\tilde c_k^+)$ is bijective.
\end{Pp}
\begin{proof}
Choose an isomorphism $t:B{\widetilde\to}\R/\Z$. We may assume that $t$ agrees with the
natural connection on $T_B$.
There are three possibilities:
\select{Case 1:} $M\to B\toup{t}\R/\Z$ induces an isomorphism
$t:{\widetilde L}_j{\widetilde\to}\R/m\Z$ for some $m$.
In this case, the image of ${\widetilde L}_j$ does not intersect $0_{T\dl_B}(B)\subset T\dl_B$.
It is easy to see that
the monodromy of $\tcL|_{{\widetilde L}_j}$ does not have $1$ as its eigenvalue, hence
the de Rham cohomology groups
of $\tcL|_{{\widetilde L}_j}$ vanish. So $d_j$ is bijective and $i)$, $iia)$ follow.
\select{Case 2:} $M\to B\toup{t}\R/\Z$ induces an isomorphism $t:{\widetilde L}_j{\widetilde\to}
(t_1,t_2):=\{t\in\R:t_1<t<t_2\}$ for some
$t_1,t_2\in\R$.
In this case, there is a unique $\tilde c_k^+\in{\widetilde L}_j$ whose image lies on $0_{T\dl_B}\subset T\dl_B$.
Besides,
$F_j^{(1)}=H^0_c({\widetilde L}_j,\tcL|_{{\widetilde L}_j}\otimes_{C^\infty({\widetilde L}_j)}\Omega^1({\widetilde L}_j))$
(here $H^0_c$ stands for the space of sections with
compact support). Now $i)$, $iib)$ are obvious.
\select{Case 3:} $M\to B\toup{t}\R/\Z$ induces an isomorphism $t:{\widetilde L}_j{\widetilde\to}
(t_1,t_2)$ where either $t_1=-\infty$,
or $t_2=\infty$ (or both). Without loss of generality, we assume $t_1=-\infty$.
Denote by $\tau\in\Omega^1({\widetilde L}_j)$ the pull-back of the natural $1$-form on $T_B$.
It is easy to see there are (unique) $a,b\in\R$ ($a\ne0$) such that
$\tau_0:=-2\pi\tau-(ta+b)dt$ is ``bounded''
in the following sense: there is $C\in\R$ such that for any connected closed
subset $U\subset{\widetilde L}_j$ we have
\begin{equation}
\left|\int_U\tau_0\right|<C
\label{eqbound}
\end{equation}
Choose an isomorphism $\phi:\tcL|_{{\widetilde L}_j}{\widetilde\to}(O_{{\widetilde L}_j}((at+b)dt))^n$
(where $n$ is the dimension of
$\tcL|_{{\widetilde L}_j}$). Set $\widehat{\frac{d}{dt}}:=\frac{d}{dt}+at+b$. Denote by
$\hat{\cal S}(t_1,t_2)$ the space of
$f\in C^\infty(t_1,t_2)$ such that $\lim_{|t|\to\infty}t^l\frac{d^kf}{dt^k}=0$
for any $k,l\ge0$.
If $t_2<\infty$, we
denote by $\hat{\cal S}^0(t_1,t_2)\subset\hat{\cal S}(t_1,t_2)$ (resp.
$\hat{\cal S}^1(t_1,t_2)\subset\hat{\cal S}(t_1,t_2)$) the subspace
of functions $f$ such that $\widehat{\frac{d}{dt}}f=0$ (resp. $f=0$) in a neighborhood of
$t_2$. If $t_2=\infty$, we set $\hat{\cal S}^0(t_1,t_2)=\hat{\cal S}^1(t_1,t_2)=\hat{\cal S}(t_1,t_2)$.
Estimate (\ref{eqbound}) implies that $\phi$
induces isomorphisms $F^{(0)}_j{\widetilde\to}(\hat{\cal S}^0(t_1,t_2))^n$,
$F^{(1)}_j{\widetilde\to}(\hat{\cal S}^1(t_1,t_2))^ndt$.
The differential $d_j$ corresponds to $\widehat{\frac{d}{dt}}dt$.
There are two possibilities:
\select{Case 3a:} $a>0$, the image of ${\widetilde L}_j$ intersects $0_{T\dl_B}(B)\subset T\dl_B$ in exactly one
point. Without loss of generality we
may assume this point corresponds to $t=0$. Now for any
$g\in(\hat{\cal S}^1(t_1,t_2))^n$, the general
solution to $\widehat{\frac{d}{dt}}f=g$ is given by
\begin{equation}
f(x)=\exp(-(ax^2/2+bx))(\int_0^x g(t)\exp(at^2/2+bt)dt+C)
\end{equation}
where $C\in\C^n$. It is easy to see $f\in(\hat{\cal S}^0(t_1,t_2))^n$ for any $C$.
$i)$ and $iib)$ follow.
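Before turning to Case 3b, the formula just given can be checked symbolically. The following sketch (using Python's sympy, with $g$ kept abstract; it is an illustration for the reader, not part of the proof) verifies that $\widehat{\frac{d}{dt}}f=g$ for every value of the constant $C$:

```python
import sympy as sp

# symbols from Case 3a: a, b come from the linear term of the twisted
# derivative, C is the integration constant; g stays an abstract function
x, t, a, b, C = sp.symbols('x t a b C')
g = sp.Function('g')

# candidate solution f(x) = exp(-(a x^2/2 + b x)) (int_0^x g(t) exp(a t^2/2 + b t) dt + C)
f = sp.exp(-(a*x**2/2 + b*x)) * (sp.Integral(g(t)*sp.exp(a*t**2/2 + b*t), (t, 0, x)) + C)

# apply the twisted derivative d/dx + (a x + b) and subtract g
residual = sp.diff(f, x) + (a*x + b)*f - g(x)
residual = sp.simplify(sp.expand(residual))
```

The residual vanishes identically in $C$, which is exactly why the kernel of $d_j$ is a copy of $\C^n$ in Case 3a.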
\select{Case 3b:} $a<0$, the image of ${\widetilde L}_j$ does not meet the zero section,
$t_2<\infty$. For any
$g\in(\hat{\cal S}^1(t_1,t_2))^n$ the formula
\begin{equation}
f(x)=\exp(-(ax^2/2+bx))(\int_{-\infty}^x g(t)\exp(at^2/2+bt)dt+C)
\end{equation}
gives the general solution to $\widehat{\frac{d}{dt}}f=g$ ($C\in\C^n$).
However, $f\in(\hat{\cal S}^0(t_1,t_2))^n$ if and only if $C=0$. This implies
$i)$ and $iia)$.
\end{proof}
Hence $H^i(\oplus_j\DR({\cal L})_j)=\begin{cases}F^0,&i=0\cr 0,&\mbox{
otherwise}\end{cases}$.
To complete the proof, it is enough to notice that the map
$F^0=H^0(\oplus_j\DR({\cal L})_j)\to F^1$ induced
by (\ref{exseq}) coincides with that defined in Section \ref{FukSec}.
\section{Introduction}\label{section1}
The purpose of this paper is to establish $H^1$-convergence rates in periodic homogenization and to establish interior Lipschitz estimates at the macroscopic scale for solutions to systems of linear elasticity in domains periodically perforated at a microscopic scale $\varepsilon$. To be precise, we consider the operator
\begin{equation}\label{one}
\mathcal{L}_{\varepsilon}=-\text{div}\left({A^\varepsilon(x)\nabla}\right)=-\dfrac{\partial}{\partial x_i}\left(a_{ij}^{\a\b}\left(\dfrac{x}{\varepsilon}\right)\dfrac{\partial}{\partial x_j}\right),\,\,\,x\in\varepsilon\omega,\,\varepsilon>0,
\end{equation}
where $A^\varepsilon(x)=A(x/\varepsilon)$, $A(y)=\{a_{ij}^{\a\b}(y)\}_{1\leq i,j,\a,\b\leq d}$ for $y\in \omega$, $d\geq 2$, and $\omega\subseteq\mathbb{R}^d$ is an unbounded Lipschitz domain with 1-periodic structure, i.e., if $\textbf{1}_+$ denotes the characteristic function of $\omega$, then $\textbf{1}_+$ is a 1-periodic function in the sense that
\[
\textbf{1}_+(y)=\textbf{1}_+(z+y)\,\,\,\text{ for }y\in\mathbb{R}^d,\,z\in\mathbb{Z}^d.
\]
The summation convention is used throughout. We write $\varepsilon\omega$ to denote the $\varepsilon$-homothetic set $\{x\in\mathbb{R}^d\,:x/\varepsilon\in\omega\}$. We assume $\omega$ is connected and that any two connected components of $\mathbb{R}^d\backslash\omega$ are separated by some positive distance. This is stated more precisely in Section~\ref{section2}. We also assume each connected component of $\mathbb{R}^d\backslash\omega$ is bounded.
We assume the coefficient matrix $A(y)$ is real, measurable, and satisfies the elasticity conditions
\begin{align}
&a_{ij}^{\a\b}(y)=a_{ji}^{\b\a}(y)=a_{\a j}^{i\b}(y), \label{two}\\
&\kappa_1|\xi|^2\leq a_{ij}^{\a\b}(y)\xi_i^\a\xi_j^\b\leq \kappa_2|\xi|^2,\label{three}
\end{align}
for $y\in\omega$ and any symmetric matrix $\xi=\{\xi_i^\a\}_{1\leq i,\a\leq d}$, where $\kappa_1,\kappa_2>0$. We also assume $A$ is 1-periodic, i.e.,
\begin{equation}\label{fiftyseven}
A(y)=A(y+z)\,\,\,\text{ for }y\in\omega,\,z\in\mathbb{Z}^d.
\end{equation}
The coefficient matrix of the systems of linear elasticity describes the linear relation between the stress and strain a material experiences during relatively small elastic deformations. Consequently, the elasticity conditions~\eqref{two} and~\eqref{three} should be regarded as physical parameters of the system, whereas $\varepsilon$ is clearly a geometric parameter.
For a bounded domain $\Omega\subset\mathbb{R}^d$, we write $\Omega_\varepsilon$ to denote the domain $\Omega_\varepsilon=\Omega\cap\varepsilon\omega$. In this paper, we consider the mixed boundary value problem given by
\begin{equation}\label{five}
\begin{cases}
\mathcal{L}_\varepsilon({u_{\varepsilon}})={0}\,\,\,\text{ in }\Omega_\varepsilon, \\
\sigma_\varepsilon(u_\varepsilon)=0\,\,\,\text{ on }S_\varepsilon:=\partial\Omega_\varepsilon\cap\Omega, \\
u_{\varepsilon}=f\,\,\,\text{ on }\Gamma_\varepsilon:=\partial\Omega_\varepsilon\cap\partial\Omega,
\end{cases}
\end{equation}
where $\sigma_\varepsilon=-nA^{\varepsilon}(x)\nabla$ and $n$ denotes the outward unit normal to $\Omega_\varepsilon$. We say $u_{\varepsilon}$ is a weak solution to~\eqref{five} provided
\begin{equation}\label{six}
\displaystyle\int_{\Omega_\varepsilon}a_{ij}^{\a\b\varepsilon}\dfrac{\partial u_{\varepsilon}^\b}{\partial x_j}\dfrac{\partial w^\a}{\partial x_i}=0,\,\,\,w=\{w^\a\}_\a\in H^1(\Omega_\varepsilon,\Gamma_\varepsilon;\mathbb{R}^d),
\end{equation}
and $u_{\varepsilon}-f\in H^1(\Omega_\varepsilon,\Gamma_\varepsilon;\mathbb{R}^d)$, where $H^1(\Omega_\varepsilon,\Gamma_\varepsilon;\mathbb{R}^d)$ denotes the closure in $H^1(\Omega_\varepsilon;\mathbb{R}^d)$ of $C^\infty(\mathbb{R}^d;\mathbb{R}^d)$ functions vanishing on $\Gamma_\varepsilon$. The boundary value problem~\eqref{five} models relatively small elastic deformations of composite materials subject to zero external body forces (see~\cite{yellowbook}).
If $\omega=\mathbb{R}^d$---the case when $\Omega_\varepsilon=\Omega$---then the existence and uniqueness of a weak solution $u_\varepsilon\in H^1(\Omega_\varepsilon;\mathbb{R}^d)$ to~\eqref{five} for a given $f\in H^{1}(\Omega;\mathbb{R}^d)$ follows easily from the Lax-Milgram theorem and Korn's first inequality. If $\omega\subsetneq\mathbb{R}^d$, then the existence and uniqueness of a weak solution to~\eqref{five} still follows from the Lax-Milgram theorem, now combined with Korn's first inequality for perforated domains (see Lemma~\ref{fortysix}).
One of the main results of this paper is the following theorem. For any measurable set $E$ (possibly empty) and ball $B(x_0,r)\subset\mathbb{R}^d$ with $r>0$, denote
\[
-\!\!\!\!\!\!\displaystyle\int_{B(x_0,r)\cap E}f(x)\,dx=\dfrac{1}{r^d}\int_{B(x_0,r)\cap E}f(x)\,dx
\]
\begin{thmm}\label{nineteen}
Suppose $A$ satisfies~\eqref{two},~\eqref{three}, and~\eqref{fiftyseven}. Let $u_\varepsilon$ denote a weak solution to $\mathcal{L}_\varepsilon(u_\varepsilon)=0$ in $B(x_0,R)\cap\varepsilon\omega$ and $\sigma_\varepsilon(u_\varepsilon)=0$ on $B(x_0,R)\cap\partial(\varepsilon\omega)$ for some $x_0\in\mathbb{R}^d$ and $R>0$. For $\varepsilon\leq r<R/3$, there exists a constant $C$ depending on $d$, $\omega$, $\kappa_1$, and $\kappa_2$ such that
\begin{equation}\label{seventyfour}
\left(-\!\!\!\!\!\!\displaystyle\int_{B(x_0,r)\cap\varepsilon\omega}|\nabla u_\varepsilon|^2\right)^{1/2}\leq C\left(-\!\!\!\!\!\!\displaystyle\int_{B(x_0,R)\cap\varepsilon\omega}|\nabla u_\varepsilon|^2\right)^{1/2}.
\end{equation}
\end{thmm}
The scale-invariant estimate in Theorem~\ref{nineteen} should be regarded as a Lipschitz estimate for solutions $u_\varepsilon$, as under additional smoothness assumptions on the coefficients $A$ we may deduce interior Lipschitz estimate for solutions to~\eqref{five} from local Lipschitz estimates for $\mathcal{L}_1$ and a ``blow-up argument'' (see the proof of Lemma~\ref{thirtyseven}). In particular, if $A$ is H\"{o}lder continuous, i.e., there exists a $\tau\in (0,1)$ with
\begin{equation}\label{fiftyfour}
|A(x)-A(y)|\leq C|x-y|^\tau\,\,\,\text{ for }x,y\in\omega
\end{equation}
for some constant $C$ uniform in $x$ and $y$, we may deduce the following corollary.
\begin{corr}\label{fiftyfive}
Suppose $A$ satisfies~\eqref{two},~\eqref{three},~\eqref{fiftyseven}, and~\eqref{fiftyfour}, and suppose $\omega$ is an unbounded $C^{1,\a}$ domain for some $\a>0$. Let $u_\varepsilon$ denote a weak solution to $\mathcal{L}_\varepsilon(u_\varepsilon)=0$ in $B(x_0,R)\cap\varepsilon\omega$ and $\sigma_\varepsilon(u_\varepsilon)=0$ on $B(x_0,R)\cap\partial(\varepsilon\omega)$ for some $x_0\in\mathbb{R}^d$ and $R>0$. Then
\begin{equation}\label{sixtyone}
\|\nabla u_\varepsilon\|_{L^\infty(B(x_0,R/3)\cap\varepsilon\omega)}\leq C\left(-\!\!\!\!\!\!\displaystyle\int_{B(x_0,R)\cap\varepsilon\omega}|\nabla u_\varepsilon|^2\right)^{1/2},
\end{equation}
where $C$ depends on $d$, $\omega$, $\kappa_1$, $\kappa_2$, $\tau$, and $\a$.
\end{corr}
Another consequence of Theorem~\ref{nineteen} is the following Liouville type property for systems of linear elasticity in unbounded periodically perforated domains. In particular, we have the following corollary.
\begin{corr}\label{fiftyeight}
Suppose $A$ satisfies~\eqref{two},~\eqref{three}, and~\eqref{fiftyseven}, and suppose $\omega$ is an unbounded Lipschitz domain with 1-periodic structure. Let $u$ denote a weak solution of $\mathcal{L}_1(u)=0$ in $\omega$ and $\sigma_1(u)=0$ on $\partial\omega$. Assume
\begin{equation}\label{sixtytwo}
\left(-\!\!\!\!\!\!\displaystyle\int_{B(0,R)\cap\omega}|u|^2\right)^{1/2}\leq CR^{\nu},
\end{equation}
for some $\nu\in (0,1)$, some constant $C:=C(u)>0$, and for all $R>1$. Then $u$ is constant.
\end{corr}
Interior Lipschitz estimates for the case $\omega=\mathbb{R}^d$ were first obtained \textit{indirectly} through the method of compactness presented in~\cite{avellaneda}. Interior Lipschitz estimates for solutions to a single elliptic equation in the case $\omega\subsetneq\mathbb{R}^d$ were obtained indirectly in~\cite{yeh} through the same method of compactness. The method of compactness is essentially a ``proof by contradiction'' and relies on the qualitative convergence of solutions $u_\varepsilon$ (see Theorem~\ref{seventyone}). Specifically, one considers sequences of operators $\{\mathcal{L}_{\varepsilon_k}^k\}_k$ and sequences of functions $\{u_k\}_k$ satisfying $\mathcal{L}_{\varepsilon_k}^k(u_k)=0$, where $\mathcal{L}_{\varepsilon_k}^k=-\text{div}(A_k^{\varepsilon_k}\nabla)$ and each $A_k$ satisfies~\eqref{two},~\eqref{three}, and~\eqref{fiftyseven} in $\omega+s_k$ for some $s_k\in\mathbb{R}^d$.
In the case $\omega=\mathbb{R}^d$, then $\omega+s_k=\mathbb{R}^d$ for any $s_k\in\mathbb{R}^d$, and so it is clear that estimate~\eqref{seventyfour} is uniform in affine transformations of $\omega$. In the case $\omega\subsetneq\mathbb{R}^d$, affine shifts of $\omega$ must be considered, which complicates the general scheme.
Interior Lipschitz estimates for the case $\omega=\mathbb{R}^d$ were obtained \textit{directly} in~\cite{shen} through a general scheme for establishing Lipschitz estimates at the macroscopic scale first presented in~\cite{smart} and then modified for second-order elliptic systems in~\cite{armstrong} and~\cite{shen}. We emphasize that our result is unique in that Theorem~\ref{nineteen} extends estimates presented in~\cite{shen}---i.e., interior Lipschitz estimates for systems of linear elasticity---to the case $\omega\subsetneq\mathbb{R}^d$ while \textit{completely avoiding the use of compactness methods}.
The proof of Theorem~\ref{nineteen} (see Section~\ref{section4}) relies on the quantitative convergence rates of the solutions $u_\varepsilon$. Let $u_0\in H^1(\Omega;\mathbb{R}^d)$ denote the weak solution of the boundary value problem for the homogenized system corresponding to~\eqref{five} (see~\eqref{seven}), and let $\chi=\{\chi_j^\b\}_{1\leq j,\b\leq d}\in H^1_{\text{per}}(\omega;\mathbb{R}^d)$ denote the matrix of correctors (see~\eqref{nine}), where $H^1_{\text{per}}(\omega;\mathbb{R}^d)$ denotes the closure in $H^1(Q\cap\omega;\mathbb{R}^d)$ of the set of 1-periodic $C^\infty(\mathbb{R}^d;\mathbb{R}^d)$ functions and $Q=[-1/2,1/2]^d$. In the case $\omega\subsetneq\mathbb{R}^d$, the estimate
\[
\|u_\varepsilon-u_0-\varepsilon\chi^\varepsilon\nabla u_0\|_{H^1(\Omega_\varepsilon)}\leq C\varepsilon^{1/2}\|u_0\|_{H^3(\Omega)}
\]
was proved in~\cite{book2} under the assumption that $\chi_j^\b\in W^{1,\infty}_{\text{per}}(\omega;\mathbb{R}^d)$ for $1\leq j,\b\leq d$, where $W^{1,\infty}_{\text{per}}(\omega;\mathbb{R}^d)$ is defined similarly to $H^1_{\text{per}}(\omega;\mathbb{R}^d)=W^{1,2}_{\text{per}}(\omega;\mathbb{R}^d)$. However, if it is only assumed that the coefficients $A$ are real, measurable, and satisfy~\eqref{two},~\eqref{three}, and~\eqref{fiftyseven}, then the first-order correctors are not necessarily Lipschitz. Consequently, the following theorem is another main result of this paper. Let $K_\varepsilon$ denote the smoothing operator at scale $\varepsilon$ defined by~\eqref{eighteen}, and let $\eta_\varepsilon\in C_0^\infty(\Omega)$ be the cut-off function defined by~\eqref{eleven}. The use of the smoothing operator $K_\varepsilon$ (details are discussed in Section~\ref{section2}) is motivated by work in~\cite{suslina}.
\begin{thmm}\label{ten}
Let $\Omega$ be a bounded Lipschitz domain and $\omega$ be an unbounded Lipschitz domain with 1-periodic structure. Suppose $A$ is real, measurable, and satisfies~\eqref{two},~\eqref{three}, and~\eqref{fiftyseven}. Let $u_\varepsilon$ denote a weak solution to~\eqref{five}. There exists a constant $C$ depending on $d$, $\Omega$, $\omega$, $\kappa_1$, and $\kappa_2$ such that
\[
\|u_\varepsilon-u_0-\varepsilon\chi^\varepsilon\smoothtwo{(\nabla u_0)\eta_\varepsilon}{\varepsilon}\|_{H^1(\Omega_\varepsilon)}\leq C\varepsilon^{1/2}\|f\|_{H^1(\partial\Omega)}.
\]
\end{thmm}
This paper is structured in the following manner. In Section~\ref{section2}, we establish notation and recall various preliminary results from other works. The convergence rate presented in Theorem~\ref{ten} is proved in Section~\ref{section3}. In Section~\ref{section4}, we prove the interior Lipschitz estimates given by Theorem~\ref{nineteen} and provide the proof of Corollary~\ref{fiftyfive}. To finish the section, we prove the Liouville type property given by Corollary~\ref{fiftyeight}.
\section{Notation and Preliminaries}\label{section2}
Fix $\zeta\in C_0^\infty (B(0,1))$ so that $\zeta\geq 0$ and $\int_{\mathbb{R}^d}\zeta=1$. Define
\begin{equation}\label{eighteen}
\smoothone{g}{\varepsilon}(x)=\displaystyle\int_{\mathbb{R}^d}g(x-y)\zeta_\varepsilon(y)\,dy,\,\,\,g\in L^2(\mathbb{R}^d)
\end{equation}
where $\zeta_\varepsilon(y)=\varepsilon^{-d}\zeta(y/\varepsilon)$. Note $K_\varepsilon$ is a continuous map from $L^2(\mathbb{R}^d)$ to $L^2(\mathbb{R}^d)$. A proof for each of the following two lemmas is readily available in~\cite{shen}, and so we do not present either here. For any function $g$, set $g^\varepsilon(\cdot)=g(\cdot/\varepsilon)$.
\begin{lemm}\label{sixteen}
Let $g\in H^1(\mathbb{R}^d)$. Then
\[
\|g-\smoothone{g}{\varepsilon}\|_{L^2(\mathbb{R}^d)}\leq C\varepsilon\|\nabla g\|_{L^2(\mathbb{R}^d)},
\]
where $C$ depends only on $d$.
\end{lemm}
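A quick numerical illustration of Lemma~\ref{sixteen} (a sketch for orientation only; the grid size, the test function $g(x)=\sin x$, and the concrete bump kernel are our choices, not taken from~\cite{shen}): mollifying on the circle at scales $\varepsilon=0.2$ and $\varepsilon=0.1$ and measuring the $L^2$ error confirms the bound $\|g-\smoothone{g}{\varepsilon}\|_{L^2}\leq C\varepsilon\|\nabla g\|_{L^2}$ and the decay of the error as $\varepsilon$ decreases.

```python
import numpy as np

def mollify_periodic(g_vals, eps, h):
    """Discrete K_eps: circular convolution with a normalized bump zeta_eps."""
    N = g_vals.size
    # grid coordinates wrapped so that s = 0 sits at index 0
    s = ((np.arange(N) + N // 2) % N - N // 2) * h
    inside = np.abs(s) < eps
    zeta = np.zeros(N)
    zeta[inside] = np.exp(-1.0 / (1.0 - (s[inside] / eps) ** 2))
    zeta /= zeta.sum() * h                      # normalize so that int zeta_eps = 1
    return np.real(np.fft.ifft(np.fft.fft(g_vals) * np.fft.fft(zeta))) * h

N = 4096
h = 2 * np.pi / N
x = np.arange(N) * h
g = np.sin(x)
grad_norm = np.sqrt(np.sum(np.cos(x) ** 2) * h)   # ||g'||_{L^2(0, 2 pi)}
err = {eps: np.sqrt(np.sum((g - mollify_periodic(g, eps, h)) ** 2) * h)
       for eps in (0.2, 0.1)}
```

For a symmetric kernel the observed rate is in fact $O(\varepsilon^2)$, which is consistent with (and stronger than) the first-order bound of the lemma.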
\begin{lemm}\label{fifteen}
Let $h\in L_{\text{loc}}^2(\mathbb{R}^d)$ be a 1-periodic function. Then for any $g\in L^2(\mathbb{R}^d)$,
\[
\|h^\varepsilon\smoothone{g}{\varepsilon}\|_{L^2(\mathbb{R}^d)}\leq C\|h\|_{L^2(Q)}\|g\|_{L^2(\mathbb{R}^d)}.
\]
\end{lemm}
A proof of Lemma~\ref{sixtynine} can be found in~\cite{book2}.
\begin{lemm}\label{sixtynine}
Let $\Omega\subset\mathbb{R}^d$ be a bounded Lipschitz domain. For any $g\in H^1(\Omega)$,
\[
\|g\|_{L^2(\mathcal{O}_r)}\leq Cr^{1/2}\|g\|_{H^1(\Omega)},
\]
where $C$ depends on $d$ and $\Omega$, and $\mathcal{O}_{r}=\{x\in\Omega\,:\,\text{dist}(x,\partial\Omega)<r\}$.
\end{lemm}
A proof of Lemma~\ref{fourteen} can be found in~\cite{yellowbook}.
\begin{lemm}\label{fourteen}
Suppose $B=\{b_{ij}^{\a\b}\}_{1\leq i,j,\a,\b\leq d}$ is 1-periodic and satisfies $b_{ij}^{\a\b}\in L_{\text{loc}}^2(\mathbb{R}^d)$ with
\[
\dfrac{\partial}{\partial y_i}b_{ij}^{\a\b}=0,\,\,\,\text{ and }\,\,\,\displaystyle\int_Q b_{ij}^{\a\b}=0.
\]
There exists $\pi=\{\pi_{kij}^{\a\b}\}_{1\leq i,j,k,\a,\b\leq d}$ with $\pi_{kij}^{\a\b}\in H^1_{\text{loc}}(\mathbb{R}^d)$ that is 1-periodic and satisfies
\[
\dfrac{\partial}{\partial y_k}\pi_{kij}^{\a\b}=b_{ij}^{\a\b}\,\,\,\text{ and }\,\,\,\pi_{kij}^{\a\b}=-\pi_{ikj}^{\a\b}.
\]
\end{lemm}
Theorem~\ref{thirteen} is a classical result in the study of periodically perforated domains. It can be used to prove Korn's first inequality in perforated domains (see Lemma~\ref{fortysix}), which is needed together with the Lax-Milgram theorem to prove the existence and uniqueness of solutions to~\eqref{five}. For a proof of Theorem~\ref{thirteen}, see~\cite{book2}.
\begin{thmm}\label{thirteen}
Let $\Omega$ and $\Omega_0$ be bounded Lipschitz domains with $\overline{\Omega}\subset\Omega_0$ and $\text{dist}(\partial\Omega_0,\Omega)>1$. For $0<\varepsilon<1$, there exists a linear extension operator $P_\varepsilon: H^1(\Omega_\varepsilon,\Gamma_\varepsilon;\mathbb{R}^d)\to H_0^1(\Omega_0;\mathbb{R}^d)$ such that
\begin{align}
&\|P_\varepsilon w\|_{H^1(\Omega_0)}\leq C_1\|w\|_{H^1(\Omega_\varepsilon)}, \label{thirtyfive}\\
&\|\nabla P_\varepsilon w\|_{L^2(\Omega_0)}\leq C_2\|\nabla w\|_{L^2(\Omega_\varepsilon)}, \label{thirtyeight}\\
&\|e(P_\varepsilon w)\|_{L^2(\Omega_0)}\leq C_3\|e(w)\|_{L^2(\Omega_\varepsilon)},
\end{align}
for some constants $C_1$, $C_2$, and $C_3$ depending on $\Omega$ and $\omega$, where $e(w)$ denotes the symmetric part of $\nabla w$, i.e.,
\begin{equation}\label{fortyseven}
e(w)=\dfrac{1}{2}\left[\nabla w+(\nabla w)^T\right].
\end{equation}
\end{thmm}
Korn's inequalities are classical in the study of linear elasticity. The following lemma is essentially Korn's first inequality but formatted for periodically perforated domains. Lemma~\ref{fortysix} follows from Theorem~\ref{thirteen} and Korn's first inequality. For an explicit proof of Lemma~\ref{fortysix}, see~\cite{book2}.
\begin{lemm}\label{fortysix}
There exists a constant $C$ independent of $\varepsilon$ such that
\[
\|w\|_{H^1(\Omega_\varepsilon)}\leq C\|e(w)\|_{L^2(\Omega_\varepsilon)}
\]
for any $w\in H^1(\Omega_\varepsilon,\Gamma_\varepsilon;\mathbb{R}^d)$, where $e(w)$ is given by~\eqref{fortyseven}.
\end{lemm}
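For orientation, we recall the elementary identity behind Korn's first inequality in the unperforated setting (this computation is classical and is not specific to the present paper): for $w\in C_0^\infty(\mathbb{R}^d;\mathbb{R}^d)$, integrating by parts twice gives

```latex
\[
2\displaystyle\int_{\mathbb{R}^d}|e(w)|^2
=\displaystyle\int_{\mathbb{R}^d}|\nabla w|^2
+\displaystyle\int_{\mathbb{R}^d}\dfrac{\partial w^\a}{\partial x_i}\,\dfrac{\partial w^i}{\partial x_\a}
=\displaystyle\int_{\mathbb{R}^d}|\nabla w|^2
+\displaystyle\int_{\mathbb{R}^d}(\text{div}\,w)^2
\geq\displaystyle\int_{\mathbb{R}^d}|\nabla w|^2,
\]
```

so that $\|\nabla w\|_{L^2}\leq\sqrt{2}\,\|e(w)\|_{L^2}$. The point of Lemma~\ref{fortysix} is that a bound of this type, with the full $H^1$ norm on the left-hand side, persists in $\Omega_\varepsilon$ with a constant independent of $\varepsilon$.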
If $\omega=\mathbb{R}^d$, it can be shown that the weak solution to~\eqref{five} converges weakly in $H^1(\Omega;\mathbb{R}^d)$ and consequently strongly in $L^2(\Omega;\mathbb{R}^d)$ as $\varepsilon\to 0$ to some $u_0$, which is a solution of a boundary value problem in the domain $\Omega$ (see~\cite{book1} or~\cite{yellowbook}). Indeed, we have the following known qualitative convergence.
\begin{thmm}\label{seventyone}
Suppose $\omega=\mathbb{R}^d$ and that $\Omega$ is a bounded Lipschitz domain. Suppose $A$ satisfies ~\eqref{two},~\eqref{three}, and~\eqref{fiftyseven}. Let $u_\varepsilon$ satisfy $\mathcal{L}_\varepsilon(u_\varepsilon)=0$ in $\Omega$, and $u_\varepsilon=f$ on $\partial\Omega$. Then there exists a $u_0\in H^1(\Omega;\mathbb{R}^d)$ such that
\[
u_\varepsilon\rightharpoonup u_0\,\,\,\text{ weakly in }H^1(\Omega;\mathbb{R}^d).
\]
Consequently, $u_\varepsilon\to u_0$ strongly in $L^2(\Omega;\mathbb{R}^d)$.
\end{thmm}
For a proof of the previous theorem, see~\cite{book1}, Section 10.3. The function $u_0$ is called the homogenized solution and the boundary value problem it solves is the homogenized system corresponding to~\eqref{five}.
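Although the present paper assumes $d\geq 2$, the classical one-dimensional example is instructive for orientation: if $\omega=\mathbb{R}$, $d=1$, and $\mathcal{L}_\varepsilon=-\frac{d}{dx}\left(a(x/\varepsilon)\frac{d}{dx}\right)$ with $a$ 1-periodic and $\kappa_1\leq a\leq\kappa_2$, then the homogenized coefficient is the harmonic mean

```latex
\[
\widehat{a}=\left(\displaystyle\int_0^1\dfrac{dy}{a(y)}\right)^{-1},
\]
```

which is in general strictly smaller than the arithmetic mean $\int_0^1 a(y)\,dy$; in particular, $\widehat{a}$ is not obtained by simply averaging the oscillating coefficient.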
If $\omega\subsetneq\mathbb{R}^d$, then it is difficult to qualitatively discuss the convergence of $u_\varepsilon$, as $H^1(\Omega_\varepsilon;\mathbb{R}^d)$ and $L^2(\Omega_\varepsilon;\mathbb{R}^d)$ depend explicitly on $\varepsilon$. Qualitative convergence in this case is discussed in~\cite{siam},~\cite{cioranescu}, and others. The homogenized system of elasticity corresponding to~\eqref{five} and of which $u_0$ is a solution is given by
\begin{equation}\label{seven}
\begin{cases}
\mathcal{L}_0\left({u_0}\right)={0}\,\,\,\text{ in }\Omega \\
u_0=f\,\,\,\text{ on }\partial\Omega,
\end{cases}
\end{equation}
where $\mathcal{L}_0=-\text{div}(\widehat{A}\nabla)$, $\widehat{A}=\{\widehat{a}_{ij}^{\a\b}\}_{1\leq i,j,\a,\b\leq d}$ denotes a constant matrix given by
\begin{equation}\label{eight}
\widehat{a}_{ij}^{\a\b}=-\!\!\!\!\!\!\displaystyle\int_{Q\cap\omega}a_{ik}^{\a\gamma}\dfrac{\partial\mathbb{X}_{j}^{\gamma\b}}{\partial y_k},
\end{equation}
and $\mathbb{X}_j^\b=\{\mathbb{X}_{j}^{\gamma\b}\}_{1\leq\gamma\leq d}$ denotes the weak solution to the boundary value problem
\begin{equation}\label{nine}
\begin{cases}
\mathcal{L}_{1}(\mathbb{X}_j^\b)=0\,\,\,\text{ in }Q\cap\omega \\
\sigma_{1}(\mathbb{X}_j^\b)=0\,\,\,\text{ on }\partial\omega\cap Q \\
\chi_j^\b:=\mathbb{X}_j^\b-y_je^\b\text{ is 1-periodic},\,\,\,\displaystyle\int_{Q\cap\omega}\chi_j^\b=0,
\end{cases}
\end{equation}
where $e^\b\in\mathbb{R}^d$ has a 1 in the $\b$th position and 0 in the remaining positions. For details on the existence of solutions to~\eqref{nine}, see~\cite{book2}. The functions $\chi^{\b}_j$ are referred to as the first-order correctors for the system~\eqref{five}.
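As an elementary sanity check on~\eqref{eight} and~\eqref{nine} (an observation, not a statement from the references): if $\omega=\mathbb{R}^d$ and $A$ is constant, then $\mathbb{X}_j^\b(y)=y_je^\b$ solves~\eqref{nine} with $\chi_j^\b=0$, and since $|Q|=1$,

```latex
\[
\widehat{a}_{ij}^{\a\b}
=\displaystyle\int_{Q}a_{ik}^{\a\gamma}\,\dfrac{\partial}{\partial y_k}\left(y_j\delta^{\gamma\b}\right)
=a_{ik}^{\a\gamma}\,\delta_{kj}\,\delta^{\gamma\b}
=a_{ij}^{\a\b},
\]
```

so homogenization leaves a constant-coefficient operator unchanged, as expected; the correctors $\chi_j^\b$ measure exactly the deviation from this trivial case.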
It is assumed that any two connected components of $\mathbb{R}^d\backslash\omega$ are separated by some positive distance. Specifically, if $\mathbb{R}^d\backslash\omega=\cup_{k=1}^\infty H_k$ where $H_k$ is connected and bounded for each $k$, then there exists a constant $\mathfrak{g}^\omega$ so that
\begin{equation}\label{sixtysix}
0< \mathfrak{g}^\omega\leq \underset{i\neq j}{\inf}\left\{\underset{\substack{x_i\in H_i \\ \,x_j\in H_j}}{\inf}|x_i-x_j|\right\}.
\end{equation}
\section{Convergence Rates in $H^1(\Omega_\varepsilon)$}\label{section3}
In this section, we establish $H^1(\Omega_\varepsilon)$-convergence rates for solutions to~\eqref{five} by proving Theorem~\ref{ten}. It should be noted that if $A$ satisfies~\eqref{two} and~\eqref{three}, then $\widehat{A}$ defined by~\eqref{eight} satisfies conditions~\eqref{two} and~\eqref{three} but with possibly different constants $\widehat{\kappa}_1$ and $\widehat{\kappa}_2$ depending on $\kappa_1$ and $\kappa_2$. In particular, we have the following lemma. For a proof of Lemma~\ref{sixtyeight}, see either~\cite{book1},~\cite{yellowbook}, or~\cite{book2}.
\begin{lemm}\label{sixtyeight}
Suppose $A$ satisfies~\eqref{two},~\eqref{three}, and~\eqref{fiftyseven}. If $\mathbb{X}_j^\b=\{\mathbb{X}_{j}^{\gamma\b}\}_\gamma$ denote the weak solutions to~\eqref{nine}, then $\widehat{A}=\{\widehat{a}_{ij}^{\a\b}\}$ defined by
\[
\widehat{a}_{ij}^{\a\b}=-\!\!\!\!\!\!\displaystyle\int_{Q\cap\omega}a_{ik}^{\a\gamma}\dfrac{\partial\mathbb{X}_{j}^{\gamma\b}}{\partial y_k}
\]
satisfies $\widehat{a}_{ij}^{\a\b}=\widehat{a}_{ji}^{\b\a}=\widehat{a}_{\a j}^{i\b}$ and
\begin{align*}
\widehat{\kappa}_1|\xi|^2\leq \widehat{a}_{ij}^{\a\b}\xi_i^\a\xi_j^\b\leq \widehat{\kappa}_2|\xi|^2
\end{align*}
for some $\widehat{\kappa}_1,\widehat{\kappa}_2>0$ depending on $\kappa_1$ and $\kappa_2$ and for any symmetric matrix $\xi=\{\xi_i^\a\}_{i,\a}$.
\end{lemm}
Throughout this section, we assume $A$ satisfies~\eqref{two},~\eqref{three}, and~\eqref{fiftyseven}, that $\Omega\subset\mathbb{R}^d$ is a bounded Lipschitz domain, and that $\omega\subseteq\mathbb{R}^d$ is an unbounded Lipschitz domain with 1-periodic structure such that $\mathbb{R}^d\backslash\omega$ is not connected but any two connected components are separated by a distance of at least $\mathfrak{g}^\omega$. We also assume that each connected component of $\mathbb{R}^d\backslash\omega$ is bounded.
Let $K_\varepsilon$ be defined as in Section~\ref{section2}. Let $\eta_\varepsilon\in C_0^\infty(\Omega)$ satisfy
\begin{equation}\label{eleven}
\begin{cases}
0\leq \eta_\varepsilon(x)\leq 1\,\,\,\text{ for }x\in\Omega, \\
\text{supp}(\eta_\varepsilon)\subset \{x\in\Omega\,:\,\text{dist}(x,\partial\Omega)\geq 3\varepsilon\}, \\
\eta_\varepsilon=1\,\,\,\text{ on }\{x\in\Omega\,:\,\text{dist}(x,\partial\Omega)\geq 4\varepsilon\}, \\
|\nabla\eta_\varepsilon|\leq C\varepsilon^{-1}.
\end{cases}
\end{equation}
If $P_\varepsilon$ is the linear extension operator provided by Theorem~\ref{thirteen}, then we write $\widetilde{w}=P_\varepsilon w$ for $w\in H^1(\Omega_\varepsilon,\Gamma_\varepsilon;\mathbb{R}^d)$. Throughout, $C$ denotes a harmless constant that may change from line to line.
\begin{lemm}\label{twenty}
Let
\[
r_{\varepsilon}=u_\varepsilon-u_0-\varepsilon\chi^\varepsilon\smoothtwo{(\nabla u_0)\eta_\varepsilon}{\varepsilon}.
\]
Then
\begin{align*}
&\displaystyle\int_{\Omega_\varepsilon} A^\varepsilon\nabla r_{\varepsilon}\cdot\nabla w \\
&\hspace{10mm}= |Q\cap\omega|\displaystyle\int_{\Omega}\widehat{A}\nabla u_0\cdot\nabla\eta_\varepsilon\widetilde{w}-|Q\cap\omega|\displaystyle\int_{\Omega}(1-\eta_\varepsilon)\widehat{A}\nabla u_0\cdot\nabla\widetilde{w} \\
&\hspace{20mm}+\displaystyle\int_\Omega\left[|Q\cap\omega|\widehat{A}-\textbf{1}_+^\varepsilon A^\varepsilon\right]\left[\nabla u_0-\smoothtwo{(\nabla u_0)\eta_\varepsilon}{\varepsilon}\right]\cdot\nabla \widetilde{w} \\
&\hspace{20mm}+\displaystyle\int_\Omega\left[|Q\cap\omega|\widehat{A}-\textbf{1}_+^\varepsilon A^\varepsilon\nabla\mathbb{X}^\varepsilon\right]\smoothtwo{(\nabla u_0)\eta_\varepsilon}{\varepsilon}\cdot\nabla \widetilde{w} \\
&\hspace{20mm}-\varepsilon\displaystyle\int_{\Omega_\varepsilon}A^\varepsilon\chi^\varepsilon\nabla\smoothtwo{(\nabla u_0)\eta_\varepsilon}{\varepsilon}\cdot\nabla w
\end{align*}
for any $w\in H^1(\Omega_\varepsilon,\Gamma_\varepsilon;\mathbb{R}^d)$.
\end{lemm}
\begin{proof}
Since $u_\varepsilon$ and $u_0$ solve~\eqref{five} and~\eqref{seven}, respectively,
\[
\displaystyle\int_{\Omega_\varepsilon}A^\varepsilon\nabla u_\varepsilon\cdot\nabla w=0
\]
and
\[
|Q\cap\omega|\displaystyle\int_{\Omega}\widehat{A}\nabla u_0\cdot\nabla (\widetilde{w}\eta_\varepsilon)=0
\]
for any $w\in H^1(\Omega_\varepsilon,\Gamma_\varepsilon;\mathbb{R}^d)$. Hence,
\begin{align*}
&\displaystyle\int_{\Omega_\varepsilon}A^\varepsilon\nabla r_\varepsilon\cdot\nabla w \\
&\hspace{10mm}= \displaystyle\int_{\Omega_\varepsilon}A^\varepsilon\nabla u_\varepsilon\cdot\nabla w-\displaystyle\int_{\Omega_\varepsilon}A^\varepsilon\nabla u_0\cdot\nabla w \\
&\hspace{20mm}-\displaystyle\int_{\Omega_\varepsilon}A^\varepsilon\nabla\left[\varepsilon\chi^\varepsilon\smoothtwo{(\nabla u_0)\eta_\varepsilon}{\varepsilon}\right]\cdot\nabla w \\
&\hspace{10mm}= |Q\cap\omega|\displaystyle\int_{\Omega}\widehat{A}\nabla u_0\cdot\nabla (\widetilde{w}\eta_\varepsilon)-\displaystyle\int_{\Omega_\varepsilon}A^\varepsilon\nabla u_0\cdot\nabla w \\
&\hspace{20mm}-\displaystyle\int_{\Omega_\varepsilon}A^\varepsilon\nabla \chi^\varepsilon\smoothtwo{(\nabla u_0)\eta_\varepsilon}{\varepsilon}\cdot\nabla w \\
&\hspace{20mm}-\varepsilon\displaystyle\int_{\Omega_\varepsilon}A^\varepsilon\chi^\varepsilon\nabla\smoothtwo{(\nabla u_0)\eta_\varepsilon}{\varepsilon}\cdot\nabla w \\
&\hspace{10mm}= |Q\cap\omega|\displaystyle\int_\Omega\widehat{A}\nabla u_0\cdot\nabla\eta_\varepsilon\widetilde{w}-|Q\cap\omega|\displaystyle\int_\Omega (1-\eta_\varepsilon)\widehat{A}\nabla u_0\cdot\nabla\widetilde{w} \\
&\hspace{20mm}+\displaystyle\int_{\Omega}\left[|Q\cap\omega|\widehat{A}-\textbf{1}_+^\varepsilon A^\varepsilon\right]\left[\nabla u_0-\smoothtwo{(\nabla u_0)\eta_\varepsilon}{\varepsilon}\right]\cdot\nabla\widetilde{w} \\
&\hspace{20mm}+\displaystyle\int_{\Omega}\left[|Q\cap\omega|\widehat{A}-\textbf{1}_+^\varepsilon A^\varepsilon-\textbf{1}_+^\varepsilon A^\varepsilon\nabla\chi^\varepsilon\right]\smoothtwo{(\nabla u_0)\eta_\varepsilon}{\varepsilon}\cdot\nabla\widetilde{w} \\
&\hspace{20mm}-\varepsilon\displaystyle\int_{\Omega_\varepsilon}A^\varepsilon\chi^\varepsilon\nabla\smoothtwo{(\nabla u_0) \eta_\varepsilon}{\varepsilon}\cdot\nabla w,
\end{align*}
which is the desired equality.
\end{proof}
Lemma~\ref{sixtyfour}, presented below, is used in the proof of Lemma~\ref{twentyone}, which establishes a Poincar\'{e}-type inequality for the perforated domain. We use the notation $\Delta(x,r)=B(x,r)\cap\partial \Omega$ for a surface ball of $\partial\Omega$.
\begin{lemm}\label{sixtyfour}
For sufficiently small $\varepsilon$, there exist $r_0,\rho_0>0$ depending only on $\omega$ such that for any $x\in\partial\Omega$,
\[
\Delta\left(y,\varepsilon\rho_0\right)\subset\Delta(x,\varepsilon r_0)\text{ and }\overline{\Delta\left(y,\varepsilon\rho_0\right)}\subset\Gamma_\varepsilon
\]
for some $y\in\Gamma_\varepsilon$.
\end{lemm}
\begin{proof}
Write $\mathbb{R}^d\backslash\omega=\cup_{j=1}^\infty H_j$, where each $H_j$ is connected and bounded by assumption (see Section~\ref{section2}). Since $\omega$ is 1-periodic, there exists a constant $M<\infty$ such that
\[
\underset{j\geq 1}{\sup}\,\{\text{diam}\,H_j\}\leq M.
\]
Take
\begin{equation}\label{seventytwo}
r_0=2\max\left\{\mathfrak{g}^\omega,M\right\},
\end{equation}
where $\mathfrak{g}^\omega$ is defined in Section~\ref{section2}. Set $\rho_0=\frac{1}{16}\mathfrak{g}^\omega$. Let
\[
\widetilde{H}_j=\left\{z\in\mathbb{R}^d\,:\,\text{dist}(z,H_j)<\frac{1}{4}\mathfrak{g}^\omega\right\}\text{ for each }j,
\]
and fix $x\in\partial\Omega$. If $x\in\partial\Omega\backslash(\cup_{j=1}^\infty \varepsilon \widetilde{H}_j)$, then take $y=x$. Indeed, for any $z\in\Delta(y,\varepsilon \rho_0)\subset\Delta(x,\varepsilon r_0)$ and any positive integer $k$,
\begin{align*}
\text{dist}(z,\varepsilon H_k) &\geq \text{dist}(y,\varepsilon H_k)-|y-z| \\
&\geq \varepsilon\frac{1}{4}\mathfrak{g}^\omega-\varepsilon\rho_0 \\
&\geq \varepsilon\left\{\frac{1}{4}\mathfrak{g}^\omega-\frac{1}{16}\mathfrak{g}^\omega\right\} \\
&\geq \varepsilon\frac{3}{16}\mathfrak{g}^\omega,
\end{align*}
and so $\overline{\Delta(y,\varepsilon\rho_0)}\subset\Gamma_\varepsilon$.
Suppose $x\in\partial\Omega\cap(\cup_{j=1}^\infty \varepsilon\widetilde{H}_j)$. There exists a positive integer $k$ such that $x\in\varepsilon \widetilde{H}_k$. Moreover, $\varepsilon\widetilde{H}_k\subset B(x,\varepsilon r_0)$ since for any $z\in\varepsilon\widetilde{H}_k$ we have
\begin{align*}
|x-z| &\leq \text{dist}(x,\varepsilon H_k)+\text{diam}\,(\varepsilon H_k)+\text{dist}(z,\varepsilon H_k) \\
&\leq \varepsilon\frac{1}{4}\mathfrak{g}^\omega+\varepsilon M+\varepsilon\frac{1}{4}\mathfrak{g}^\omega \\
&< \varepsilon\mathfrak{g}^\omega+\varepsilon M \\
&\leq \varepsilon r_0.
\end{align*}
In this case, choose $y\in \varepsilon(\widetilde{H}_k\backslash H_k)$ so that $\text{dist}(y,\varepsilon H_k)= \varepsilon(1/8)\mathfrak{g}^\omega$ and $y\in\partial\Omega$. Then for any $z\in\Delta(y,\varepsilon\rho_0)\subset[\partial\Omega\cap\varepsilon(\widetilde{H}_k\backslash H_k)]\subset\Delta(x,\varepsilon r_0)$,
\begin{align*}
\text{dist}(z,\varepsilon H_k) &\geq \text{dist}(y,\varepsilon H_k)-|y-z| \\
&\geq \varepsilon\frac{1}{8}\mathfrak{g}^\omega-\varepsilon\dfrac{1}{16}\mathfrak{g}^\omega \\
&\geq \varepsilon\dfrac{1}{16}\mathfrak{g}^\omega,
\end{align*}
and so $\overline{\Delta(y,\varepsilon\rho_0)}\subset\Gamma_\varepsilon$.
\end{proof}
\begin{lemm}\label{twentyone}
For $w\in H^1(\Omega_\varepsilon,\Gamma_\varepsilon;\mathbb{R}^d)$,
\[
\|\widetilde{w}\|_{L^2(\mathcal{O}_{4\varepsilon})}\leq C\varepsilon\|\nabla \widetilde{w}\|_{L^2(\Omega)},
\]
where $\mathcal{O}_{4\varepsilon}=\{x\in\Omega\,:\,\text{dist}(x,\partial\Omega)<4\varepsilon\}$ and $C$ depends on $d$, $\Omega$, and $\omega$.
\end{lemm}
\begin{proof}
We cover $\partial\Omega$ with the surface balls $\Delta(x,\varepsilon r_0)$ provided by Lemma~\ref{sixtyfour} and decompose the region $\mathcal{O}_{4\varepsilon}$ accordingly. In particular, let $r_0$ denote the constant given by Lemma~\ref{sixtyfour}, and note that $\cup_{x\in\partial\Omega}\Delta(x,\varepsilon r_0)$ covers the compact set $\partial\Omega$. Then there exists $\{x_i\}_{i=1}^{N}$ with $\partial\Omega\subset\cup_{i=1}^{N}\Delta(x_i,\varepsilon r_0)$, where $N=N(\varepsilon)$. Write
\[
\mathcal{O}_{4\varepsilon}^{(i)}=\{x\in\Omega\,:\,\text{dist}(x,\Delta_i)<4\varepsilon\},\,\,\,\text{where }\Delta_i=\Delta(x_i,\varepsilon r_0).
\]
Given that $\Omega$ is a Lipschitz domain, there exists a positive integer $M$ independent of $\varepsilon$ such that $\mathcal{O}_{4\varepsilon}^{(i)}\cap\mathcal{O}_{4\varepsilon}^{(j)}\neq\emptyset$ for at most $M$ indices $j$ different from $i$.
Set $W(x)=\widetilde{w}(\varepsilon x)$. Note for each $1\leq i\leq N$, by Lemma~\ref{sixtyfour} there exists a $y_i\in \mathcal{O}_{4\varepsilon}^{(i)}$ such that $\widetilde{w}\equiv 0$ on $\Delta(y_i,\varepsilon\rho_0)\subset\Delta_i$. Hence, by Poincar\'{e}'s inequality (see Theorem 1 in~\cite{meyers}),
\begin{equation}\label{seventythree}
\left(\displaystyle\int_{\mathcal{O}_{4\varepsilon}^{(i)}/\varepsilon}|W|^2\right)^{1/2}\leq C\left(\displaystyle\int_{\mathcal{O}_{4\varepsilon}^{(i)}/\varepsilon}|\nabla W|^2\right)^{1/2},
\end{equation}
where $C$ depends on $\Omega$, $r_0$, and $\rho_0$ but is independent of $\varepsilon$ and $i$. Consequently,
\[
\displaystyle\int_{\mathcal{O}_{4\varepsilon}}|\widetilde{w}(x)|^2\,dx\leq C\varepsilon^2\sum_{i=1}^N\displaystyle\int_{\mathcal{O}_{4\varepsilon}^{(i)}}|\nabla \widetilde{w}(x)|^2\,dx\leq C_1\varepsilon^2\displaystyle\int_{\mathcal{O}_{4\varepsilon}}|\nabla \widetilde{w}(x)|^2\,dx
\]
where we've made the change of variables $\varepsilon x\mapsto x$ in~\eqref{seventythree} and $C_1$ is a constant depending on $\Omega$, $\omega$, and $M$ but independent of $\varepsilon$.
\end{proof}
\begin{lemm}\label{twentysix}
For $w\in H^1(\Omega_\varepsilon,\Gamma_\varepsilon;\mathbb{R}^d)$,
\begin{align*}
\left|\displaystyle\int_{\Omega_\varepsilon}A^\varepsilon\nabla r_\varepsilon\cdot\nabla w\right| &\leq C\left\{\|u_0\|_{L^2(\mathcal{O}_{4\varepsilon})}+\|(\nabla u_0)\eta_\varepsilon-\smoothone{(\nabla u_0)\eta_\varepsilon}{\varepsilon}\|_{L^2(\Omega)}\right. \\
&\hspace{30mm}\left.+\varepsilon\|\smoothone{(\nabla^2 u_0)\eta_\varepsilon}{\varepsilon}\|_{L^2(\Omega)}\right\}\|w\|_{H^1(\Omega_\varepsilon)}.
\end{align*}
\end{lemm}
\begin{proof}
By Lemma~\ref{twenty},
\begin{equation}\label{seventy}
\displaystyle\int_{\Omega_\varepsilon}A^\varepsilon\nabla r_\varepsilon\cdot\nabla w=I_1+I_2+I_3+I_4+I_5,
\end{equation}
where
\begin{align*}
I_1 &= |Q\cap\omega|\displaystyle\int_{\Omega}\widehat{A}\nabla u_0\cdot\nabla\eta_\varepsilon\widetilde{w},\\
I_2 &= -|Q\cap\omega|\displaystyle\int_{\Omega}(1-\eta_\varepsilon)\widehat{A}\nabla u_0\cdot\nabla\widetilde{w},\\
I_3 &= \displaystyle\int_\Omega\left[|Q\cap\omega|\widehat{A}-\textbf{1}_+^\varepsilon A^\varepsilon\right]\left[\nabla u_0-\smoothtwo{(\nabla u_0)\eta_\varepsilon}{\varepsilon}\right]\cdot\nabla \widetilde{w},\\
I_4 &= \displaystyle\int_\Omega\left[|Q\cap\omega|\widehat{A}-\textbf{1}_+^\varepsilon A^\varepsilon\nabla\mathbb{X}^\varepsilon\right]\smoothtwo{(\nabla u_0)\eta_\varepsilon}{\varepsilon}\cdot\nabla \widetilde{w},\\
I_5 &= -\varepsilon\displaystyle\int_{\Omega_\varepsilon}A^\varepsilon\chi^\varepsilon\nabla\smoothtwo{(\nabla u_0)\eta_\varepsilon}{\varepsilon}\cdot\nabla w,
\end{align*}
and $w\in H^1(\Omega_\varepsilon,\Gamma_\varepsilon;\mathbb{R}^d)$. According to~\eqref{eleven}, $\text{supp}(\nabla\eta_\varepsilon)\subset\mathcal{O}_{4\varepsilon}$, where $\mathcal{O}_{4\varepsilon}=\{x\in\Omega\,:\,\text{dist}(x,\partial\Omega)<4\varepsilon\}$. Moreover, $|\nabla \eta_\varepsilon|\leq C\varepsilon^{-1}$. Hence, Lemma~\ref{twentyone}, Lemma~\ref{sixtyeight}, and~\eqref{eleven} imply
\[
|I_1|\leq C\varepsilon^{-1}\displaystyle\int_{\mathcal{O}_{4\varepsilon}}|\nabla u_0||\widetilde{w}|\leq C\varepsilon^{-1}\|\nabla u_0\|_{L^2(\mathcal{O}_{4\varepsilon})}\|\widetilde{w}\|_{L^2(\mathcal{O}_{4\varepsilon})}\leq C\|\nabla u_0\|_{L^2(\mathcal{O}_{4\varepsilon})}\|\nabla \widetilde{w}\|_{L^2(\Omega)}.
\]
Since $\text{supp}(1-\eta_\varepsilon)\subset\mathcal{O}_{4\varepsilon}$ and $\eta_\varepsilon\leq 1$, Lemma~\ref{sixtynine} and~Lemma~\ref{sixtyeight} imply
\begin{align}
|I_2|&\leq C\displaystyle\int_{\mathcal{O}_{4\varepsilon}}\left|\widehat{A}\nabla u_0\cdot\nabla\widetilde{w}\right|\leq C\|\nabla u_0\|_{L^2(\mathcal{O}_{4\varepsilon})}\|\nabla \widetilde{w}\|_{L^2(\Omega)}.\nonumber
\end{align}
By Theorem~\ref{thirteen},
\begin{equation}\label{twentytwo}
|I_1+I_2|\leq C\|\nabla u_0\|_{L^2(\mathcal{O}_{4\varepsilon})}\|w\|_{H^1(\Omega_\varepsilon)}.
\end{equation}
Again, since $\text{supp}(1-\eta_\varepsilon)\subset\mathcal{O}_{4\varepsilon}$ (see~\eqref{eleven}),
\begin{align*}
&\|\nabla u_0-\smoothtwo{(\nabla u_0)\eta_\varepsilon}{\varepsilon}\|_{L^2(\Omega)} \\
&\hspace{10mm}\leq \|(1-\eta_\varepsilon)\nabla u_0\|_{L^2(\Omega)}+\|(\nabla u_0)\eta_\varepsilon-\smoothone{(\nabla u_0)\eta_\varepsilon}{\varepsilon}\|_{L^2(\Omega)} \\
&\hspace{20mm}+\|\smoothone{(\nabla u_0)\eta_\varepsilon-\smoothone{(\nabla u_0)\eta_\varepsilon}{\varepsilon}}{\varepsilon}\|_{L^2(\Omega)} \\
&\hspace{10mm}\leq\|\nabla u_0\|_{L^2(\mathcal{O}_{4\varepsilon})}+C\|(\nabla u_0)\eta_\varepsilon-\smoothone{(\nabla u_0)\eta_\varepsilon}{\varepsilon}\|_{L^2(\Omega)}.
\end{align*}
Therefore,
\begin{align}
|I_3|&\leq C\|\nabla u_0-\smoothtwo{(\nabla u_0)\eta_\varepsilon}{\varepsilon}\|_{L^2(\Omega)}\|w\|_{H^1(\Omega_\varepsilon)} \nonumber\\
&\leq C\left\{\|\nabla u_0\|_{L^2(\mathcal{O}_{4\varepsilon})} \right. \nonumber\\
&\hspace{10mm}\left.+\|(\nabla u_0)\eta_\varepsilon-\smoothone{(\nabla u_0)\eta_\varepsilon}{\varepsilon}\|_{L^2(\Omega)} \right\}\|w\|_{H^1(\Omega_\varepsilon)}.\label{twentythree}
\end{align}
Set $B=|Q\cap\omega|\widehat{A}-\textbf{1}_+ A\nabla\mathbb{X}$. By~\eqref{eight} and~\eqref{nine}, $B$ satisfies the assumptions of Lemma~\ref{fourteen}. Therefore, there exists $\pi=\{\pi_{kij}^{\a\b}\}$ that is 1-periodic with
\[
\dfrac{\partial}{\partial y_k}\pi_{kij}^{\a\b}=b_{ij}^{\a\b}\,\,\,\text{ and }\,\,\,\pi_{kij}^{\a\b}=-\pi_{ikj}^{\a\b},
\]
where
\[
b_{ij}^{\a\b}=|Q\cap\omega|\widehat{a}_{ij}^{\a\b}-\textbf{1}_+a_{ik}^{\a\gamma}\dfrac{\partial}{\partial y_k}\mathbb{X}_{j}^{\gamma\b}.
\]
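In particular, $B$ is 1-periodic and, by~\eqref{eight}, has mean value zero over $Q$:
\[
\displaystyle\int_Q b_{ij}^{\a\b}=|Q\cap\omega|\widehat{a}_{ij}^{\a\b}-\displaystyle\int_{Q\cap\omega}a_{ik}^{\a\gamma}\dfrac{\partial \mathbb{X}_{j}^{\gamma\b}}{\partial y_k}=|Q\cap\omega|\widehat{a}_{ij}^{\a\b}-|Q\cap\omega|\widehat{a}_{ij}^{\a\b}=0.
\]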
Moreover, $\|\pi_{kij}^{\a\b}\|_{H^1(Q)}\leq C$ for some constant $C$ depending on $\kappa_1$, $\kappa_2$, and $\omega$. Hence, integrating by parts gives
\begin{align*}
\displaystyle\int_{\Omega}b_{ij}^{\a\b\varepsilon}\smoothtwo{\dfrac{\partial u_0^\b}{\partial x_j}\eta_\varepsilon}{\varepsilon}\dfrac{\partial\widetilde{w}^\a}{\partial x_i} &=-\varepsilon\displaystyle\int_{\Omega}\pi_{kij}^{\a\b\varepsilon}\dfrac{\partial}{\partial x_k}\left[\smoothtwo{\dfrac{\partial u_0^\b}{\partial x_j}\eta_\varepsilon}{\varepsilon}\dfrac{\partial\widetilde{w}^\a}{\partial x_i}\right] \\
&=-\varepsilon\displaystyle\int_{\Omega}\pi_{kij}^{\a\b\varepsilon}\dfrac{\partial}{\partial x_k}\left[\smoothtwo{\dfrac{\partial u_0^\b}{\partial x_j}\eta_\varepsilon}{\varepsilon}\right]\dfrac{\partial\widetilde{w}^\a}{\partial x_i} ,
\end{align*}
since
\[
\displaystyle\int_{\Omega}\pi_{kij}^{\a\b\varepsilon}\smoothtwo{\dfrac{\partial u_0^\b}{\partial x_j}\eta_\varepsilon}{\varepsilon}\dfrac{\partial^2\widetilde{w}^\a}{\partial x_k\partial x_i}=0
\]
due to the anti-symmetry of $\pi$. Thus, by Lemma~\ref{fifteen} and~\eqref{eleven},
\begin{align}
|I_4|&\leq C\varepsilon\|\pi^\varepsilon\nabla\smoothtwo{(\nabla u_0)\eta_\varepsilon}{\varepsilon}\|_{L^2(\Omega)}\|w\|_{H^1(\Omega_\varepsilon)} \nonumber\\
&\leq C\left\{\|\nabla u_0\|_{L^2(\mathcal{O}_{4\varepsilon})}+\varepsilon\|\smoothone{(\nabla^2 u_0)\eta_\varepsilon}{\varepsilon}\|_{L^2(\Omega)}\right\}\|w\|_{H^1(\Omega_\varepsilon)}.\label{twentyfour}
\end{align}
Finally, by Lemma~\ref{fifteen} and~\eqref{eleven},
\begin{align}
|I_5|\leq C\left\{\|\nabla u_0\|_{L^2(\mathcal{O}_{4\varepsilon})}+\varepsilon\|\smoothone{(\nabla^2 u_0)\eta_\varepsilon}{\varepsilon}\|_{L^2(\Omega)}\right\}\|w\|_{H^1(\Omega_\varepsilon)}.\label{twentyfive}
\end{align}
The desired estimate follows from~\eqref{seventy},~\eqref{twentytwo},~\eqref{twentythree},~\eqref{twentyfour}, and~\eqref{twentyfive}.
\end{proof}
\begin{lemm}\label{twentyseven}
For $w\in H^1(\Omega_\varepsilon,\Gamma_\varepsilon;\mathbb{R}^d)$,
\[
\left|\displaystyle\int_{\Omega_\varepsilon}A^\varepsilon\nabla r_\varepsilon\cdot\nabla w\right|\leq C\varepsilon^{1/2}\|f\|_{H^1(\partial\Omega)}\|w\|_{H^1(\Omega_\varepsilon)}.
\]
\end{lemm}
\begin{proof}
Recall that $u_0$ satisfies $\mathcal{L}_0(u_0)=0$ in $\Omega$, and so it follows from nontangential-maximal-function estimates for constant-coefficient systems in Lipschitz domains that
\begin{equation}\label{twentynine}
\|(\nabla u_0)^*\|_{L^2(\partial\Omega)}\leq C\|f\|_{H^1(\partial\Omega)},
\end{equation}
where $(\nabla u_0)^*$ denotes the nontangential maximal function of $\nabla u_0$ (see~\cite{dahlberg}). By the coarea formula,
\begin{equation}\label{thirty}
\|\nabla u_0\|_{L^2(\mathcal{O}_{4\varepsilon})}\leq C\varepsilon^{1/2}\|(\nabla u_0)^*\|_{L^2(\partial\Omega)}\leq C\varepsilon^{1/2}\|f\|_{H^1(\partial\Omega)}.
\end{equation}
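The first inequality in~\eqref{thirty} can be seen, roughly, by foliating $\mathcal{O}_{4\varepsilon}$ with the level sets of the distance function: since each point of $\partial\mathcal{O}_t\cap\Omega$ lies in the nontangential approach region of a nearby boundary point,
\[
\displaystyle\int_{\mathcal{O}_{4\varepsilon}}|\nabla u_0|^2=\displaystyle\int_0^{4\varepsilon}\displaystyle\int_{\partial\mathcal{O}_t\cap\Omega}|\nabla u_0|^2\,dS\,dt\leq C\displaystyle\int_0^{4\varepsilon}\displaystyle\int_{\partial\Omega}|(\nabla u_0)^*|^2\,dS\,dt\leq C\varepsilon\|(\nabla u_0)^*\|^2_{L^2(\partial\Omega)},
\]
where $C$ depends on the Lipschitz character of $\Omega$.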
Notice that if $u_0$ solves~\eqref{seven}, then $\mathcal{L}_0(\nabla u_0)=0$ in $\Omega$, and so we may use the interior estimate for $\mathcal{L}_0$. That is,
\begin{equation}\label{twentyeight}
|\nabla^2 u_0(x)|\leq\dfrac{C}{\d(x)}\left(-\!\!\!\!\!\!\displaystyle\int_{B(x,\d(x)/8)}|\nabla u_0|^2\right)^{1/2},
\end{equation}
where $\d(x)=\text{dist}(x,\partial\Omega)$. In particular,
\begin{align}
\|(\nabla^2 u_0)\eta_\varepsilon\|_{L^2(\Omega)} &\leq \left(\displaystyle\int_{\Omega\backslash\mathcal{O}_{3\varepsilon}}|\nabla^2 u_0|^2\right)^{1/2} \nonumber\\
&\leq C\left(\displaystyle\int_{\Omega\backslash\mathcal{O}_{3\varepsilon}}-\!\!\!\!\!\!\displaystyle\int_{B(x,\d(x)/8)}\left|\dfrac{\nabla u_0(y)}{\d(x)}\right|^2\,dy\>dx\right)^{1/2} \nonumber\\
&\leq C\left(\displaystyle\int_{3\varepsilon}^{C_0}t^{-2}\displaystyle\int_{\partial\mathcal{O}_t\cap\Omega}-\!\!\!\!\!\!\displaystyle\int_{B(x,t/8)}|\nabla u_0(y)|^2\,dy \>dS(x)\>dt\right)^{1/2} \nonumber\\
&\hspace{30mm}+C_1\left(\displaystyle\int_{\Omega\backslash\mathcal{O}_{C_0}}|\nabla u_0|^2\right)^{1/2}\nonumber\\
&\leq C\|(\nabla u_0)^*\|_{L^2(\partial\Omega)}\left(\displaystyle\int_{3\varepsilon}^{C_0}t^{-2}\,dt\right)^{1/2}+C_1\|\nabla u_0\|_{L^2(\Omega)} \nonumber\\
&\leq C\left\{\varepsilon^{-1/2}\|f\|_{H^1(\partial\Omega)}+\|f\|_{H^{1/2}(\partial\Omega)}\right\} \nonumber\\
&\leq C\varepsilon^{-1/2}\|f\|_{H^1(\partial\Omega)},\label{thirtyone}
\end{align}
where $C_0$ is a constant depending on $\Omega$, and we've used~\eqref{eleven},~\eqref{twentyeight}, the coarea formula, energy estimates, and~\eqref{twentynine}. Hence,
\begin{equation}\label{thirtytwo}
\varepsilon\|\smoothone{(\nabla^2 u_0)\eta_\varepsilon}{\varepsilon}\|_{L^2(\Omega)}\leq C\varepsilon^{1/2}\|f\|_{H^1(\partial\Omega)}.
\end{equation}
Finally, by Lemma~\ref{sixteen},
\begin{equation}\label{thirtythree}
\|(\nabla u_0)\eta_\varepsilon-\smoothone{(\nabla u_0)\eta_\varepsilon}{\varepsilon}\|_{L^2(\Omega)}\leq C\varepsilon^{1/2}\|f\|_{H^1(\partial\Omega)},
\end{equation}
where we've also used~\eqref{eleven} and~\eqref{thirtyone}. Equations~\eqref{thirty},~\eqref{thirtytwo}, and~\eqref{thirtythree} together with Lemma~\ref{twentysix} give the desired estimate.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{ten}]
Note $r_\varepsilon\in H^1(\Omega_\varepsilon,\Gamma_\varepsilon;\mathbb{R}^d)$, and so by Lemma~\ref{twentyseven} and~\eqref{three},
\begin{align*}
\|e(r_\varepsilon)\|^2_{L^2(\Omega_\varepsilon)} &\leq C\displaystyle\int_{\Omega_\varepsilon}A^\varepsilon\nabla r_\varepsilon\cdot\nabla r_\varepsilon \\
&\leq C\varepsilon^{1/2}\|f\|_{H^1(\partial\Omega)}\|r_\varepsilon\|_{H^1(\Omega_\varepsilon)}.
\end{align*}
Lemma~\ref{fortysix} gives the desired estimate.
\end{proof}
\section{Interior Lipschitz Estimate}\label{section4}
In this section, we use Theorem~\ref{ten} to investigate interior Lipschitz estimates down to the scale $\varepsilon$. In particular, we prove Theorem~\ref{nineteen}. The proof of Theorem~\ref{nineteen} is based on the scheme used in~\cite{shen} to prove boundary Lipschitz estimates for solutions to~\eqref{five} in the case $\omega=\mathbb{R}^d$, which in turn is based on a more general scheme for establishing Lipschitz estimates presented in~\cite{smart} and adapted in~\cite{shen} and~\cite{armstrong}.
The following lemma is essentially Caccioppoli's inequality in a perforated ball. The proof is similar to that of the classical Caccioppoli inequality, but nevertheless we present it for completeness.
Throughout this section, let $B_\varepsilon(r)$ denote the perforated ball of radius $r$ centered at some $x_0\in\mathbb{R}^d$, i.e., $B_\varepsilon(r)=B(x_0,r)\cap\varepsilon\omega$. Let $S_\varepsilon(r)=\partial(\varepsilon\omega)\cap B(x_0,r)$ and $\Gamma_\varepsilon(r)=\varepsilon\omega\cap\partial B(x_0,r)$.
\begin{lemm}\label{thirtysix}
Suppose $\mathcal{L}_\varepsilon(u_\varepsilon)=0$ in $B_\varepsilon(2)$ and $\sigma_\varepsilon(u_\varepsilon)=0$ on $S_\varepsilon(2)$. There exists a constant $C$ depending on $\kappa_1$ and $\kappa_2$ such that
\[
\left(-\!\!\!\!\!\!\displaystyle\int_{B_\varepsilon(1)}|\nabla u_\varepsilon|^2\right)^{1/2}\leq C\underset{q\in\mathbb{R}^d}{\inf}\left(-\!\!\!\!\!\!\displaystyle\int_{B_\varepsilon(2)}|u_\varepsilon-q|^2\right)^{1/2}.
\]
\end{lemm}
\begin{proof}
Let $\varphi\in C_0^\infty(B(2))$ satisfy $0\leq\varphi\leq 1$, $\varphi\equiv 1$ on $B(1)$, $|\nabla\varphi|\leq C_1$ for some constant $C_1$. Let $q\in\mathbb{R}^d$, and set $w=(u_\varepsilon-q)\varphi^2$. By~\eqref{nineteen} and H\"{o}lder's inequality,
\begin{align}
0&=\displaystyle\int_{B_\varepsilon(2)}A^\varepsilon\nabla u_\varepsilon\nabla w \nonumber\\
&\geq C_2\displaystyle\int_{B_\varepsilon(2)}|e(u_\varepsilon)|^2\varphi^2-C_3\displaystyle\int_{B_\varepsilon(2)}|\nabla\varphi|^2|u_\varepsilon-q|^2\label{twelve}
\end{align}
for some constants $C_2$ and $C_3$ depending on $\kappa_1$ and $\kappa_2$. In particular,
\[
\displaystyle\int_{B_\varepsilon(2)}|e((u_\varepsilon-q)\varphi)|^2\leq C\displaystyle\int_{B_\varepsilon(2)}|\nabla\varphi|^2|u_\varepsilon-q|^2,
\]
where $C$ only depends on $\kappa_1$ and $\kappa_2$. Since $\varphi\equiv 1$ in $B(1)$, $q$ is constant, and $(u_\varepsilon-q)\varphi\in H^1(B_\varepsilon(2),\Gamma_\varepsilon(2);\mathbb{R}^d)$, equation~\eqref{twelve} together with Lemma~\ref{fortysix} gives the desired estimate.
\end{proof}
We extend Lemma~\ref{thirtysix} to hold for a ball $B_\varepsilon(r)$ with $r>0$ by a convenient scaling technique---the so-called ``blow-up argument''---often used in the study of homogenization.
\begin{lemm}\label{thirtyseven}
Suppose $\mathcal{L}_\varepsilon(u_\varepsilon)=0$ in $B_\varepsilon(2r)$ and $\sigma_\varepsilon(u_\varepsilon)=0$ on $S_\varepsilon(2r)$. There exists a constant $C$ depending on $\kappa_1$ and $\kappa_2$ such that
\[
\left(-\!\!\!\!\!\!\displaystyle\int_{B_\varepsilon(r)}|\nabla u_\varepsilon|^2\right)^{1/2}\leq \dfrac{C}{r}\underset{q\in\mathbb{R}^d}{\inf}\left(-\!\!\!\!\!\!\displaystyle\int_{B_\varepsilon(2r)}|u_\varepsilon-q|^2\right)^{1/2}.
\]
\end{lemm}
\begin{proof}
Let $U_\varepsilon(x)=u_\varepsilon(rx)$, and note $U_\varepsilon$ satisfies $\mathcal{L}_{\varepsilon/r}(U_\varepsilon)=0$ in $B_{\varepsilon/r}(2)$ and $\sigma_{\varepsilon/r}(U_\varepsilon)=0$ on $S_{\varepsilon/r}(2)$. By Lemma~\ref{thirtysix},
\[
\left(-\!\!\!\!\!\!\displaystyle\int_{B_{\varepsilon/r}(1)}|\nabla U_\varepsilon|^2\right)^{1/2}\leq C\underset{q\in\mathbb{R}^d}{\inf}\left(-\!\!\!\!\!\!\displaystyle\int_{B_{\varepsilon/r}(2)}|U_\varepsilon-q|^2\right)^{1/2}
\]
for some $C$ independent of $\varepsilon$ and $r$. Note $\nabla U_\varepsilon=r\nabla u_\varepsilon$, and so
\[
r^{1-d/2}\left(-\!\!\!\!\!\!\displaystyle\int_{B_{\varepsilon}(r)}|\nabla u_\varepsilon|^2\right)^{1/2}\leq Cr^{-d/2}\underset{q\in\mathbb{R}^d}{\inf}\left(-\!\!\!\!\!\!\displaystyle\int_{B_{\varepsilon}(2r)}|u_\varepsilon-q|^2\right)^{1/2},
\]
where we've made the substitution $rx\mapsto x$. The desired inequality follows.
\end{proof}
The following lemma is a key estimate in the proof of Theorem~\ref{nineteen}. In essence, it uses the convergence rate in Theorem~\ref{ten} to approximate the solution $u_\varepsilon$ with a ``nice'' function.
\begin{lemm}\label{forty}
Suppose $\mathcal{L}_\varepsilon(u_\varepsilon)=0$ in $B_\varepsilon(3r)$ and $\sigma_\varepsilon(u_\varepsilon)=0$ on $S_\varepsilon(3r)$. There exists a $v\in H^1(B(r);\mathbb{R}^d)$ with $\mathcal{L}_0(v)=0$ in $B(r)$ and
\[
\left(-\!\!\!\!\!\!\displaystyle\int_{B_\varepsilon(r)}|u_\varepsilon-v|^2\right)^{1/2}\leq C\left(\dfrac{\varepsilon}{r}\right)^{1/2}\left(-\!\!\!\!\!\!\displaystyle\int_{B_\varepsilon(3r)}|u_\varepsilon|^2\right)^{1/2}
\]
for some constant $C$ depending on $d$, $\omega$, $\kappa_1$, and $\kappa_2$.
\end{lemm}
\begin{proof}
With rescaling (see the proof of Lemma~\ref{thirtyseven}), we may assume $r=1$. By Lemma~\ref{thirtyseven} and estimate~\eqref{thirtyeight} of Theorem~\ref{thirteen},
\[
\left(-\!\!\!\!\!\!\displaystyle\int_{B(3/2)}|\widetilde{u}_\varepsilon|^2\right)^{1/2}+\left(-\!\!\!\!\!\!\displaystyle\int_{B(3/2)}|\nabla \widetilde{u}_\varepsilon|^2\right)^{1/2}\leq C\left(-\!\!\!\!\!\!\displaystyle\int_{B_\varepsilon(3)}|u_\varepsilon|^2\right)^{1/2},
\]
where $\widetilde{u}_\varepsilon=P_\varepsilon u_\varepsilon\in H^1(B(3);\mathbb{R}^d)$ and $P_\varepsilon$ is the linear extension operator provided by Theorem~\ref{thirteen}. The coarea formula then implies there exists a $t\in [1,3/2]$ such that
\begin{equation}\label{thirtynine}
\|\nabla \widetilde{u}_\varepsilon\|_{L^2(\partial B(t))}+\|\widetilde{u}_\varepsilon\|_{L^2(\partial B(t))}\leq C\|u_\varepsilon\|_{L^2(B_\varepsilon(3))}.
\end{equation}
Let $v$ denote the solution to the Dirichlet problem $\mathcal{L}_0(v)=0$ in $B(t)$ and $v=\widetilde{u}_\varepsilon$ on $\partial B(t)$. Note that $v=u_\varepsilon=\widetilde{u}_\varepsilon$ on $\Gamma_\varepsilon(t)$. By Theorem~\ref{ten},
\[
\|u_\varepsilon-v\|_{L^2(B_\varepsilon(t))}\leq C\varepsilon^{1/2}\|\widetilde{u}_\varepsilon\|_{H^1(\partial B(t))}
\]
since
\[
\|\chi^\varepsilon\smoothtwo{(\nabla v)\eta_\varepsilon}{\varepsilon}\|_{L^2(B_\varepsilon(t))}\leq C\|\nabla v\|_{L^2(B(t))},
\]
where we've used notation consistent with Theorem~\ref{ten}. Hence,~\eqref{thirtynine} gives
\[
\|u_\varepsilon-v\|_{L^2(B_\varepsilon(1))}\leq \|u_\varepsilon-v\|_{L^2(B_\varepsilon(t))}\leq C\varepsilon^{1/2}\|u_\varepsilon\|_{L^2(B_\varepsilon(3))}.
\]
\end{proof}
\begin{lemm}\label{fortytwo}
Suppose $\mathcal{L}_0(v)=0$ in $B(2r)$. For $r\geq \varepsilon$, there exists a constant $C$ depending on $\omega,\kappa_1,\kappa_2$ and $d$ such that
\begin{equation}\label{fortythree}
\left(-\!\!\!\!\!\!\displaystyle\int_{B(r)}|v|^2\right)^{1/2}\leq C\left(-\!\!\!\!\!\!\displaystyle\int_{B_\varepsilon(2r)}|v|^2\right)^{1/2}
\end{equation}
\end{lemm}
\begin{proof}
Let
\[
T_\varepsilon=\{z\in\mathbb{Z}^d\,:\,\varepsilon(Q+z)\cap B(r)\neq\emptyset\},
\]
and fix $z\in T_\varepsilon$. Let $\{H_{k}\}_{k=1}^N$ denote the bounded, connected components of $\mathbb{R}^d\backslash\omega $ with $H_k\cap (Q+z)\neq \emptyset$. Define $\varphi_k\in C_0^\infty(Q^*(z))$ by
\[
\begin{cases}
\varphi_k(x)=1,\,\,\,\text{ if }x\in H_k, \\
\varphi_k(x)=0,\,\,\,\text{ if }\text{dist}(x,H_k)>\frac{1}{4}\mathfrak{g}^\omega, \\
|\nabla\varphi_k|\leq C,
\end{cases}
\]
where $C$ depends on $\omega$, $\mathfrak{g}^\omega>0$ is defined in Section~\ref{section2} by~\eqref{sixtysix}, and
\[
Q^*(z)=\bigcup_{j=1}^{3^d} (Q+z_j),\,\,\,z_j\in\mathbb{Z}^d\text{ and }|z-z_j|\leq \sqrt{d}.
\]
Set $\varphi=\sum_{k=1}^N\varphi_k\in C_0^\infty(Q^*)$, where $Q^*=Q^*(z)$. Since distinct components of $\mathbb{R}^d\backslash\omega$ are separated by a distance of at least $\mathfrak{g}^\omega$, the supports of the $\varphi_k$ are pairwise disjoint; in particular, $\varphi\equiv 1$ on each $H_k$, $1\leq k\leq N$, and $\nabla\varphi=0$ in $Q^*\backslash\omega$.
Set $V(x)=v(\varepsilon x)$. Note $\mathcal{L}_0(V)=0$ in $Q+z$. By Poincar\'{e}'s and Cacciopoli's inequalities,
\[
\displaystyle\int_{(Q+z)\backslash\omega}|V|^2\leq\sum_{k=1}^{N}\displaystyle\int_{H_k}|V|^2\leq C\displaystyle\int_{Q^*}|\nabla (V\varphi)|^2\leq C\displaystyle\int_{Q^*}|V|^2|\nabla\varphi|^2,
\]
where $C$ depends on $\omega$, $\kappa_1$, $\kappa_2$, and $d$ but is independent of $z$. Consequently, since $\nabla\varphi=0$ in $Q^*\backslash\omega$ and $(Q+z)\subset Q^*$,
\[
\displaystyle\int_{(Q+z)\cap\omega}|V|^2+\displaystyle\int_{(Q+z)\backslash\omega}|V|^2\leq C\displaystyle\int_{Q^*\cap\omega}|V|^2,
\]
where $C$ only depends on $\omega$, $\kappa_1$, $\kappa_2$, and $d$. Making the change of variables $\varepsilon x\mapsto x$ gives
\[
\displaystyle\int_{\varepsilon(Q+z)}|v|^2\leq C\displaystyle\int_{\varepsilon (Q^*\cap\omega)}|v|^2.
\]
Summing over all $z\in T_\varepsilon$ gives the desired inequality, since there is a constant $M<\infty$ depending only on $d$ such that $Q^*(z_1)\cap Q^*(z_2)\neq\emptyset$ for at most $M$ points $z_2\in\mathbb{Z}^d$ different from $z_1$.
\end{proof}
For $w\in L^2(B_\varepsilon(r);\mathbb{R}^d)$ and $\varepsilon,r>0$, set
\begin{equation}\label{fortyfive}
H_\varepsilon(r;w)=\dfrac{1}{r}\underset{\substack{M\in\mathbb{R}^{d\times d} \\ q\in\mathbb{R}^d}}{\inf}\left(-\!\!\!\!\!\!\displaystyle\int_{B_\varepsilon(r)}|w-Mx-q|^2\right)^{1/2},
\end{equation}
and set
\[
H_0(r;w)=\dfrac{1}{r}\underset{\substack{M\in\mathbb{R}^{d\times d} \\ q\in\mathbb{R}^d}}{\inf}\left(-\!\!\!\!\!\!\displaystyle\int_{B(r)}|w-Mx-q|^2\right)^{1/2}.
\]
\begin{lemm}\label{fortyfour}
Let $v$ be a solution of $\mathcal{L}_0(v)=0$ in $B(r)$. For $r\geq \varepsilon$, there exists a $\theta\in (0,1/4)$ such that
\[
H_\varepsilon(\theta r;v)\leq \dfrac{1}{2} H_\varepsilon(r;v).
\]
\end{lemm}
\begin{proof}
There exists a constant $C_1$ depending on $d$ such that
\[
H_\varepsilon(r;v)\leq C_1H_0(r;v)
\]
for any $r>0$. It follows from interior $C^2$-estimates for elasticity systems with constant coefficients that there exists $\theta\in (0,1/4)$ with
\[
H_0(\theta r;v)\leq \dfrac{1}{2C_2} H_0(r/2;v),
\]
where $C_2=C_3C_1$ and $C_3$ is the constant in~\eqref{fortythree} given in Lemma~\ref{fortytwo}. By Lemma~\ref{fortytwo}, we have the desired inequality.
\end{proof}
\begin{lemm}\label{fifty}
Suppose $\mathcal{L}_\varepsilon(u_\varepsilon)=0$ in $B_\varepsilon(2r)$ and $\sigma_\varepsilon(u_\varepsilon)=0$ on $S_\varepsilon(2r)$. For $r\geq \varepsilon$,
\[
H_\varepsilon(\theta r; u_\varepsilon)\leq \dfrac{1}{2} H_\varepsilon(r;u_\varepsilon)+\dfrac{C}{r}\left(\dfrac{\varepsilon}{r}\right)^{1/2}\underset{q\in\mathbb{R}^d}{\inf}\left(-\!\!\!\!\!\!\displaystyle\int_{B_\varepsilon(3r)}|u_\varepsilon-q|^2\right)^{1/2}.
\]
\end{lemm}
\begin{proof}
With $r$ fixed, let $v_r\equiv v$ denote the function guaranteed by Lemma~\ref{forty}. Observe that
\begin{align*}
H_\varepsilon(\theta r; u_\varepsilon) &\leq \dfrac{1}{\theta r}\left(-\!\!\!\!\!\!\displaystyle\int_{B_\varepsilon(\theta r)}|u_\varepsilon-v|^2\right)^{1/2}+H_\varepsilon(\theta r;v) \\
&\leq \dfrac{C}{r}\left(-\!\!\!\!\!\!\displaystyle\int_{B_\varepsilon( r)}|u_\varepsilon-v|^2\right)^{1/2}+\dfrac{1}{2}H_\varepsilon(r;v) \\
&\leq \dfrac{C}{r}\left(-\!\!\!\!\!\!\displaystyle\int_{B_\varepsilon( r)}|u_\varepsilon-v|^2\right)^{1/2}+\dfrac{1}{2}H_\varepsilon(r;u_\varepsilon),
\end{align*}
where we've used Lemma~\ref{fortyfour}. By Lemma~\ref{forty}, we have
\[
H_\varepsilon(\theta r; u_\varepsilon)\leq \dfrac{C}{r}\left(\dfrac{\varepsilon}{r}\right)^{1/2}\left(-\!\!\!\!\!\!\displaystyle\int_{B_\varepsilon(3r)}|u_\varepsilon|^2\right)^{1/2}+\dfrac{1}{2}H_\varepsilon(r;u_\varepsilon).
\]
Since $H$ remains invariant if we subtract a constant from $u_\varepsilon$, the desired inequality follows.
\end{proof}
\begin{lemm}\label{fortynine}
Let $H(r)$ and $h(r)$ be two nonnegative continuous functions on the interval $(0,1]$. Let $0<\varepsilon<1/6$. Suppose that there exists a constant $C_0$ with
\[
\begin{cases}
\underset{r\leq t\leq 3r}{\max} H(t)\leq C_0 H(3r), \\
\underset{r\leq t,s\leq 3r}{\max} |h(t)-h(s)|\leq C_0H(3r),
\end{cases}
\]
for any $r\in [\varepsilon,1/3]$. We further assume
\[
H(\theta r)\leq \dfrac{1}{2}H(r)+C_0\left(\dfrac{\varepsilon}{r}\right)^{1/2}\left\{H(3r)+h(3r)\right\}
\]
for any $r\in [\varepsilon,1/3]$, where $\theta\in (0,1/4)$. Then
\[
\underset{\varepsilon\leq r\leq 1}{\max}\left\{H(r)+h(r)\right\}\leq C\{H(1)+h(1)\},
\]
where $C$ depends on $C_0$ and $\theta$.
\end{lemm}
\begin{proof}
See~\cite{shen}.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{nineteen}]
By rescaling, we may assume $R=1$. We assume $\varepsilon\in(0,1/6)$, and we let $H(r)\equiv H_\varepsilon(r;u_\varepsilon)$, where $H_\varepsilon(r;u_\varepsilon)$ is defined above by~\eqref{fortyfive}. Let $h(r)=|M_r|$, where $M_r\in\mathbb{R}^{d\times d}$ satisfies
\[
H(r)=\dfrac{1}{r}\underset{q\in\mathbb{R}^d}{\inf}\left(-\!\!\!\!\!\!\displaystyle\int_{B_\varepsilon(r)}|u_\varepsilon-M_rx-q|^2\right)^{1/2}.
\]
Note there exists a constant $C$ independent of $r$ so that
\begin{equation}\label{fortyeight}
H(t)\leq C H(3r),\,\,\,t\in [r,3r].
\end{equation}
Suppose $s,t\in [r,3r]$. We have
\begin{align}
|h(t)-h(s)| &\leq \dfrac{C}{r}\underset{q\in\mathbb{R}^d}{\inf}\left(-\!\!\!\!\!\!\displaystyle\int_{B_\varepsilon(r)}|(M_t-M_s)x-q|^2\right)^{1/2} \nonumber\\
&\leq \dfrac{C}{t}\underset{q\in\mathbb{R}^d}{\inf}\left(-\!\!\!\!\!\!\displaystyle\int_{B_\varepsilon(t)}|u_\varepsilon-M_tx-q|^2\right)^{1/2} \nonumber\\
&\hspace{10mm}+\dfrac{C}{s}\underset{q\in\mathbb{R}^d}{\inf}\left(-\!\!\!\!\!\!\displaystyle\int_{B_\varepsilon(s)}|u_\varepsilon-M_sx-q|^2\right)^{1/2} \nonumber\\
&\leq C H(3r),\nonumber
\end{align}
where we've used~\eqref{fortyeight} for the last inequality. Specifically,
\begin{equation}\label{fiftyone}
\underset{r\leq t,s\leq 3r}{\max}|h(t)-h(s)|\leq CH(3r).
\end{equation}
Clearly
\[
\dfrac{1}{r}\underset{q\in\mathbb{R}^d}{\inf}\left(-\!\!\!\!\!\!\displaystyle\int_{B_\varepsilon(3r)}|u_\varepsilon-q|^2\right)^{1/2}\leq H(3r)+h(3r),
\]
and so Lemma~\ref{fifty} implies
\begin{equation}\label{fiftytwo}
H(\theta r)\leq \dfrac{1}{2}H(r)+C\left(\dfrac{\varepsilon}{r}\right)^{1/2}\left\{H(3r)+h(3r)\right\}
\end{equation}
for any $r\in [\varepsilon,1/3]$ and some $\theta\in (0,1/4)$. Note equations~\eqref{fortyeight},~\eqref{fiftyone}, and~\eqref{fiftytwo} show that $H(r)$ and $h(r)$ satisfy the assumptions of Lemma~\ref{fortynine}. Consequently,
\begin{align}
\left(-\!\!\!\!\!\!\displaystyle\int_{B_\varepsilon(r)}|\nabla u_\varepsilon|^2\right)^{1/2}&\leq \dfrac{C}{r}\underset{q\in\mathbb{R}^d}{\inf}\left(-\!\!\!\!\!\!\displaystyle\int_{B_\varepsilon(3r)}|u_\varepsilon-q|^2\right)^{1/2} \nonumber\\
&\leq C\left\{H(3r)+h(3r)\right\} \nonumber\\
&\leq C\left\{H(1)+h(1)\right\} \nonumber\\
&\leq C\left(-\!\!\!\!\!\!\displaystyle\int_{B_\varepsilon(1)}|u_\varepsilon|^2\right)^{1/2}.\label{fiftythree}
\end{align}
Since~\eqref{fiftythree} remains invariant if we subtract a constant from $u_\varepsilon$, the desired estimate in Theorem~\ref{nineteen} follows.
\end{proof}
\begin{proof}[Proof of Corollary~\ref{fiftyfive}]
Under the H\"{o}lder continuity condition~\eqref{fiftyfour} and the assumption that $\omega$ is an unbounded $C^{1,\alpha}$ domain for some $\alpha>0$, solutions to the systems of linear elasticity are known to be locally Lipschitz. That is, if $\mathcal{L}_1(u)=0$ in $B(y,1)\cap\omega$ and $\sigma_1(u)=0$ on $B(y,1)\cap \partial\omega$, then
\begin{equation}\label{fiftynine}
\|\nabla u\|_{L^\infty(B(y,1/3)\cap\omega)}\leq C\left(-\!\!\!\!\!\!\displaystyle\int_{B(y,1)\cap\omega}|\nabla u|^2\right)^{1/2},
\end{equation}
where $C$ depends on $d$, $\kappa_1$, $\kappa_2$, and $\omega$.
By rescaling, we may assume $R=1$. To prove the desired estimate, we may further assume $\varepsilon\in (0,1/6)$; indeed, if $\varepsilon\geq 1/6$, then~\eqref{sixtyone} follows directly from~\eqref{fiftynine}. From~\eqref{fiftynine}, a ``blow-up argument'' (see the proof of Lemma~\ref{thirtysix}), and Theorem~\ref{nineteen} we deduce
\begin{align*}
\|\nabla u_\varepsilon\|_{L^\infty(B(y,\varepsilon)\cap\varepsilon\omega)}&\leq C\left(-\!\!\!\!\!\!\displaystyle\int_{B(y,3\varepsilon)\cap\varepsilon\omega}|\nabla u_\varepsilon|^2\right)^{1/2} \\
&\leq C\left(-\!\!\!\!\!\!\displaystyle\int_{B(x_0,1)\cap\varepsilon\omega}|\nabla u_\varepsilon|^2\right)^{1/2}
\end{align*}
for any $y\in B(x_0,1/3)$. The desired estimate readily follows by covering $B(x_0,1/3)$ with balls $B(y,\varepsilon)$.
\end{proof}
\begin{proof}[Proof of Corollary~\ref{fiftyeight}]
If $u$ satisfies the growth condition~\eqref{sixtytwo}, then by Lemma~\ref{thirtyseven} and Theorem~\ref{nineteen},
\[
\left(-\!\!\!\!\!\!\displaystyle\int_{B(x_0,r)\cap\omega}|\nabla u|^2\right)^{1/2}\leq C\left(-\!\!\!\!\!\!\displaystyle\int_{B(x_0,R)\cap\omega}|\nabla u|^2\right)^{1/2}\leq CR^{\nu-1},
\]
where $C$ is independent of $R$. Letting $R\to\infty$ shows that $\nabla u=0$ in $B(x_0,r)\cap\omega$ for arbitrarily large $r$. Since $\omega$ is connected, we conclude $u$ is constant.
\end{proof}
\section{Introduction}
Nested sampling \citep{Skilling2006} is a numerical method for Bayesian computation which simultaneously provides both posterior samples and Bayesian evidence estimates.
The approach is closely related to Sequential Monte Carlo (SMC) \citep{Salomone2018} and rare event simulation \citep{Walter2017}.
The original development of the nested sampling algorithm was motivated by evidence calculation, but the \texttt{MultiNest}{} \citep{Feroz2008,Feroz2009,Feroz2013} and \texttt{PolyChord}{} \citep{Handley2015a,Handley2015b} software packages are now extensively used for parameter estimation from posterior samples \citep[such as in][]{DESCollaboration2017}.
Nested sampling performs well compared to Markov chain Monte Carlo (MCMC)-based parameter estimation for multi-modal and degenerate posteriors due to its lack of a thermal transition property and the relatively small amount of problem-specific tuning required; for example there is no need to specify a proposal function.
Furthermore, \texttt{PolyChord}{} is well suited to high-dimensional parameter estimation problems due to its slice sampling-based implementation.
Nested sampling explores the posterior distribution by maintaining a set of samples from the prior, called {\em live points}, and iteratively updating them subject to the constraint that new samples have increasing likelihoods.
Conventionally a fixed number of live points is used; we term this {\em standard nested sampling}.
In this case the expected fractional shrinkage of the prior volume remaining is the same at each step, and as a result many samples are typically taken from regions of the prior that are remote from the bulk of the posterior.
The allocation of samples in standard nested sampling is set by the likelihood and the prior, and cannot be changed depending on whether calculating the evidence or obtaining posterior samples is the primary goal.
We propose modifying the nested sampling algorithm by dynamically varying the number of live points in order to maximise the accuracy of a calculation for some number of posterior samples, subject to practical constraints.
We term this more general approach {\em dynamic nested sampling}, with standard nested sampling representing the special case where the number of live points is constant.
Dynamic nested sampling is particularly effective for parameter estimation, as standard nested sampling typically spends most of its computational effort iterating towards the posterior peak.
This produces posterior samples with negligible weights which make little contribution to parameter estimation calculations, as discussed in our previous analysis of sampling errors in nested sampling parameter estimation \citep{Higson2017a}.
We also achieve significant improvements in the accuracy of evidence calculations, and show both evidence and parameter estimation can be improved simultaneously.
Our approach can be easily incorporated into existing standard nesting sampling software; we have created the \href{https://github.com/ejhigson/dyPolyChord}{\texttt{dyPolyChord}}{} package \citep{Higson2018dypolychord} for performing dynamic nested sampling using \texttt{PolyChord}{}.
In this paper we demonstrate the advantages of dynamic nested sampling relative to the popular standard nested sampling algorithm in a range of empirical tests.
A detailed comparison of nested sampling with alternative methods such as MCMC-based parameter estimation and thermodynamic integration is beyond the current scope --- for this we refer the reader to \citet{Allison2014}, \citet{Murray2007} and \citet{Feroz2008thesis}.
The paper proceeds as follows: \Cref{sec:background} contains background on nested sampling, and \Cref{sec:vary_nlive} establishes useful results about the effects of varying the number of live points.
Our dynamic nested sampling algorithm for increasing efficiency in general nested sampling calculations is presented in \Cref{sec:dns}; its accurate allocation of live points for {\em a priori\/} unknown posterior distributions is illustrated in \Cref{fig:nlive_gaussian}.
We first test dynamic nested sampling in the manner described by \citet{Keeton2011}, using analytical cases where one can obtain uncorrelated samples from the prior space within some likelihood contour using standard techniques.
We term the resulting procedure {\em perfect nested sampling\/} (in both standard and dynamic versions), and use it to compare the performance of dynamic and standard nested sampling in a variety of cases without software-specific effects from correlated samples or prohibitive computational costs.
These tests were performed with our \href{https://github.com/ejhigson/perfectns}{\texttt{perfectns}}{} package \citep{Higson2018perfectns} and are described in \Cref{sec:numerical_tests}, which includes a discussion of the effects of likelihood, priors and dimensionality on the improvements from dynamic nested sampling.
In particular we find large efficiency gains for high-dimensional parameter estimation problems.
\Cref{sec:practical_problems} discusses applying dynamic nested sampling to challenging posteriors, in which results from nested sampling software may include implementation-specific effects from correlations between samples \citep[see][for a detailed discussion]{Higson2018a}.
We describe the strengths and weaknesses of dynamic nested sampling compared to standard nested sampling in such cases.
This section includes numerical tests with a multimodal Gaussian mixture model and a practical signal reconstruction problem using \href{https://github.com/ejhigson/dyPolyChord}{\texttt{dyPolyChord}}{}.
We find that dynamic nested sampling also produces significant accuracy gains for these more challenging posteriors, and that it is able to reduce implementation-specific effects compared to standard nested sampling.
\subsection{Other related work}
Other variants of nested sampling include diffusive nested sampling \citep{Brewer2011} and superposition enhanced nested sampling \citep{Martiniani2014}, which have been implemented as stand-alone software packages.
In particular, dynamic nested sampling shares some similarities with \texttt{DNest4} \citep{Brewer2016}, in which diffusive nested sampling is followed by additional sampling targeting regions of high posterior mass.
However dynamic nested sampling differs from these alternatives as, like standard nested sampling, it only requires drawing samples within hard likelihood constraints.
As a result dynamic nested sampling can be used to improve the efficiency of popular standard nested sampling implementations such as \texttt{MultiNest}{} (rejection sampling), \texttt{PolyChord}{} (slice sampling) and constrained Hamiltonian nested sampling \citep{Betancourt2011} while maintaining their strengths in sampling degenerate and multimodal distributions.
It has been shown that efficiency can be greatly increased using nested importance sampling \citep{Chopin2010} or by performing nested sampling using an auxiliary prior which approximates the posterior as described in \citet{Cameron2014}. However, the efficacy of these approaches is contingent on having adequate knowledge of the posterior (either before the algorithm is run, or by using the results of previous runs). As such, the speed increase on {\em a priori\/} unknown problems is generally lower than might be suggested by toy examples.
Dynamic nested sampling is similar in spirit to the adaptive schemes for thermodynamic integration introduced by \citet{Hug2016} and \citet{Friel2014}, as each involves an initial run followed by additional targeted sampling using an estimated error criteria.
Furthermore, dynamically weighting sampling in order to target regions of higher posterior mass has also been used in the statistical physics literature, such as in multi-canonical sampling \citep[see for example][]{Okamoto2004}.
\section{Background: the nested sampling algorithm}\label{sec:background}
We now give a brief description of the nested sampling algorithm following \citet{Higson2017a} and set out our notation; for more details see \citet{Higson2017a} and \citet{Skilling2006}.
For theoretical treatments of nested sampling's convergence properties, see \citet{Keeton2011,Skilling2009,Walter2017,Evans2007}.
For a given likelihood $\mathcal{L}(\btheta)$ and prior $\pi(\btheta)$, nested sampling is a method for simultaneously computing the Bayesian evidence
\begin{equation}
\mathcal{Z}
=
\int \mathcal{L}(\btheta) \pi (\btheta)\d{\btheta}
\label{equ:Z_definition}
\end{equation}
and samples from the posterior distribution
\begin{equation}
\mathcal{P}(\btheta) = \frac{\mathcal{L}(\btheta) \pi(\btheta)}{\mathcal{Z}}.
\label{equ:parameter_estimation}
\end{equation}
The algorithm begins by sampling some number of {\em live points\/} randomly from the prior $\pi(\btheta)$.
In standard nested sampling, at each iteration $i$ the point with the lowest likelihood $\mathcal{L}_i$ is replaced by a new point sampled from the region of prior with likelihood $\mathcal{L}(\btheta)>\mathcal{L}_i$ and the number of live points remains constant throughout.
This process is continued until some termination condition is met, producing a list of samples (referred to as {\em dead points\/}) which --- along with any remaining live points --- can then be used for evidence and parameter estimation.
We term the finished nested sampling process a {\em run}.
Nested sampling calculates the evidence~\eqref{equ:Z_definition} as a one-dimensional integral
\begin{equation}
\mathcal{Z}=\int_0^1 \mathcal{L}(X) \d{X},
\label{equ:Z(X)}
\end{equation}
where $X(\mathcal{L})$ is the fraction of the prior with likelihood greater than $\mathcal{L}$ and $\mathcal{L}(X)\equiv X^{-1}(\mathcal{L})$.
The prior volumes $X_i$ corresponding to the dead points $i$ are unknown but can be modelled statistically as $X_i = t_i X_{i-1}$, where $X_0 = 1$.
For a given number of live points $n$, each shrinkage ratio $t_i$ is independently distributed as the largest of $n$ random variables from the interval $[0,1]$ and so \citep{Skilling2006}:
\begin{equation}
P(t_i) = n t_i^{n-1}, \qquad
\mathrm{E}[\log t_i ] = -\frac{1}{n}, \qquad
\mathrm{Var}[\log t_i ] = \frac{1}{n^2}.
\label{equ:dist_t}
\end{equation}
In standard nested sampling the number of live points $n$ is some constant value for all $t_i$ --- the iteration of the algorithm in this case is illustrated schematically in \Cref{fig:ns_evidence}.
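As a quick numerical check of~\eqref{equ:dist_t}, each shrinkage ratio can be simulated directly: the largest of $n$ independent $\mathrm{Uniform}(0,1)$ draws is equivalent in distribution to $u^{1/n}$ for a single uniform $u$. The sketch below (plain NumPy, not tied to any nested sampling package) verifies the stated mean and variance of $\log t_i$.

```python
import numpy as np

def sample_shrinkage_ratios(n_live, n_samples, rng):
    """Draw shrinkage ratios t_i, each distributed as the largest of
    n_live independent Uniform(0, 1) variables, so P(t) = n t**(n - 1).
    Equivalently t = u**(1/n_live) for a single u ~ Uniform(0, 1)."""
    return rng.random(n_samples) ** (1.0 / n_live)

rng = np.random.default_rng(0)
n_live = 20
t = sample_shrinkage_ratios(n_live, 200_000, rng)
mean_log_t = np.log(t).mean()  # expect -1/n_live = -0.05
var_log_t = np.log(t).var()    # expect 1/n_live**2 = 0.0025
```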
\begin{figure}
\centering
\includegraphics[width=\linewidth]{img/nsevidence.pdf}
\caption{A schematic illustration of standard nested sampling with a constant number of live points $n$ reproduced from \citet{Higson2017a}.
$\mathcal{L}(X)X$ shows the relative posterior mass, the bulk of which is contained in some small fraction of the prior.
Most of the samples in the diagram are in $\log X$ regions with negligible posterior mass, as is typically the case in standard nested sampling.%
}\label{fig:ns_evidence}
\end{figure}
\subsubsection*{Evidence estimation}\label{sec:sampling_evidence_error}
Nested sampling calculates the evidence~\eqref{equ:Z(X)} as a quadrature sum over the dead points
\begin{equation}
\mathcal{Z}(\mathbf{t}) \approx \sum_{i \in \mathrm{dead}} \mathcal{L}_i w_i(\mathbf{t}),
\label{equ:ztot}
\end{equation}
where $\mathbf{t}=\{t_1,t_2,\dots,t_{n_\mathrm{dead}}\}$ are the unknown set of shrinkage ratios for each dead point and each $t_i$ is an independent random variable with distribution~\eqref{equ:dist_t}.
If required any live points remaining at termination can also be included.
The $w_i$ are appropriately chosen quadrature weights; we use the trapezium rule such that $w_i(\mathbf{t})=\frac{1}{2}(X_{i-1}(\mathbf{t})-X_{i+1}(\mathbf{t}))$, where $X_i(\mathbf{t}) = \prod^i_{k=0} t_k$.
Given that the shrinkage ratios $\mathbf{t}$ are {\em a priori\/} unknown, one typically calculates an expected value and error on the evidence~\eqref{equ:ztot} using~\eqref{equ:dist_t}.
The dominant source of error in evidence estimates from perfect nested sampling is the statistical variation in the unknown volumes of the prior ``shells'' $w_i(\mathbf{t})$.
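To make the estimator concrete, the following is a minimal sketch of the quadrature sum~\eqref{equ:ztot} for a constant number of live points, with each unknown shrinkage replaced by its expected log-shrinkage so that $X_i \approx \mathrm{e}^{-i/n}$. It is plain NumPy for illustration only (it ignores the final live points and works with raw rather than log quantities), not the interface of any particular package.

```python
import numpy as np

def evidence_estimate(logl, n_live):
    """Trapezium-rule evidence estimate, with each unknown shrinkage t_i
    replaced by exp(E[log t_i]) = exp(-1/n_live), so X_i ~ exp(-i/n_live).

    logl: dead-point log-likelihoods, sorted in increasing order."""
    n_dead = len(logl)
    x = np.exp(-np.arange(n_dead + 2) / n_live)  # X_0 = 1, X_1, ..., X_{n_dead+1}
    w = 0.5 * (x[:-2] - x[2:])                   # w_i = (X_{i-1} - X_{i+1}) / 2
    return float(np.sum(np.exp(logl) * w))

# Perfect-run sketch with L(X) = exp(-X), for which Z = int_0^1 exp(-X) dX = 1 - 1/e:
n_live, n_dead = 100, 2000
logl = -np.exp(-np.arange(1, n_dead + 1) / n_live)
z = evidence_estimate(logl, n_live)
```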
\subsubsection*{Parameter estimation}\label{sec:sampling_parameter_error}
Nested sampling parameter estimation uses the dead points, and if required the remaining live points at termination, to construct a set of posterior samples with weights proportional to their share of the posterior mass:
\begin{equation}
p_i(\mathbf{t})=\frac{w_i(\mathbf{t})\mathcal{L}_i}{\sum_i w_i(\mathbf{t})\mathcal{L}_i}=\frac{w_i(\mathbf{t})\mathcal{L}_i}{\mathcal{Z}(\mathbf{t})}.
\label{equ:posterior_weight}
\end{equation}
Neglecting any implementation-specific effects, which are not present in perfect nested sampling, the dominant sampling errors in estimating some parameter or function of parameters $f(\btheta)$ come from two sources \citep{Higson2017a}:
\begin{enumerate}[label= (\roman*)]
\item approximating the relative point weights $p_i(\mathbf{t})$ with their expectation $\mathrm{E}[p_i(\mathbf{t})]$ using~\eqref{equ:dist_t};\label{enu:w_error}
\item approximating the mean value of a function of parameters over an entire iso-likelihood contour with its value at a single point $f(\btheta_i)$.\label{enu:sample_error}
\end{enumerate}
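The normalised weights~\eqref{equ:posterior_weight} follow from the same expected shrinkages. A hedged sketch (again plain NumPy, constant live points, working in log space for numerical stability, and not the interface of any particular package):

```python
import numpy as np

def posterior_weights(logl, n_live):
    """Normalised sample weights p_i = L_i w_i / Z, with the unknown
    shrinkages replaced by their expected values so X_i ~ exp(-i/n_live).

    logl: dead-point log-likelihoods, sorted in increasing order."""
    n_dead = len(logl)
    x = np.exp(-np.arange(n_dead + 2) / n_live)   # X_0 = 1, X_1, ...
    logw = logl + np.log(0.5 * (x[:-2] - x[2:]))  # log(L_i w_i)
    p = np.exp(logw - logw.max())                 # subtract max for stability
    return p / p.sum()

# Same perfect-run sketch as before, L(X) = exp(-X):
p = posterior_weights(-np.exp(-np.arange(1, 2001) / 100.0), 100)
```

For this likelihood the weights decay towards high likelihoods, illustrating how standard nested sampling accumulates many low-weight samples near the posterior peak.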
\subsubsection*{Combining and dividing nested sampling runs}\label{sec:divide}
\citet{Skilling2006} describes how several standard nested sampling runs $r=1,2,\dots$ with constant live points $n^{(r)}$ may be combined simply by merging the dead points and sorting by likelihood value.
The combined sequence of dead points is equivalent to a single nested sampling run with $n_\mathrm{combined}=\sum_r n^{(r)}$ live points.
\citet{Higson2017a} gives an algorithm for the reverse procedure: decomposing a nested sampling run with $n$ live points into a set of $n$ valid nested sampling runs, each with 1 live point.
These single live point runs, which we term {\em threads}, are the smallest unit from which valid nested sampling runs can be constructed and will prove useful in developing dynamic nested sampling.
\section{Variable numbers of live points}\label{sec:vary_nlive}
Before presenting our dynamic nested sampling algorithm in \Cref{sec:dns}, we first establish some basic results for a nested sampling run in which the number of live points varies.
Such runs are valid as successive shrinkage ratios $t_i$ are independently distributed \citep{Skilling2006}.
For now we assume the manner in which the number of live points changes is specified in advance; adaptive allocation of samples is considered in the next section.
Let us define $n_i$ as the number of live points present for the prior shrinkage ratio $t_i$ between dead points $i-1$ and $i$.\footnote{In order for~\eqref{equ:dist_t} to be valid, the number of live points must remain constant across the shrinkage ratios $t_i$ between successive dead points. We therefore only allow the number of live points to change on iso-likelihood contours $\mathcal{L}(\btheta) = \mathcal{L}_i$ where a dead point $i$ is present.
This restriction has negligible effects for typical calculations, and is automatically satisfied by most nested sampling implementations.}
In this notation all information about the number of live points for a nested sampling run can be expressed as a list of numbers $\mathbf{n} = \{n_1, n_2, \dots, n_{n_\mathrm{dead}}\}$ which correspond to the shrinkage ratios $\mathbf{t} = \{t_1,t_2,\dots,t_{n_\mathrm{dead}}\}$.
Nested sampling calculations for variable numbers of live points differ from the constant live point case only in the use of different $n_i$ in calculating the distribution of each $t_i$ from~\eqref{equ:dist_t}.
\citet{Skilling2006}'s method for combining constant live point runs, mentioned in \Cref{sec:divide}, can be extended to accommodate variable numbers of live points by requiring that at any likelihood the live points of the combined run equals the sum of the live points of the constituent runs at that likelihood (this is illustrated in \Cref{fig:combining_dynamic}).
Variable live point runs can also be divided into their constituent threads using the algorithm in \citet{Higson2017a}. However, unlike for constant live point runs, the threads produced may start and finish part way through the run and there is no longer a single unique division into threads on iso-likelihood contours where the number of live points increases.
The technique for estimating sampling errors by resampling threads introduced in \citet{Higson2017a} can also be applied for nested sampling runs with variable numbers of live points (see Appendix~\ref{app:bootstrap} for more details), as can the diagnostic tests for correlated samples and missed modes described in \citet{Higson2018a}.
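The combination rule described above can be sketched in a few lines of Python. Here a run is represented, purely for illustration (this is not the data format of any particular package), as a pair of lists: sorted dead-point log-likelihoods, and the live-point count $n_i$ for the shrinkage preceding each dead point.

```python
import bisect

def live_points_at(run, logl):
    """Live points of one run across the shrinkage ending at likelihood logl.

    run = (logls, ns): dead-point log-likelihoods (sorted, increasing) and
    the live-point count n_i preceding each dead point. Returns 0 beyond
    the run's final dead point."""
    logls, ns = run
    i = bisect.bisect_left(logls, logl)
    return ns[i] if i < len(ns) else 0

def combine_runs(runs):
    """Merge dead points by likelihood; the combined live-point count at
    each point is the sum of the constituent runs' counts there."""
    merged = sorted(l for logls, _ in runs for l in logls)
    n_combined = [sum(live_points_at(run, l) for run in runs) for l in merged]
    return merged, n_combined

# Two constant-live-point runs with 2 and 3 live points: the combined run
# has 5 live points over their overlap, dropping to 2 after run b ends.
merged, n = combine_runs([([1.0, 2.0, 3.0], [2, 2, 2]),
                          ([1.5, 2.5], [3, 3])])
```

Within the likelihood range where both runs are active this reproduces \citet{Skilling2006}'s constant live point result $n_\mathrm{combined}=\sum_r n^{(r)}$.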
\begin{figure}
\centering
\includegraphics[width=\linewidth]{img/dynamic_run_addition.pdf}
\caption{Combining nested sampling runs $a$ and $b$ with variable numbers of live points $\mathbf{n}^{(a)}$ and $\mathbf{n}^{(b)}$ into a single nested sampling run $c$; black dots show dead points arranged in order of increasing likelihood.
The number of live points in run $c$ at some likelihood equals the sum of the live points of run $a$ and run $b$ at that likelihood.
}\label{fig:combining_dynamic}
\end{figure}
In addition, the variable live point framework provides a natural way to include the final set of live points remaining when a standard nested sampling run terminates in a calculation.
These are uniformly distributed in the region of the prior with $\mathcal{L}(\btheta) > \mathcal{L}_\mathrm{terminate}$, and can be treated as samples from a dynamic nested sampling run with the number of live points reducing by 1 as each of the points remaining after termination is passed until the final point $i$ has $n_i = 1$.
This allows the final live points of standard nested sampling runs to be combined with variable live point runs.
The remainder of this section analyses the effects of local variations in the number of live points on the accuracy of nested sampling evidence calculation and parameter estimation.
The dynamic nested sampling algorithm in \Cref{sec:dns} uses these results to allocate additional live points.
\subsection{Effects on calculation accuracy}\label{sec:optimum_w}
Nested sampling calculates the evidence $\mathcal{Z}$ as the sum of sample weights~\eqref{equ:ztot}; the dominant sampling errors are from statistically estimating shrinkage ratios $t_i$ which affect the weights of all subsequent points.
In Appendix~\ref{app:optimum_z_derivation} we show analytically that the reduction in evidence errors achieved by taking additional samples to increase the local number of live points $n_i$ is inversely proportional to $n_i$, and is approximately proportional to the evidence contained in point $i$ and all subsequent points.
This makes sense as the dominant evidence errors are from statistically estimating shrinkages $t_i$ which affect all points $j \ge i$.
In nested sampling parameter estimation, sampling errors come both from taking a finite number of samples in any region of the prior and from the stochastic estimation of their normalised weights $p_i$ from~\eqref{equ:posterior_weight}.
Typically standard nested sampling takes many samples with negligible posterior mass as illustrated in \Cref{fig:ns_evidence}; these make little contribution to estimates of parameters or to the accuracy of samples' normalised weights.
From~\eqref{equ:dist_t} the expected separation between points in $\log X$ (approximately proportional to the posterior mass they each represent) is $1/n_i$.
As a result, increasing the number of live points wherever the dead points' posterior weights $p_i \propto \mathcal{L}_i w_i$ are greatest distributes posterior mass more evenly among the samples.
This improves the accuracy of the statistically estimated weights $p_i$, and can dramatically increase the information content (Shannon entropy of the samples)
\begin{equation}
H = \exp\left( - \sum_i p_i \log p_i \right),
\label{equ:entropy}
\end{equation}
which is maximised for a given number of samples when the sample weights are equal.
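A direct transcription of~\eqref{equ:entropy} makes this behaviour easy to check: $H$ acts as an effective sample size, equal to the number of samples for uniform weights and close to one when a single sample dominates.

```python
import numpy as np

def information_content(p):
    """H = exp(-sum_i p_i log p_i): effectively the number of equally
    weighted samples carrying the same information. Zero weights are
    dropped, using the convention 0 log 0 = 0."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(np.exp(-np.sum(p * np.log(p))))

h_equal = information_content(np.full(100, 0.01))         # 100 equal weights
h_skewed = information_content([0.97, 0.01, 0.01, 0.01])  # one dominant weight
```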
Empirical tests of dynamic nested sampling show that increasing the number of live points wherever points have the highest $p_i \propto \mathcal{L}_i w_i$ works well for improving parameter estimation accuracy in most calculations.
As the contribution of each sample $i$ to a parameter estimation problem for some quantity $f(\btheta)$ is dependent on $f(\btheta_i)$, the precise optimum allocation of live points is different for different quantities.
In most cases the relative weight $p_i$ of samples is a good approximation for their influence on a calculation, but for some problems much of the error may come from sampling $\log X$ regions containing a small fraction of the posterior mass but with extreme parameter values \citep[see Section 3.1 of][for diagrams illustrating this]{Higson2017a}.
Appendix~\ref{app:tuning} discusses estimating the importance of points to a specific parameter estimation calculation and using dynamic nested sampling to allocate live points accordingly.
\section{The dynamic nested sampling algorithm}\label{sec:dns}
This section presents our algorithm for performing nested sampling calculations with a dynamically varying number of live points to optimise the allocation of samples.
Since the distribution of posterior mass as a function of the likelihood is {\em a priori\/} unknown, we first approximate it by performing a standard nested sampling run with some small constant number of live points $n_\mathrm{init}$.
The algorithm then proceeds by iteratively calculating the range of likelihoods where increasing the number of live points will have the greatest effect on calculation accuracy, and generating an additional thread running over these likelihoods.
If required some $n_\mathrm{batch}$ additional threads can be generated at each step to reduce the number of times the importance must be calculated and the sampler restarted.
We find in empirical tests that using $n_\mathrm{batch} > 1$ has little effect on efficiency gains from dynamic nested sampling when the number of samples taken in each batch is small compared to the total number of samples in the run.
From the discussion in \Cref{sec:optimum_w} we define functions to measure the relative importance of a sample $i$ for evidence calculation
and parameter estimation respectively as
\begin{align}
\importance_\mathcal{Z}(i)
&\propto
\frac{\mathrm{E}[\mathcal{Z}_{\ge i}]}{n_i}, \quad \text{where} \, \mathcal{Z}_{\ge i} \equiv \sum_{k \ge i} \mathcal{L}_k w_k(\mathbf{t}),
\label{equ:z_importance}
\\
\importance_\mathrm{param}(i)
&\propto
\mathcal{L}_i \,\, \mathrm{E}[w_i(\mathbf{t})].\label{equ:p_importance}
\end{align}
Alternatively~\eqref{equ:z_importance} can be replaced with the more complex expression~\eqref{equ:exact_z_importance} derived in Appendix~\ref{app:optimum_z_derivation}, although we find this typically makes little difference to results.
Modifying~\eqref{equ:p_importance} to optimise for estimation of a specific parameter or function of parameters is discussed in Appendix~\ref{app:tuning}.
The user specifies how to divide computational resources between evidence calculation and parameter estimation through an input goal $G \in [0,1]$, where $G=0$ corresponds to optimising for evidence calculation and $G=1$ optimises for parameter estimation.
The dynamic nested sampling algorithm calculates importance as a weighted sum of the points' normalised evidence and parameter estimation importances
\begin{equation}
I(G, i)
=
(1-G) \frac{\importance_\mathcal{Z}(i)}{\sum_j \importance_\mathcal{Z}(j)}
+
G \frac{\importance_\mathrm{param}(i)}{\sum_j \importance_\mathrm{param}(j)}.
\label{equ:importance}
\end{equation}
The likelihood range in which to run an additional thread is chosen by finding all points with importance greater than some fraction $f$ of the largest importance.
Choosing a smaller fraction makes the threads added longer and reduces the number of times the importance must be recalculated, but can also cause the number of live points to plateau for regions with importance greater than that fraction of the maximum importance (see the discussion of~\Cref{fig:nlive_gaussian} in the next section for more details).
We use $f = 0.9$ for results in this paper, but find empirically that using slightly higher or lower values make little difference to results.
To ensure any steep or discontinuous increases in the likelihood $\mathcal{L}(X)$ are captured we find the first point $j$ and last point $k$ which meet this condition, then generate an additional thread starting at $\mathcal{L}_{j-1}$ and ending when a point is sampled with likelihood greater than $\mathcal{L}_{k+1}$.
If $j$ is the first dead point, threads which initially sample the whole prior are generated.
If $k$ is the final dead point then the thread will stop when a sample with likelihood greater than $\mathcal{L}_k$ is found.\footnote{We find empirically that one additional point per thread is sufficient to reach higher likelihoods if required. This is because typically there are many threads, and for each thread (which has only one live point) the expected shrinkage between samples~\eqref{equ:dist_t} of $E[\log t_i] = -1$ is quite large.}
This allows the new thread to continue beyond $\mathcal{L}_k$, meaning dynamic nested sampling iteratively explores higher likelihoods when this is the most effective use of samples.
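The importance calculation and thread-range selection above can be sketched as follows. This is a simplified transcription of equations~\eqref{equ:z_importance}--\eqref{equ:importance} assuming the expected log-weights $\log \mathrm{E}[w_i(\mathbf{t})]$ have already been computed; it is illustrative only, not the \href{https://github.com/ejhigson/dyPolyChord}{\texttt{dyPolyChord}}{} implementation.

```python
import numpy as np

def importance_and_range(logl, logw, nlive, goal, frac=0.9):
    """Importances I(G, i) and the index range [j, k] of points above
    frac times the maximum importance, between whose likelihoods an
    additional thread should be run.

    logl:  dead-point log-likelihoods, increasing.
    logw:  expected log-weights log E[w_i(t)].
    nlive: local live-point counts n_i.
    goal:  G in [0, 1]; 0 optimises evidence, 1 parameter estimation."""
    lw = np.exp(np.asarray(logl) + np.asarray(logw))       # L_i E[w_i]
    imp_z = np.cumsum(lw[::-1])[::-1] / np.asarray(nlive)  # ~ E[Z_{>=i}] / n_i
    imp_param = lw                                         # ~ L_i E[w_i]
    imp = ((1 - goal) * imp_z / imp_z.sum()
           + goal * imp_param / imp_param.sum())
    high = np.nonzero(imp > frac * imp.max())[0]
    return imp, high[0], high[-1]

# Toy run whose weights L_i w_i peak at index 50; with goal G = 1 the
# selected thread range brackets that peak.
n = 101
logl = 0.1 * np.arange(n)
logw = -((np.arange(n) - 50) ** 2) / 100.0 - logl
imp, j, k = importance_and_range(logl, logw, np.full(n, 10), goal=1.0)
```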
Unlike in standard nested sampling, more accurate dynamic nested sampling results can be obtained simply by continuing the calculation for longer.
The user must specify a condition at which to stop dynamically adding threads, such as when a fixed number of samples has been taken or some desired level of accuracy has been achieved.
Sampling errors on evidence and parameter estimation calculations can be estimated from the dead points at any stage using the method described in \citet{Higson2017a}.
We term these {\em dynamic termination conditions\/} to distinguish them from the type of termination conditions used in standard nested sampling.
Our dynamic nested sampling algorithm is presented more formally in Algorithm~\ref{alg:dns}.
\begin{algorithm}\SetAlgoLined{}
\SetKwData{Left}{left}\SetKwData{This}{this}\SetKwData{Up}{up}
\SetKwFunction{Union}{Union}\SetKwFunction{FindCompress}{FindCompress}
\SetKwInOut{Input}{Input}\SetKwInOut{Output}{Output}\SetKwInOut{op}{Other parameters}
\Output{Samples and live points information $\mathbf{n}$.}
\Input{Goal $G$, $n_\mathrm{init}$, dynamic termination condition.}
\BlankLine{}
\PrintSemicolon{}
Generate a nested sampling run with a constant number of live points $n_\mathrm{init}$\;
\While{dynamic termination condition not satisfied}{%
recalculate importance $I(G, i)$ of all points\;
find the first point $j$ and last point $k$ with importance greater than some fraction $f$ (we use $f=0.9$) of the largest importance\;
generate an additional thread (or alternatively $n_\mathrm{batch}$ additional threads) starting at $\mathcal{L}_{j-1}$ and ending with the first sample taken with likelihood greater than $\mathcal{L}_{k+1}$\footnotemark{}\;
}
\caption{Dynamic nested sampling.}\label{alg:dns}
\end{algorithm}%
\footnotetext{If $k$ is the final dead point, the additional thread terminates after the first point with likelihood greater than $\mathcal{L}_k$.}
\subsection{Software implementation}
Since dynamic nested sampling only requires the ability to sample from the prior within a hard likelihood constraint, implementations and software packages developed for standard nested sampling can be easily adapted to perform dynamic nested sampling.
We demonstrate this with the \href{https://github.com/ejhigson/dyPolyChord}{\texttt{dyPolyChord}}{} package, which performs dynamic nested sampling using \texttt{PolyChord}{} and is compatible with Python, C\texttt{++} and Fortran likelihoods.
\texttt{PolyChord}{} was designed before the creation of the dynamic nested sampling algorithm, and is not optimized to quickly resume the nested sampling process at an arbitrary point to add more threads.
\href{https://github.com/ejhigson/dyPolyChord}{\texttt{dyPolyChord}}, which performs nested sampling with \texttt{PolyChord}{}, minimises the computational overhead from saving and resuming by using Algorithm~\ref{alg:dypolychord} --- a modified version of Algorithm~\ref{alg:dns} described in Appendix~\ref{app:dyPolyChord}.
After the initial exploratory run with $n_\mathrm{init}$ live points, Algorithm~\ref{alg:dypolychord} calculates a dynamic allocation of live points and then generates more samples in a single run without recalculating point importances.
This means only the initial run provides information on where to place samples, and as a result the allocation of live points is slightly less accurate and a higher value of $n_\mathrm{init}$ is typically needed.
Dynamic nested sampling will be incorporated in the forthcoming \texttt{PolyChord 2}{} software package, which is currently in development and is designed for problems of up to $\sim 1,000$ dimensions --- dynamic nested sampling can provide very large improvements in the accuracy of such high-dimensional problems, as shown by the numerical tests in the next section.
Furthermore, we anticipate that reloading a past iteration $i$ of a \texttt{PolyChord 2}{} nested sampling run in order to add additional threads will be less computationally expensive than a single likelihood call for many problems.
Nevertheless, it is often more efficient for dynamic nested sampling software to generate additional threads in selected likelihood regions in batches rather than one at a time; this approach is used in the \href{https://github.com/joshspeagle/dynesty}{\texttt{dynesty}}{}\footnote{See \href{https://github.com/joshspeagle/dynesty}{{https://github.com/joshspeagle/dynesty}}{} for more information.} dynamic nested sampling package.
\section{Numerical tests with perfect nested sampling}\label{sec:numerical_tests}
In the manner described by \citet{Keeton2011} we first consider spherically symmetric test cases; here one can perform {\em perfect nested sampling}, as perfectly uncorrelated samples from the prior space within some iso-likelihood contour can be found using standard techniques.
Results from nested sampling software used for practical problems may include additional uncertainties from imperfect sampling within a likelihood contour that are specific to a given implementation --- we discuss these in \Cref{sec:practical_problems}.
The tests in this section were run using our \href{https://github.com/ejhigson/perfectns}{\texttt{perfectns}}{} package.
Perfect nested sampling calculations depend on the likelihood $\mathcal{L}(\btheta)$ and prior $\pi(\btheta)$ only through the distribution of posterior mass $\mathcal{L}(X)$ and the distribution of parameters on iso-likelihood contours $P(f(\btheta)|\mathcal{L}(\btheta)=\mathcal{L}(X))$, each of which is a function of both $\mathcal{L}(\btheta)$ and $\pi(\btheta)$ \citep{Higson2017a}.
We therefore empirically test dynamic nested sampling using likelihoods and priors with a wide range of distributions of posterior mass, and consider a variety of functions of parameters $f(\btheta)$ in each case.
We first examine perfect nested sampling of $d$-dimensional spherical unit Gaussian likelihoods centred on the origin
\begin{equation}\label{equ:gaussian}
\mathcal{L}(\btheta) = {(2 \pi)}^{-d/2} \mathrm{e}^{-{|\btheta|}^2 / 2}.
\end{equation}
For additional tests using distributions with lighter and heavier tails we use $d$-dimensional exponential power likelihoods
\begin{equation}\label{equ:exp_power}
\mathcal{L}(\btheta) = \frac{d\, \Gamma(\frac{d}{2})}{{\pi}^{\frac{d}{2}} 2^{1+\frac{d}{2b}} \Gamma(1+\frac{d}{2b})} \mathrm{e}^{-{|\btheta|}^{2b} / 2},
\end{equation}
where $b=1$ corresponds to a $d$-dimensional Gaussian~\eqref{equ:gaussian}.
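The normalisation constant of the exponential power likelihood can be checked numerically; the following sketch (illustrative code, with the constant written in terms of $d$ and $b$ so that $b=1$ recovers the unit Gaussian) integrates the density over the radial coordinate for $d=3$, $b=2$:

```python
import numpy as np
from math import gamma, log, pi

def exp_power_logl(r, d, b):
    """Log exponential power likelihood at radius r = |theta|, normalised
    to integrate to one over R^d; b = 1 reduces to a unit Gaussian."""
    log_norm = (log(d) + log(gamma(d / 2)) - (d / 2) * log(pi)
                - (1 + d / (2 * b)) * log(2) - log(gamma(1 + d / (2 * b))))
    return log_norm - r ** (2 * b) / 2

# Check the density integrates to one for d = 3, b = 2: multiply by the
# surface area of the (d-1)-sphere and integrate over the radius by the
# trapezoidal rule.
d, b = 3, 2
r = np.linspace(1e-8, 8.0, 100001)
integrand = (2 * pi ** (d / 2) / gamma(d / 2)) * r ** (d - 1) \
    * np.exp(exp_power_logl(r, d, b))
integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r))
```

The upper integration limit and grid size are illustrative choices; the integrand is negligible well before $r=8$ for these parameters.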
All tests use $d$-dimensional co-centred spherical Gaussian priors
\begin{equation}\label{equ:gaussian_prior}
\pi(\btheta) = {(2 \pi \sigma_\pi^2)}^{-d/2} \mathrm{e}^{-{|\btheta|}^2 / 2 \sigma_\pi^2}.
\end{equation}
The different distributions of posterior mass in $\log X$ for~\eqref{equ:gaussian} and~\eqref{equ:exp_power} with dimensions $d$ are illustrated in \Cref{fig:an_w}.
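For intuition, perfect nested sampling of the Gaussian case can be sketched in a few lines (a minimal illustration, not the \href{https://github.com/ejhigson/perfectns}{\texttt{perfectns}}{} implementation). Taking $d=2$, the prior volume within radius $r$ is analytically invertible, $X(r) = 1 - \mathrm{e}^{-r^2/2\sigma_\pi^2}$, so shrinkage ratios $t \sim \mathrm{Beta}(n, 1)$ map directly to likelihood values; $\sigma_\pi = 10$, $n=500$ live points, the seed and the run length are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
d, sigma_pi, nlive, steps = 2, 10.0, 500, 6000

# Analytic log-evidence for a unit Gaussian likelihood and Gaussian
# prior: Z = (2 pi (1 + sigma_pi^2))^(-d/2).
logz_true = -0.5 * d * np.log(2 * np.pi * (1 + sigma_pi ** 2))

# Perfect nested sampling: each iteration shrinks the prior volume X by
# t ~ Beta(nlive, 1) (the largest of nlive uniforms); for d = 2 the
# volume maps to a radius via X(r) = 1 - exp(-r^2 / (2 sigma_pi^2)).
logz, x_prev = -np.inf, 1.0
for _ in range(steps):
    x = x_prev * rng.beta(nlive, 1)
    r2 = -2 * sigma_pi ** 2 * np.log1p(-x)          # invert X(r)
    logl = -0.5 * d * np.log(2 * np.pi) - r2 / 2    # unit Gaussian
    logz = np.logaddexp(logz, logl + np.log(x_prev - x))
    x_prev = x
```

With these settings the accumulated estimate `logz` agrees with `logz_true` to within the expected sampling error of roughly $\sqrt{H/n} \approx 0.1$.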
\begin{figure*}
\centering
\includegraphics{img/an_weights_3like.pdf}
\caption{Relative posterior mass ($\propto \mathcal{L}(X)X$) as a function of $\log X$ for Gaussian likelihoods~\eqref{equ:gaussian} and exponential power likelihoods~\eqref{equ:exp_power} with $b=2$ and $b=\frac{3}{4}$. Each has a Gaussian prior~\eqref{equ:gaussian_prior} with $\sigma_\pi=10$.
The lines are scaled so that the area under each of them is equal.}\label{fig:an_w}
\end{figure*}
In tests of parameter estimation we denote the first component of the $\btheta$ vector as $\thcomp{1}$, although by symmetry the results will be the same for any component.
$\thmean{1}$ is the mean of the posterior distribution of $\thcomp{1}$, and the one-tailed $Y\%$ upper credible interval $\mathrm{C.I.}_{Y\%}(\thcomp{1})$ is the value $\thcomp{1}^\ast$ for which $P(\thcomp{1}<\thcomp{1}^\ast|\mathcal{L},\pi)=Y/100$.
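These posterior summaries can be computed from weighted samples (as produced by nested sampling) as in the following sketch; \texttt{one\_tailed\_ci} and \texttt{weighted\_mean} are hypothetical helper names, not functions from the paper's software:

```python
import numpy as np

def one_tailed_ci(theta, weights, y=84.0):
    """Y% one-tailed upper credible interval: the value theta* for which
    P(theta < theta*) = y / 100, from weighted posterior samples."""
    order = np.argsort(theta)
    cdf = np.cumsum(weights[order]) / np.sum(weights)
    return float(np.interp(y / 100.0, cdf, theta[order]))

def weighted_mean(theta, weights):
    """Posterior mean of a parameter from weighted samples."""
    return float(np.sum(weights * theta) / np.sum(weights))
```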
Tests of dynamic nested sampling terminate after a fixed number of samples, which is set such that they use similar or slightly smaller numbers of samples than the standard nested sampling runs we compare them to.
Dynamic runs have $n_\mathrm{init}$ set to 10\% of the number of live points used for the standard runs.
Standard nested sampling runs use the termination conditions described by \citet[][Section 3.4]{Handley2015b}, stopping when the estimated evidence contained in the live points is less than $10^{-3}$ times the evidence contained in dead points (the default value used in \texttt{PolyChord}).
This is an appropriate termination condition for nested sampling parameter estimation \citep{Higson2017a}, but if only the evidence is of interest then stopping with a larger fraction of the posterior mass remaining will have little effect on calculation accuracy.
The increase in computational efficiency from our method can be calculated by observing that nested sampling calculation errors are typically proportional to the square root of the computational effort applied \citep{Skilling2006,Higson2017a}, and that the number of samples produced is approximately proportional to the computational effort.
The increase in efficiency (computational speedup) from dynamic nested sampling over standard nested sampling for runs containing approximately the same number of samples on average can therefore be estimated from the variation of results as
\begin{equation}
\mathrm{efficiency\,gain} = \frac{\mathrm{Var}\left[\mathrm{standard\,NS\,results}\right]}{\mathrm{Var}\left[\mathrm{dynamic\,NS\,results}\right]}.\label{equ:efficiency_gain}
\end{equation}
Here the numerator is the variance of the calculated values of some quantity (such as the evidence or the mean of a parameter) from a number of standard nested sampling runs, and the denominator is the variance of the calculated values of the same quantity from a number of dynamic nested sampling runs.
When the two methods use different numbers of samples on average,~\eqref{equ:efficiency_gain} can be replaced with
\begin{equation}
\mathrm{efficiency\,gain} = \frac{\mathrm{Var}\left[\mathrm{standard\,NS\,results}\right]}{\mathrm{Var}\left[\mathrm{dynamic\,NS\,results}\right]}
\times
\frac{\overline{N_\mathrm{samp,sta}}}{\overline{N_\mathrm{samp,dyn}}},
\label{equ:efficiency_gain_nsamp}
\end{equation}
where the additional term is the ratio of the mean number of samples produced by the standard and dynamic nested sampling runs.
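A sketch of the efficiency estimates~\eqref{equ:efficiency_gain} and~\eqref{equ:efficiency_gain_nsamp} (an illustrative helper, not part of any released package):

```python
import numpy as np

def efficiency_gain(standard_results, dynamic_results,
                    n_standard=None, n_dynamic=None):
    """Estimated speedup of dynamic over standard nested sampling, from
    repeated calculations of the same quantity.  If mean sample counts
    are supplied, apply the correction for runs of different lengths."""
    gain = np.var(standard_results, ddof=1) / np.var(dynamic_results, ddof=1)
    if n_standard is not None and n_dynamic is not None:
        gain *= np.mean(n_standard) / np.mean(n_dynamic)
    return float(gain)
```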
\subsection{10-dimensional Gaussian example}
We begin by testing dynamic nested sampling on a 10-dimensional Gaussian likelihood~\eqref{equ:gaussian} with a Gaussian prior~\eqref{equ:gaussian_prior} and $\sigma_\pi = 10$.
\Cref{fig:nlive_gaussian} shows the relative allocation of live points as a function of $\log X$ for standard and dynamic nested sampling runs.
The dynamic nested sampling algorithm (Algorithm~\ref{alg:dns}) can accurately and consistently allocate live points, as can be seen by comparison with the analytically calculated distribution of posterior mass and posterior mass remaining.
Dynamic nested sampling live point allocations do not precisely match the distribution of posterior mass and posterior mass remaining in the $G=1$ and $G=0$ cases because they include the initial exploratory run with a constant $n_\mathrm{init}$ live points.
Furthermore, as additional live points are added where the importance is more than $90\%$ of the maximum importance, the number of live points allocated by dynamic nested sampling is approximately constant for regions with importance greater than $\sim 90\%$ of the maximum --- this can be seen clearly in \Cref{fig:nlive_gaussian} near the peak number of live points in the $G=1$ case.
Similar diagrams for exponential power likelihoods~\eqref{equ:exp_power} with $b=2$ and $b=\frac{3}{4}$ are provided in Appendix~\ref{app:exp_power_add_tests} (\Cref{fig:nlive_exp_power_2,fig:nlive_exp_power_0_75}), and show the allocation of live points is also accurate in these cases.
\begin{figure*}
\centering
\includegraphics{img/nlive_gaussian.pdf}
\caption{Live point allocation for a 10-dimensional Gaussian likelihood~\eqref{equ:gaussian} with a Gaussian prior~\eqref{equ:gaussian_prior} and $\sigma_\pi = 10$.
Solid lines show the number of live points as a function of $\log X$ for 10 standard nested sampling runs with $n=500$, and 10 dynamic nested sampling runs with $n_\mathrm{init}=50$, a similar number of samples and different values of $G$.
The dotted and dashed lines show the relative posterior mass $\propto \mathcal{L}(X)X$ and the posterior mass remaining $\propto \int_{-\infty}^X \mathcal{L}(X')X' \d{X'}$ at each point in $\log X$; for comparison these lines are scaled to have the same area under them as the average of the number of live point lines.
Standard nested sampling runs include the final set of live points at termination, which are modelled using a decreasing number of live points as discussed in \Cref{sec:vary_nlive}.
Similar diagrams for exponential power likelihoods~\eqref{equ:exp_power} with $b=2$ and $b=\frac{3}{4}$ are presented in \Cref{fig:nlive_exp_power_2,fig:nlive_exp_power_0_75} in Appendix~\ref{app:exp_power_add_tests}.
}\label{fig:nlive_gaussian}
\end{figure*}
The variation of results from repeated standard and dynamic nested sampling calculations with a similar number of samples is shown in \Cref{tab:dynamic_test_gaussian} and \Cref{fig:dynamic_test_dists}.
Dynamic nested sampling optimised for evidence calculation ($G=0$) and parameter estimation ($G=1$) produce significantly more accurate results than standard nested sampling.
In addition, results for dynamic nested sampling with $G=0.25$ show that both evidence calculation and parameter estimation accuracy can be improved simultaneously.
Equivalent results for 10-dimensional exponential power likelihoods~\eqref{equ:exp_power} with $b=2$ and $b=\frac{3}{4}$ are shown in \Cref{tab:dynamic_test_exp_power_2,tab:dynamic_test_exp_power_0_75} in Appendix~\ref{app:exp_power_add_tests}.
The reduction in evidence errors for $G=0$ and parameter estimation errors for $G=1$ in \Cref{tab:dynamic_test_gaussian} correspond to increasing efficiency by factors of $1.40 \pm 0.04$ and up to $4.4 \pm 0.1$ respectively.
\begin{table*}
\centering
\caption{Test of dynamic nested sampling for a 10-dimensional Gaussian likelihood~\eqref{equ:gaussian} and a Gaussian prior~\eqref{equ:gaussian_prior} with $\sigma_\pi = 10$.
The first row shows the standard deviation of $5,000$ calculations for standard nested sampling with a constant number of live points $n=500$.
The next three rows show the standard deviations of $5,000$ dynamic nested sampling calculations with a similar number of samples; these are respectively optimised purely for evidence calculation accuracy ($G=0$), for both evidence and parameter estimation ($G=0.25$) and purely for parameter estimation ($G=1$).
The final three rows show the computational efficiency gain~\eqref{equ:efficiency_gain} from dynamic nested sampling over standard nested sampling in each case.
The first column shows the mean number of samples for the $5,000$ runs.
The remaining columns show calculations of the log evidence, the mean, median and $84\%$ one-tailed credible interval of a parameter $\thcomp{1}$, and the mean and median of the radial coordinate $|\btheta|$.
Numbers in brackets show the $1\sigma$ numerical uncertainty on the final digit.\label{tab:dynamic_test_gaussian}}
\begin{tabular}{llllllll}
\toprule
{} & samples & $\log \mathcal{Z}$ & $\thmean{1}$ & $\mathrm{median}(\thcomp{1}) $ & $\mathrm{C.I.}_{84\%}(\thcomp{1})$ & $\overline{|\btheta|}$ & $\mathrm{median}(|\btheta|)$ \\
\midrule
St.Dev.\ standard & 15,189 & 0.189(2) & 0.0158(2) & 0.0194(2) & 0.0253(3) & 0.0262(3) & 0.0318(3) \\
St.Dev.\ $G=0$ & 15,152 & 0.160(2) & 0.0180(2) & 0.0249(2) & 0.0301(3) & 0.0292(3) & 0.0335(3) \\
St.Dev.\ $G=0.25$ & 15,156 & 0.179(2) & 0.0124(1) & 0.0163(2) & 0.0204(2) & 0.0205(2) & 0.0239(2) \\
St.Dev.\ $G=1$ & 15,161 & 0.549(5) & 0.00834(8) & 0.0104(1) & 0.0132(1) & 0.0138(1) & 0.0152(2) \\
Efficiency gain $G=0$ & & 1.40(4) & 0.77(2) & 0.60(2) & 0.71(2) & 0.80(2) & 0.90(3) \\
Efficiency gain $G=0.25$ & & 1.11(3) & 1.62(5) & 1.42(4) & 1.54(4) & 1.64(5) & 1.77(5) \\
Efficiency gain $G=1$ & & 0.119(3) & 3.6(1) & 3.5(1) & 3.7(1) & 3.6(1) & 4.4(1) \\
\bottomrule
\end{tabular}
\end{table*}
\begin{figure*}
\centering
\includegraphics{img/gaussian_dynamic_results_kde.pdf}
\caption{Distributions of results for the dynamic and standard nested sampling calculations shown in \Cref{tab:dynamic_test_gaussian}, plotted using kernel density estimation.
Black dotted lines show the correct value of each quantity for the likelihood and prior used.
Compared to standard nested sampling (blue lines), the distributions of results of dynamic nested sampling with $G=1$ (red lines) for parameter estimation problems show much less variation around the correct value.
Results for dynamic nested sampling with $G=0$ (orange lines) are on average closer to the correct value than standard nested sampling for calculating $\log \mathcal{Z}$, and results with $G=0.25$ (green lines) show improvements over standard nested sampling for both evidence and parameter estimation calculations.
}\label{fig:dynamic_test_dists}
\end{figure*}
\subsection{Efficiency gains for different distributions of posterior mass}
Efficiency gains~\eqref{equ:efficiency_gain} from dynamic nested sampling depend on the fraction of the $\log X$ range explored which contains samples that make a significant contribution to calculation accuracy.
If this fraction is small most samples taken by standard nested sampling contain little information, and dynamic nested sampling can greatly improve performance.
For parameter estimation ($G=1$), only $\log X$ regions containing significant posterior mass ($\propto \mathcal{L}(X)X$) are important, whereas for evidence calculation ($G=0$) all samples taken before the bulk of the posterior is reached are valuable. Both cases benefit from dynamic nested sampling using fewer samples to explore the region after most of the posterior mass has been passed but before termination.
We now test the efficiency gains~\eqref{equ:efficiency_gain} of dynamic nested sampling empirically for a wide range of distributions of posterior mass by considering Gaussian likelihoods~\eqref{equ:gaussian} and exponential power likelihoods~\eqref{equ:exp_power} of different dimensions $d$ and prior sizes $\sigma_\pi$.
The results are presented in \Cref{fig:prior_r_max_performance,fig:n_dim_performance}, and show large efficiency gains from dynamic nested sampling for parameter estimation in all of these cases.
\begin{figure*}
\centering
\includegraphics{img/eff_gain_dim.pdf}
\caption{Efficiency gain~\eqref{equ:efficiency_gain} from dynamic nested sampling compared to standard nested sampling for likelihoods of different dimensions; each has a Gaussian prior~\eqref{equ:gaussian_prior} with $\sigma_\pi = 10$.
Results are shown for calculations of the log evidence, the mean, median and $84\%$ one-tailed credible interval of a parameter $\thcomp{1}$, and the mean and median of the radial coordinate $|\btheta|$.
Each efficiency gain is calculated using $1,000$ standard nested sampling calculations with $n=200$ and $1,000$ dynamic nested sampling calculations with $n_\mathrm{init}=20$ using a similar or slightly smaller number of samples.}\label{fig:n_dim_performance}
\vspace{0.4cm}
\centering
\includegraphics{img/eff_gain_prior_scale.pdf}
\caption{Efficiency gain~\eqref{equ:efficiency_gain} from dynamic nested sampling for Gaussian priors~\eqref{equ:gaussian_prior} of different sizes $\sigma_\pi$.
Results are shown for calculations of the log evidence and the mean of a parameter $\thcomp{1}$ for $2$-dimensional Gaussian likelihoods~\eqref{equ:gaussian} and 2-dimensional exponential power likelihoods~\eqref{equ:exp_power} with $b=2$ and $b=\frac{3}{4}$.
Each efficiency gain is calculated using $1,000$ standard nested sampling calculations with $n=200$ and $1,000$ dynamic nested sampling calculations with $n_\mathrm{init}=20$ using a similar or slightly smaller number of samples.}\label{fig:prior_r_max_performance}
\end{figure*}
Increasing the dimension $d$ typically means the posterior mass is contained in a smaller fraction of the prior volume \citep{Higson2017a}, as shown in \Cref{fig:an_w}.
In the spherically symmetric cases we consider, the range of $\log X$ to be explored before significant posterior mass is reached increases approximately linearly with $d$.
This increases the efficiency gain~\eqref{equ:efficiency_gain} from dynamic nested sampling for parameter estimation ($G=1$) but reduces it for evidence calculation ($G=0$).
In high-dimensional problems the vast majority of the $\log X$ range explored is usually covered before any significant posterior mass is reached, resulting in very large efficiency gains for parameter estimation but almost no gains for evidence calculation --- as can be seen in \Cref{fig:n_dim_performance}.
For the 1,000-dimensional exponential power likelihood with $b=2$, dynamic nested sampling with $G=1$ improves parameter estimation efficiency by a factor of up to $72\pm5$, with the largest improvement for estimates of the median of the posterior distribution of $|\btheta|$.
Increasing the size of the prior $\sigma_\pi$ increases the fraction of the $\log X$ range explored before any significant posterior mass is reached, resulting in larger efficiency gains~\eqref{equ:efficiency_gain} from dynamic nested sampling for parameter estimation ($G=1$) but smaller gains for evidence calculation ($G=0$).
However when $\sigma_\pi$ is small the bulk of the posterior mass is reached after a small number of steps, and most of the $\log X$ range explored is after the majority of the posterior mass but before termination.
Dynamic nested sampling places fewer samples in this region than standard nested sampling, leading to large efficiency gains for both parameter estimation and evidence calculation.
This is shown in \Cref{fig:prior_r_max_performance}; when $\sigma_\pi = 0.1$, dynamic nested sampling evidence calculations with $G=0$ improve efficiency over standard nested sampling by a factor of approximately 7 for all 3 likelihoods considered.
However we note that if only the evidence estimate is of interest then standard nested sampling can safely terminate with a higher fraction of the posterior mass remaining than $10^{-3}$, in which case efficiency gains would be lower.
\section{Dynamic nested sampling with challenging posteriors}\label{sec:practical_problems}
Nested sampling software such as \texttt{MultiNest}{} and \texttt{PolyChord}{} use numerical techniques to perform the sampling within hard likelihood constraints required by the nested sampling algorithm; see \citet{Feroz2013,Handley2015b} for more details.
For challenging problems, such as those involving degenerate or multimodal posteriors, samples produced may not be drawn uniformly from the region of the prior within the desired iso-likelihood contour --- for example if this software misses a mode in a multimodal posterior.
This introduces additional uncertainties which are specific to a given software package and are not present in perfect nested sampling; we term these {\em implementation-specific effects\/} \citep[see][for a detailed discussion]{Higson2018a}.
Nested sampling software generally uses the population of dead and live points to sample within iso-likelihood contours, and so taking more samples in the region of an iso-likelihood contour will reduce the sampler's implementation-specific effects.
As a result dynamic nested sampling typically has smaller implementation-specific effects than standard nested sampling in the regions of the posterior where it has a higher number of live points, but conversely may perform worse in regions with fewer live points.
For highly multimodal or degenerate likelihoods it is important all modes or other regions of significant posterior mass are found by the sampler --- dynamic nested sampling performs better than standard nested sampling at finding hard to locate modes which become separated from the remainder of the posterior at likelihood values where it has more live points,%
\footnote{However, if a mode is only discovered late in the dynamic nested sampling process then it may still be under-sampled due to not being present in threads calculated before it was found.} as illustrated schematically in \Cref{fig:mode_splitting}.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{img/mode_splitting.pdf}
\caption{Dynamic and standard nested sampling's relative ability to discover hard to locate modes is determined by the number of live points present at the likelihood $\mathcal{L}(X_\mathrm{split})$ at which a mode splits from the remainder of the posterior (illustrated on the left).
In the schematic graph on the right we would expect dynamic nested sampling to be better at finding modes than standard nested sampling in region B (where it has a higher number of live points) but worse in regions A and C.}\label{fig:mode_splitting}
\end{figure}
Provided no significant modes are lost we expect dynamic nested sampling to have lower implementation-specific effects than standard nested sampling, as it has more live points --- and therefore lower implementation-specific effects --- in the regions which have the largest effect on calculation accuracy.
If modes separate at likelihood values where dynamic nested sampling assigns few samples, $n_\mathrm{init}$ must be made large enough to ensure no significant modes are lost.
For highly multimodal posteriors, a safe approach is to set $n_\mathrm{init}$ high enough to find all significant modes, in which case dynamic nested sampling will use the remaining computational budget to minimise calculation errors.
Even if, for example, half of the computational budget is used on the initial exploratory run, dynamic nested sampling will still achieve over half of the efficiency gain compared to standard nested sampling that it could with a very small $n_\mathrm{init}$.
The remainder of this section presents empirical tests of dynamic nested sampling for two challenging problems in which significant implementation-specific effects are present.
Additional examples of dynamic nested sampling's application to practical problems in scientific research can be found in \citet{Orazio2018}, \citet{Zucker2018}, \citet{Higson2018b} and \citet{Guillochon2018}.
\subsection{Numerical tests with a multimodal posterior}\label{sec:gaussian_mix}
We now use \href{https://github.com/ejhigson/dyPolyChord}{\texttt{dyPolyChord}}{} to numerically test dynamic nested sampling on a challenging multimodal $d$-dimensional, $M$-component Gaussian mixture likelihood
\begin{equation}\label{equ:gaussian_mix}
\mathcal{L}(\btheta) = \sum_{m=1}^M W^{(m)} {\left(2 \pi {\sigma^{(m)}}^2\right)}^{-d/2} \exp\left( -\frac{{|\btheta - \bmu^{(m)}|}^2}{2 {\sigma^{(m)}}^2}\right).
\end{equation}
Here each component $m$ is centred on a mean $\bmu^{(m)}$ with standard deviation $\sigma^{(m)}$ in all dimensions, and the component weights $W^{(m)}$ satisfy $\sum_{m=1}^M W^{(m)} = 1$.
For comparison with the perfect nested sampling results using a Gaussian likelihood~\eqref{equ:gaussian} in \Cref{sec:numerical_tests}, we use $d=10$, $\sigma^{(m)}=1$ for all $m$ and a Gaussian prior~\eqref{equ:gaussian_prior} with $\sigma_\pi = 10$.
We consider a Gaussian mixture~\eqref{equ:gaussian_mix} of $M=4$ components with means and weights
\begin{align}
\quad W^{(1)}&=& 0.4, \qquad \mu^{(1)}_{\hat{1}} &=& 0, \qquad \mu^{(1)}_{\hat{2}} &=& 4,\nonumber \\
\quad W^{(2)}&=& 0.3, \qquad \mu^{(2)}_{\hat{1}} &=& 0, \qquad \mu^{(2)}_{\hat{2}} &=& -4,\nonumber \\
\quad W^{(3)}&=& 0.2, \qquad \mu^{(3)}_{\hat{1}} &=& 4, \qquad \mu^{(3)}_{\hat{2}} &=& 0,\label{equ:mix_means} \\
\quad W^{(4)}&=& 0.1, \qquad \mu^{(4)}_{\hat{1}} &=& -4, \qquad \mu^{(4)}_{\hat{2}} &=& 0,\nonumber
\end{align}
\vspace{-0.5cm}
\begin{equation*}
\quad \text{and} \,\, \mu^{(m)}_{\hat{k}} = 0 \quad \text{for all} \,\, k \, \in (3,\dots,d), \, m \in (1,\dots,M).
\end{equation*}
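A sketch of this mixture likelihood with the weights and means above (illustrative code, not taken from \href{https://github.com/ejhigson/dyPolyChord}{\texttt{dyPolyChord}}{}):

```python
import numpy as np

W = np.array([0.4, 0.3, 0.2, 0.1])           # component weights, sum to 1
MU12 = np.array([[0.0, 4.0], [0.0, -4.0],
                 [4.0, 0.0], [-4.0, 0.0]])   # first two mean components

def gaussian_mix_logl(theta, sigma=1.0):
    """Log-likelihood of the d-dimensional Gaussian mixture; all mean
    components beyond the second are zero, as in the text."""
    theta = np.asarray(theta, dtype=float)
    d = theta.size
    means = np.zeros((len(W), d))
    means[:, :2] = MU12
    sq_dists = np.sum((theta - means) ** 2, axis=1)
    log_comps = (np.log(W) - 0.5 * d * np.log(2 * np.pi * sigma ** 2)
                 - sq_dists / (2 * sigma ** 2))
    return float(np.logaddexp.reduce(log_comps))
```

Working in log space via `logaddexp` avoids underflow when the components are many standard deviations apart, as here.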
The posterior distribution for this case is shown in \Cref{fig:triangle_gaussian_mix}.
As in \Cref{sec:numerical_tests}, we compare standard nested sampling runs to dynamic nested sampling runs which use a similar or slightly smaller number of samples.
\href{https://github.com/ejhigson/dyPolyChord}{\texttt{dyPolyChord}}{} uses Algorithm~\ref{alg:dypolychord}, meaning only the initial run provides information on where to place samples, so we set $n_\mathrm{init}$ to 20\% of the number of live points used in the standard nested sampling runs to which they are compared, instead of the 10\% used in the perfect nested sampling tests in \Cref{sec:numerical_tests}.
The allocation of live points from \href{https://github.com/ejhigson/dyPolyChord}{\texttt{dyPolyChord}}{} runs with the Gaussian mixture likelihood~\eqref{equ:gaussian_mix} is shown in~\Cref{fig:nlive_gaussian_mix}.
As in the tests with perfect nested sampling, the numbers of live points with settings $G=1$ and $G=0$ match the posterior mass and posterior mass remaining respectively despite the more challenging likelihood.
The live point allocation is not as precise as in~\Cref{fig:nlive_gaussian} due to \href{https://github.com/ejhigson/dyPolyChord}{\texttt{dyPolyChord}}{} only using information from the initial exploratory run to calculate all the point importances.
Another difference is that the truncation of the peak number of live points in the $G=1$ case in \Cref{fig:nlive_gaussian} is not present for \href{https://github.com/ejhigson/dyPolyChord}{\texttt{dyPolyChord}}{} runs, as this truncation is due to Algorithm~\ref{alg:dns} adding new points where the importance is greater than 90\% of the maximum.
\begin{figure}
\centering
\includegraphics{img/triangle_plot.png}
\caption{Posterior distributions for the 4-component 10-dimensional Gaussian mixture model~\eqref{equ:gaussian_mix} with component weights and means given by~\eqref{equ:mix_means}, and a Gaussian prior~\eqref{equ:gaussian_prior}.
By symmetry the distributions of $\thcomp{k}$ are the same for $k \in (3,\dots,d)$, so we show only the first 4 components of $\btheta$; 1- and 2-dimensional plots of other parameters are the same as those of $\thcomp{3}$ and $\thcomp{4}$.
\end{figure}
\begin{figure*}
\centering
\includegraphics{img/nlive_gaussian_mix_4comp_4sep.pdf}
\caption{Live point allocation as in \Cref{fig:nlive_gaussian} but with a 10-dimensional Gaussian mixture likelihood~\eqref{equ:gaussian_mix}, with component weights and means given by~\eqref{equ:mix_means} and a Gaussian prior~\eqref{equ:gaussian_prior} with $\sigma_\pi = 10$.
The 10 standard nested sampling runs shown were generated using \texttt{PolyChord}{} with $n=500$, and 10 dynamic nested sampling runs with each $G$ value were generated using \href{https://github.com/ejhigson/dyPolyChord}{\texttt{dyPolyChord}}{} with a similar number of samples and $n_\mathrm{init}=100$.
The dotted and dashed lines show the relative posterior mass $\propto \mathcal{L}(X)X$ and the posterior mass remaining $\propto \int_{-\infty}^X \mathcal{L}(X')X' \d{X'}$ at each point in $\log X$; for comparison these lines are scaled to have the same area under them as the average of the number of live point lines.
}\label{fig:nlive_gaussian_mix}
\end{figure*}
\Cref{tab:dynamic_test_gaussian_mix} shows the variation of repeated calculations for dynamic nested sampling for the 10-dimensional Gaussian mixture model~\eqref{equ:gaussian_mix} with \href{https://github.com/ejhigson/dyPolyChord}{\texttt{dyPolyChord}}{}.
This shows significant efficiency gains~\eqref{equ:efficiency_gain} from dynamic nested sampling of $1.3 \pm 0.1$ for evidence calculation with $G=0$ and up to $4.0 \pm 0.4$ for parameter estimation with $G=1$, demonstrating how dynamic nested sampling can be readily applied to more challenging multimodal cases.
In Appendix~\ref{app:gaussian_mix_add_tests} we empirically verify that dynamic nested sampling does not introduce any errors from sampling bias (which would not be captured by efficiency gains~\eqref{equ:efficiency_gain} based on the variation of results) using analytically calculated true values of the log evidence and posterior means.
\Cref{tab:gaussian_mix_rmse} shows that the mean calculation results are very close to the correct values, and hence the standard deviation of the results is almost identical to their root-mean-squared-error, meaning efficiency gains~\eqref{equ:efficiency_gain} accurately reflect reductions in calculation errors (as for perfect nested sampling).
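For reference, the efficiency gains~\eqref{equ:efficiency_gain} quoted throughout can be recomputed directly from the tabulated standard deviations of repeated runs. The sketch below assumes runs with similar numbers of samples, in which case the gain is simply the ratio of variances; the optional rescaling by mean sample numbers is our own assumption for comparing runs of slightly unequal cost.

```python
def efficiency_gain(std_standard, std_dynamic,
                    n_samp_standard=1.0, n_samp_dynamic=1.0):
    """Efficiency gain of dynamic over standard nested sampling:
    the ratio of variances of repeated calculations, optionally
    rescaled by the mean number of samples per run (an assumption
    for handling runs of slightly different computational cost)."""
    return ((std_standard ** 2 * n_samp_standard)
            / (std_dynamic ** 2 * n_samp_dynamic))

# Entries from the Gaussian mixture table: C.I._84%(theta_1), G=1 row.
gain_ci = efficiency_gain(0.170, 0.085)   # ~4.0, matching the table
```

For example, the $\log \mathcal{Z}$ gain of $1.3$ for $G=0$ follows from the entries $0.181$ and $0.160$ in the same way.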
\begin{table*}
\centering
\caption{Tests of dynamic nested sampling as in \Cref{tab:dynamic_test_gaussian} but with a 10-dimensional Gaussian mixture likelihood~\eqref{equ:gaussian_mix}, with component weights and means given by~\eqref{equ:mix_means} and a Gaussian prior~\eqref{equ:gaussian_prior} with $\sigma_\pi = 10$.
The first row shows the standard deviation of $500$ \texttt{PolyChord}{} standard nested sampling calculations with a constant number of live points $n=500$.
The next three rows show the standard deviations of $500$ \href{https://github.com/ejhigson/dyPolyChord}{\texttt{dyPolyChord}}{} calculations with a similar number of samples; these are respectively optimised purely for evidence calculations ($G=0$), for both evidence and parameter estimation ($G=0.25$) and purely for parameter estimation ($G=1$).
All runs use the setting $\texttt{num\_repeats}=50$.
The final three rows show the computational efficiency gain~\eqref{equ:efficiency_gain} from dynamic nested sampling over standard nested sampling in each case.
The first column shows the mean number of samples produced by the $500$ runs.
The remaining columns show calculations of the log evidence, the mean of parameters $\thcomp{1}$ and $\thcomp{2}$, the median and $84\%$ one-tailed credible interval of $\thcomp{1}$, and the mean radial coordinate $|\btheta|$.
Numbers in brackets show the $1\sigma$ numerical uncertainty on the final digit.\label{tab:dynamic_test_gaussian_mix}}
\begin{tabular}{llllllll}
\toprule
{} & samples & $\log \mathcal{Z}$ & $\thmean{1}$ & $\thmean{2}$ & $\mathrm{median}(\thcomp{1})$ & $\mathrm{C.I.}_{84\%}(\thcomp{1})$ & $\overline{|\btheta|}$ \\
\midrule
St.Dev.\ standard & 14,739 & 0.181(6) & 0.057(2) & 0.126(4) & 0.035(1) & 0.170(5) & 0.0196(6) \\
St.Dev.\ $G=0$ & 14,574 & 0.160(5) & 0.076(2) & 0.176(6) & 0.048(2) & 0.229(7) & 0.0222(7) \\
St.Dev.\ $G=0.25$ & 14,628 & 0.170(5) & 0.046(1) & 0.105(3) & 0.0293(9) & 0.138(4) & 0.0156(5) \\
St.Dev.\ $G=1$ & 14,669 & 0.36(1) & 0.032(1) & 0.069(2) & 0.0203(6) & 0.085(3) & 0.0110(3) \\
Efficiency gain $G=0$ & & 1.3(1) & 0.56(5) & 0.51(5) & 0.53(5) & 0.55(5) & 0.78(7) \\
Efficiency gain $G=0.25$ & & 1.1(1) & 1.5(1) & 1.5(1) & 1.4(1) & 1.5(1) & 1.6(1) \\
Efficiency gain $G=1$ & & 0.25(2) & 3.3(3) & 3.4(3) & 3.0(3) & 4.0(4) & 3.2(3) \\
\bottomrule
\end{tabular}
\end{table*}
\Cref{tab:imp_error_gaussian_mix} shows estimated implementation-specific effects for the results in \Cref{tab:dynamic_test_gaussian_mix}; these are calculated using the procedure described in \citet[][Section 5]{Higson2018a}, which estimates the part of the variation of results which is not explained by the intrinsic stochasticity of perfect nested sampling.
Dynamic nested sampling with $G=1$ and $G=0.25$ both reduce implementation-specific effects in all of the parameter estimation calculations as expected.
However we are not able to measure a statistically significant difference in implementation-specific effects for $\log \mathcal{Z}$ with $G=0$; this is because for evidence calculations implementation-specific effects represent a much smaller fraction of the total error \citep[see][for more details]{Higson2018a}.
\begin{table*}
\centering
\caption{Estimated errors due to implementation-specific effects for the Gaussian mixture likelihood results shown in \Cref{tab:dynamic_test_gaussian_mix}, calculated using the method described in \citet[][Section 5]{Higson2018a}.
Numbers in brackets show the $1\sigma$ numerical uncertainty on the final digit.}\label{tab:imp_error_gaussian_mix}
\begin{tabular}{lllllll}
\toprule
{} & $\log \mathcal{Z}$ & $\thmean{1}$ & $\thmean{2}$ & $\mathrm{median}(\thcomp{1})$ & $\mathrm{C.I.}_{84\%}(\thcomp{1})$ & $\overline{|\btheta|}$ \\
\midrule
Implementation St.Dev.\ standard & 0.02(4) & 0.044(2) & 0.115(4) & 0.022(2) & 0.138(7) & 0.005(3) \\
Implementation St.Dev.\ $G=0$ & 0.06(2) & 0.062(3) & 0.163(6) & 0.033(2) & 0.191(9) & 0.005(5) \\
Implementation St.Dev.\ $G=0.25$ & 0.03(4) & 0.035(2) & 0.095(4) & 0.018(2) & 0.110(6) & 0.002(4) \\
Implementation St.Dev.\ $G=1$ & 0.00(8) & 0.024(1) & 0.062(2) & 0.013(1) & 0.065(4) & 0.000(2) \\
\bottomrule
\end{tabular}
\end{table*}
The efficiency gains in \Cref{tab:dynamic_test_gaussian_mix} are slightly lower than those for the similar unimodal Gaussian likelihood~\eqref{equ:gaussian} used in~\Cref{tab:dynamic_test_gaussian}. This is because of the higher $n_\mathrm{init}$ value used, and because, while dynamic nested sampling reduces implementation-specific effects, it does not reduce them by as large a factor as the errors from the stochasticity of the nested sampling algorithm.
\subsection{Numerical tests with signal reconstruction from noisy data}\label{sec:fit}
We now test dynamic nested sampling on a challenging signal reconstruction likelihood, which fits a 1-dimensional function $y = f(x,\theta)$ using a sum of basis functions.
Similar signal reconstruction problems are common in scientific research and are of great practical importance; for a detailed discussion see \citet{Higson2018b}.
We consider reconstructing a signal $y(x)$ given $D$ data points $\{x_d,y_d\}$, each of which has independent Gaussian $x$- and $y$-errors of size $\sigma_x = \sigma_y = 0.05$ around their unknown true values $\{X_d,Y_d\}$.
In our example, the data points' true $x$-coordinates $X_d$ were randomly sampled with uniform probability in the range $0 < X_d < 1$.
In this case the likelihood is \citep{Hee2016a}
\begin{equation}
\begin{split}
\mathcal{L}(\theta)
=
\prod_{d=1}^D \int_{0}^{1} \frac{\exp\left[-\frac{{(x_d-X_d)}^2}{2\sigma_x^2}-\frac{{(y_d-f(X_d,\theta))}^2}{2\sigma_y^2}\right]}{2\pi\sigma_x\sigma_y} \d{X_d},
\label{equ:fitting_likelihood_hee}
\end{split}
\end{equation}
where the integrals are over the unknown true values of the data points' $x$-coordinates, and each likelihood calculation involves an integral for each of the $D$ data points.
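As a sketch of how~\eqref{equ:fitting_likelihood_hee} can be evaluated in practice, the Python fragment below approximates each data point's integral over $X_d$ with a simple fixed-grid quadrature on $[0,1]$; the grid size is an illustrative assumption, and the signal model is left as a generic callable $f(X,\theta)$.

```python
import numpy as np

SIGMA_X = SIGMA_Y = 0.05  # noise levels used in the text

def log_likelihood(theta, x_data, y_data, f, n_grid=500):
    """Log of the likelihood above: one 1-d integral per data point,
    approximated by a rectangle rule on the unit interval."""
    X = np.linspace(0.0, 1.0, n_grid)   # candidate true x-coordinates
    dX = X[1] - X[0]
    fX = f(X, theta)                    # model evaluated once on the grid
    total = 0.0
    for xd, yd in zip(x_data, y_data):
        integrand = np.exp(-0.5 * ((xd - X) / SIGMA_X) ** 2
                           - 0.5 * ((yd - fX) / SIGMA_Y) ** 2)
        integrand /= 2.0 * np.pi * SIGMA_X * SIGMA_Y
        total += np.log(np.sum(integrand) * dX)
    return total
```

As a sanity check, for a flat model $f\equiv0$ and a data point at $(0.5,\,0)$ the integral reduces to a Gaussian normalisation, giving a log-likelihood of $-\ln(\sigma_y\sqrt{2\pi})\approx2.08$.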
We reconstruct the signal using generalised Gaussian basis functions
\begin{equation}
\phi(x,a,\mu,\sigma,\beta) = a \mathrm{e}^{-{(|x - \mu|/\sigma)}^{\beta}},
\label{equ:gg_1d}
\end{equation}
where when $\beta=2$ the basis function is proportional to a Gaussian.
Our reconstruction uses 4 such basis functions,\footnote{Here the number of basis functions used is fixed. Examples of signal reconstructions in which the number and form of the basis functions are determined from the data simultaneously can be found in \citet{Higson2018b}.} giving 16 parameters
\begin{equation}
\theta=(a_1,a_2,a_3,a_4,\mu_1,\mu_2,\mu_3,\mu_4,\sigma_1,\sigma_2,\sigma_3,\sigma_4,\beta_1,\beta_2,\beta_3,\beta_4),
\end{equation}
and
\begin{equation}
y(x,\theta) = \sum_{j=1}^4 \phi(x, a_j,\mu_j,\sigma_j,\beta_j).
\end{equation}
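A minimal implementation of this signal model might look as follows; the parameter ordering matches the text, while the reshape convention is our own.

```python
import numpy as np

def gen_gaussian(x, a, mu, sigma, beta):
    """Generalised Gaussian basis function:
    a * exp(-(|x - mu| / sigma)**beta); beta = 2 gives a Gaussian shape."""
    return a * np.exp(-(np.abs(x - mu) / sigma) ** beta)

def signal(x, theta):
    """y(x, theta): sum of 4 basis functions, with theta ordered
    (a_1..a_4, mu_1..mu_4, sigma_1..sigma_4, beta_1..beta_4)."""
    a, mu, sigma, beta = np.asarray(theta).reshape(4, 4)
    return sum(gen_gaussian(x, a[j], mu[j], sigma[j], beta[j])
               for j in range(4))
```

In the likelihood~\eqref{equ:fitting_likelihood_hee}, `signal` plays the role of $f(X_d,\theta)$.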
The priors used are given in \Cref{tab:fit_priors} in Appendix~\ref{app:fit}.
We use 120 data points, sampled from a true signal composed of the sum of 4 generalised Gaussian basis functions with parameters shown in \Cref{tab:fit_data_args} in Appendix~\ref{app:fit}.
The true signal, the noisy data and the posterior distribution of the signal calculated with dynamic nested sampling are shown in \Cref{fig:fit_fgivenx}; this was plotted using the \texttt{fgivenx} package \citep{Handley2018fgivenx}.
\href{https://github.com/ejhigson/dyPolyChord}{\texttt{dyPolyChord}}{}'s allocation of live points for the basis function fitting likelihood and priors is shown in \Cref{fig:nlive_fit}; as before, the software is able to accurately allocate live points in this case.
\Cref{tab:dynamic_test_fit} shows efficiency gains from dynamic nested sampling over standard nested sampling for the signal reconstruction problem.
Due to the computational expense of this likelihood, we use only 20 runs for each of standard nested sampling and dynamic nested sampling with $G=0$, $G=0.25$ and $G=1$.
Consequently the results are less precise than those for previous examples, but the improvements over standard nested sampling are similar to the other tests and include large efficiency gains in estimates of the mean value of the fitted signal (of up to $9.0\pm4.1$).
Furthermore, dynamic nested sampling is also able to reduce errors due to implementation-specific effects in this case --- as can be seen in \Cref{tab:imp_error_fit}.
\begin{figure*}
\centering
\includegraphics{img/fit_fgivenx.pdf}
\caption{Signal reconstruction with generalised Gaussian basis functions.
The first plot shows the true signal; this is composed of 4 generalised Gaussians~\eqref{equ:gg_1d}, with the individual components shown by dashed lines.
The 120 data points, which have added normally distributed $x$- and $y$-errors with $\sigma_x=\sigma_y=0.05$, are shown in the second plot.
The third plot shows the fit calculated from a single \href{https://github.com/ejhigson/dyPolyChord}{\texttt{dyPolyChord}}{} dynamic nested sampling run with $G=1$, $n_\mathrm{init}=400$, $\texttt{num\_repeats}=400$ and 101,457 samples; coloured contours represent posterior iso-probability credible intervals on $y(x)$.}%
\label{fig:fit_fgivenx}
\end{figure*}
\begin{figure*}
\centering
\includegraphics{img/nlive_fit.pdf}
\caption{Live point allocation as in \Cref{fig:nlive_gaussian,fig:nlive_gaussian_mix} but for fitting 4 generalised Gaussians to the data shown in \Cref{fig:fit_fgivenx}.
In this case the likelihood~\eqref{equ:fitting_likelihood_hee} is 16-dimensional, and the priors are given in \Cref{tab:fit_priors} in Appendix~\ref{app:fit}.
The 10 standard nested sampling runs shown were generated using \texttt{PolyChord}{} with $n=2,000$, and 10 dynamic nested sampling runs with each $G$ value were generated using \href{https://github.com/ejhigson/dyPolyChord}{\texttt{dyPolyChord}}{} with a similar number of samples and $n_\mathrm{init}=400$.
All runs use the setting $\texttt{num\_repeats}=400$.
The dotted and dashed lines show the relative posterior mass $\propto \mathcal{L}(X)X$ and the posterior mass remaining $\propto \int_{-\infty}^X \mathcal{L}(X')X' \d{X'}$ at each point in $\log X$; for comparison these lines are scaled to have the same area under them as the average of the number of live point lines.
}\label{fig:nlive_fit}
\end{figure*}
\begin{table*}
\centering
\caption{Tests of dynamic nested sampling as in \Cref{tab:dynamic_test_gaussian,tab:dynamic_test_gaussian_mix} but for fitting 4 generalised Gaussians to the data shown in \Cref{fig:fit_fgivenx}; the likelihood is given by~\eqref{equ:fitting_likelihood_hee} and the priors are shown in \Cref{tab:fit_priors} in Appendix~\ref{app:fit}.
The first row shows the standard deviation of $20$ \texttt{PolyChord}{} standard nested sampling calculations with a constant number of live points $n=2,000$.
The next three rows show the standard deviations of $20$ \href{https://github.com/ejhigson/dyPolyChord}{\texttt{dyPolyChord}}{} calculations with a similar number of samples; these are respectively optimised purely for evidence calculations ($G=0$), for both evidence and parameter estimation ($G=0.25$) and purely for parameter estimation ($G=1$).
The final three rows show the computational efficiency gain~\eqref{equ:efficiency_gain} from dynamic nested sampling over standard nested sampling in each case.
The first column shows the mean number of samples produced by the $20$ runs.
The remaining columns show calculations of the log evidence, and the posterior expectation of $y(x,\theta)$ at $x=0.1$, $x=0.3$, $x=0.5$, $x=0.7$ and $x=0.9$.
Numbers in brackets show the $1\sigma$ numerical uncertainty on the final digit.\label{tab:dynamic_test_fit}}
\begin{tabular}{llllllll}
\toprule
{} & samples & $\log \mathcal{Z}$ & \ymean{0.1}& \ymean{0.3}& \ymean{0.5}& \ymean{0.7}& \ymean{0.9}\\
\midrule
St.Dev.\ standard & 100,461 & 0.19(3) & 0.0013(2) & 0.0020(3) & 0.0020(3) & 0.0019(3) & 0.0016(3) \\
St.Dev.\ $G=0$ & 100,490 & 0.15(2) & 0.0018(3) & 0.0023(4) & 0.0025(4) & 0.0022(4) & 0.0027(4) \\
St.Dev.\ $G=0.25$ & 100,708 & 0.20(3) & 0.0015(2) & 0.0017(3) & 0.0018(3) & 0.0014(2) & 0.0017(3) \\
St.Dev.\ $G=1$ & 100,451 & 0.39(6) & 0.0007(1) & 0.0007(1) & 0.0009(1) & 0.0008(1) & 0.0013(2) \\
Efficiency gain $G=0$ & & 1.7(8) & 0.5(2) & 0.8(3) & 0.7(3) & 0.7(3) & 0.4(2) \\
Efficiency gain $G=0.25$ & & 0.9(4) & 0.8(4) & 1.4(7) & 1.2(6) & 1.7(8) & 1.0(4) \\
Efficiency gain $G=1$ & & 0.2(1) & 3.6(16) & 9.0(41) & 4.9(22) & 5.2(24) & 1.7(8) \\
\bottomrule
\end{tabular}
\end{table*}
\begin{table*}
\centering
\caption{Estimated error due to implementation-specific effects for the basis function fitting likelihood results shown in \Cref{tab:dynamic_test_fit}, calculated using the method described in \citet[][Section 5]{Higson2018a}.
Numbers in brackets show the $1\sigma$ numerical uncertainty on the final digit.}\label{tab:imp_error_fit}
\begin{tabular}{lllllll}
\toprule
{} & $\log \mathcal{Z}$ & \ymean{0.1}& \ymean{0.3}& \ymean{0.5}& \ymean{0.7}& \ymean{0.9} \\
\midrule
Implementation St.Dev.\ standard & 0.14(5) & 0.0008(5) & 0.0016(4) & 0.0014(6) & 0.0013(6) & 0.0007(8) \\
Implementation St.Dev.\ $G=0$ & 0.10(5) & 0.0012(6) & 0.0018(6) & 0.0017(7) & 0.0013(9) & 0.0021(6) \\
Implementation St.Dev.\ $G=0.25$ & 0.16(5) & 0.0008(6) & 0.0010(6) & 0.0010(7) & 0.0000(7) & 0.0009(7) \\
Implementation St.Dev.\ $G=1$ & 0.2(1) & 0.0000(3) & 0.0000(3) & 0.0000(4) & 0.0000(4) & 0.0010(3) \\
\bottomrule
\end{tabular}
\end{table*}
\section{Conclusion}
This paper began with an analysis of the effects of changing the number of live points on the accuracy of nested sampling parameter estimation and evidence calculations.
We then presented dynamic nested sampling (Algorithm~\ref{alg:dns}), which varies the number of live points to allocate posterior samples efficiently for {\em a priori\/} unknown likelihoods and priors.
Dynamic nested sampling can be optimised specifically for parameter estimation, showing increases in computational efficiency over standard nested sampling~\eqref{equ:efficiency_gain} by factors of up to $72\pm5$ in numerical tests.
The algorithm can also increase evidence calculation accuracy, and can improve both evidence calculation and parameter estimation simultaneously.
We discussed factors affecting the efficiency gain from dynamic nested sampling, and showed that large improvements in parameter estimation are possible when the posterior mass is contained in a small region of the prior (as is typically the case in high-dimensional problems).
Empirical tests show significant efficiency gains from dynamic nested sampling for the wide range of likelihoods, priors, dimensions and estimators considered.
Another advantage of dynamic nested sampling is that more accurate results can be obtained by continuing the run for longer, unlike in standard nested sampling.
We applied dynamic nested sampling to problems with challenging posteriors using \href{https://github.com/ejhigson/dyPolyChord}{\texttt{dyPolyChord}}{}, and found the technique is able to reduce errors due to implementation-specific effects compared to standard nested sampling.
This included tests with a practical signal reconstruction calculation, and a multimodal posterior in which the new method gave similar performance gains to the unimodal test cases.
Dynamic nested sampling has also been applied to a number of problems in scientific research; see for example \citet{Orazio2018}, \citet{Zucker2018}, \citet{Higson2018b} and \citet{Guillochon2018}.
The many popular approaches and software implementations for standard nested sampling can be easily adapted for dynamic nested sampling, since it too only requires samples to be drawn randomly from the prior within some hard likelihood constraint.
As a result, our new method can be used to increase computational efficiency while maintaining the strengths of standard nested sampling.
Publicly available dynamic nested sampling packages include \href{https://github.com/ejhigson/dyPolyChord}{\texttt{dyPolyChord}}{}, \href{https://github.com/joshspeagle/dynesty}{\texttt{dynesty}}{} and \href{https://github.com/ejhigson/perfectns}{\texttt{perfectns}}{}.
\bibliographystyle{spbasic}
\section{Introduction}
NA61/SHINE at the CERN SPS is a fixed target experiment which studies final hadronic states produced in interactions between various particles at different collision energies~\cite{na61det}. The search for the onset of deconfinement of strongly interacting matter is part of the experiment's physics program. Within the framework of this program measurements of p+p, Be+Be, p+Pb, Ar+Sc and Pb+Pb collisions were performed. In the near future measurements of Xe+La reactions are planned. Together, these measurements constitute a two-dimensional scan in system size and collision energy, which will be helpful in studying the phase transition between the hadronic and deconfined phases.
The NA61/SHINE detector (see Fig.~\ref{fig:setup}) is a large-acceptance hadron spectrometer located on the SPS ring at CERN. Upstream of the spectrometer, scintillators, Cherenkov detectors and Beam Position Detectors provide timing references and position measurements for the incoming beam particles. About 4 meters after the target, the trigger scintillator counter S4 detects whether a collision occurred in the target area via the absence of a beam particle signal. The main detectors of NA61/SHINE are four large volume Time Projection Chambers used for the determination of trajectories and energy loss $dE/dx$ of produced charged particles. The first two -- VTPC-1 and VTPC-2 -- are placed in a magnetic field to measure particles' charge and momentum. Two large TPCs (MTPC-L and MTPC-R) are placed downstream of the magnets. Time-of-flight measurements are performed by two ToF walls (ToF-L, ToF-R). The last part of the setup is the Projectile Spectator Detector (PSD) -- a zero-degree calorimeter which measures the energy of projectile spectators.
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{./experimentalSetup.pdf}
\caption{Experimental setup of the NA61/SHINE experiment.}
\label{fig:setup}
\end{figure}
\section{The "kink" plot}
According to the Standard Model, quarks and gluons are the elementary constituents of hadronic matter, and interactions between them are described by quantum chromodynamics~\cite{gellmann,zweig}. One of the most important properties of quarks and gluons is that in matter under normal conditions they are always confined within hadrons. This is called quark confinement. In order to transform matter from the hadronic into the deconfined phase one needs to create conditions of sufficiently high temperature and density. This can be achieved experimentally in ultra-relativistic heavy ion collisions. A system that achieves the necessary conditions during the early stage of the collision forms the quark-gluon plasma~\cite{qgp:1}, which rapidly expands and freezes out. In this later stage of the collision, quarks and gluons recombine and form a new set of hadrons.
Since the number of degrees of freedom is higher for the quark-gluon plasma than for confined matter, the entropy density of the system at a given temperature and density should also be higher in the deconfined case. The majority of particles ($\sim90\%$) produced in heavy ion collisions are $\pi$ mesons. Therefore, the entropy, and thus information regarding the state of matter formed in the early stage of a collision, should be reflected in the number of produced pions normalised to the volume of the system. This intuitive argument was quantified within the Statistical Model of the Early Stage (SMES)~\cite{pedestrians}. The mean number of produced pions $\langle \pi \rangle$ is often normalised to the mean number of wounded nucleons $\langle W \rangle$ and plotted against the Fermi energy measure $F=\left[(\sqrt{s_{\text{NN}}}-2m_{\text{N}})^3/\sqrt{s_{\text{NN}}}\right]^{1/4}$. As this plot shows a linear increase with a change of slope (a ``kink'') for nucleus-nucleus collisions, it is often referred to as the ``kink'' plot. The plot is presented in Fig.~\ref{fig:kinkRaw}.
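The Fermi energy measure is straightforward to evaluate; the helper below (with nucleon mass $m_{\text{N}}=0.9383$ GeV) returns $F$ in GeV$^{1/2}$.

```python
M_N = 0.9383  # nucleon mass in GeV

def fermi_measure(sqrt_s_nn):
    """Fermi energy measure F = [(sqrt(s_NN) - 2 m_N)^3 / sqrt(s_NN)]^(1/4)
    for a nucleon-nucleon centre-of-mass energy given in GeV."""
    return ((sqrt_s_nn - 2.0 * M_N) ** 3 / sqrt_s_nn) ** 0.25

# At the top SPS energy, sqrt(s_NN) ~ 17.3 GeV, this gives F ~ 3.8 GeV^(1/2).
```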
\begin{figure}[H]
\centering
\includegraphics[width=0.9\textwidth]{./kinkRaw.pdf}
\caption{The "kink" plot for p-p and nucleus-nucleus collisions.}
\label{fig:kinkRaw}
\end{figure}
\section{The "kink" plot enriched with preliminary results from Ar+Sc collisions}
In order to add a new point to the ``kink'' plot it is necessary to calculate the mean number of pions produced in a collision and the mean number of wounded nucleons. The former is extracted from experimental data. In this paper, preliminary results obtained from central Ar+Sc collisions at 13\textit{A}, 19\textit{A}, 30\textit{A}, 40\textit{A}, 75\textit{A} and 150\textit{A} GeV/c, taken during the 2015 NA61/SHINE physics data run, are discussed. The number of wounded nucleons is not measured experimentally and has to be calculated using Monte Carlo simulations.
\subsection{Calculating the mean number of produced pions}
The starting point of the analysis described herein is the set of double differential spectra $\frac{dn}{dydp_{\text{T}}}$ of negatively charged hadrons (see the example in Fig.~\ref{fig:spectrum}).
\begin{figure}[H]
\centering
\includegraphics[width=0.9\textwidth]{./ptvyBin.pdf}
\caption{Example of a double differential spectrum $\frac{dn}{dydp_{\text{T}}}$.}
\label{fig:spectrum}
\end{figure}
They were obtained from reconstructed tracks by applying a series of quality cuts. In order to correct for trigger and reconstruction inefficiencies, one needs to apply a Monte Carlo correction. To this end, the EPOS MC~\cite{EPOS} is used in NA61/SHINE. A large sample of collisions is generated and the produced particles are accumulated in bins $n_{\text{gen}}^{i,j}$ of transverse momentum $p_{\text{T}}$ versus rapidity. The generated data undergo the regular reconstruction procedure and negatively charged pion selection, resulting in the distribution $n_{\text{sel}}^{i,j}$. The correction factor $c^{i,j}$ is then calculated as the ratio of the two Monte-Carlo generated spectra, $c^{i,j}=n_{\text{gen}}^{i,j}/n_{\text{sel}}^{i,j}$. The final experimental spectra are obtained in the following way:
$$n^{i,j}=n^{i,j}_{\text{data}}c^{i,j}$$
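In code, this bin-by-bin correction is a simple element-wise operation on the binned spectra; the numbers below are purely illustrative placeholders for the EPOS-derived and measured bin contents.

```python
import numpy as np

# Hypothetical (y, p_T) binned spectra; in the real analysis n_gen and
# n_sel come from EPOS events before and after reconstruction and cuts.
n_gen = np.array([[100.0, 80.0], [60.0, 40.0]])    # generated
n_sel = np.array([[90.0, 60.0], [45.0, 20.0]])     # after reconstruction
n_data = np.array([[45.0, 30.0], [20.0, 10.0]])    # measured

c = n_gen / n_sel            # correction factor c^{i,j}
n_corrected = n_data * c     # corrected spectrum n^{i,j}
```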
The NA61/SHINE experimental apparatus is characterized by large, but limited acceptance. In order to estimate the mean $\pi^-$ multiplicity in the full acceptance, one needs to extrapolate the experimental data to unmeasured regions. The procedure consists of the following steps:
\begin{enumerate}
\item \label{extrapolation:1} For fixed $y$ extrapolate the $p_{\text{T}}$ spectrum from the edge of acceptance to $p_{\text{T}}=2~\text{GeV}/c$, using the exponential form $$f(p_{\text{T}})=c\, p_{\text{T}} \exp\left(\frac{-\sqrt{p_{\text{T}}^2+m^2}}{T}\right)$$
To obtain $\frac{dn}{dy}$, the measured $p_{\text{T}}$ data bins are summed and the integral of the extrapolated curve is added
$$\frac{dn}{dy}=\sum_0^{p_{\text{T}}^{\text{max}}}dp_{\text{T}}\left(\frac{dn}{dydp_{\text{T}}}\right)_{\text{measured}}+\int_{p_{\text{T}}^{\text{max}}}^2f(p_{\text{T}})dp_{\text{T}}$$
\item \label{extrapolation:2} The corrected rapidity spectrum is extrapolated to missing rapidity acceptance, using a sum of two symmetrically displaced Gaussians.
$$g(y)=\frac{A_0A_{rel}}{\sigma\sqrt{2\pi}}\exp\left(-\frac{(y-y_0)^2}{2\sigma^2}\right)+\frac{A_0}{\sigma\sqrt{2\pi}}\exp\left(-\frac{(y+y_0)^2}{2\sigma^2}\right)$$
\end{enumerate}
The procedure is presented schematically in Fig.~\ref{fig:extrapolation}.
\begin{figure}[H]
\begin{minipage}{0.94\textwidth}
\centering
$\xrightarrow{\text{Extrapolation in }p_{\text{T}}}$\\
\includegraphics[width=0.49\textwidth]{./spectrumPTCor19.pdf}
\includegraphics[width=0.49\textwidth]{./spectrumPTCorExt19.pdf}\\
\includegraphics[width=0.49\textwidth]{./projectionPT19.pdf}
\includegraphics[width=0.49\textwidth]{./spectrumPTIntComp19.pdf}\\
$\xleftarrow{\text{Fitting Gaussians}}$
\end{minipage}
\begin{minipage}{0.05\textwidth}
$\Bigg\downarrow{\sum}$
\end{minipage}
\caption{Scheme of the extrapolation procedure}
\label{fig:extrapolation}
\end{figure}
The total mean $\pi^-$ multiplicity is given by the formula:
$$\langle \pi^- \rangle = \int_{-4}^{y_{\text{min}}}g(y)dy + \sum_{y_{\text{min}}}^{y_{\text{max}}} dy\left(\frac{dn}{dy}\right)_{\text{extrapolated in }p_{\text{T}}}+\int_{y_{\text{max}}}^4 g(y)dy$$
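The two extrapolation steps can be sketched numerically. On exact toy data the exponential tail shape linearises as $\ln(f/p_{\text{T}}) = \ln c - m_{\text{T}}/T$ with $m_{\text{T}}=\sqrt{p_{\text{T}}^2+m^2}$, so a simple polynomial fit suffices; the bin layout, acceptance edge and parameter values below are illustrative assumptions, not the real measured spectra.

```python
import numpy as np

M_PI = 0.1396  # charged pion mass in GeV

def f_pt(pt, c, T):
    """Step 1 tail shape: f(p_T) = c * p_T * exp(-sqrt(p_T^2 + m^2) / T)."""
    return c * pt * np.exp(-np.sqrt(pt ** 2 + M_PI ** 2) / T)

def g_y(y, a0, a_rel, y0, sigma):
    """Step 2 shape: two symmetrically displaced Gaussians."""
    norm = a0 / (sigma * np.sqrt(2.0 * np.pi))
    return norm * (a_rel * np.exp(-(y - y0) ** 2 / (2 * sigma ** 2))
                   + np.exp(-(y + y0) ** 2 / (2 * sigma ** 2)))

# --- step 1 on toy data: measured bins up to pt_max = 1.2 GeV/c ---
pt = np.arange(0.05, 1.16, 0.1)             # bin centres, width 0.1
counts = f_pt(pt, 10.0, 0.16)               # pretend measurement
m_t = np.sqrt(pt ** 2 + M_PI ** 2)
slope, intercept = np.polyfit(m_t, np.log(counts / pt), 1)
T_fit, c_fit = -1.0 / slope, np.exp(intercept)

grid = np.linspace(1.2, 2.0, 801)           # tail integral, rectangle rule
tail = np.sum(f_pt(grid, c_fit, T_fit)) * (grid[1] - grid[0])
dn_dy = np.sum(counts) * 0.1 + tail
```

Repeating step 2 by fitting `g_y` to the resulting $dn/dy$ points and adding its analytic tails then yields $\langle\pi^-\rangle$.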
The results of this procedure are presented in Table~\ref{tab:piMultiplicity}. Statistical uncertainties $\sigma_{\text{stat}}(\langle \pi^{-} \rangle)$ were obtained by propagating the statistical uncertainties of $\frac{dn}{dydp_{\text{T}}}$ spectra. Systematic uncertainties $\sigma_{\text{sys}}(\langle \pi^{-} \rangle)$ are assumed to be $5\%$ based on the previous NA61 analysis of p+p collisions.
\begin{table}
\centering
\footnotesize
\begin{tabular}{l|cccccc}
Momentum [\textit{A} GeV/c] & 13 & 19 & 30 & 40 & 75 & 150\\
\hline
$\langle\pi^-\rangle$ & $38.46$ & $48.03$ & $59.72$ & $66.28$ & $86.12$ & $108.92$\\
$\sigma_{\text{stat}}(\langle \pi^{-} \rangle)$ & $\pm 0.021$ & $\pm 0.021$ & $\pm 0.024$ & $\pm 0.018$ & $\pm 0.0079$ & $\pm 0.0088$\\
$\sigma_{\text{sys}}(\langle \pi^{-} \rangle)$& $\pm 1.92$ & $\pm 2.40$ & $\pm 2.98$ & $\pm 3.31$ & $\pm 4.30$ & $\pm 5.44$
\end{tabular}
\caption{Mean $\pi^-$ multiplicities in the 5\% most central Ar+Sc collisions with statistical and systematic uncertainties.}
\label{tab:piMultiplicity}
\end{table}
\subsection{Calculating the mean number of wounded nucleons}
The number of wounded nucleons cannot be measured experimentally in NA61/SHINE. It has to be calculated using Monte Carlo models. Two models were used to perform the calculations -- Glissando 2.73~\cite{glauber}, based on the Glauber model, and EPOS 1.99 (version CRMC 1.5.3)~\cite{EPOS}, using a parton ladder model. Uncertainties of $\langle W\rangle$ were not calculated and are not presented herein. A procedure to obtain a reliable number from the models has been developed. Glissando provides a value that is consistent with previous measurements and applicable to the wounded nucleon model~\cite{wounded}. EPOS, on the other hand, allows for a more detailed centrality analysis and event selection. It is possible to reproduce Glauber-based values in EPOS, and they are in good agreement with Glissando, as shown in Fig.~\ref{fig:comparison}.
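To illustrate what such a Glauber-model calculation involves, the self-contained toy below samples nucleon positions from Woods-Saxon profiles and counts wounded nucleons geometrically. It is a heavily simplified stand-in for Glissando/EPOS: the radii, diffuseness, cross-section and the fixed impact parameter are rough illustrative assumptions, not the tuned model settings.

```python
import numpy as np

rng = np.random.default_rng(0)
SIGMA_NN = 3.0              # fm^2 (~30 mb inelastic NN cross-section)
D2_MAX = SIGMA_NN / np.pi   # squared "wounding" distance

def sample_nucleus(A, R, a=0.55):
    """Transverse nucleon positions drawn from a Woods-Saxon density
    rho(r) ~ 1 / (1 + exp((r - R)/a)), via rejection sampling."""
    pts = np.empty((0, 2))
    while len(pts) < A:
        r = rng.uniform(0.0, 3.0 * R, 1000)
        accept = rng.uniform(0.0, 1.0, 1000) < (
            r ** 2 / (1.0 + np.exp((r - R) / a)) / (3.0 * R) ** 2)
        r = r[accept]
        cos_t = rng.uniform(-1.0, 1.0, r.size)   # isotropic directions
        phi = rng.uniform(0.0, 2.0 * np.pi, r.size)
        rho_t = r * np.sqrt(1.0 - cos_t ** 2)    # transverse radius
        pts = np.vstack([pts, np.column_stack([rho_t * np.cos(phi),
                                               rho_t * np.sin(phi)])])
    return pts[:A]

def n_wounded(b, A_proj=40, A_targ=45):
    """Wounded nucleons in one Ar+Sc event at impact parameter b (fm)."""
    R_p, R_t = 1.1 * A_proj ** (1 / 3), 1.1 * A_targ ** (1 / 3)
    proj = sample_nucleus(A_proj, R_p) + [b / 2.0, 0.0]
    targ = sample_nucleus(A_targ, R_t) - [b / 2.0, 0.0]
    d2 = ((proj[:, None, :] - targ[None, :, :]) ** 2).sum(axis=-1)
    hit = d2 < D2_MAX
    return hit.any(axis=1).sum() + hit.any(axis=0).sum()

w_mean = np.mean([n_wounded(1.0) for _ in range(200)])  # near-central events
```

The real calculation additionally averages over an impact-parameter distribution and applies the experimental centrality selection.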
\begin{figure}[H]
\centering
\includegraphics[width=0.9\textwidth]{./comparison.pdf}
\caption{Comparison of Glissando and EPOS ``a la Glauber'' values of $\langle W \rangle$.}
\label{fig:comparison}
\end{figure}
For now, the EPOS ``a la Glauber'' value is used, with centrality selected based on the number of projectile spectators. Results are presented in Table~\ref{tab:w}. A more detailed centrality analysis is planned as a future step.
\begin{table}
\centering
\footnotesize
\begin{tabular}{l|cccccc}
Momentum [\textit{A} GeV/c] & 13 & 19 & 30 & 40 & 75 & 150\\
\hline
$\langle W\rangle$ & $66.3262$ & $66.3996$ & $66.2887$ & $66.4137$ & $66.6193$ & $66.3485$
\end{tabular}
\caption{$\langle W \rangle$ in the 5\% most central Ar+Sc collisions calculated by EPOS.}
\label{tab:w}
\end{table}
\section{Conclusions}
Since NA61/SHINE has so far measured only $\langle\pi^-\rangle$ in Ar+Sc collisions, the total pion multiplicity was approximated by $\langle\pi\rangle_{\text{Ar+Sc}}=3\langle\pi^-\rangle_{\text{Ar+Sc}}$. This allows us to produce the preliminary version of the ``kink'' plot shown in Fig.~\ref{fig:kinkNew}.
\begin{figure}[H]
\centering
\includegraphics[width=0.9\textwidth]{./kink.pdf}
\caption{The "kink" plot with new measurements.}
\label{fig:kinkNew}
\end{figure}
From the preliminary version of the ``kink'' plot one can conclude that at high SPS energies Ar+Sc follows the Pb+Pb trend, while at low SPS energies it follows the p+p tendency. The plot also confirms the low-energy enhancement of $\frac{\langle \pi \rangle}{\langle W \rangle}$ measured in p+p compared to A+A collisions.
\section{Acknowledgments}
This work was partially supported by the National Science Centre, Poland grant \textit{Harmonia 7 2015/18/M/ST2/00125}.
\FloatBarrier
\bibliographystyle{unsrt}
\section{Introduction}
It is well known that lepton-hadron scattering has played a crucial role
in our understanding of the deep structure of matter. For example, electron
scattering on atomic nuclei revealed the structure of nucleons in the Hofstadter
experiment [1]. Moreover, the quark-parton model originated
from lepton-hadron collisions at SLAC [2]. Extending the
kinematic region by two orders of magnitude both in high $Q^{2}$
and small $x$, HERA (the first and still unique lepton-hadron collider)
with $\sqrt{s}=0.32$ TeV has shown its superiority compared to the
fixed target experiments and provided parton distribution functions
(PDF) for LHC and Tevatron experiments (for review of HERA results see [3, 4]). Unfortunately, the region
of sufficiently small $x$ ($<10^{-5}$) and high $Q^{2}$ ($\geq10\,GeV^{2}$) simultaneously,
where saturation of parton densities should manifest itself, has not
been reached yet. Hopefully, the LHeC [5] with $\sqrt{s}=1.3$
TeV will provide an opportunity to probe this region.
Construction of linear $e^{+}e^{-}$ colliders (or a dedicated linac) and
muon colliders (or a dedicated muon ring) tangential to the future circular pp
colliders, FCC or SppC, as shown in Fig.~1, will provide an opportunity to use
the highest energy proton beams to reach the highest center-of-mass energies in lepton-hadron and photon-hadron collisions. (For earlier studies on linac-ring type ep, $\gamma$p, eA and $\gamma$A colliders, see reviews [6, 7] and papers [8-14].)
\begin{figure}[hbtp]
\centering
\includegraphics[scale=0.45]{fig1.png}
\caption{Possible configuration for SppC, linear collider (LC) and muon collider (${\mu}C)$.}
\end{figure}
FCC is the future 100 TeV center-of-mass energy pp collider studied at CERN and supported by European Union within the Horizon 2020 Framework Programme for Research and Innovation [15]. SppC is the Chinese analog of the FCC. Main parameters of the SppC proton beam [16, 17] are presented in Table~\ref{tab:tablo1}. The FCC based ep and $\mu$p colliders have been considered recently (see [18] and references therein).
\clearpage
\begin{table}[!h]
\captionsetup{singlelinecheck=false, justification=justified}
\caption{Main parameters of proton beams in SppC.}\label{tab:tablo1}
\centering
\begin{tabular}{|c|c|c|}
\hline
Beam Energy (TeV) & 35.6 & 68.0 \\
\hline
Circumference (km) & 54.7 & 100.0 \\
\hline
Peak Luminosity ($10^{34}\,cm^{-2}s^{-1}$) & 11 & 102 \\
\hline
Particle per Bunch ($10^{10}$) & 20 & 20 \\
\hline
Norm. Transverse Emittance ($\mu m$) & 4.10 & 3.05 \\
\hline
{$\beta$}{*} amplitude function at IP (m) & 0.75 & 0.24 \\
\hline
IP beam size ($\mu m$) & 9.0 & 3.04 \\
\hline
Bunches per Beam & 5835 & 10667 \\
\hline
Bunch Spacing (ns) & 25 & 25 \\
\hline
Bunch length (mm) & 75.5 & 15.8 \\
\hline
Beam-beam parameter, $\xi_{pp}$ & 0.006 & 0.008 \\
\hline
\end{tabular}
\end{table}
\vspace{20pt}
In this paper we consider SppC based ep and $\mu$p colliders. In Section 2, the main parameters of the proposed colliders, namely the center of mass energy and luminosity, are estimated taking into account beam-beam tune shift and disruption effects. The physics search potential of the SppC based lp colliders is evaluated in Section 3, where the small Bj{\"o}rken-$x$ region is considered as an example of SM physics and resonant production of color octet leptons as an example of BSM physics. Our conclusions and recommendations are presented in Section 4.\\
\section{Main Parameters of the SppC Based ep and $\mu$p Colliders}
\vspace{10pt}
General expression for luminosity of SppC based $lp$ colliders is
given by ($l$ denotes electron or muon):
\begin{eqnarray}
L_{lp} & = & \frac{N_{l}N_{p}}{4\pi max[\sigma_{x_{p}},\sigma_{x_{l}}]max[\sigma_{y_{p}},\sigma_{y_{l}}]}min[f_{c_{p}},\,f_{c_{l}}]\label{eq:Denklem1}
\end{eqnarray}\\
\noindent where $N_{l}$ and $N_{p}$ are numbers of leptons and protons per
bunch, respectively; $\sigma_{x_{p}}$ ($\sigma_{x_{l}}$) and $\sigma_{y_{p}}$
($\sigma_{y_{l}}$) are the horizontal and vertical proton (lepton)
beam sizes at interaction point (IP); $f_{c_{l}}$ and $f_{c_{p}}$ are LC/$\mu$C and SppC bunch
frequencies. $f_{c}$ is expressed by $f_{c}=N_{b}f_{rep}$, where
$N_{b}$ denotes number of bunches, $f_{rep}$ means revolution frequency
for SppC/$\mu$C and pulse frequency for LC. In order to determine the collision
frequency of an lp collider, the minimum of the lepton
and hadron bunch frequencies should be chosen. Some of these parameters can be rearranged
in order to maximize $L_{lp}$, but one should keep in mind the main limitations
arising from beam-beam effects: while the beam-beam tune shift affects proton and muon beams,
disruption influences electron beams.
The disruption parameter for the electron beam is given by:
\begin{eqnarray}
D_{x_{e}} & = & \frac{2\,N_{p}r_{e}\sigma_{z_{p}}}{\gamma_{e}\sigma_{x_{p}}(\sigma_{x_{p}}+\sigma_{y_{p}})}\label{eq:Denklem2}
\end{eqnarray}
$\,$
\begin{equation}
D_{y_{e}}=\frac{2\,N_{p}r_{e}\sigma_{z_{p}}}{\gamma_{e}\sigma_{y_{p}}(\sigma_{y_{p}}+\sigma_{x_{p}})}
\end{equation}
\vspace{10pt}
\noindent where $r_{e}=2.82\times10^{-15}$ $m$ is the classical electron radius,
$\gamma_{e}$ is the Lorentz factor of the electron beam, $\sigma_{x_{p}}$
and $\sigma_{y_{p}}$ are the horizontal and vertical proton beam sizes
at IP, respectively, and $\sigma_{z_{p}}$ is the bunch length of the proton beam. The beam-beam parameter for the proton beam is given by:\\
\begin{equation}
\xi_{x_{p}}=\frac{N_{l}r_{p}\beta_{p}^{*}}{2\pi\gamma_{p}\sigma_{x_{l}}(\sigma_{x_{l}}+\sigma_{y_{l}})}\label{eq:Denklem3}
\end{equation}
$ $
\begin{equation}
\xi_{y_{p}}=\frac{N_{l}r_{p}\beta_{p}^{*}}{2\pi\gamma_{p}\sigma_{y_{l}}(\sigma_{y_{l}}+\sigma_{x_{l}})}
\end{equation}
\vspace{10pt}
\noindent where $r_{p}=1.54\times10^{-18}$ $m$ is the classical proton radius,
$\beta_{p}^{*}$ is the beta function of the proton beam at IP,
$\gamma_{p}$ is the Lorentz factor of the proton beam, and $\sigma_{x_{l}}$ and $\sigma_{y_{l}}$ are the
horizontal and vertical sizes of the lepton beam at IP, respectively.\\
The beam-beam parameter for the muon beam is given by:\\
\begin{equation}
\xi_{x_{\mu}}=\frac{N_{p}r_{\mu}\beta_{\mu}^{*}}{2\pi\gamma_{\mu}\sigma_{x_{p}}(\sigma_{x_{p}}+\sigma_{y_{p}})}\label{eq:Denklem3mu}
\end{equation}
$ $
\begin{equation}
\xi_{y_{\mu}}=\frac{N_{p}r_{\mu}\beta_{\mu}^{*}}{2\pi\gamma_{\mu}\sigma_{y_{p}}(\sigma_{y_{p}}+\sigma_{x_{p}})}
\end{equation}
\vspace{10pt}
\noindent where $r_{\mu}=1.37\times10^{-17}$ $m$ is the classical muon radius, $\beta_{\mu}^{*}$ is the beta function of the
muon beam at IP, $\gamma_{\mu}$ is the Lorentz
factor of the muon beam, and $\sigma_{x_{p}}$ and $\sigma_{y_{p}}$ are the
horizontal and vertical sizes of the proton beam at IP, respectively.
\subsection{ep option}
A preliminary study of a CepC-SppC based ep collider with $\sqrt{s}=4.1$ TeV and $L_{ep}=10^{33}$ $cm^{-2}s^{-1}$ has been performed in [19]. In this subsection, we consider the ILC (International Linear Collider) [20] and the PWFA-LC (Plasma Wake Field Accelerator - Linear Collider) [21] as the source of the electron/positron beam for SppC based energy frontier ep colliders. Main parameters of the ILC and PWFA-LC electron beams are given in Table~\ref{tab:tablo2}.
\clearpage
\begin{table}[!h]
\captionsetup{singlelinecheck=false, justification=justified}
\caption{ Main parameters of the ILC (second column) and PWFA-LC (third column) electron beams.}\label{tab:tablo2}
\centering
\begin{tabular}{|c|c|c|}
\hline
Beam Energy (GeV) & $500$ & $5000$\tabularnewline
\hline
Peak Luminosity ($10^{34}\,cm^{-2}s^{-1}$) & $4.90$ & $6.27$\tabularnewline
\hline
Particle per Bunch ($10^{10}$) & $1.74$ & $1.00$\tabularnewline
\hline
Norm. Horiz. Emittance ($\mu m$) & $10.0$ & $10.0$\tabularnewline
\hline
Norm. Vert. Emittance (nm) & $30.0$ & $35.0$\tabularnewline
\hline
Horiz. $\beta^{*}$ amplitude function at IP (mm) & $11.0$ & $11.0$\tabularnewline
\hline
Vert. $\beta^{*}$ amplitude function at IP (mm) & $0.23$ & $0.099$\tabularnewline
\hline
Horiz. IP beam size (nm) & $335$ & $106$\tabularnewline
\hline
Vert. IP beam size (nm) & $2.70$ & $59.8$\tabularnewline
\hline
Bunches per Beam & $2450$ & $1$\tabularnewline
\hline
Repetition Rate (Hz) & $4.00$ & $5000$\tabularnewline
\hline
Beam Power at IP (MW) & $27.2$ & $40$\tabularnewline
\hline
Bunch Spacing (ns) & $366$ & $20\times10^{4}$\tabularnewline
\hline
Bunch length (mm) & $0.225$ & $0.02$\tabularnewline
\hline
\end{tabular}
\end{table}
\vspace{20pt}
It is seen that the bunch spacings of the ILC and PWFA-LC are much greater than the SppC bunch spacing. On the other hand, the transverse size of the proton beam is much greater than the transverse sizes of the electron beam. Therefore, Eq. (1) for the luminosity turns into:
\begin{equation}
L_{ep}=\frac{N_{e}N_{p}}{4\pi\sigma_{p}^{2}}f_{c_{e}}\label{eq:Denklem4}
\end{equation}
\vspace{10pt}
For transversely matched electron and proton beams at IP, the equations for electron beam disruption and proton beam tune shift become:
\begin{equation}
D_{e}=\frac{N_{p}r_{e}\sigma_{z_{p}}}{\gamma_{e}\sigma_{p}^{2}}\label{eq:Denklem5}
\end{equation}
\begin{equation}
\xi_{p}=\frac{N_{e}r_{p}\beta_{p}^{*}}{4\pi\gamma_{p}\sigma_{p}^{2}}=\frac{N_{e}r_{p}}{4\pi\epsilon_{np}}\label{eq:Denklem6}
\end{equation}
\noindent where $\epsilon_{np}$ is normalized transverse emittance of proton beam.
Using nominal parameters of the ILC, PWFA-LC and SppC, we obtain the values of the L$_{ep}$, D$_e$ and ${\xi}_{p}$ parameters for LC$\otimes$SppC based ep colliders, which are given in Table~\ref{tab:tablo3}. The luminosity values given in parentheses represent the results of beam-beam simulations with the ALOHEP software [22], which is being developed for linac-ring type ep colliders.
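As an illustrative cross-check (ours, not part of the original analysis), the first row of Table~\ref{tab:tablo3} can be reproduced from Eqs.~(8)--(10) using the nominal ILC and SppC parameters of Tables~\ref{tab:tablo1} and \ref{tab:tablo2}:

```python
import math

# SppC 35.6 TeV proton beam (Table 1)
N_p = 20e10            # protons per bunch
sigma_p = 9.0e-6       # IP beam size [m]
sigma_zp = 75.5e-3     # bunch length [m]
eps_np = 4.10e-6       # normalized transverse emittance [m]

# ILC 500 GeV electron beam (Table 2)
N_e = 1.74e10              # electrons per bunch
n_b, f_rep = 2450, 4.00    # bunches per beam, pulse frequency [Hz]
f_ce = n_b * f_rep         # electron bunch (collision) frequency [Hz]

r_e = 2.82e-15             # classical electron radius [m]
r_p = 1.54e-18             # classical proton radius [m]
gamma_e = 500e9 / 0.511e6  # electron Lorentz factor

# Eq. (8): luminosity, converted from m^-2 s^-1 to cm^-2 s^-1
L_ep = N_e * N_p / (4 * math.pi * sigma_p**2) * f_ce * 1e-4
# Eq. (9): electron disruption;  Eq. (10): proton tune shift
D_e = N_p * r_e * sigma_zp / (gamma_e * sigma_p**2)
xi_p = N_e * r_p / (4 * math.pi * eps_np)

print(f"L_ep = {L_ep:.2e} cm^-2 s^-1, D_e = {D_e:.3f}, xi_p = {xi_p:.1e}")
```

The remaining rows of the table follow in the same way by substituting the PWFA-LC and 68 TeV SppC parameters (up to rounding of the beam-size inputs).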
\clearpage
\begin{table}[!h]
\captionsetup{singlelinecheck=false, justification=justified}
\caption{ Main parameters of LC$\otimes$SppC based ep colliders.}\label{tab:tablo3}
\centering
\begin{tabular}{|c|c|c|c|c|c|}
\hline
E$_{e}$, TeV & E$_{p}$, TeV & $\sqrt{s}$, TeV & L$_{ep}$, $cm^{-2}s^{-1}$ & D$_{e}$ & $\xi_{p}$, $10^{-3}$ \\
\hline
0.5 & 35.6 & 8.44 & 3.35 (6.64) $\times$ $10^{30}$ & 0.537 & 0.5 \\
\hline
0.5 & 68 & 11.66 & 2.69 (5.33) $\times$ $10^{31}$ & 0.902 & 0.7 \\
\hline
5 & 35.6 & 26.68 & 0.98 (1.94) $\times$ $10^{30}$ & 0.054 & 0.3 \\
\hline
5 & 68 & 36.88 & 0.78 (1.56) $\times$ $10^{31}$ & 0.090 & 0.4 \\
\hline
\end{tabular}
\end{table}
\vspace{10pt}
In order to increase the luminosity of ep collisions, an LHeC-like upgrade of the SppC proton beam parameters has been used. Namely, the $\beta$ function of the proton beam at IP is arranged to be 7.5/2.4 times lower (0.1 m instead of 0.75/0.24 m), which corresponds to the LHeC [5] and THERA [23] designs. This leads to an increase of the luminosity and D$_{e}$ by factors of 7.5 and 2.4 for SppC with 35.6 TeV and 68 TeV proton beams, respectively. Results are shown in Table~\ref{tab:tablo4}.
\begin{table}[!h]
\captionsetup{singlelinecheck=false, justification=justified}
\caption{ Main parameters of LC$\otimes$SppC based ep colliders with upgraded $\beta$*.}\label{tab:tablo4}
\centering
\begin{tabular}{|c|c|c|c|c|c|}
\hline
E$_{e}$, TeV & E$_{p}$, TeV & $\sqrt{s}$, TeV & L$_{ep}$, $cm^{-2}s^{-1}$ & D$_{e}$ & $\xi_{p}$, $10^{-3}$ \\
\hline
0.5 & 35.6 & 8.44 & 2.51 (4.41) $\times$ $10^{31}$ & 4.03 & 0.5 \\
\hline
0.5 & 68 & 11.66 & 6.45 (10.8) $\times$ $10^{31}$ & 2.16 & 0.7 \\
\hline
5 & 35.6 & 26.68 & 7.37 (13.3) $\times$ $10^{30}$ & 0.403 & 0.3 \\
\hline
5 & 68 & 36.88 & 1.89 (3.75) $\times$ $10^{31}$ & 0.216 & 0.4 \\
\hline
\end{tabular}
\end{table}
\vspace{10pt}
In principle, the ``dynamic focusing scheme'' [24], which was proposed for THERA, could provide an additional factor of 3--4. Therefore, luminosity values exceeding $10^{32}$ $cm^{-2}s^{-1}$ can be achieved for all options.
Concerning ILC$\otimes$SppC based ep colliders, a new scheme for energy recovery proposed for the higher-energy LHeC (see Section 7.1.5 in [5]) may give an opportunity to increase the luminosity by an additional order of magnitude, resulting in L$_{ep}$ exceeding $10^{33}$ $cm^{-2}s^{-1}$. Unfortunately, this scheme cannot be applied at the PWFA-LC$\otimes$SppC.
\subsection{$\mu$p option}
Muon-proton colliders were proposed almost two decades ago: construction of an additional proton ring in the $\sqrt{s}$ = 4 TeV muon collider tunnel was suggested in [25], construction of an additional 200 GeV energy muon ring in the Tevatron tunnel was considered in [26], and an ultimate $\mu$p collider with a 50 TeV proton ring in the $\sqrt{s}$ = 100 TeV muon collider tunnel was suggested in [27]. Here, we consider construction of TeV energy muon colliders ($\mu$C) [28] tangential to the SppC. Parameters of the $\mu$C are given in Table~\ref{tab:tablo5}.
Keeping in mind that both the SppC and $\mu$C have round beams, the luminosity Eq. (1) turns into:
\begin{eqnarray}
L_{pp} & = & f_{pp}\frac{N_{p}^{2}}{4\pi\sigma_{p}^{2}}\label{eq:Denklem7}
\end{eqnarray}
\begin{center}
\begin{eqnarray}
L_{\mu\mu} & = & f_{\mu\mu}\frac{N_{\mu}^{2}}{4\pi\sigma_{\mu}^{2}}\label{eq:Denklem8}
\end{eqnarray}
\par\end{center}
\noindent for SppC-$pp$ and $\mu$C, respectively. Concerning muon-proton
collisions, one should use the larger of the transverse beam sizes and the smaller of the
collision frequencies. Keeping in mind that $f_{\mu\mu}$ is
smaller than $f_{pp}$ by more than two orders of magnitude, the following correlation between the $\mu p$
and $\mu\mu$ luminosities takes place:
\begin{center}
\begin{eqnarray}
L_{\mu p} & = & (\frac{N_{p}}{N_{\mu}})(\frac{\sigma_{\mu}}{max[\sigma_{p},\,\sigma_{\mu}]})^{2}L_{\mu\mu}\label{eq:Denklem9}
\end{eqnarray}
\par\end{center}
Using the nominal parameters of the $\mu\mu$ colliders given in Table 5,
the parameters of the SppC based
$\mu p$ colliders are calculated according to Eq. (\ref{eq:Denklem9}) and presented in Table~\ref{tab:tablo6}. Concerning beam-beam tune shifts, for round and matched beams Eqs. (4,5) and Eqs. (6,7) turn into:
\begin{eqnarray}
\xi_{p} = \frac{N_{\mu}r_{p}\beta_{p}^{*}}{4\pi\gamma_{p}\sigma_{\mu}^{2}} = \frac{N_{\mu}r_{p}}{4\pi\epsilon_{np}}\label{eq:Denklem10}
\end{eqnarray}
\noindent and
\begin{eqnarray}
\xi_{\mu} = \frac{N_{p}r_{\mu}\beta_{\mu}^{*}}{4\pi\gamma_{\mu}\sigma_{p}^{2}} = \frac{N_{p}r_{\mu}}{4\pi\epsilon_{n\mu}},\label{eq:Denklem11}
\end{eqnarray}
\noindent respectively.
As one can see from Table~\ref{tab:tablo6}, where the nominal parameters of the SppC proton beam are used, $\xi_{p}$ is unacceptably high and should be decreased to 0.02, which seems acceptable for $\mu$p colliders [26]. According to Eq. (14), $\xi_{p}$ can be decreased, for example, by lowering N$_{\mu}$, which leads to a corresponding reduction of the luminosity (by factors of three and four for $\mu$p with 35.6 TeV and 68 TeV proton beams, respectively). Alternatively, crab crossing [29] can be used to decrease $\xi_{p}$ without changing the luminosity.
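As a numerical illustration (a sketch of ours, not from the original analysis), the first row of Table~\ref{tab:tablo6} follows from Eq.~(13), with the tune shifts from Eqs.~(14) and (15), using the 750 GeV muon beam of Table~\ref{tab:tablo5} and the 35.6 TeV proton beam of Table~\ref{tab:tablo1}:

```python
import math

# 35.6 TeV SppC proton beam (Table 1)
N_p, sigma_p, eps_np = 20e10, 9.0e-6, 4.10e-6     # per bunch, m, m
# 750 GeV muon beam (Table 5)
N_mu, sigma_mu, eps_nmu = 2e12, 6.0e-6, 0.025e-3  # per bunch, m, m
L_mumu = 1.25e34                                   # cm^-2 s^-1

r_p, r_mu = 1.54e-18, 1.37e-17                     # classical radii [m]

# Eq. (13): mu-p luminosity obtained from the mu-mu luminosity
L_mup = (N_p / N_mu) * (sigma_mu / max(sigma_p, sigma_mu))**2 * L_mumu
# Eqs. (14), (15): tune shifts for matched round beams depend only on
# bunch populations and normalized emittances
xi_p = N_mu * r_p / (4 * math.pi * eps_np)
xi_mu = N_p * r_mu / (4 * math.pi * eps_nmu)

print(f"L_mup = {L_mup:.2e} cm^-2 s^-1, xi_mu = {xi_mu:.2e}, xi_p = {xi_p:.2e}")
```

The other rows of Table~\ref{tab:tablo6} follow analogously from the corresponding beam parameters.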
\begin{table}[!h]
\captionsetup{singlelinecheck=false, justification=justified}
\caption{Main parameters of the muon beams.}\label{tab:tablo5}
\centering
\begin{tabular}{|c|c|c|}
\hline
Beam Energy (GeV) & $750$ & $1500$ \tabularnewline
\hline
Circumference (km) & $2.5$ & $4.5$\tabularnewline
\hline
Average Luminosity ($10^{34}\,cm^{-2}s^{-1}$) & $1.25$ & $4.4$\tabularnewline
\hline
Particle per Bunch ($10^{12}$) & $2$ & $2$\tabularnewline
\hline
Norm. Trans. Emitt. (mm-rad) & $0.025$ & $0.025$\tabularnewline
\hline
$\beta^{*}$ amplitude function at IP (cm) & $1 (0.5-2)$ & $0.5 (0.3-3)$\tabularnewline
\hline
IP beam size ($\mu$m) & $6$ & $3$\tabularnewline
\hline
Bunches per Beam & $1$ & $1$\tabularnewline
\hline
Repetition Rate (Hz) & $15$ & $12$\tabularnewline
\hline
Bunch Spacing (ns) & $8300$ & $15000$\tabularnewline
\hline
Bunch length (cm) & $1$ & $0.5$\tabularnewline
\hline
\end{tabular}
\end{table}
\begin{table}[!h]
\captionsetup{singlelinecheck=false, justification=justified}
\caption{Main parameters of SppC based $\mu$p colliders.}\label{tab:tablo6}
\centering
\begin{tabular}{|c|c|c|c|c|c|}
\hline
$E_{\mu}$, TeV & $E_{p}$, TeV & $\sqrt{s}$, TeV & $L_{{\mu}p}$, $cm^{-2}s^{-1}$ & $\xi_{\mu}$ & $\xi_{p}$ \\
\hline
0.75 & 35.6 & 10.33 & 5.5 $\times$ $10^{32}$ & 8.7 $\times$ $10^{-3}$ & 6.0 $\times$ $10^{-2}$ \\
\hline
0.75 & 68 & 14.28 & 12.5 $\times$ $10^{32}$ & 8.7 $\times$ $10^{-3}$ & 8.0 $\times$ $10^{-2}$ \\
\hline
1.5 & 35.6 & 14.61 & 4.9 $\times$ $10^{32}$ & 8.7 $\times$ $10^{-3}$ & 6.0 $\times$ $10^{-2}$ \\
\hline
1.5 & 68 & 20.2 & 42.8 $\times$ $10^{32}$ & 8.7 $\times$ $10^{-3}$ & 8.0 $\times$ $10^{-2}$ \\
\hline
\end{tabular}
\end{table}
\subsection{Ultimate $\mu$p option}
This option can be realized if an additional muon ring is constructed in the SppC tunnel. In order to estimate the CM energy and luminosity of $\mu$p collisions, we use the muon beam parameters from [30], where a 100 TeV center-of-mass energy muon collider with a 100 km ring circumference has been proposed. These parameters are presented in Table~\ref{tab:tablo7}.
\vspace{10pt}
The CM energy, luminosity and tune shifts for the ultimate $\mu$p collider are given in Table~\ref{tab:tablo8}. Again, $\xi_{\mu}$ and $\xi_{p}$ can be decreased by lowering $N_{p}$ and $N_{\mu}$, respectively (which leads to a corresponding decrease of the luminosity), or crab crossing can be used without changing the luminosity.
\begin{table}[!ht]
\captionsetup{singlelinecheck=false, justification=justified}
\caption{Main parameters of the ultimate muon beam.}\label{tab:tablo7}
\centering
\begin{tabular}{|c|c|}
\hline
Beam Energy (TeV) & $50$ \tabularnewline
\hline
Circumference (km) & $100$ \tabularnewline
\hline
Average Luminosity ($10^{34}\,cm^{-2}s^{-1}$) & $100$ \tabularnewline
\hline
Particle per Bunch ($10^{12}$) & $0.80$ \tabularnewline
\hline
Norm. Trans. Emitt. (mm-mrad) & $8.7$ \tabularnewline
\hline
$\beta^{*}$ amplitude function at IP (mm) & $2.5$ \tabularnewline
\hline
IP beam size ($\mu$m) & $0.21$ \tabularnewline
\hline
Bunches per Beam & $1$ \tabularnewline
\hline
Repetition Rate (Hz) & $7.9$ \tabularnewline
\hline
Bunch Spacing ($\mu$s) & $333$ \tabularnewline
\hline
Bunch length (mm) & $2.5$ \tabularnewline
\hline
\end{tabular}
\end{table}
\begin{table}[!ht]
\captionsetup{singlelinecheck=false, justification=justified}
\caption{Main parameters of the ultimate SppC based ${\mu}$p collider.}\label{tab:tablo8}
\centering
\begin{tabular}{|c|c|c|c|c|c|}
\hline
$E_{\mu}$, TeV & $E_{p}$, TeV & $\sqrt{s}$, TeV & $L_{{\mu}p}$, $cm^{-2}s^{-1}$ & $\xi_{\mu}$ & $\xi_{p}$ \\
\hline
50 & 68 & 116.6 & 1.2 $\times$ $10^{33}$ & 2.6 $\times$ $10^{-2}$ & 3.5 $\times$ $10^{-2}$ \\
\hline
\end{tabular}
\end{table}
\section{Physics}
In order to evaluate the physics search potential of the SppC based lp colliders, we consider two phenomena: the small Bj{\"o}rken-$x$ region as an example of SM physics, and resonant production of color octet electrons and muons as an example of BSM physics.
\subsection{Small Bj{\"o}rken $x$}
As mentioned above, investigation of the extremely small $x$ region ($x < 10^{-5}$) at sufficiently large $Q^{2}$ ($Q^{2} > 10$ $GeV^{2}$), where saturation of the parton density should manifest itself, is crucial for understanding the basics of QCD. The smallest achievable $x$ at lp colliders is given by $Q^{2}/s$. For the LHeC with $\sqrt{s}=1.3$ TeV, the minimal achievable value is $x = 6\times10^{-6}$. In Table~\ref{tab:tablo9}, we present the smallest $x$ values for different SppC based lepton-proton colliders (E$_{p}$ is chosen as 68 TeV). It is seen that the proposed machines have great potential for enlightening the basics of QCD.
\begin{table}[!h]
\captionsetup{singlelinecheck=false, justification=justified}
\caption{Attainable Bj{\"o}rken $x$ values at $Q^{2}=10$ $GeV^{2}$.}\label{tab:tablo9}
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
E$_{l}$ (TeV) & 0.5 & 5 & 1.5 & 50 \\
\hline
$x$ & $7\times10^{-8}$ & $7\times10^{-9}$ & $2\times10^{-8}$ & $7\times10^{-10}$ \\
\hline
\end{tabular}
\end{table}
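The entries of Table~\ref{tab:tablo9} follow directly from $x_{min}=Q^{2}/s$ with $s=4E_{l}E_{p}$; a quick numerical check (illustrative only, energies in GeV):

```python
# Smallest attainable Bjorken x at Q^2 = 10 GeV^2 for E_p = 68 TeV and the
# lepton beam energies considered in Table 9.
Q2, E_p = 10.0, 68e3
x_min = {E_l: Q2 / (4 * E_l * E_p) for E_l in (0.5e3, 5e3, 1.5e3, 50e3)}
for E_l, x in x_min.items():
    print(f"E_l = {E_l/1e3:4.1f} TeV -> x_min = {x:.1e}")
```

The same formula with $\sqrt{s}=1.3$ TeV reproduces the quoted LHeC value of $x\approx6\times10^{-6}$.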
\subsection{Color octet leptons}
Color octet leptons ($l_{8}$) are predicted in preonic models with colored preons [31]. There are various phenomenological studies on $l_{8}$ at TeV energy scale colliders [32-39]. Resonant production of the color octet electron ($e_{8}$) and muon ($\mu_{8}$) at FCC based lp colliders has been considered in [40] and [41], respectively. Performing similar analyses for SppC based lp colliders, we obtain mass discovery limits for $e_{8}$ and $\mu_{8}$ in the $\Lambda = M_{l_{8}}$ case (where $\Lambda$ is the compositeness scale), which are presented in Figs. 2 and 3, respectively. Discovery mass limit values for the LHC and SppC are obtained by rescaling the ATLAS/CMS second generation LQ results [42, 43] using the method developed by G. Salam and A. Weiler [44]. For lepton colliders, it is obvious that the discovery mass limits for pair production of $l_{8}$ are approximately half of the CM energies. It is seen that the $l_{8}$ search potential of SppC based lp colliders overwhelmingly exceeds that of the LHC and lepton colliders. Moreover, lp colliders will give an opportunity to determine the compositeness scale (for details see [40, 41]).
It should be noted that FCC/SppC based lp colliders have great potential for the search of many other BSM phenomena, such as excited leptons (see [45] for ${\mu}^*$), contact interactions, R-parity violating SUSY, etc.
\clearpage
\begin{figure}[!h]
\centering
\includegraphics[scale=0.30]{fig2.png}
\caption{Discovery mass limits for color octet electron at different pp, $e^+$$e^-$ and ep colliders.}
\end{figure}
\begin{figure}[!h]
\centering
\includegraphics[scale=0.30]{fig3.png}
\caption{Discovery mass limits for color octet muon at different pp, ${\mu}^+$${\mu}^-$ and ${\mu}$p colliders.}
\end{figure}
\section{Conclusion}
It is shown that construction of linear $e^{+}e^{-}$ colliders (or a dedicated linac) and muon colliders (or a dedicated muon ring) tangential to the SppC will give an opportunity to handle lepton-proton collisions with multi-TeV CM energies and sufficiently high luminosities. Concerning SM physics, these machines will certainly shed light on the basics of QCD. The BSM search potential of lp colliders essentially exceeds that of the corresponding lepton colliders. These colliders also exceed the search potential of the SppC itself for many BSM phenomena.
Acceleration of ion beams at the SppC will give an opportunity to provide multi-TeV center-of-mass energies in eA and $\mu$A collisions. In addition, the electron beam can be converted to a high energy photon beam using Compton back-scattering of laser photons, which will give an opportunity to construct LC$\bigotimes$SppC based $\gamma$p and $\gamma$A colliders. Studies on these topics are ongoing.
In conclusion, a systematic study of the accelerator, detector and physics search potential issues of the SppC based ep, eA, $\gamma$p, $\gamma$A, $\mu$p and $\mu$A colliders is essential to foresee the future of particle physics. Certainly, realization of these machines depends on the future results from the LHC as well as the FCC and/or SppC.
\vspace{10pt}
\section*{Acknowledgments}
\addcontentsline{toc}{section}{Acknowledgement}
This study is supported by TUBITAK under grant no. 114F337.
\vspace{10pt}
\section*{References}
\addcontentsline{toc}{section}{References}
\section{Introduction}
The $k$-server problem is one of the most natural and fundamental online problems and its study has been quite influential in the development of competitive analysis (see e.g.~\cite{BEY98,KP95,Kou09,Sit14,BBMN15}).
The problem is almost settled in the deterministic case: no algorithm can be better than $k$-competitive in any metric space of more than $k$ points \cite{MMS90},
and in their breakthrough result, Koutsoupias and Papadimitriou \cite{KP95} showed that the Work Function Algorithm (WFA) is $(2k-1)$-competitive in any metric space. Tight $k$-competitive algorithms are also known for several special metrics \cite{ST85,CKPV91,CL91,KP96}.
Despite this progress, several natural variants and generalizations of the $k$-server problem are very poorly understood.
In particular, they exhibit very different and intriguing behavior and the techniques for the standard $k$-server problem do not seem to apply to them
(we describe some of these problems and results in Section \ref{sec:rel_work}).
Getting a better understanding of such problems is a natural step towards building a deeper theory of online computation.
\paragraph{Weighted $k$-server.}
Perhaps the simplest such problem is the weighted $k$-server problem on uniform metrics, that was first introduced and studied by Fiat and Ricklin~\cite{FR94}. Here, there are $k$ servers located at points of a uniform metric space. In each step a request arrives at some point and must be served by moving some server there. Each server $s_i$ has a positive weight $w_i$ and it costs $w_i$ to move $s_i$ to another point. The goal is to minimize the total cost for serving the requests.
Note that in the unweighted case where each $w_i=1$, this is the classic and extensively studied paging/caching problem \cite{ST85}, for which several tight $k$-competitive deterministic and $O(\log k)$-competitive randomized algorithms are known \cite{BEY98}.
Indeed, one of the motivations of \cite{FR94} for studying the weighted $k$-server problem was that it corresponds to paging where each memory slot has a different replacement cost.\footnote{We crucially note that this problem should not be confused by the related, but very different, weighted paging problem where the weights are on the pages instead of the servers. Weighted paging corresponds to (unweighted) $k$-server on weighted star metrics and is very well understood. In particular, tight $k$-competitive deterministic and $O(\log k)$-competitive randomized algorithms are known~\cite{CKPV91,Young94,BBN12}.}
Throughout this paper, we only consider the uniform metric, and by weighted $k$-server we always mean the problem on the uniform metric, unless stated otherwise.
\paragraph{Previous Bounds.}
There is surprisingly huge gap between the known upper and lower bounds on the
competitive ratio for weighted $k$-server.
In their seminal paper, Fiat and Ricklin \cite{FR94} gave the first deterministic algorithm with a doubly exponential competitive ratio of about $2^{4^k} = 2^{2^{2k}}$.
They also showed a (singly) exponential lower bound of $(k+1)!/2$ on the competitive ratio of deterministic algorithms, which can be improved to
$(k+1)!-1$ by a more careful argument \cite{ChV13}.
More recently,
Chiplunkar and Vishwanathan \cite{ChV13} considered a simple memoryless randomized algorithm, where server $s_i$ moves to the requested point with some fixed probability $p_i$. They showed that there is always a choice of $p_i$ as function of the weights, for which this gives an $\alpha_k < 1.6^{2^k}$-competitive algorithm against adaptive online adversaries. Note that
$\alpha_k \in [2^{2^{k-1}}, 2^{2^k}]$. They also showed that this ratio is tight for such randomized memoryless algorithms.
By the simulation technique of Ben-David et al.~\cite{BBKTW94} that relates different adversary models, this gives an implicit $\alpha_k^2 \leq 2^{2^{k+1}}$-competitive deterministic algorithm\footnote{A more careful analysis shows that the Fiat-Ricklin algorithm \cite{FR94} is also $2^{2^{k+O(1)}}$ competitive \cite{Chip-pc}.}.
\paragraph{Conjectured upper bound.}
Prior to our work, it was widely believed that the right competitive ratio should be $(k+1)!-1$. In fact, \cite{ChV13} mention that WFA is a natural candidate to achieve this.
There are several compelling reasons for believing this. First, for classical $k$-server, the lower bound of $k$ is achieved in metric spaces with $n=k+1$ points, where each request is at the (unique) point with no online server. The $(k+1)!-1$ lower bound for weighted $k$-server~\cite{FR94,ChV13} also uses $n=k+1$ points. More importantly, this is in fact the right bound for $n=k+1$. This follows as the weighted $k$-server problem on $n$ points is a Metrical Service System (MSS)\footnote{This is a Metrical Task System \cite{BLS92} where the cost in each state is either $0$ or infinite (called \textit{forcing task systems} in \cite{MMS90}).} with $N=\binom{n}{k}k!$ states, which correspond to the $k$-tuples describing the configuration of the servers. It is known that WFA is $(N-1)$-competitive for any MSS with $N$ states \cite{CL96}. As $N=(k+1)!$ for $n=k+1$, this gives the $(k+1)!-1$ upper bound. Moreover, Chrobak and Sgall~\cite{CS04} showed that WFA is exactly $(k+1)!-1 =3! -1 =5$-competitive for $k=2$ servers (with arbitrary $n$), providing strong coincidental evidence for the $(k+1)!-1$ bound for general $k$.
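A quick sanity check of this state counting (an illustrative sketch, not from the paper): the number of MSS states is $N=\binom{n}{k}k!$, which for $n=k+1$ equals $(k+1)!$, so the generic $(N-1)$-competitive bound for MSS gives exactly $(k+1)!-1$.

```python
from math import comb, factorial

# Number of MSS states for weighted k-server on n points: ordered placements
# of k distinct (distinctly weighted) servers on n points.
def num_states(n, k):
    return comb(n, k) * factorial(k)

# For n = k + 1 this is (k+1)!, hence the (k+1)! - 1 competitive bound.
for k in range(1, 8):
    assert num_states(k + 1, k) == factorial(k + 1)
print([num_states(k + 1, k) - 1 for k in (1, 2, 3)])  # -> [1, 5, 23]
```

Note that the value for $k=2$ matches the exact competitive ratio $3!-1=5$ of the WFA shown by Chrobak and Sgall.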
\subsection{Our Results}
In this paper, we study the weighted $k$-server problem systematically and
obtain several new results.
A key idea is to relate online weighted $k$-server to a natural {\em
offline} combinatorial question about the structure of all possible ``feasible
labelings'' for a hierarchical collection of intervals of depth $k$.
In particular, we show that the competitive ratio for weighted $k$-server is
closely related to a certain Ramsey-theoretic parameter of this combinatorial
problem. This parameter, let us call it $f(k)$ for the discussion here,
reflects the amount of uncertainty that
adversary can create about the truly good solutions in an instance.
This connection is used for both upper and lower bound results in this paper.
\paragraph{Lower Bounds.} Somewhat surprisingly, we show that the doubly exponential upper bounds \cite{FR94,ChV13} for the problem
are essentially the best possible (up to lower order terms in the exponent).
\begin{theorem}
\label{thm:lb}
Any deterministic algorithm for the weighted $k$-server problem on uniform metrics has a competitive ratio at least $\Omega(2^{2^{k-4}})$.
\end{theorem}
As usual, we prove Theorem \ref{thm:lb} by designing an adversarial strategy to produce an online request sequence dynamically (depending on the actions of the online algorithm), so that
(i) the online algorithm incurs a high cost, while (ii) the adversary can always guarantee some low cost offline solution in hindsight.
Our strategy is based on a recursive construction on $n \geq \exp(\exp(k))$
points (necessarily so, by the connection to MSS) and it is designed in a modular way using the combinatorial connection as follows:
First, we construct a recursive lower bound instance for the combinatorial
problem for which the Ramsey-theoretic parameter $f(k) \geq 2^{2^{k-4}}$.
Second, to obtain the online lower bound, we embed this construction into a recursive strategy to dynamically generate an adversarial request sequence with the properties described above.
Moreover, we show that the lower bound from Theorem~\ref{thm:lb} can be
extended to general metric spaces. That is, in any metric space containing
sufficiently many points, the competitive ratio of deterministic algorithms for weighted $k$-server is at least $\Omega(2^{2^{k-4}})$. We describe the details in Appendix~\ref{sec:general_lb}.
\paragraph{Upper Bounds.} The combinatorial connection is also very useful for positive results.
We first show that the generalized WFA, a very generic online algorithm that is applicable to a wide variety of problems, is essentially optimum.
\begin{theorem}\label{thm:ub}
The generalized $\WFA$ is $2^{2^{k+O(\log k)}}$-competitive for weighted
$k$-server on uniform metrics.
\end{theorem}
While previous algorithms \cite{FR94,ChV13} were also essentially optimum, this result is interesting as the generalized WFA is a generic algorithm and is not specifically designed for this problem at hand. In fact, as we discuss in Section~\ref{sec:rel_work}, for more general variants of $k$-server the generalized WFA seems to be only known candidate algorithm that can be competitive.
To show Theorem~\ref{thm:ub}, we first prove an almost matching upper bound of
$f(k) \leq 2^{2^{k+3\log k}}$ for the combinatorial problem. As will be clear later, we call such results {\em dichotomy theorems}.
Second, we relate the combinatorial problem to the dynamics of \textit{work functions} and use the dichotomy theorem recursively to bound the cost of the WFA on any instance.
This approach also allows us to extend and refine these results to the setting of $d$ different weight classes with $k_1,\ldots,k_d$ servers of each class. This corresponds to $d$-level caching where each cache has replacement cost $w_i$ and capacity $k_i$. As practical applications usually have few weight classes, the case where $d$ is a small constant independent of $k$ is of interest.
Previously, \cite{FR94} gave an improved $k^{O(k)}$ bound for $d=2$, but a major difficulty in extending their result is that their algorithm is phase-based and gets substantially more complicated for $d>2$.
\begin{theorem}\label{thm:ubd}
The competitive ratio of the generalized $\WFA$ for the weighted
$k$-server problem on uniform metrics with $d$ different weights is at most
$2^{O(d)\,k^3 \prod_{i=1}^d (k_i+1)}$, where $k_i$ is the number of servers
of weight $w_i$, and $k = \sum_{i=1}^d k_i$.
\end{theorem}
For $k$ distinct weights, i.e~$k_i=1$ for each $i$, note that this matches the $2^{\textrm{poly}(k) \cdot 2^k}$ bound in Theorem \ref{thm:ub}.
For $d$ weight classes, this gives $2^{O(dk^{d+3})}$, which is
singly exponential in $k$ for $d=O(1)$.
To prove Theorem \ref{thm:ubd}, we proceed as before. We first prove a more refined dichotomy theorem (Theorem \ref{thm:dichotomy-d}) and use it recursively with the WFA.
\subsection{Generalizations of $k$-server and Related Work}
\label{sec:rel_work}
The weighted $k$-server problem on uniform metrics that we consider here is the simplest among the several generalizations of $k$-server that
are very poorly understood. An immediate generalization is the weighted $k$-server problem in general metrics. This seems very intriguing even for a line metric.
Koutsoupias and Taylor \cite{KT04} showed that natural generalizations of many successful $k$-server algorithms are not competitive.
Chrobak and Sgall \cite{CS04} showed that any memoryless randomized algorithm has
unbounded competitive ratio. In fact, the only candidate competitive algorithm
for the line seems to be the generalized WFA.
There are also other qualitative differences. While the standard $k$-server problem is believed to have the same competitive ratio in every metric, this is not the case for weighted $k$-server.
For $k=2$ in a line, \cite{KT04} showed that any deterministic algorithm
is at least $10.12$-competitive, while on uniform metrics the competitive
ratio is 5 \cite{CS04}.
A far reaching generalization of the weighted $k$-server problem is the {\em generalized} $k$-server problem \cite{KT04, SS06,SSP03,Sit14}, with various applications.
Here, there are $k$ metric spaces $M_1,\ldots,M_k$, and each server $s_i$ moves in its own space $M_i$. A request $r_t$ at time $t$ is specified by a $k$-tuple
$r_t = (r_t(1),\ldots,r_t(k))$ and must be served by moving server $s_i$ to
$r_t(i)$ for some $i \in [k]$. Note that the usual $k$-server problem corresponds to
the very special case where the metrics $M_i$ are identical and each request, $r_t =(\sigma_t,\sigma_t,\ldots,\sigma_t)$, has all coordinates identical. Weighted $k$-server (in a general metric $M$) is also a very special case where each $M_i = w_i \cdot M$ and $r_t =(\sigma_t,\sigma_t,\ldots,\sigma_t)$.
In a breakthrough result, Sitters and Stougie \cite{SS06} gave an $O(1)$-competitive algorithm for the generalized $k$-server problem for $k=2$. Recently,
Sitters~\cite{Sit14} showed that the generalized WFA is also $O(1)$-competitive for $k=2$. Finding any competitive algorithm for $k>2$ is a major open problem, even in very restricted cases.
For example, the special case where each $M_i$ is a line, also called the CNN problem, has received a lot of attention (\cite{KT04,Chr03,AG10,iw01,IY04}), but even here no competitive algorithm is known for $k>2$.
\subsection{Notation and Preliminaries}\label{sec:prelim}
We now give some necessary notation and basic concepts that will be crucial for
the technical overview of our results and techniques in Section \ref{sec:overview}.
\paragraph{Problem definition.} Let $M = (U,d)$ be a uniform metric space, where $U=\{1,\ldots,n\}$ is the set of points (we sometimes call them pages) and $d: U^2 \rightarrow \mathbb{R}$ is the distance function, which satisfies $d(p,q) = 1$ for $p\neq q$ and $d(p,p) = 0$. There are $k$ servers $s_1,\dotsc,s_k$ with weights
$w_1 \leq w_2 \leq \dotsc \leq w_k$ located at points of $M$. The cost of moving server $s_i$ from the
point $p$ to $q$ is $w_i\cdot d(p,q) = w_i$. The input is a request sequence $\sigma = \sigma_1,\sigma_2,\ldots,\sigma_T$, where $\sigma_t \in U$ is the point requested at time $t$. At each time $t$, an online algorithm needs to have a server at $\sigma_t$, without the knowledge of future requests. The goal is to minimize the total cost for serving $\sigma$.
We think of $n$ and $T$ as arbitrarily large compared to $k$. Note that if the weights are equal or similar, we can
use the results for the (unweighted) $k$-server problem with no or small loss, so $w_{\max}/w_{\min}$ should be thought of as arbitrarily large.
Also, if two weights are similar, we can treat them as the same without much loss, and so in general it is useful to think of the weights as well-separated, i.e.~$w_i \gg w_{i-1}$ for each $i$.
\paragraph{Work Functions and the Work Function Algorithm.}
We call a map $C \colon \{1, \dotsc, k\} \to U$, specifying that server $s_i$ is at point $C(i)$, a {\em configuration} $C$.
Given a request sequence $\sigma = \sigma_1, \dotsc, \sigma_t$, let $\WF_t(C)$ denote the optimal cost to serve requests $\sigma_1, \dotsc, \sigma_t$ and end up in configuration $C$. The function $\WF_t$ is called \textit{work function} at time $t$.
Note that if the request sequence terminated at time $t$, then $\min_C \WF_t(C)$ would be the optimal offline cost.
The Work Function Algorithm (WFA) works as follows: Let $C_{t-1}$ denote its configuration at time $t-1$. Then upon the request $\sigma_t$, WFA moves to the configuration $C$ that minimizes $\WF_t(C) + d(C,C_{t-1})$. Note that in our setting, $d(C,C') = \sum_{i=1}^k w_i \mathbf{1}_{(C(i) \neq C'(i))}$.
Roughly, WFA tries to mimic the offline optimum while also controlling its movement costs. For more background on WFA, see~\cite{BEY98,CL96,KP95}.
The generalized Work Function Algorithm ($\WFA_{\lambda}$) is parameterized by a constant $\lambda \in (0,1]$, and at time $t$ moves to the configuration
$C_t = \argmin_{C} \WF_t(C) + \lambda d(C,C_{t-1}).$
For more on $\WFA_{\lambda}$, see~\cite{Sit14}.
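To make the definitions above concrete, here is a minimal brute-force simulation of $\WFA_{\lambda}$ on a uniform metric. This is a sketch with names of our own choosing, not an implementation from the literature; it enumerates all $n^k$ configurations and is therefore only usable for tiny instances.

```python
from itertools import product

def wfa_lambda(n, weights, requests, lam=1.0, start=None):
    """Simulate WFA_lambda for weighted k-server on a uniform metric with
    n points.  Brute force: the state space has n^k configurations, so
    keep n and k tiny.  lam = 1.0 gives the standard WFA."""
    k = len(weights)
    pts = range(1, n + 1)
    if start is None:
        start = tuple([1] * k)                 # all servers on point 1
    def dist(a, b):                            # weighted Hamming distance
        return sum(w for w, x, y in zip(weights, a, b) if x != y)
    configs = list(product(pts, repeat=k))
    wf = {c: dist(start, c) for c in configs}  # WF_0
    cur, online_cost = start, 0
    for r in requests:
        serving = [c for c in configs if r in c]
        # WF_t(C) = min over C' serving r of WF_{t-1}(C') + d(C', C)
        wf = {c: min(wf[s] + dist(s, c) for s in serving) for c in configs}
        # WFA_lambda move: minimize WF_t(C) + lam * d(C, C_{t-1})
        nxt = min(serving, key=lambda c: wf[c] + lam * dist(c, cur))
        online_cost += dist(cur, nxt)
        cur = nxt
    return online_cost, min(wf.values())       # (online cost, offline OPT)
```

On the toy instance below (two points, weights $1$ and $10$), both the online algorithm and the optimum pay for a single move of the light server.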
\paragraph{Service Patterns and Feasible Labelings.}
We can view any solution to the weighted $k$-server problem as an interval
covering in a natural way.
For each server $s_i$ we define a set of intervals $\ensuremath{\mathcal{I}}_i$ which
captures the movements of $s_i$ as follows: Let $t_1 < t_2 < t_3 <
\dotsb$ be the times when $s_i$ moves. For each move
at time $t_j$ we have an interval $[t_{j-1}, t_{j}) \in \ensuremath{\mathcal{I}}_i$, which means that $s_i$
stayed at the same location during this time period.
We assume that $t_0=0$ and also add a final interval $[t_{\textrm{last}}, T+1)$, where $t_{\textrm{last}}$ is the last time
when server $s_i$ moved. So if $s_i$ does not move at all, $\ensuremath{\mathcal{I}}_i$ contains the single interval $[0,T+1)$.
This gives a natural bijection between the moves of $s_i$ and the intervals in
$\ensuremath{\mathcal{I}}_i$, and the cost of the solution equals $\sum_{i=1}^k w_i(|\ensuremath{\mathcal{I}}_i|-1)$. We call $\ensuremath{\mathcal{I}}_i$ the $i$th level of intervals, and an interval $I \in \ensuremath{\mathcal{I}}_i$ a level-$i$ interval, or simply an $i$th level interval.
\begin{definition}[Service Pattern] We call the collection $\ensuremath{\mathcal{I}} = \ensuremath{\mathcal{I}}_1 \cup \dotsb\cup \ensuremath{\mathcal{I}}_k$ a service pattern
if each $\ensuremath{\mathcal{I}}_i$ is a partition of $[0, T+1)$ into half-open intervals.
\end{definition}
Figure \ref{fig:interval} contains an
example of such a service pattern.
To describe a solution for a weighted
$k$-server instance completely, we label each interval $I\in \ensuremath{\mathcal{I}}_i$ with a point
where $s_i$ was located during time period $I$. We may also leave some intervals
unlabeled (meaning that we do not care at which point the server is located). We call this a {\em labeled} service pattern.
\begin{definition}[Feasible Labeling] Given a service pattern $\ensuremath{\mathcal{I}}$ and a request sequence $\sigma$, we say that a (partial) labeling $\alpha\colon \ensuremath{\mathcal{I}} \to U$ is {\em feasible} with respect to $\sigma$, if for each time
$t\geq 0$ there exists an interval $I \in \ensuremath{\mathcal{I}}$ which contains $t$ and
$\alpha(I) = \sigma_t$.
\end{definition}
We call a service pattern $\ensuremath{\mathcal{I}}$ feasible with respect to $\sigma$
if there is some labeling $\alpha$ of $\ensuremath{\mathcal{I}}$ that is feasible with respect to $\sigma$.
Thus the offline weighted $k$-server problem for request sequence $\sigma$ is equivalent to the problem of
finding the cheapest feasible service pattern for $\sigma$.
Note that for a fixed service pattern $\ensuremath{\mathcal{I}}$, there may not exist any feasible labeling, or alternately there might exist many feasible labelings for it. Understanding the structure of the various possible feasible labelings for a given service pattern will play a major role in our results.
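The feasibility condition of the definition above is easy to state operationally. In the sketch below (representation is our own: levels flattened into one list of half-open intervals, time indexed from $0$), a labeled service pattern is feasible iff every request time is covered by an interval carrying the requested point.

```python
def is_feasible(labeled_intervals, sigma):
    """labeled_intervals: list of (start, end, label) half-open intervals
    [start, end) collected from all levels (label may be None if the
    interval is unlabeled); sigma[t] is the point requested at time t.
    Feasible iff each request time lies in some interval whose label
    equals the requested point."""
    return all(
        any(s <= t < e and lab == r for (s, e, lab) in labeled_intervals)
        for t, r in enumerate(sigma)
    )
```

For example, three consecutive level-1 intervals labeled $1, 2, 1$ cover the sequence $1, 2, 1$, while a single interval labeled $1$ does not.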
\begin{figure}[t!]
\hfill\includegraphics{interval_cover2.pdf}\hfill\
\caption{Illustration of a feasible service pattern for $k=3$. Each interval in
$\ensuremath{\mathcal{I}}_i$ defines a location for server $s_i$. At each time $t$, some interval
covering $t$ should be labeled by the requested point $\sigma_t$.}
\label{fig:interval}
\end{figure}
\section{Overview}
\label{sec:overview}
We now give an overview of the technical ideas and the organization of the
paper.
Fix some request sequence $\sigma$.
Suppose that the online algorithm knows the service pattern $\ensuremath{\mathcal{I}}$ of some optimal offline solution, but not the actual labels for the intervals in $\ensuremath{\mathcal{I}}$. Then intuitively, the online algorithm may still need to try out all possible candidate labels for an interval before figuring out the right one used by the offline solution.
So, a key natural question turns out to be: what does the structure of all possible
feasible labelings for $\ensuremath{\mathcal{I}}$ look like?
Let us consider this more closely.
First, we can assume for simplicity that $\ensuremath{\mathcal{I}}$ has a tree structure (i.e.~whenever an interval at level $i$ ends, all intervals at levels $1,\ldots,i-1$ end as well). Now, we can view $\ensuremath{\mathcal{I}}$
as a collection of disjoint trees on different parts of $\sigma$, that do not interact with each other.
Focusing on some tree $T$ with a root interval $I$ (corresponding to the
heaviest server $s_k$), it now suffices to understand what is the number of
labels for $I$ in all feasible labelings with respect to $\sigma$.
This is because whenever we fix some label $a$ for $I$, we get a similar
question
about the depth-$(k-1)$ subtrees of $T$ on $\sigma$ with $a$ removed,
and we can proceed recursively.
This leads to the following problem.
\paragraph{The Combinatorial Problem.} Given an interval tree $T$ in a service
pattern $\ensuremath{\mathcal{I}}$ on some request sequence $\sigma$, how many labels can the root
interval $I$ get over all possible feasible labelings of $\ensuremath{\mathcal{I}}$?
We will show the following dichotomy result for this problem: (i) Either any label in $U$ works (i.e.~the location of $s_k$ does not matter), or (ii) there can be at most $f(k)$ feasible labels for $I$.
This might be somewhat surprising, as the tree $T$ can be arbitrarily large
and the number of its subtrees of depth $k-1$ need not even be a function of $k$.
We prove two such dichotomy theorems in Section \ref{sec:dichotomy}. In Theorem
\ref{thm:dichotomy}, we show $f(k) = O(\exp(\exp(k)))$ for arbitrary weights, and
in Theorem \ref{thm:dichotomy-d}, we give a more refined bound for the case
with $k_1,\ldots,k_d$ servers of weights $w_1,\ldots,w_d$.
These results are proved by induction but require some subtle technical details.
In particular, we need a stronger inductive hypothesis, where we track all the
feasible labels for the intervals on the path from the particular node towards
the root.
To this end, in Section \ref{sec:intervals} we describe some properties of
these path labelings and their interactions at different nodes.
\paragraph{Upper Bounds.}
These dichotomy theorems are useful to upper-bound the competitive ratio
as follows.
Suppose that the online algorithm knows the optimum service pattern $\ensuremath{\mathcal{I}}$, but
not the actual labels.
Fix some tree $T \subseteq \ensuremath{\mathcal{I}}$ with root interval $I$. We know that the offline
solution pays $w_k$ to move the server $s_k$ at the end of $I$,
and let $\cost(k-1)$ denote the offline cost incurred during $I$ due to the
movement of the $k-1$ lighter servers.
Then, intuitively, the online algorithm has only $f(k)$ reasonable locations
\footnote{The situation in case (i) of the dichotomy theorem, where the location of $s_k$ does not matter, is much easier, as the online algorithm can keep $s_k$ at any location.}
to try during $I$.
Assuming recursively that its competitive ratio with $k-1$ servers
is $c_{k-1}$, its cost on $I$ should be at most
\[ f(k) \cdot (w_k + c_{k-1} \cdot \cost(k-1))
\leq f(k) \cdot c_{k-1} (w_k + \cost(k-1))
= f(k) \cdot c_{k-1} \cdot \OPT(I),
\]
which gives $c_k \leq f(k) c_{k-1}$, and hence $c_k \leq f(k) \cdots f(1)$.
Of course, the online algorithm does not know the offline service pattern $\ensuremath{\mathcal{I}}$,
but we can remove this assumption by losing another factor $f(k)$.
The idea is roughly the following.
Consider some time period $[t_1,t_2]$, during which online incurs cost about
$c_{k-1}\,w_k$ and decides to move its heaviest server at time $t_2$.
We claim that there can be at most $f(k)$ locations for the heavy server
where the offline solution would pay less than $w_k/(4f(k))$ during $[t_1,t_2]$.
Indeed, suppose there were $m=f(k)+1$ such locations $p_1,\ldots,p_m$.
Then, for each $j=1,\ldots,m$,
take the corresponding optimum service pattern $\ensuremath{\mathcal{I}}^j$ with $s_k$
located at $p_j$ throughout $[t_1,t_2]$, and consider a new pattern
$\ensuremath{\mathcal{I}}'$ by taking the common refinement of $\ensuremath{\mathcal{I}}^1,\ldots,\ensuremath{\mathcal{I}}^m$
(where any interval in $\ensuremath{\mathcal{I}}^j$ is a union of consecutive intervals in $\ensuremath{\mathcal{I}}'$).
The pattern $\ensuremath{\mathcal{I}}'$ is quite cheap: its cost is at most
$m \cdot w_k/(4 f(k)) \leq w_k/2$, and we know that its root interval $I$ can
have $m=f(k)+1$ different labels. The dichotomy theorem then implies that
any point is a feasible label for $I$, including the location of
the algorithm's heaviest server. But in that case, the algorithm would pay no more
than $c_{k-1} \cdot \cost(\ensuremath{\mathcal{I}}')$, which leads to a contradiction.
We make this intuition precise in Section \ref{sec:ubs} using work functions.
In particular, we use the idea above to show that
during any request sequence when $\WFA_{\lambda}$
moves $s_k$ about $f(k)$ times, any offline algorithm must pay $\Omega(w_k)$.
\paragraph*{Lower bound.}
In a more surprising direction, we can also use the combinatorial problem to
create a lower bound.
In Section \ref{sec:intervals}, we give a recursive combinatorial construction
of a request sequence $\sigma$ and a service pattern $\ensuremath{\mathcal{I}}$ consisting of a
single interval tree, such that the number of feasible labelings for its root
can actually be about $r_k=2^{2^k}$.
Then in Section \ref{sec:lb}, we use the underlying combinatorial structure of this construction to
design an adversarial
strategy that forces any online algorithm to have a doubly-exponential
competitive ratio.
Our adversarial strategy reacts adaptively to the movements of the online
algorithm $\ALG$, enforcing two key properties.
First, the adversary never moves a server $s_i$, where $i<k$,
unless $\ALG$ also moves some heavier server of weight at least $w_{i+1}$.
Second, the adversary never moves the heaviest server $s_k$ unless
$\ALG$ already moved $s_k$ to all $r_k$ possible feasible locations.
By choosing the weights of the servers well-separated, e.g.
$w_{i+1} \geq r_k \cdot \sum_{j=1}^i w_j$ for each $i$,
it is easy to see that the above two properties imply an $\Omega(r_k)$
lower bound on the competitive ratio.
\section{Service Patterns}
\label{sec:intervals}
In this section, we study the structure of feasible labelings. A crucial notion for this will be {\em request lists} of intervals.
We also define two Ramsey-theoretic parameters to describe the size of the request lists.
In Subsection~\ref{sec:int_lb}, we present a combinatorial lower bound
for these two parameters.
\paragraph{Hierarchical service patterns.}
We call a service pattern $\ensuremath{\mathcal{I}}$ hierarchical, if each interval $I$ at level
$i < k$ has a unique parent $J$ at level $i+1$ such that $I \subseteq J$.
An arbitrary service pattern $\ensuremath{\mathcal{I}}$ can be made hierarchical easily and at relatively
small cost: whenever an interval at level $i>1$ ends at time $t$,
we also end all intervals at levels $j=1, \dotsc, i-1$.
This operation adds a cost of at most $w_1 + \ldots + w_{i-1} \leq kw_i$ for each interval at level $i$, so the overall cost
can increase by a factor at most $k$.
In fact, if the weights are well-separated, the loss is even smaller.
Henceforth, by service pattern we will always mean hierarchical service patterns, which we view as a disjoint collection of trees.
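The conversion to a hierarchical pattern can be sketched as follows, representing each level by the sorted list of times at which its server moves (this representation and the helper name are our own, chosen for brevity):

```python
def hierarchicalize(breakpoints, weights):
    """breakpoints[i] = sorted times at which level-(i+1) intervals end,
    i = 0 being the lightest level; weights in non-decreasing order.
    Make the pattern hierarchical by ending every lighter-level interval
    whenever a heavier one ends.  Returns the new breakpoints and the
    cost before/after (cost = sum over levels of weight * #moves)."""
    k = len(breakpoints)
    cost = sum(weights[i] * len(breakpoints[i]) for i in range(k))
    new_bps, acc = [], set()
    for i in reversed(range(k)):          # heaviest level first
        acc |= set(breakpoints[i])        # inherit all heavier breakpoints
        new_bps.append(sorted(acc))
    new_bps.reverse()
    new_cost = sum(weights[i] * len(new_bps[i]) for i in range(k))
    return new_bps, cost, new_cost
```

As claimed above, the cost increases by a factor of at most $k$ (and much less when the weights are well-separated).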
We adopt the usual terminology for trees. The
ancestors of $I$ are all intervals at higher levels containing $I$, and
descendants of $I$ are all intervals at lower levels which are contained in $I$.
We denote by $A(I)$ the set of ancestors of $I$, and by $T_I$ the subtree of
intervals rooted at $I$ (note that $T_I$ includes $I$).
\paragraph{Composition of feasible labelings.}
In hierarchical service patterns, labelings can be composed easily in a
modular way.
Let $\sigma_I$ be the request sequence during the time
interval $I$, and $\sigma_J$ during some sibling $J$ of $I$.
If $\alpha_I$ and $\alpha_J$ are two feasible labelings with respect to
$\sigma_I$ and $\sigma_J$ respectively, and if they assign the same labels to
the ancestors of $I$ and $J$ (i.e.~$\alpha_I(A(I)) = \alpha_J(A(J))$),
we can easily construct a single $\alpha$ which is feasible with respect to both
$\sigma_I$ and $\sigma_J$:
Label the intervals in $T_I$ according to $\alpha_I$,
intervals in $T_J$ according to $\alpha_J$ and their ancestors according to
either $\alpha_I$ or $\alpha_J$.
\subsection{Structure of the feasible labelings}\label{sec:labelings}
Consider a fixed service pattern $\ensuremath{\mathcal{I}}$ and some request sequence $\sigma$.
There is a natural inductive approach for understanding the structure of feasible labelings of $\ensuremath{\mathcal{I}}$.
Consider an interval $I \in \ensuremath{\mathcal{I}}$ at level $\ell < k$, and the associated request sequence $\sigma(I)$.
In any feasible labeling of $\ensuremath{\mathcal{I}}$, some requests in $\sigma(I)$ will be covered (served) by the labels for the intervals
in $T_I$, while others (possibly none) will be covered by labels assigned to ancestors $A(I)$ of $I$.
So, it is useful to understand how many different ``label sets" can arise for $A(I)$ in all possible feasible labelings.
This leads to the notion of request lists.
\paragraph{Request lists.}
Let $I$ be an interval at level $\ell<k$. We call a set of pages $S \subseteq U$ with $|S|\leq k-\ell$ a {\em valid tuple} for $I$ if, upon assigning the pages of $S$ to the ancestors of $I$ (in any order), there is some labeling of $T_I$ that is feasible for $\sigma_I$. Let $R(I)$ denote the collection of all valid tuples for $I$.
Note that if $S$ is a valid tuple for $I$, then all its supersets of
size up to $k-\ell$ are also valid. This makes the set of all valid tuples hard to work with, and so we only consider the inclusion-wise
minimal tuples.
\begin{definition}[Request list of an interval]
Let $I$ be an interval at level $\ell < k$.
The request list of $I$, denoted by $L(I)$, is the set of inclusion-wise minimal
valid tuples.
\end{definition}
{\em Remark.} We call this a request list as we view $I$ as requesting a tuple in $L(I)$ as ``help" from its ancestors in $A(I)$, to feasibly cover $\sigma(I)$. It is possible that there is a labeling $\alpha$ of $T_I$ that can already cover $\sigma(I)$ and hence $I$ does not
need any ``help" from its ancestors $A(I)$. In this case $L(I)=\{\emptyset\}$
(or equivalently, every subset $S \subseteq U$ of size at most $k-\ell$ is a valid tuple).
Tuples of size $1$ in a request list will play an important role, and we will call them singletons.
{\bf Example.}
Let $I\in \ensuremath{\mathcal{I}}$ be an interval at level $1$, and $I_2,
\dotsc, I_k$ its ancestors at levels $2, \dotsc, k$.
If $P=\{p_1,\ldots,p_j\}$, where $j<k$, is the set of all pages requested in $\sigma_I$, then one feasible labeling $\alpha$ with respect to $\sigma_I$ is to
assign
$\alpha(I_{i+1}) = p_i$ for $i=1, \dotsc, j$,
and no label for any other $J\in \ensuremath{\mathcal{I}}$. So $P$ is a valid tuple.
However, $P$ is not inclusion-wise minimal,
as $\{p_2, \dotsc, p_j\}$ is also a valid tuple:
We can set $\alpha(I) = p_1$, $\alpha(I_i) = p_i$ for $i=2,\dotsc, j$ and
no label for other intervals.
Similarly, $P\setminus \{p_i\}$ for $i=2, \dotsc, j$, are also valid and inclusion-wise minimal.
So, we have
\[ L(I) = \big\{ P\setminus \{p_1\}, P\setminus \{p_2\}, \ldots,
P\setminus\{p_j\} \big\}. \]
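The example can be phrased as a one-line computation (a hypothetical helper of our own; the omitted page is the one assigned to $I$ itself):

```python
def leaf_request_list(sigma_I):
    """Request list of a level-1 interval I, per the example above: if P
    is the set of pages requested during I, the inclusion-wise minimal
    valid tuples are exactly the sets P minus one page."""
    P = set(sigma_I)
    return {frozenset(P - {p}) for p in P}
```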
\paragraph{Computation of Request Lists.}
Given a service pattern $\ensuremath{\mathcal{I}}$ and request sequence $\sigma$, the request lists for each interval $I$ can be computed inductively.
For the base case of a leaf interval $I$ we already saw that $L(I) = \big\{ P\setminus \{p_1\}, P\setminus \{p_2\}, \ldots,
P\setminus\{p_j\} \big\}.$
For a higher level interval $I$, we will take the request lists of the children of $I$ and combine them suitably.
To describe this, we introduce the notion of {\em joint request lists}.
Let $I$ be a level $\ell$ interval for $\ell>1$, and
let $C = \{J_1, \dotsc, J_m\}\subseteq \ensuremath{\mathcal{I}}_{\ell-1}$ be the set of its child
intervals.
Note that $m$ can be arbitrarily large (and need not be a function of $k$).
We define the joint request list of the intervals in $C$ as follows.
\begin{definition}[Joint request list]
Let $I$ be an interval at level $\ell > 1$ and $C$ be the set of its children at
level $\ell-1$. The joint request list of $C$, denoted by $L(C)$,
is the set of inclusion-wise minimal tuples $S$ with $|S| \leq k-(\ell-1)$ for which
there is a labeling $\alpha$ that is feasible with respect to $\sigma_{I}$ and $ \alpha(\{I\}\cup A(I)) = S$.
\end{definition}
Let $R(C)$ denote the collection of all (not necessarily minimal) valid tuples for $C$.
We note the following simple observation.
\begin{observation}\label{obs:list_prod}
A tuple $S$ belongs to $R(C)$ if and only if $S$ belongs to $R(J_i)$ for each
$i=1, \dotsc, m$. This implies that $S \in L(C)$ whenever it is an
inclusion-wise minimal tuple such that each $L(J_i)$ for $i \in [m]$ contains some tuple $S_i \subseteq S$.
\end{observation}
\begin{proof} Consider the feasible labeling $\alpha$ with respect
to $\sigma_I$, which certifies that $S \in R(C)$. This is also feasible with respect
to each $\sigma_{J_i}$ and certifies that $S \in R(J_i)$.
Conversely, let $\alpha_i$, for $i=1,\dotsc, m$, denote a labeling
feasible with respect to $\sigma_{J_i}$ which certifies that $S\in R(J_i)$.
As $J_1, \dotsc, J_m$ are siblings, the composed labeling $\alpha$ defined by
$\alpha(J) = \alpha_i(J)$ if $J\in T_{J_i}$ and, say, $\alpha(J) = \alpha_1(J)$
if $J \in \{I\} \cup A(I)$ is feasible for $\sigma_I$.
\end{proof}
Creation of the joint request list can be also seen as a kind of a product
operation. For example, if there are two siblings $J_1$ and
$J_2$ whose request lists are disjoint and contain only singletons, then their
joint request list $L(J_1, J_2)$ contains all pairs $\{p,q\}$ such that
$\{p\}\in L(J_1)$ and $\{q\}\in L(J_2)$.
By Observation~\ref{obs:list_prod}, all such pairs belong to $R(J_1, J_2)$
and they are inclusion-wise minimal.
The number of pairs in $L(J_1, J_2)$ equals $|L(J_1)|\cdot |L(J_2)|$.
In general, if $L(J_1)$ and $L(J_2)$ are not disjoint or contain tuples of different
sizes, the product operation becomes more complicated, and therefore we use
the view from Observation~\ref{obs:list_prod} throughout this paper.
Finally, having obtained $L(C)$, the request list $L(I)$ is obtained using the following observation.
\begin{observation}\label{obs:C_to_I_list}
A tuple $S$ belongs to $R(I)$ if and only if
$S\cup\{p\}$ belongs to $R(C)$ for some $p\in U$.
\end{observation}
\begin{proof}
If $S \cup \{p\} \in R(C)$, then there is a feasible labeling $\alpha$ for $\sigma_I$ with $\alpha(I)=p$ and $\alpha(A(I))=S$. Conversely, if $S \in R(I)$, then there must be some feasible labeling $\alpha$ for $\sigma_I$ with
$\alpha(A(I))=S$, and we simply take $p=\alpha(I)$.
\end{proof}
So $L(I)$ can be generated by taking $S\setminus \{p\}$ for each
$p\in S$ and $S\in L(C)$, and eliminating all resulting tuples that are not
inclusion-wise minimal.
\vspace{2mm}
{\bf Example.} Consider the interval $I$ having two children
$J_1$ and $J_2$. We saw that if $L(J_1)$ and $L(J_2)$ are disjoint
and both contain only singletons, their joint list $L(J_1, J_2)$ contains
$|L(J_1)|\cdot |L(J_2)|$ pairs. Then, according to
Observation~\ref{obs:C_to_I_list}, $L(I)$ contains
$|L(J_1)| + |L(J_2)|$ singletons.
This observation, that composing request lists of singletons yields request lists of singletons, will be useful
in the lower bound below.
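Observations \ref{obs:list_prod} and \ref{obs:C_to_I_list} translate directly into a brute-force computation of request lists over a small universe $U$. The sketch below (helper names are ours, and the degenerate case $\emptyset \in L(C)$ is handled separately) reproduces the counts of the example above:

```python
from itertools import combinations

def minimal(family):
    """Keep only the inclusion-wise minimal sets of a family."""
    fam = set(family)
    return {S for S in fam if not any(T < S for T in fam)}

def joint_list(child_lists, U, max_size):
    """Observation (list_prod): S with |S| <= max_size is valid for C iff
    every child's request list contains some tuple contained in S.
    Returns L(C), the inclusion-wise minimal valid tuples."""
    valid = [frozenset(S)
             for r in range(max_size + 1)
             for S in combinations(sorted(U), r)
             if all(any(T <= frozenset(S) for T in L) for L in child_lists)]
    return minimal(valid)

def lift(LC):
    """Observation (C_to_I_list): L(I) consists of the minimal tuples
    S minus one page, over S in L(C)."""
    if frozenset() in LC:              # I already needs no help
        return {frozenset()}
    return minimal(frozenset(S - {p}) for S in LC for p in S)
```

With two children having disjoint singleton lists of sizes $2$ and $2$, the joint list has the $4$ pairs and $L(I)$ the $2+2$ singletons, as in the example.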
\paragraph{Sizes of request lists.}
Now we define the Ramsey-theoretic parameters of the service patterns.
Let $f(\ell, t)$ denote the maximum possible number of $t$-tuples in the
request list $L(I)$ of any interval $I$ at level $\ell$ of any service pattern $\ensuremath{\mathcal{I}}$.
Similarly, let $n(\ell,t)$ denote the maximum possible number of $t$-tuples
contained in a joint request list $L(C)$,
where $C$ is the set of children of some level-$\ell$ interval $I$.
The examples above show that $n(\ell,2)$ can be of order
$f(\ell,1)^2$, and that $f(\ell+1,1)\geq 2 f(\ell,1)$.
In the following subsection, we show that $f(\ell,1)$ and $n(\ell,1)$ can grow doubly
exponentially with $\ell$.
\subsection{Doubly-exponential growth of the size of request lists}
\label{sec:int_lb}
In the following theorem we show a doubly exponential lower bound on $n(k,1)$,
which is the maximum number of singletons in a joint request list of children of
any $k$th level interval.
In particular, we construct a request sequence $\sigma$, and a service pattern
such that each level-$\ell$ interval has a request list of $\Omega(2^{2^{\ell - 3}})$ singletons.
This construction is the key ingredient of the lower bound
in Section~\ref{sec:lb}.
\begin{theorem}\label{thm:int_lb}
The numbers $n(\ell,1)$ and $f(\ell-1,1)$ grow doubly exponentially with $\ell$.
More specifically, for level $k$ intervals we have
\[ n(k,1) \geq f(k-1,1) \geq 2^{2^{k - 4}}. \]
\end{theorem}
\begin{proof}
We construct a request sequence and a hierarchical service pattern with a single $k$th level interval, such that any interval $I$ at level $1 \leq \ell < k$ has a request list $L(I)$ consisting of $n_{\ell+1}$ singletons, where $n_2=2$ and
\[ n_{i+1} = (\lfloor n_i/2\rfloor + 1)
+ (\lfloor n_i/2\rfloor + 1) \lceil n_i/2 \rceil \geq (n_i/2)^2.
\]
Note that $n_2=2, n_3=4, n_4=9, \ldots$ and in general as $n_{i+1} \geq (n_i/2)^2$ it follows that for $\ell \geq 4$, we have $n_{\ell} \geq 2^{2^{\ell-4}+2} \geq 2^{2^{\ell-4}}$.
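The recursion and the stated bounds are easy to check numerically (a sketch of ours; the closed-form lower bound is only claimed for $\ell \geq 4$):

```python
def n_values(k):
    """The recursion n_2 = 2 and
    n_{i+1} = (floor(n_i/2)+1) + (floor(n_i/2)+1) * ceil(n_i/2)
    from the theorem; returns {2: n_2, ..., k: n_k}."""
    ns = {2: 2}
    for i in range(2, k):
        m = ns[i] // 2 + 1                    # mask size
        ns[i + 1] = m + m * ((ns[i] + 1) // 2)
    return ns
```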
We describe our construction inductively, starting at the first level. Let $I$ be an interval at level $1$ and $\sigma_I$ be a request sequence consisting of $n_2=2$ distinct pages $p$ and $q$. Clearly, $L(I) = \big\{\{p\},\{q\}\big\}$. We denote the subtree at $I$ together with the request sequence $\sigma_I$ as $T_1(\{p,q\})$.
Now, for $i \geq 2$, let us assume that we already know how to construct a tree $T_{i-1}(P)$ which has a single $(i-1)$th level interval $J$, its request sequence $\sigma_J$ consists only of
pages in $P$, and for each $p\in P$, $\{p\}$ is contained as a singleton in $L(J)$. Let $n_i$ denote the size of $P$.
We show how to construct $T_{i}(P')$,
for an arbitrary set $P'$ of $n_{i+1}$ pages, such that $T_i$ has a single $i$th level interval $I$, and all pages $p \in P'$ are contained in the request list $L(I)$ as singletons.
First, we create a set of pages $M \subset P'$ called {\em mask}, such that $|M| = \lfloor n_i/2\rfloor + 1$. Pages that belong to $M$ are arbitrarily chosen from $P'$. Then, we partition $P' \setminus M$ into $|M|$ disjoint sets of size $\lceil n_i/2\rceil$ and we associate each of these sets with a page $q \in M$. We denote by $Q_q$ the set associated to page $q \in M$. For each $q \in M$, let $P_q = (M\setminus \{q\}) \cup Q_q$. Note that $|P_q|=n_i$. The interval tree $T_{i}(P')$ is created as follows. It consists of a single interval $I$ at level $i$ with $\lfloor n_i/2\rfloor + 1$ children $J_q$. For each $J_q$ we inductively create a level $i-1$ subtree $T_{i-1}(P_q)$.
See Figure~\ref{fig:interval_lb} for an example.
\begin{figure}
\hfill\includegraphics{mask_2to3.pdf}\hfill\
\caption{
Construction of an interval at level $i=3$ with request list of $n_4=9$ singletons, using intervals of level $i-1=2$ with request lists of $n_3 = 4$ singletons. The set $P'$ is decomposed into a mask of size $ \lfloor n_3/2\rfloor + 1 =3$, $M = \{ p_1,p_2,p_3 \}$ and sets $Q_{p_1} = \{ p_4,p_5 \}$, $Q_{p_2} = \{ p_6,p_7 \}$ and $Q_{p_3} = \{ p_8, p_9 \} $. For each $q \in M$, the requested set of interval $J_q$ is $P_q = (M \setminus q) \cup Q_q$.}
\label{fig:interval_lb}
\end{figure}
This construction has two important properties:
\begin{lemma}
\label{lem:2props}
First, for each $p\in P'$ there exists a subtree $T_{i-1}(P_q)$
such that $p\notin P_q$.
Second, for each $p\in P'$ there exists a page $\bar{p}\in P'$ such that each $P_q$ contains either $p$ or $\bar{p}$.
\end{lemma}
\begin{proof}
If page $p\in M$, then it
belongs to all sets $P_q$ except for $P_p$.
If $p \notin M$, then $p \in Q_q$ for some $q$ and hence $p$ only lies in $P_q$.
This proves the first property.
For the second property, if $p\in M$, we can choose an arbitrary
$\bar{p} \in P_p$, and note that $p$ lies in every $P_q$ for $q \in M \setminus\{p\}$.
On the other hand, if $p \notin M$, let $q \in M$ be such that $p\in Q_q$ (and hence $p \in P_q$), and define $\bar{p}=q$. Then by construction, $q$ is contained in all other sets $P_{q'}$ for $q' \in M \setminus \{q\}$.
\end{proof}
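The mask decomposition and the two properties of Lemma \ref{lem:2props} can be checked mechanically. The sketch below (our own helper, with a deterministic choice of mask and blocks) reproduces the decomposition of Figure~\ref{fig:interval_lb} for $n_3 = 4$ and $|P'| = n_4 = 9$:

```python
def decompose(P, n_prev):
    """Mask decomposition from the proof: pick a mask M of
    floor(n_prev/2)+1 pages, split the remaining pages into |M| blocks
    Q_q of ceil(n_prev/2) pages each, and set P_q = (M \\ {q}) | Q_q.
    Requires |P| = |M| * (1 + ceil(n_prev/2)); each P_q has n_prev pages."""
    P = sorted(P)
    m = n_prev // 2 + 1
    blk = (n_prev + 1) // 2
    M, rest = P[:m], P[m:]
    Q = {q: set(rest[j * blk:(j + 1) * blk]) for j, q in enumerate(M)}
    return {q: (set(M) - {q}) | Q[q] for q in M}
```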
Using the above lemma, we can now understand the structure of request lists.
\begin{lemma} The request list $L(I)$ consists of all singletons in $P'$, i.e.~$L(I)=\{ \{p\}\;|\, p\in P'\}$.
\end{lemma}
\begin{proof}
Let us assume by inductive hypothesis that for each child $J_q$ of $I$ we have
$L(J_q) = \{ \{p\}\;|\, p\in P_q\}$. As discussed above, this is true for the base case of
intervals at level $1$.
By Observation~\ref{obs:list_prod} and by the first property in Lemma \ref{lem:2props}, no singleton
belongs to $R(C)$. By the second property we also know that each $p\in P'$
appears in some pair $\{p, \bar{p}\}$ in $R(C)$.
Therefore, by Observation~\ref{obs:C_to_I_list},
we know that $R(I)$ contains a singleton
$\{p\}$ for each $p\in P'$, and also that $R(I)$ does not contain the empty set,
since $R(C)$ contains no singletons. So, we have $ L(I) = \big\{ \{p\}\;|\, p\in P'\big\}.$
\end{proof}
This completes the construction of the tree $T_{k-1}(P)$ with $P$ of size $n_{k}$
and with a single interval $J$ at level $k-1$. To finish the service pattern, we
create a single $k$th level interval $I$ having $J$ as its only child.
By the discussion above,
$L(J)$ contains a singleton $\{p\}$ for each $p\in P$ and trivially
the joint request list of the children of $I$ is simply $L(J)$. Therefore we
have $ n(k,1)\geq f(k-1,1) \geq n_{k} \geq 2^{2^{k-4}}.$
\end{proof}
\section{Online Lower bound}
\label{sec:lb}
In this section we transform the combinatorial construction of
Theorem~\ref{thm:int_lb} into a lower bound for any deterministic algorithm,
proving Theorem~\ref{thm:lb}. Throughout this section we denote by $s^{\ALG}_1,
\dotsc, s^{\ALG}_k$ the servers of the online algorithm and by
$s^{\ADV}_1, \dotsc, s^{\ADV}_k$ the servers of the adversary.
Here is the main idea.
Let $\ALG$ be a fixed online algorithm. We create a request sequence adaptively, based on the decisions of $\ALG$, which consists of an arbitrary number of \textit{phases}. During each phase, the heaviest server of the adversary $s^{\ADV}_k$ stays at some fixed location. Whenever a phase ends, the adversary might move all its servers (including $s^{\ADV}_k$) and a new phase may start. During a phase, requests are determined by a recursive construction, using \textit{strategies} which we define later on. At a high level, the goal of the strategies is to make sure that the following two properties are satisfied:
\begin{enumerate}[(i)]
\item \label{prtA} For $i=1,\ldots,k-1$, $\ADV$ never moves server $s_i^{\ADV}$,
unless $\ALG$ moves some heavier server $s_j^{\ALG}$ for $j > i$ at the same
time.
\item \label{prtB} During each phase, $\ALG$ moves its heaviest server $s_k^{\ALG}$ at least $n_k$ times.
\end{enumerate}
These two properties already imply a lower bound on the competitive ratio of
$\ALG$ of order $n_k$, whenever the weights of the servers are well separated, i.e.
$w_{i+1} \geq n_k \cdot \sum_{j=1}^i w_j$ for each $1\leq i<k$. Here $n_k \geq 2^{2^{k-4}}$ is the number of candidate points for $s_k^{\ADV}$.
In the following section we show how each phase is defined using strategies. We
conclude the proof of Theorem~\ref{thm:lb} in Section \ref{sec:lb_thm}.
\subsection{Definition of Strategies}
\label{sec:lb-str}
Each phase is created using $k$ adaptive \textit{strategies} $S_{1},\dotsc,S_k$, where $S_1$ is the simplest one and $S_{i+1}$ consists of several executions of $S_i$. An execution of strategy $S_i$ for $1 \leq i < k$ corresponds to a subsequence of requests where $\ALG$ moves only servers $s_1,\dotsc,s_{i}$. Whenever $\ALG$ moves some server $s_j$, for $j >i$, the execution of $S_i$ ends. An execution of $S_k$ ends only when $\ALG$ moves its heaviest server to the location of $s_k^{\ADV}$.
We denote by $S_i(P)$ an execution of strategy $S_i$, with a set of requested
points $P$, where $|P| = n_{i+1}$. We start by defining the strategy of the highest level $S_k$. An execution $S_k(P)$ defines a \textit{phase} of the request sequence. We make sure that if $p$ is the location of $s_k^{\ALG}$ when the execution starts, then $p \notin P$.
\begin{algorithm2e}
\SetAlgoRefName{}
\SetAlgorithmName{Strategy $S_k(P)$}{}{}
\caption{\ }
partition $P$ into $T$ and $B$ of size $n_k$ each arbitrarily\;
$B' := \emptyset$\;
\While{$T \neq \emptyset$}{
Run $S_{k-1}(T\cup B')$ until $\ALG$ moves $s_k$\;
$p := \text{new position of $s_k^{\ALG}$}$\;
$T := T \setminus \{p\}$\;
$B' := \text{arbitrary subset of $B \setminus \{p\}$
of size $n_{k}-|T|$}$\;
}
Terminate Phase
\end{algorithm2e}
Intuitively, we can think of $T$ as the set of candidate locations for $s_k^{\ADV}$.
The set $B'$ is just a padding of new points used to construct a set $T \cup B'$ of size
$n_k$ as an argument for $S_{k-1}$. Whenever $s_k^{\ALG}$ is placed
at some point $p \in T$, we remove it from $T$; otherwise $T$ does not change. We then update $B'$ such that $|T \cup B'| = n_k$ and $p \notin B'$. This way, we make sure that $p$ is never requested as long as $s_k^{\ALG}$ stays there.
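The bookkeeping above can be simulated in a few lines. The following Python sketch is illustrative only (the randomly playing $\ALG$ and the omission of the condition $p \notin B'$ are our simplifications); it demonstrates property~(\ref{prtB}): the phase cannot end before $s_k^{\ALG}$ moves at least $n_k$ times, since each move of $s_k^{\ALG}$ removes at most one point from $T$.

```python
import random

def simulate_phase(points, n_k, seed=0):
    """Simulate the T / B' bookkeeping of strategy S_k; returns how many
    times the (randomly playing) ALG moved s_k during the phase."""
    rng = random.Random(seed)
    assert len(points) == 2 * n_k
    T, B = set(points[:n_k]), set(points[n_k:])
    moves = 0
    while T:
        B_pad = set(sorted(B)[: n_k - len(T)])   # B' with |T U B'| = n_k
        p = rng.choice(sorted(T | B_pad))        # ALG moves s_k once per S_{k-1} run
        moves += 1
        T.discard(p)   # a candidate is eliminated only when ALG hits T
    return moves

assert simulate_phase(list(range(16)), 8) >= 8   # at least n_k moves per phase
```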
We now define the strategies $S_i$ for $1 < i < k$. An execution of $S_i(P)$ runs several consecutive instances of $S_{i-1}$. We first describe how we choose the set $P'$ for each execution of $S_{i-1}(P')$, such that $P' \subset P$ and $|P'| = n_i$.
We use the construction described in Section~\ref{sec:int_lb}. In particular, we choose an arbitrary set of points $M \subset P$ called the {\em mask}, and we partition $P \setminus M$ into $|M|$ disjoint sets of equal size, each one associated with a point $q \in M$. We denote by $Q_q$ the set associated with point $q \in M$. All executions of strategy $S_{i-1}$ have as an argument a set $P_q = (M \setminus \{q\}) \cup Q_q$, for some $q \in M$. We discuss the sizes of $M$ and $Q_q$ later on. Before describing the strategies, we observe that Lemma~\ref{lem:2props} implies the following two facts regarding those sets:
\begin{observation}
For each point $p\in P$ there is a set $P_q$ such that $p \notin P_q$. If $p\in M$, this set is $P_p$. Otherwise, $p$ belongs to $Q_q$ for some $q\in M$ and then we can choose any $P_{q'}$ for $q'\ne q$.
\end{observation}
\begin{observation}\label{obs:lb_2pages}
For any $p\in P$ there is a point $\bar{p}$ such that each $P_q$ contains either $p$ or
$\bar{p}$. In particular, if $p\in Q_q$ for some $q$, we choose $\bar{p}=q$, otherwise $p\in M$ and then we can choose any $\bar{p} \in Q_p$.
\end{observation}
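Both observations are easy to verify by brute force for small parameters. In the following Python sketch, the helper \texttt{decompose} is our own illustration, with the sizes $|M| = \lceil n_i/2\rceil + 1$ and $|Q_q| = \lfloor n_i/2\rfloor$ chosen as later in this section; it checks both observations for $n_i = 4$, $n_{i+1} = 9$.

```python
def decompose(P, n_i):
    """Mask decomposition: |M| = ceil(n_i/2) + 1, |Q_q| = floor(n_i/2)."""
    m_size, q_size = (n_i + 1) // 2 + 1, n_i // 2
    M, rest = P[:m_size], P[m_size:]
    Q = {q: rest[j * q_size:(j + 1) * q_size] for j, q in enumerate(M)}
    return M, {q: (set(M) - {q}) | set(Q[q]) for q in M}

P = list(range(9))                 # n_{i+1} = 9 points, n_i = 4
M, P_sets = decompose(P, 4)
assert all(len(S) == 4 for S in P_sets.values())                  # |P_q| = n_i
assert all(any(p not in S for S in P_sets.values()) for p in P)   # first observation
for p in P:                                                       # second observation
    assert any(all(p in S or b in S for S in P_sets.values()) for b in P if b != p)
```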
\begin{algorithm2e}[H]
\SetAlgoRefName{}
\SetAlgorithmName{Strategy $S_i(P)$, where $1<i<k$}{}{}
\caption{\ }
Decompose $P$ into mask $M$ and sets $Q_q$\;
For each $q \in M$, denote $P_q := (M \setminus \{q\}) \cup Q_q$\;
\Repeat{$\ALG$ moves $s_{i+1}$ or some heavier server}{
$p := \text{position of $s_i^{\ALG}$}$\;
Choose any $P_q$, s.t. $p\notin P_q$,
and run $S_{i-1}(P_q)$ until $\ALG$ moves $s_i$\;
}
\end{algorithm2e}
Finally, strategy $S_1$ takes as an argument a set of $n_2=2$ points and requests them in an alternating fashion.
\begin{algorithm2e}[H]
\SetAlgoRefName{}
\SetAlgorithmName{Strategy $S_1(\{p,q\})$}{}{}
\caption{\ }
\Repeat{$\ALG$ moves $s_2$ or some heavier server}{
If $s_1^{\ALG}$ is at $q$: request $p$\;
Otherwise: request $q$\;
}
\end{algorithm2e}
Observe that an execution of a strategy $S_i$, for $1 \leq i < k$, ends only if $\ALG$ moves some heavier server. This means that if $\ALG$ decides never to move any heavier server, the execution continues until the end of the request sequence. Moreover, by construction of the strategies $S_1, \dotsc, S_k$, we have the following:
\begin{observation}
For $1 \leq i \leq k$, if server $s_i^{\ALG}$ is located at some point $p$, then $p$
is never requested until $s_i^{\ALG}$ moves elsewhere.
\end{observation}
\paragraph{Cardinality of sets.} We now determine the size of the sets used by $S_i$ for $1 < i \leq k $. For $2 \leq i \leq k$, recall that all arguments of $S_{i-1}$ should have size $n_i$. In order to satisfy this, we choose the sizes as in Section~\ref{sec:int_lb}, i.e. $|M| = \lceil n_i/2\rceil + 1$ and $|Q_q| = \lfloor n_i/2 \rfloor $. It is clear that $ |P_q| = |(M\setminus \{q\})| + |Q_q| = n_i $. Recall that $P = M \cup (\bigcup_{q\in M} Q_q)$, and therefore we have
\begin{equation*}
n_{i+1} = |P| = (\lceil n_i/2\rceil + 1)
+ (\lceil n_i/2\rceil + 1)\lfloor n_i/2 \rfloor
= (\lceil n_i/2\rceil + 1)(\lfloor n_i/2 \rfloor+1)
\geq n_i^2/4.
\end{equation*}
Therefore, by choosing $n_2 = 2$ we have $n_3=4$ and for $k \geq 4$
\begin{equation}
\label{eq:nklb}
n_k \geq 2^{2^{k-4}+2} \geq 2^{2^{k-4}}.
\end{equation}
Last, for strategy $S_k$ we have that $n_{k+1} = 2 n_k$.
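As a numerical sanity check of \eqref{eq:nklb}, the recurrence can be iterated directly in integer arithmetic (an illustrative Python sketch, not part of the proof):

```python
def n_value(k):
    """n_2 = 2 and n_{i+1} = (ceil(n_i/2) + 1) * (floor(n_i/2) + 1)."""
    n = 2
    for _ in range(k - 2):
        n = ((n + 1) // 2 + 1) * (n // 2 + 1)
    return n

assert n_value(3) == 4 and n_value(4) == 9
# doubly exponential growth, matching the bound (eq:nklb)
assert all(n_value(k) >= 2 ** (2 ** (k - 4) + 2) for k in range(4, 11))
```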
\paragraph{Service pattern associated with the request sequence.} We associate
the request sequence with a service pattern $\mathcal{I}$,
which is constructed as follows: For each execution of strategy $S_i$, we create
one interval $I \in \mathcal{I}_i $. We define $\mathcal{I} = \mathcal{I}_1 \cup
\cdots \cup \mathcal{I}_k$. Clearly, $\mathcal{I}$ is hierarchical.
The next lemma gives a characterization of the request lists of intervals $I \in \mathcal{I}$.
\begin{lemma}\label{lem:pages_useful}
Let $I$ be the interval corresponding to a particular execution $S_i(P)$.
If there is an interval $J$ at level $j > i$ such that $J$ is labeled
by some point $p\in P$, then there is a feasible labeling for $I$ and its
descendants covering all requests issued during the lifetime of $I$.
In other words, all points $p \in P$ are contained in the request list $L(I)$ as singletons.
\end{lemma}
\begin{proof}
We prove the lemma by induction.
The lemma holds trivially for
$S_1$:
The requests are issued at only two different points $p$ and $q$.
Whenever $J$ has assigned $p$, we assign $q$ to $I$ and vice versa. In both
cases all the requests during the lifetime of $I$ are covered either by $I$ or
by $J$.
Now, assuming that the lemma holds for level $i-1$, let us prove it also for
level $i$. Let $p \in P$ be the point assigned to $J$, the ancestor of $I$.
By Observation~\ref{obs:lb_2pages}, we know that there is a $\bar{p}\in P$
such that each $P_q$ contains either $p$ or $\bar{p}$. We assign $\bar{p}$ to
$I$ and this satisfies the condition of the inductive hypothesis for all
children of $I$, as all those instances have one of $P_q$ as an input.
\end{proof}
Moreover, by construction of strategy $S_k$, we get the following lemma.
\begin{lemma}
\label{lem:lb-feas}
The service pattern $\mathcal{I}$ associated with the request sequence is feasible.
\end{lemma}
\begin{proof}
Let $\ensuremath{\mathcal{I}}_P$ be the service pattern associated with an execution of $S_k(P)$. We show that $\ensuremath{\mathcal{I}}_P$ is feasible. Since the request sequence consists of executions of $S_k$, the lemma then follows.
Let $p$ be the last point which remained in $T$ during the execution of
$S_k(P)$. Then, all the children of the $k$th level interval correspond to executions of the form $S_{k-1}(P')$ where
$p \in P'$. By Lemma~\ref{lem:pages_useful}, by assigning $p$ to the $k$th level interval, there is a feasible
labeling for each child interval and all their descendants. We get that the service pattern $\ensuremath{\mathcal{I}}_P$ associated with strategy $S_k(P)$ is feasible.
\end{proof}
\subsection{Proof of Theorem~\ref{thm:lb}}\label{sec:lb_thm}
We now prove Theorem~\ref{thm:lb}. We first define the moves of $\ADV$ and then proceed to the final calculation of the lower bound on the competitive ratio of $\ALG$. Recall that the request sequence consists of arbitrarily many phases, where each phase is an execution of strategy $S_k$.
\paragraph{Moves of the adversary.} Initially, we allow the adversary to move all its servers, to prepare for the first phase. The cost of this move is at most $\sum_{i=1}^{k} w_i $. Since the request sequence can be arbitrarily long, this additive term does not affect the competitive ratio and we ignore it for the rest of the proof. It remains to describe the moves of the adversary during the request sequence.
Consider the service pattern $\mathcal{I}$ associated with the request sequence. By Lemma~\ref{lem:lb-feas}, $\mathcal{I}$ is feasible. We associate with the adversary a feasible assignment of $\mathcal{I}$. This way, the moves of servers $s^{\ADV}_i$ for all $i$ are completely determined by $\mathcal{I}$. We get the following lemma.
\begin{lemma}
\label{lem:prta}
At each time $t$, $\ADV$ does not move server $s_i^{\ADV}$, for $i=1,\ldots,k-1$, unless $\ALG$ moves some heavier server $s_j^{\ALG}$ for $j > i$.
\end{lemma}
\begin{proof}
Consider the service pattern $\mathcal{I}$ associated with the request sequence. Each execution of strategy $S_i$ is associated with an interval $I_i \in \mathcal{I}$ at level $i$. The adversary moves $s_i^{\ADV}$ if and only if interval $I_i$ ends. By construction, interval $I_i$ ends if and only if its corresponding execution of $S_i$ ends. An execution of $S_i$ ends if and only if $\ALG$ moves some heavier server $s_j$ for $j >i$.
We get that at any time $t$, server $s_i^{\ADV}$ moves if and only if $\ALG$ moves some server $s_j$ for $j >i$.
\end{proof}
\paragraph{Calculation of the lower bound.} Let $\cost(s_i^{\ALG})$ and $\cost(s_i^{\ADV})$ denote the cost due to moves of the $i$th server of $\ALG$ and $\ADV$ respectively. Without loss of generality\footnote{It is easy to see that, if $\ALG$ uses only its lightest server $s_1^{\ALG}$, it is not competitive: the whole request sequence is an execution of $S_1(\{p,q\})$, so the adversary can serve all requests at cost $w_2 + w_1$ by moving $s_2^{\ADV}$ to $q$ and $s_1^{\ADV}$ to $p$ at the beginning. $\ALG$ pays 1 for each request, thus its cost equals the length of the request sequence, which implies an unbounded competitive ratio for $\ALG$.}, we assume that $ \sum_{i=2}^{k} \cost(s_i^{\ALG}) > 0 $. Recall that we assume a strong separation between the weights of the servers. Namely, we have $w_1 = 1$ and $w_{i+1} = n_k \cdot \sum_{j=1}^i w_j$. This, combined with Lemma~\ref{lem:prta}, implies that
\begin{equation}
\label{eq:prta}
\sum_{i=1}^{k-1} \cost(s_i^{\ADV}) \leq \big( \sum_{i=2}^{k} \cost(s_i^{\ALG}) \big)/n_k .
\end{equation}
Moreover, by construction of strategy $S_k$, a phase of the request sequence ends only after $\ALG$ has moved its heaviest server $s_k^{\ALG}$ at least $n_k$ times. In each phase, $\ADV$ moves $s_k^{\ADV}$ only once, at the end of the phase. Thus we get that
\begin{equation}
\label{eq:prtb}
\cost(s_k^{\ADV}) \leq \cost(s_k^{\ALG})/n_k.
\end{equation}
Overall, using \eqref{eq:prta} and \eqref{eq:prtb} we get
\begin{align*}
\cost(\ADV) & = \sum_{i=1}^{k-1} \cost(s_i^{\ADV}) + \cost(s_k^{\ADV}) \leq \big( \sum_{i=2}^{k} \cost(s_i^{\ALG}) \big)/n_k + \cost(s_k^{\ALG})/ n_k \\
& = \big( \sum_{i=2}^{k-1} \cost(s_i^{\ALG}) + 2 \cost(s_k^{\ALG}) \big) /n_k \leq 2 \cdot \cost(\ALG)/n_k.
\end{align*}
Therefore, the competitive ratio of $\ALG$ is at least $n_k/2$, which by \eqref{eq:nklb} is $\Omega(2^{2^{k-4}})$.
\hfill\qedsymbol
\section{Dichotomy theorems for service patterns}
\label{sec:dichotomy}
The theorems proved in this section are matching counterparts to
Theorem~\ref{thm:int_lb} ---
they provide an upper bound for the size of the request lists in a fixed
service pattern.
The first one, Theorem~\ref{thm:dichotomy}, shows that the parameter $n(k,1)$,
as defined in Section~\ref{sec:intervals}, is at most doubly exponential in $k$.
This bound is later used in Section~\ref{sec:ub} to prove an upper bound
for the case of the weighted
$k$-server problem where all the servers might have a different weight.
Moreover, we also consider a special case when there are only $d<k$ different
weights $w_1, \dotsc, w_d$. Then, for each $i=1, \dotsc, d$, we have
$k_i$ servers of weight $w_i$. This situation can be modeled using a service
pattern with only $d$ levels, where each interval at level $i$ can be labeled by
at most $k_i$ pages. For such service patterns, we can get a stronger upper
bound, which is singly exponential in $d$, $k^2$ and the product $\prod k_i$,
see Theorem~\ref{thm:dichotomy-d}. This theorem is later used in
Section~\ref{sec:ubd} to prove a performance guarantee for $\WFA$ in this
special setting.
\subsection{General setting of weighted $k$-server problem}
\label{sec:dichotomy_k}
Recall the definitions from Section~\ref{sec:intervals},
where we denote by $n(k,1)$ the maximum possible number of singletons contained in
a joint request list of children of some $k$th level interval.
\begin{theorem}[Dichotomy theorem for $k$ different weights]
\label{thm:dichotomy}
Let $\ensuremath{\mathcal{I}}$ be a service pattern of $k$ levels and $I\in \ensuremath{\mathcal{I}}$ be an arbitrary interval
at level $k$. Let $Q\subseteq U$ be the set of feasible labels for $I$.
Then either $Q = U$, or
$|Q| \leq n(k,1)$, where $n(k,1)$ is at most $2^{2^{k+3\log k}}$.
\end{theorem}
First, we need to slightly extend our definitions of $f(\ell,t)$ and $n(\ell,t)$
from Section~\ref{sec:intervals}.
Let $I$ be an arbitrary interval at level $\ell$ and $J_1, \dotsc, J_m$ be its
children at level $\ell-1$.
We define $f(\ell, t, P)$ to be the maximum possible number of $t$-tuples in the
request list $L(I)$ such that all those $t$-tuples contain some
predefined set $P$, and we define $f(\ell, t, h)$ as the maximum such number over
all sets $P$ of $h$ pages. For example, note that we have $f(\ell, t, t) = 1$
for any $\ell$ and $t$.
In a similar way, we define $n(\ell, t, h)$ as the maximum possible number of
$t$-tuples in $L(J_1, \dotsc, J_m)$ each containing a predefined set of $h$
pages. The key part of this section is the proof of the following lemma.
\begin{lemma}\label{lem:n(l,t,h)}
Let $I$ be an interval at level $\ell\geq 2$, and let $J_1, \dotsc, J_m$ be its
children.
The number $n(\ell,t,h)$ of distinct $t$-tuples in the joint request list
$L(J_1, \dotsc, J_m)$, each containing $h$ predefined pages, can be bounded as
follows:
\[ n(\ell,t,h) \leq 2^{(\ell-1)(\ell-1+t)^2 \cdot 2^{\ell-1+t-h}}. \]
\end{lemma}
First, we show that this lemma directly implies Theorem~\ref{thm:dichotomy}.
\begin{proof}[Proof of Theorem~\ref{thm:dichotomy}]
Let us denote $J_1, \dotsc, J_m$ the set of children of $I$.
If their joint request list contains only the empty set, then there is a feasible
assignment $\alpha$ which gives no label to $I$. In this case, $I$ can be
feasibly labeled by an arbitrary page, and we have $Q = U$.
Otherwise, the feasible labels for $I$ are precisely the pages which are
contained in $L(J_1, \dotsc, J_m)$ as $1$-tuples (singletons), whose number is
bounded by $n(k,1)$.
Therefore, using Lemma~\ref{lem:n(l,t,h)}, we get
\[
n(k,1) = n(k,1,0) \leq 2^{(k-1)(k-1+1)^2\, 2^{(k-1+1)}}
\leq 2^{2^{k+3\log k}}.
\]
The last inequality holds because
$(k-1)k^2 \leq k^3 \leq 2^{3\log k}$.
\end{proof}
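The last inequality can be double-checked exactly in integers, since $2^{k+3\log k} = k^3\,2^k$ (an illustrative check, not part of the proof):

```python
# (k-1) * k^2 <= k^3, hence (k-1) k^2 2^k <= k^3 2^k = 2^(k + 3 log k)
assert all((k - 1) * k ** 2 * 2 ** k <= k ** 3 * 2 ** k for k in range(1, 200))
```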
Lemma~\ref{lem:n(l,t,h)} is proved by induction on $\ell$.
However, to establish a relation between $n(\ell,t,h)$ and $n(\ell-1,t,h)$,
we use $f(\ell-1,t,h)$ as an intermediate step.
We need the following observation.
\begin{observation}\label{obs:f_vs_n}
Let $I$ be an interval at level $\ell\geq 2$, and let $J_1, \dotsc, J_m$ denote
all its children. Then we have
\begin{equation}\label{eq:f_vs_n}
f(\ell,t,h) \leq (t-h+1)\,n(\ell,t+1,h).
\end{equation}
\end{observation}
\begin{proof}
Observation~\ref{obs:C_to_I_list} already shows that a $t$-tuple
$A$ belongs to $R(I)$ if and only if there is a $(t+1)$-tuple
$B \in R(J_1, \dotsc, J_m)$ such that $A\subset B$.
If $B$ is not an inclusion-wise minimal member of $R(J_1, \dotsc, J_m)$,
then there is $B'\subsetneq B$ in $R(J_1, \dotsc, J_m)$ and a point
$p$ such that $(B'\setminus \{p\}) \subsetneq A$ also belongs to $R(I)$. This
implies that $A$ does not belong to $L(I)$. Therefore we know that each
$t$-tuple $A\in L(I)$ is a subset of some $B \in L(J_1, \dotsc, J_m)$.
On the other hand, it is easy to see that each $B \in L(J_1, \dotsc, J_m)$
contains precisely $t+1$ distinct $t$-tuples, each created by removing one page
from $B$. If we want all of them to contain
a predefined set $P$ of $h$ pages, then surely $P$ has to be contained in $B$,
and there can be precisely $t+1-h$ such $t$-tuples, each of them
equal to $B\setminus \{p\}$ for some $p\in B\setminus P$.
Therefore we have $f(\ell,t,h)\leq (t+1-h)\,n(\ell,t+1,h)$.
\end{proof}
Therefore, our main task is to bound
$n(\ell,t,h)$ with respect to the values of $f(\ell-1,t',h')$.
\paragraph{Two simple examples and the basic idea.}
Let $I$ be an interval at level $\ell$ and $J_1, \dotsc, J_m$ its children.
Each $t$-tuple in the joint request list $L(J_1, \dotsc, J_m)$
needs to be composed of smaller tuples from $L(J_1), \dotsc, L(J_m)$
(see Observation~\ref{obs:list_prod})
whose numbers are bounded by function $f(\ell-1,t',h')$.
However, to make use of the values $f(\ell-1,t',h')$, we need to consider the
ways in which a $t$-tuple could be created.
To illustrate our basic approach, we consider the following simple
situation.
Let us assume that $L(J_1), \dotsc, L(J_m)$ contain only singletons.
Recall that a pair $\{p,q\}$ can belong to $L(J_1, \dotsc, J_m)$
only if each list $L(J_i)$ contains either $p$ or $q$ as a singleton,
see Observation~\ref{obs:list_prod}.
Therefore, one of them must be contained in
at least half of the lists and we call it a ``popular'' page.
Each list has size at most $f(\ell-1,1,0)$, and therefore
there can be at most $2 f(\ell-1, 1,0)$ popular pages contained in the lists
$L(J_1), \dotsc, L(J_m)$.
A fixed popular page $p$ can be extended to a pair $\{p,q\}$ by at most
$f(\ell-1,1,0)$ choices for $q$, because $q$ has to lie in all the lists
not containing $p$.
This implies that there can be at most $2f(\ell-1,1,0)\cdot f(\ell-1,1,0)$
pairs in $L(J_1, \dotsc, J_m)$.
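This counting argument can be verified by brute force on a toy instance. In the Python sketch below, the lists and the helper \texttt{joint\_pairs} are our own illustrations; \texttt{f} plays the role of $f(\ell-1,1,0)$.

```python
from itertools import combinations

def joint_pairs(lists):
    """Pairs {p, q} such that every list contains p or q (lists of singletons)."""
    universe = sorted(set().union(*lists))
    return [frozenset(c) for c in combinations(universe, 2)
            if all(c[0] in L or c[1] in L for L in lists)]

lists = [{1, 2}, {1, 3}, {2, 3}, {1, 4}]     # toy request lists of singletons
f = max(len(L) for L in lists)               # stands in for f(l-1, 1, 0)
pairs = joint_pairs(lists)
assert pairs == [frozenset({1, 2}), frozenset({1, 3})]
assert len(pairs) <= 2 * f * f               # the bound 2 f(l-1,1,0)^2
```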
Here is a slightly more complicated example.
We estimate, how many $t$-tuples $A$ can be contained in
$L(J_1, \dotsc, J_m)$, such that the following holds:
$A = A_1 \cup A_2$, where $A_1$ is a $t_1$-tuple from
lists $L(J_1), \dotsc, L(J_{m-1})$ and $A_2$ is a $t_2$-tuple from $L(J_m)$.
We denote $h := |A_1\cap A_2|$.
Then, the number of $t$-tuples in $L(J_1, \dotsc, J_m)$ created
from $L(J_1), \dotsc, L(J_m)$ in this way
cannot be larger than $f(\ell-1, t_1, 0) \cdot f(\ell-1, t_2, h)$,
since the choice of $A_1$ already determines the $h$ pages in $A_2$.
However, the $t$-tuples in $L(J_1, \dotsc, J_m)$
can be created in many complicated ways.
To make our analysis possible, we classify each tuple according to its
{\em specification}, which describes the way it was generated
from $L(J_1), \dotsc, L(J_m)$.
The main idea of our proof is to
bound the number of $t$-tuples which correspond to a given
specification. Then, knowing the number of specifications and having the bounds
for each $L(J_i)$ from the induction, we can get an upper
bound for the overall number of $t$-tuples.
\paragraph{Specifications of $t$-tuples.}
For a fixed $t$-tuple $A\in L(J_1, \dotsc, J_m)$,
we construct its specification $S$ as follows.
First, we sort the pages in $A$ lexicographically, denoting them
$p_1, \dotsc, p_t$.
Let $A_1$ be the subset of $A$ contained in the largest number of lists
$L(J_1), \dotsc, L(J_m)$ as a tuple.
Then, by the pigeonhole principle, $A_1$ lies in at least a
$1/2^t$ fraction of the lists, since there are only $2^t$ subsets of $A$
and each list has to contain at least one of them.
We define $T_1$ as the set of indices of the pages in $A_1$, i.e.,
$T_1 = \{i\;|\: p_i\in A_1\}$. Set $T_1$ becomes the first part of the
specification $S$.
Having already defined $A_1, \dotsc, A_j$, we choose $A_{j+1}$ from the lists
which do not contain any subset of $A_1\cup \dotsb \cup A_j$.
We choose $A_{j+1}$ to be the tuple which is contained in the largest number of
them and set $T_{j+1} = \{i\;|\: p_i \in A_{j+1}\}$.
This way we get two important properties.
First, $A_{j+1}$ contains at least one page which is not present in
$A_1\cup\dotsb\cup A_j$.
Second, at least a $1/2^t$ fraction of the lists which do not contain any subset of
$A_1\cup\dotsb\cup A_j$ contain $A_{j+1}$ as a tuple.
We stop after $n_S$ steps, as soon as $A_1\cup\dotsb\cup A_{n_S} = A$
and each of the
lists contains some subset of $A_1\cup\dotsb\cup A_{n_S}$.
We define the specification $S$ as an ordered tuple $S=(T_1, \dotsc, T_{n_S})$.
Note that $n_S\leq t$, since each $A_{j+1}$ contains a page not yet present in
$A_1\cup \dotsb \cup A_j$.
Let us denote by $\ensuremath{\mathcal{S}}_t$ the set of all possible specifications of $t$-tuples.
The size of $\ensuremath{\mathcal{S}}_t$ can be bounded easily: there are at most $t$ sets contained
in each specification, each of them can be chosen from at most $2^t$
subsets of $\{1, \dotsc, t\}$,
implying that $|\ensuremath{\mathcal{S}}_t| \leq (2^t)^t = 2^{t^2}$.
Let us denote by $n(\ell, S, h)$ the number of $t$-tuples in $L(J_1, \dotsc, J_m)$
with $h$ predefined pages which correspond to the specification $S$.
Since each $t$-tuple $A$ has a (unique) specification, we have the following
important relation:
\begin{equation}\label{eq:n(l,t,h)}
n(\ell, t, h) \leq \sum_{S\in \ensuremath{\mathcal{S}}_t} n(\ell, S, h).
\end{equation}
\paragraph{Number of $t$-tuples per specification.}
First, let us consider a simpler case when $h = 0$.
Let $S = (T_1, \dotsc, T_{n_S})$ be a fixed specification of $t$-tuples.
For each $j=1,\dotsc, n_S$, we define
$t_j := |T_j|$, and $d_j := |T_j \setminus \bigcup_{i=1}^{j-1} T_i|$ the number
of new indices of $T_j$ not yet contained in the previous sets $T_i$.
There can be at most $2^t f(\ell-1, t_1, 0)$ choices for a
$t_1$-tuple $A_1$ corresponding to the indices of $T_1$.
This can be shown by a volume argument: each such tuple has to be
contained in at least $1/2^t$ fraction of $L(J_1),\dotsc, L(J_m)$, and each
list can contain at most $f(\ell-1, t_1, 0)$ $t_1$-tuples.
By choosing $A_1$, some of the request lists are already covered, i.e., the
ones that contain $A_1$ or its subset as a tuple.
According to the specification, $A_2$ has to be contained in at least $1/2^t$
fraction of the lists which are not yet covered by $A_1$.
However, the choice of $A_1$ might have already determined some pages of
$A_2$ unless $t_2=d_2$.
Therefore, the number of choices for $A_2$ can be at
most $2^t f(\ell-1, t_2, t_2-d_2)$.
In total, we get the following bound:
\begin{equation}
n(\ell, S, 0) \leq \prod_{i=1}^{n_S} 2^t f(\ell-1, t_i, t_i-d_i).
\end{equation}
For the inductive step, we also need to consider the case when some
pages of the tuple are fixed in advance.
Let $P$ be a predefined set of $h$ pages.
We want to bound the maximum possible number of $t$-tuples
containing $P$ in $L(J_1, \dotsc, J_m)$.
However, two different $t$-tuples containing $P$ can have the pages of $P$
placed at different indices, which affects the number of pre-fixed indices in
each $T_i$.
Therefore, we first choose the set $C$ of $h$ indices which
will be occupied by the pages of $P$. There are $\binom{t}{h}$ choices for $C$,
and, by definition of the specification,
the pages of $P$ have to be placed at those indices in alphabetical order.
For a fixed $C$, we denote by $\bar{d}_i$ the number of indices in $T_i$ that are
not predetermined, i.e.,
$\bar{d}_i := |T_i \setminus (C \cup T_1 \cup \dotsb \cup T_{i-1})|$.
We get the following inequality:
\begin{equation}\label{eq:n(l,S,h)}
n(\ell, S, h)
\leq \sum_{C\in \binom{[t]}{h}}
\prod_{i=1}^{n_S} 2^t f(\ell-1, t_i, t_i - \bar{d}_i)
\leq \sum_{C\in \binom{[t]}{h}} 2^{t^2}
\prod_{i=1}^{n_S} f(\ell-1, t_i, t_i - \bar{d}_i).
\end{equation}
Now we are ready to prove Lemma~\ref{lem:n(l,t,h)}.
\paragraph{Proof of Lemma~\ref{lem:n(l,t,h)}.}
Combining equations~\eqref{eq:n(l,t,h)} and~\eqref{eq:n(l,S,h)},
we can bound $n(\ell, t, h)$ with
respect to $f(\ell-1, t', h')$. We get
\begin{equation}\label{eq:n(l,t,h)_final}
n(\ell, t, h) \leq \sum_{S\in\ensuremath{\mathcal{S}}_t} \sum_{C\in \binom{[t]}{h}}
2^{t^2} \prod_{i=1}^{n_S} f(\ell-1, t_i, t_i-\bar{d}_i).
\end{equation}
Now, we proceed by induction.
We bound this quantity using Observation~\ref{obs:f_vs_n} with respect to values
of $n(\ell-1,t',h')$, which we know from the inductive hypothesis.
In the base case $\ell=2$, we use \eqref{eq:n(l,t,h)_final}
to bound the value of $n(2,t,h)$.
Here, we have $f(1,t',h') = t'-h'+1$ for the following reason.
If a leaf interval $I$ has a $t'$-tuple in its request list
$L(I)$, there must be a set $Q$ of $t'+1$ distinct pages requested during the
time interval of $I$. Then, $L(I)$ contains precisely $t'+1$ distinct
$t'$-tuple depending on which page becomes a label of $I$. Those $t$-tuples are
$Q\setminus \{q\}$ for $q\in Q$.
However, if we count only $t'$-tuples which contain some predefined
set $P\subseteq Q$ of $h'$ pages, there can be only $t'+1-h'$ of them,
since $Q\setminus \{q\}$ contains
$P$ if and only if $q$ does not belong to $P$.
Therefore, for any choice of $S$ and $C$, we have
$f(1,t_i, t_i-\bar{d}_i) = t_i - (t_i-\bar{d}_i) + 1 \leq t-h+1$,
since $t-h = \sum_{i=1}^{n_S}\bar{d}_i$. If $t=h$, we clearly have
$n(\ell,t,h) = 1$.
Otherwise, we use \eqref{eq:n(l,t,h)_final} with the following estimates applied:
the size of $\ensuremath{\mathcal{S}}_t$ is at most $2^{t^2}$,
the number of choices for $C$ is at most $t^h$,
and $n_S$ is at most $t$. We get
\[ n(2, t, h)\leq 2^{t^2}\, t^h\, 2^{t^2}\, (t-h+1)^t
\leq 2^{4t^2} \leq 2^{t^2\, 2^{1+t-h}},
\]
where the first inequality holds since both $t^h$ and $(t-h+1)^t$ can be bounded
by $2^{t^2}$. The last inequality follows, since $2^{1+t-h}\geq 4$,
and this concludes the proof of the base case.
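The two estimates used for the base case can be checked exactly in integer arithmetic over a range of parameters (an illustrative Python check, not part of the proof):

```python
# base case: 2^(t^2) * t^h * 2^(t^2) * (t-h+1)^t <= 2^(4 t^2) <= 2^(t^2 * 2^(1+t-h))
for t in range(1, 30):
    for h in range(t + 1):
        lhs = 2 ** (t * t) * t ** h * 2 ** (t * t) * (t - h + 1) ** t
        assert lhs <= 2 ** (4 * t * t)
        if h < t:                                    # for h = t, n(l,t,h) = 1 anyway
            assert 4 * t * t <= t * t * 2 ** (1 + t - h)   # compare exponents
```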
Now we proceed to the case $\ell > 2$.
For a fixed $S$ and $C$, we bound the product inside
equation~\eqref{eq:n(l,t,h)_final}, and our goal is to get a bound independent
on the particular choice of $S$ and $C$. Using Observation~\ref{obs:f_vs_n},
we get
\[ \prod_{i=1}^{n_S} f(\ell-1, t_i, t_i-\bar{d}_i)
\leq \prod_{i=1}^{n_S} (t_i+1)\,n(\ell-1, t_i+1, t_i-\bar{d}_i).
\]
Now, we take the logarithm of this inequality and apply the
inductive hypothesis. We get
\[ \log \prod_{i=1}^{n_S} f(\ell-1, t_i, t_i-\bar{d}_i)
\leq \sum_{i=1}^{n_S} \log(t_i+1)
+ \sum_{i=1}^{n_S} (\ell-2)(\ell-2+t_i+1)^2\,
2^{(\ell-2) + (t_i+1) - (t_i-\bar{d}_i)}.
\]
This is at most
$t\log(t+1) + (\ell-2)(\ell-1+t)^2\,2^{\ell-1} \sum_{i=1}^{n_S} 2^{\bar{d}_i}$,
where the last sum cannot be larger than $2^{t-h}$, since we have
$\sum_{i=1}^{n_S} \bar{d}_i = t-h$ and $\sum 2^{x_i} \leq 2^{\sum x_i}$.
Now, we can get rid of all $t_i$ and $\bar{d}_i$, which depend on the
choice of $S$ and $C$. We can write the preceding inequality as follows:
\begin{equation}\label{eq:log_prod_f}
\log \prod_{i=1}^{n_S} f(\ell-1, t_i, t_i-\bar{d}_i)
\leq t\log(t+1) + (\ell-2)(\ell-1+t)^2\,2^{\ell-1 + t - h}.
\end{equation}
To finish the proof, we plug the bound from \eqref{eq:log_prod_f} to
\eqref{eq:n(l,t,h)_final}:
\[ n(\ell, t, h) \leq |\ensuremath{\mathcal{S}}_t|\cdot t^h \cdot 2^{t^2} \cdot
2^{t\log(t+1) + (\ell-2)(\ell-1+t)^2\,2^{\ell-1 + t - h}},
\]
where the size of $\ensuremath{\mathcal{S}}_t$ is at most $2^{t^2}$. Taking the
logarithm of this inequality, we get
\[ \log n(\ell,t,h) \leq t^2 + h \log t + t^2 + t\log(t+1)
+ (\ell-2)(\ell-1+t)^2\,2^{\ell-1 + t - h}
\leq (\ell-1)(\ell-1+t)^2\,2^{\ell-1 + t - h},
\]
which already implies the statement of the lemma.
To see why the last inequality holds,
note that $(\ell-1+t)^2$ is at least $t^2$, and
$2^{\ell-1 + t - h}$ is always at least $2^2$, since we have
$\ell \geq 3$ and $h \leq t$.
Therefore, the sum of the four smaller-order terms can be bounded by
$(\ell-1+t)^2\,2^{\ell-1 + t - h}$.
\hfill\qedsymbol
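The comparison of the four smaller-order terms with $(\ell-1+t)^2\,2^{\ell-1+t-h}$ can also be sanity-checked numerically over a range of parameters (illustrative only, not part of the proof):

```python
from math import log2

# t^2 + h log t + t^2 + t log(t+1)  <=  (l-1+t)^2 * 2^(l-1+t-h)  for l >= 3
for ell in range(3, 12):
    for t in range(1, 30):
        for h in range(t + 1):
            small = t * t + h * log2(t) + t * t + t * log2(t + 1)
            assert small <= (ell - 1 + t) ** 2 * 2 ** (ell - 1 + t - h)
```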
\subsection{Case of $d$ different weights}
\label{sec:d-dichotomy}
Now we prove a dichotomy theorem for the case of $d$ different weight classes.
For each $i= 1, \dotsc, d$, let $k_i$ denote the number of servers of weight
$w_i$, so that $k = k_1 + \dotsb + k_d$.
In the rest of this section we assume that $k_1, \dotsc, k_d$ are fixed
and our estimations of $f(\ell, t, h)$ and $n(\ell, t, h)$ will implicitly
depend on their values.
We consider a service pattern $\ensuremath{\mathcal{I}}$ consisting of $d$
levels $\ensuremath{\mathcal{I}}_1, \dotsc, \ensuremath{\mathcal{I}}_d$, where an interval at level $i$ has a label
consisting of at most $k_i$ different pages describing the position of the $k_i$
servers of weight $w_i$.
The cost of $\ensuremath{\mathcal{I}}$ is computed as $\sum_{i=1}^d k_iw_i (|\ensuremath{\mathcal{I}}_i|-1)$, and the
assignment function $\alpha$ labels each interval in $\ensuremath{\mathcal{I}}_i$ with a set
$C_i$ of at most $k_i$ points.
The definition of the request sets and the request lists stays similar to the
general setting.
We say that a tuple of pages $S$ belongs to the request set $R(I)$ of an
interval $I$, if there is a feasible assignment $\alpha$ which labels the
ancestors of $I$ only using pages of $S$, and again we define $L(I)$ to be the
set of inclusion-wise minimal tuples from $R(I)$.
Observation~\ref{obs:list_prod} holds as it is stated in
Section~\ref{sec:intervals}. For the Observation~\ref{obs:C_to_I_list}, we have
the following variant.
\begin{observation}\label{obs:Cd_to_I_list}
Let $J_1, \dotsc, J_m$ denote all the children of some $\ell$th level
interval $I$.
A tuple $Q$ belongs to $R(I)$ if and only if there is a set $C$ of $k_\ell$
pages such that $Q \cup C$ belongs to $R(J_1, \dotsc, J_m)$.
\end{observation}
The statement of the theorem for this case is a bit more complicated due to the
following phenomenon.
Let $I$ be a top-level interval such that the joint request list of its children
contains precisely one singleton $\{p\}$.
Then any $k_d$-tuple can be feasibly assigned to
$I$, whenever it contains $p$. This way, there are potentially infinitely many
feasible labels for $I$, but the labels are not yet arbitrary: they all have to
contain $p$, which makes them easy to identify.
Therefore we state the theorem in the following way.
\begin{theorem}[Dichotomy theorem for $d$ different weights]
\label{thm:dichotomy-d}
Let $\ensuremath{\mathcal{I}}$ be a service pattern, $I\in \ensuremath{\mathcal{I}}$ be an arbitrary interval at
level $d$ and let us denote $\ensuremath{\mathcal{Q}} = \ensuremath{\mathcal{Q}}_1 \cup \dotsb \cup \ensuremath{\mathcal{Q}}_{k_d}$ a set
of labels for $I$ satisfying the following:
\begin{itemize}
\item Each $\ensuremath{\mathcal{Q}}_t$ contains feasible labels $T$ for $I$, such that $|T|=t$.
\item Whenever $T$ is in $\ensuremath{\mathcal{Q}}_t$, no $T'$ containing $T$ as a subset
belongs to any $\ensuremath{\mathcal{Q}}_j$ for $j>t$.
\end{itemize}
Then, either $\ensuremath{\mathcal{Q}}_1 = U$, or $|\ensuremath{\mathcal{Q}}_t| \leq n(d,t)$ for each $t$, where
$n(d,t) \leq 2^{4dk^2 t \prod_{j=1}^{d-1}(k_j+1)}$.
\end{theorem}
If $\ensuremath{\mathcal{Q}}_1 = U$, any label $C \in \binom{U}{k_d}$ can be feasibly
assigned to $I$. The crucial part of the proof is the following lemma
that bounds the size of the request list in each level.
It is proved similarly to Lemma~\ref{lem:n(l,t,h)},
although the recursion and the resulting
bounds have a different form.
\begin{lemma}\label{lem:n(ld,t,h)}
Let $I$ be an interval at level $\ell\geq 2$ and $J_1, \dotsc, J_m$
be its children.
The number $n(\ell, t, h)$ of distinct
$t$-tuples in their joint list $L(J_1, \dotsc, J_m)$ having $h$ pages
fixed satisfies:
\[ n(\ell, t, h)
\leq 2^{\ell\cdot 4k^2 (t-h)\prod_{i=1}^{\ell-1}(k_i + 1)}. \]
\end{lemma}
First, let us show that this lemma already implies
Theorem~\ref{thm:dichotomy-d}.
\begin{proof}[Proof of Theorem~\ref{thm:dichotomy-d}]
Let us denote $J_1, \dotsc, J_m$ the children of $I$.
If their joint request list contains only the empty set, then there is a feasible
assignment $\alpha$ which gives no label to $I$. In this case, $I$ can be
feasibly labeled by an arbitrary singleton and we have $\ensuremath{\mathcal{Q}}_1 = U$.
Otherwise, a feasible label for $I$ can only be some tuple which is contained
in $L(J_1, \dotsc, J_m)$, and there can be at most $n(d,t)$ $t$-tuples in
$L(J_1, \dotsc, J_m)$.
Lemma~\ref{lem:n(ld,t,h)} implies that the number $n(d,t)$ fulfills the bound
stated by the theorem:
$ n(d, t) = n(d,t,0) \leq 2^{d\cdot 4k^2 t \prod_{i=1}^{d-1}(k_i + 1)}. $
\end{proof}
To prove Lemma~\ref{lem:n(ld,t,h)}, we proceed by induction in $\ell$.
First, we establish the relation between $f(\ell,t,h)$ and $n(\ell,t,h)$.
\begin{observation}\label{obs:fd_vs_n}
Let $I$ be an interval at level $\ell\geq 2$, and $J_1, \dotsc, J_m$ its
children. Then we have
\begin{equation}
f(\ell,t,h) \leq \binom{t+k_\ell-h}{k_\ell}\,n(\ell,t+1,h).
\end{equation}
\end{observation}
\begin{proof}
Observation~\ref{obs:Cd_to_I_list} already shows that a $t$-tuple
$A$ belongs to $R(I)$ if and only if there is a $(t+k_\ell)$-tuple
$B \in R(J_1, \dotsc, J_m)$ such that $A\subset B$.
If $B$ is not an inclusion-wise minimal member of $R(J_1, \dotsc, J_m)$,
then there is some $B'\subsetneq B$ in $R(J_1, \dotsc, J_m)$ and a set $C$ of
$k_\ell$ pages such that $(B'\setminus C) \subsetneq A$ also belongs to $R(I)$.
This implies that $A$ does not belong to $L(I)$. Therefore we know that each
$t$-tuple $A\in L(I)$ is a subset of some $B \in L(J_1, \dotsc, J_m)$.
On the other hand, each $B \in L(J_1, \dotsc, J_m)$
contains precisely $\binom{t+k_\ell}{k_\ell}$ distinct $t$-tuples,
each created by removing $k_\ell$ pages from $B$.
If we want all of them to contain a predefined set $P$ of $h$ pages,
then $P$ has to be contained in $B$,
and there are precisely $\binom{t+k_\ell-h}{k_\ell}$ such $t$-tuples,
each of them equal to $B\setminus C$ for some
$C\subset B\setminus P$ of size $k_\ell$.
Therefore we have $f(\ell,t,h)\leq \binom{t+k_\ell-h}{k_\ell}\,n(\ell,t+1,h)$.
\end{proof}
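The binomial count in the proof above can be sanity-checked by brute force. The following Python sketch (ours, with illustrative toy values of $t$, $k_\ell$ and $h$, not part of the paper) enumerates the $t$-subsets of a $(t+k_\ell)$-tuple $B$ that contain a fixed $h$-set $P$ and compares their number with $\binom{t+k_\ell-h}{k_\ell}$.

```python
from itertools import combinations
from math import comb

# Toy check (illustrative values, not from the paper): a (t+k_ell)-tuple B
# contains exactly C(t+k_ell-h, k_ell) distinct t-subsets that include a
# fixed set P of h pages.
t, kl, h = 3, 2, 1
B = set(range(t + kl))            # |B| = t + k_ell = 5
P = set(range(h))                 # h predetermined pages, P subset of B

subs = [A for A in combinations(sorted(B), t) if P <= set(A)]
assert len(subs) == comb(t + kl - h, kl)    # C(4, 2) = 6
```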
To bound $n(\ell,t,h)$ in terms of the numbers $f(\ell-1,t,h)$,
we use specifications as defined in the previous subsection,
so that we have
\begin{equation}\label{eq:n(ld,t,h)}
n(\ell,t,h) \leq \sum_{S\in \ensuremath{\mathcal{S}}_t} n(\ell, S, h),
\end{equation}
and also
\begin{equation}\label{eq:n(ld,S,h)}
n(\ell,S,h) \leq \sum_{C\in \binom{[t]}{h}}
2^{t^2} \prod_{i=1}^{n_S} f(\ell-1, t_i, t_i - \bar{d}_i).
\end{equation}
Therefore we can proceed directly to the proof.
\begin{proof}[Proof of Lemma~\ref{lem:n(ld,t,h)}]
Combining equations \eqref{eq:n(ld,t,h)} and \eqref{eq:n(ld,S,h)}, we can bound
$n(\ell, t, h)$ with respect to the values of $f(\ell-1,t',h')$. We get
\begin{equation}\label{eq:n(ld,t,h)-final}
n(\ell,t,h) \leq \sum_{S\in \ensuremath{\mathcal{S}}_t} \sum_{C\in \binom{[t]}{h}}
2^{t^2} \prod_{i=1}^{n_S} f(\ell-1, t_i, t_i - \bar{d}_i).
\end{equation}
In the rest of the proof, we use induction to show
that this inequality together with Observation~\ref{obs:Cd_to_I_list}
implies the desired bound.
In the base case, we have $\ell=2$, and we can use \eqref{eq:n(ld,t,h)-final}
directly with the values of $f(1,t',h')$.
We know that $f(1,t',h') \leq \binom{t'-h'+k_1}{k_1}$,
for the following reason.
To obtain a tuple of length $t'$ in the request list of a first level interval
$I$, there must be $t'+k_1$ distinct points requested in the input
sequence during this interval.
As $h'$ of them are pre-specified to be contained in each tuple, we are left
with $t'-h'+k_1$ points from which the label for $I$ is chosen, and this label
contains precisely $k_1$ points. Therefore, $L(I)$ can contain at most
$\binom{t'-h'+k_1}{k_1}$ distinct $t'$-tuples which contain the $h'$
predetermined points.
Therefore, for each $i$ in the product in \eqref{eq:n(ld,t,h)-final}, we have
$f(1, t_i, t_i - \bar{d}_i) \leq \binom{t_i - (t_i-\bar{d}_i) + k_1}{k_1}
\leq \binom{\bar{d}_i + k_1}{k_1} \leq (t-h+k_1)^{k_1}$.
However, $\sum_{i=1}^{n_S} \bar{d}_i = t-h$ and $f(1,t_i,t_i-\bar{d}_i)=1$
whenever $\bar{d}_i=0$. Therefore at most $t-h$ factors in that product can be
greater than $1$, and we have
$\prod_{i=1}^{n_S} f(\ell-1, t_i, t_i - \bar{d}_i) \leq (t-h+k_1)^{k_1(t-h)}$.
Recall that there are at most $|\ensuremath{\mathcal{S}}_t|\leq 2^{t^2}$ choices for $S$ and at most
$t^h$ choices for $C$. Using the trivial estimate $h\leq t\leq k$, we get
\[ n(2,t,h) \leq 2^{t^2}\, t^h\, 2^{t^2}\, (t-h+k_1)^{k_1(t-h)}
\leq 2^{t^2 +h\log t + t^2 + k_1(t-h) \log(t-h+k_1)}
\leq 2^{2\cdot 4k^2 (t-h) (k_1+1)},
\]
where the right-hand side corresponds to the bound claimed by the lemma.
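The chain of estimates above is easy to verify numerically. The following Python sketch (ours, not part of the paper) exhaustively checks the exponent inequality $2t^2 + h\log_2 t + k_1(t-h)\log_2(t-h+k_1) \leq 8k^2(t-h)(k_1+1)$ over small parameters with $h<t\leq k$ and $k_1\leq k$.

```python
from math import log2

# Exhaustive numeric check (ours, small parameters) of the base-case estimate:
#   2*t^2 + h*log2(t) + k1*(t-h)*log2(t-h+k1)  <=  8*k^2*(t-h)*(k1+1)
# for h < t <= k and 1 <= k1 <= k.
for k in range(1, 13):
    for t in range(1, k + 1):
        for h in range(t):                   # h < t, so t - h >= 1
            for k1 in range(1, k + 1):
                lhs = 2*t*t + h*log2(t) + k1*(t-h)*log2(t-h+k1)
                rhs = 8*k*k*(t-h)*(k1+1)
                assert lhs <= rhs, (k, t, h, k1)
```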
Let us now focus on the inductive step with $\ell>2$.
For a fixed $S$ and $C$, we bound the product inside
equation~\eqref{eq:n(ld,t,h)-final} by an expression independent of $S$ and $C$.
First, let us apply Observation~\ref{obs:fd_vs_n} to each term of the product.
Since
$\binom{t_i+k_{\ell-1} - (t_i-\bar{d}_i)}{k_{\ell-1}}
= \binom{k_{\ell-1} + \bar{d}_i}{k_{\ell-1}}$,
we have
\[ f(\ell-1,t_i,t_i-\bar{d}_i)
\leq \binom{k_{\ell-1}+\bar{d}_i}{k_{\ell-1}}
n(\ell-1, t_i+k_{\ell-1}, t_i-\bar{d}_i).
\]
Let us now consider the logarithm of this inequality. Bounding
$\binom{k_{\ell-1}+\bar{d}_i}{k_{\ell-1}}$ by
$(k_{\ell-1}+t-h)^{k_{\ell-1}}$ and
applying the inductive hypothesis, we get
\[ \log f(\ell-1,t_i,t_i-\bar{d}_i) \leq k_{\ell-1}\log(k_{\ell-1}+t-h)
+ (\ell-1)\cdot 4k^2\big(t_i+k_{\ell-1} -(t_i-\bar{d}_i)\big)
\prod_{j=1}^{\ell-2} (k_j+1).
\]
Note that the only term in this bound that depends on $i$ is
$(t_i+k_{\ell-1} -(t_i-\bar{d}_i)) = (\bar{d}_i+k_{\ell-1})$.
We would now like to bound
$\log \prod_{i=1}^{n_S} f(\ell-1, t_i, t_i - \bar{d}_i)$, which equals the sum
of $\log f(\ell-1,t_i,t_i-\bar{d}_i)$ over $i=1, \dotsc, n_S$.
First, note that $f(\ell-1,t_i,t_i-\bar{d}_i)=1$ whenever $\bar{d}_i=0$.
Let $A$ denote the set of indices $i$ such that $\bar{d}_i>0$.
Then, by the inequality above,
\[ \log \prod_{i\in A} f(\ell-1, t_i, t_i - \bar{d}_i)
\leq |A|\cdot k_{\ell-1}\log(t+k_{\ell-1}-h)
+ (\ell-1)\cdot 4k^2\prod_{j=1}^{\ell-2} (k_j+1)
\cdot \sum_{i\in A}(\bar{d}_i + k_{\ell-1}).
\]
We know that $\sum_{i=1}^{n_S} \bar{d}_i = t-h$, which implies that the size of
$A$ is also at most $t-h$. Therefore we can bound the last sum as follows:
$\sum_{i\in A}(\bar{d}_i + k_{\ell-1}) \leq (t-h)+|A|\cdot k_{\ell-1}
\leq (t-h)(k_{\ell-1}+1)$.
Since $f(\ell-1,t_i,t_i-\bar{d}_i)=1$ for each $i\notin A$, we get the following
bound:
\[ \prod_{i=1}^{n_S} f(\ell-1, t_i, t_i - \bar{d}_i)
= \prod_{i\in A} f(\ell-1, t_i, t_i - \bar{d}_i)
\leq (t+k_{\ell-1}-h)^{(t-h)k_{\ell-1}}\,
2^{(\ell-1) \cdot 4k^2(t-h)\prod_{j=1}^{\ell-1} (k_j+1)}.
\]
Now we are almost done.
The preceding bound is universal and independent of $S$ and $C$,
and therefore we can plug it into equation \eqref{eq:n(ld,t,h)-final} in the
following way:
\[ n(\ell,t,h)\leq |\ensuremath{\mathcal{S}}_t|\cdot t^h \cdot 2^{t^2}\cdot
(t+k_{\ell-1}-h)^{(t-h)k_{\ell-1}}\cdot
2^{(\ell-1)\cdot 4k^2(t-h)
\prod_{i=1}^{\ell-1}(k_i+1)}, \]
where the size of $\ensuremath{\mathcal{S}}_t$ is at most $2^{t^2}$.
It remains to show that the last term dominates the smaller ones.
Taking the logarithm of this
inequality, we get
\[ \log n(\ell,t,h) \leq t^2 + h\log t + t^2 +
(t-h) k_{\ell-1}\log(t+k_{\ell-1}-h)
+ (\ell-1)\cdot 4k^2 (t-h) \prod_{i=1}^{\ell-1}(k_i+1), \]
where each of the four lower-order terms is at most
$k^2(t-h)\prod_{i=1}^{\ell-1}(k_i+1)$.
Therefore, we get the final inequality which concludes the proof:
$n(\ell,t,h)\leq 2^{\ell\cdot 4k^2(t-h) \prod_{i=1}^{\ell-1}(k_i+1)}.$
\end{proof}
\section{Upper bounds for generalized WFA}
\label{sec:ubs}
We now show that the generalized Work
Function Algorithm with $\lambda=0.5$ achieves the bounds claimed in Theorems~\ref{thm:ub} and \ref{thm:ubd}.
Even though Theorem \ref{thm:ub} follows as a special case of Theorem \ref{thm:ubd} (up to lower order terms in the exponent) we describe these results
separately in Sections \ref{sec:ub} and \ref{sec:ubd} as the proof of Theorem \ref{thm:ub} is simpler and highlights the main ideas directly.
\subsection{Upper bound for arbitrary weights}
\label{sec:ub}
We prove Theorem~\ref{thm:ub} by induction on the number of servers. Let $r_k$ denote the bound on the competitive ratio with $k$ servers. We will show that $r_k = O((n_k)^3 r_{k-1})$, where $n_k$ is the constant from the Dichotomy Theorem~\ref{thm:dichotomy}. As $r_1=1$ trivially, this will imply the result.
We begin with some definitions and the basic properties of WFA.
\paragraph{Definitions and Notation.}
Recall the definition of Work functions and the generalized WFA from Section \ref{sec:prelim}. A basic property of work functions is that for any two configurations $C$ and $C'$ and any time $t$, the work function values $\WF_t(C)$ and $\WF_t(C')$ can differ by at most $d(C,C')$. Moreover, at any time $t$, the generalized WFA will always be in some configuration that contains the current request $\sigma_t$. For the rest of this section we focus on $\WFA_{0.5}$ and denote it by $\ALG$.
Let $M_t$ denote the minimum work function value at
time $t$ over all configurations, and
let $\WF_t(p) = \min\{\WF_t(C)\;|\, C(k) = p\}$ denote the minimum work function value over all configurations with the heaviest server $s_k$ at $p$.
We denote $W_i = \sum_{j=1}^i w_j$.
We will assume (by rounding if necessary) that the weights $w_i$ are well-separated and satisfy $W_{i-1} \leq w_i/(20 i n_i)$ for each $i=2,\ldots,k$.
This can increase the competitive ratio by at most a factor of $O(k^k \prod_{i=1}^k n_i) \ll
O(n_k^3)$. This ensures that for any two configurations $C$ and $C'$ that
both have $s_k$ at $p$, their work function values differ by at most $W_{k-1}$,
which is negligibly small compared to $w_k$.
For a point $p \in U$, we define the {\em ``static'' work function} $\SWF_t(p)$ as the optimal cost to serve requests $\sigma_1, \dotsc, \sigma_t$ while keeping server $s_k$ fixed at point $p$. Note that this function will in general take very different values than the (usual) work function.
However, the local changes of $\SWF(p)$ will be useful in our inductive argument. Intuitively, if $\ALG$ keeps $s_k$ at $p$ during some interval $[t_1,t_2]$ and $\SWF(p)$ rises by $x$ during this period, then the cost incurred by $\ALG$ should be at most $r_{k-1} x$.
For any quantity $X$, we use $\Delta_{t_1}^{t_2} X:=X_{t_2} - X_{t_1}$ to denote the change in $X$ during the time interval $[t_1,t_2]$. If the time interval is clear from the context, we use $\Delta X$.
We partition the request sequence into {\em phases}, where a phase ends whenever $\ALG$ moves its heaviest server $s_k^{\ALG}$.
\paragraph{Basic Properties of WFA.}
We describe some simple facts that follow from basic properties of $\WFA_{\lambda}$ and work functions. The proofs of the following lemmas are in Appendix~\ref{app_sec:ub}.
\begin{lemma}
\label{cl:phase_start_end}
Consider a phase that starts at time $t_1$ and ends at $t_2$, and let $p$ be the location of $s_k^{\ALG}$ during this phase. Then,
\begin{enumerate}[(i)]
\item $M_{t_1}\leq \WF_{t_1}(p) \leq M_{t_1} + W_{k-1} $, and
\item $ w_k/2 - 2W_{k-1} \leq \Delta \WF(p) \leq \Delta M + w_k/2 + 2 W_{k-1}.$
\end{enumerate}
\end{lemma}
The next lemma shows that $\WF(p)$ and $\SWF(p)$ increase by a similar amount while $s_k^{\ALG}$ remains at point $p$.
\begin{lemma}
\label{lem:swf_wf_phase}
For a phase where $s_k^{\ALG}$ is at point $p$, we have that $
|\Delta \WF(p) - \Delta \SWF(p) | \leq W_{k-1}$.
\end{lemma}
We remark that the preceding lemma need not hold for a point $q$ where $s_k^{\ALG}$ is not present.
The following lemma is more general and holds for any point $p \in U$ and for any time interval, even if there are many phases in between.
\begin{lemma}\label{lem:WFvsSWF} For any $t'>t$, $p \in U$,
\[ \WF_{t'}(p) \geq \min\{\WF_t(p) + \Delta_{t}^{t'}\SWF(p)
- W_{k-1}, M_t + w_k\}.
\]
\end{lemma}
\paragraph{Bounding the Performance.}
We are now ready to prove Theorem \ref{thm:ub}. The key lemma will be the following.
\begin{lemma}
\label{lem:main_gen}
Consider any sequence of $m = n_k +1$ consecutive phases. Then, $\Delta M \geq w_k/(8 k \cdot n_k) $ and the cost incurred by $\ALG$ is at most $4 n_k \cdot r_{k-1} \cdot w_k + r_{k-1} \cdot \Delta M $.
\end{lemma}
Before proving Lemma \ref{lem:main_gen}, let us see why it gives a competitive ratio $r_k = O(n_k^3) \cdot r_{k-1}$, and hence proves Theorem \ref{thm:ub}.
\paragraph*{Proof of Theorem~\ref{thm:ub}}
Let $\cost(\ALG)$ and $\cost(\OPT)$ denote the cost of the algorithm and the optimal cost respectively. We show that $\ALG$ with $k$ servers is strictly $r_k$-competitive, i.e. $\cost(\ALG) \leq r_k \cdot \cost(\OPT)$ for any request sequence, given that $\ALG$ and $\OPT$ start from the same initial configuration.
For $k=1$, $\ALG$ is obviously strictly 1-competitive. Assume inductively that $\ALG$ with $k-1$ servers is strictly $r_{k-1}$-competitive. We now bound $r_k$.
Let $m$ denote the total number of phases. We partition the sequence into $h = \lceil \frac{m}{n_k+1} \rceil $ groups where each group (except possibly the last one) consists of $n_{k}+1$ phases. Note that $\cost(\OPT) = M_T$, where $M_T$ is the minimum work function value at
the end of the request sequence. Thus for each group of phases we can use $\Delta M$ as an estimate of the optimal cost.
{\em Competitive Ratio:} We first show that $\ALG$ is $r_k$-competitive, since this proof is simple and highlights the main idea. We then give a more careful analysis to show that in fact $\ALG$ is strictly $r_k$-competitive.
By Lemma~\ref{lem:main_gen}, during the $i$th group, $ i \leq h-1$, the ratio between the cost of $\ALG$ and $\Delta M$ is at most
\begin{equation}
\label{eq:ratio_group}
\frac{4 n_k \cdot w_k \cdot r_{k-1} + r_{k-1} \Delta M}{\Delta M} \leq
\frac{4 n_k \cdot w_k \cdot r_{k-1}}{w_k / (8 k \cdot n_k)} + \frac{r_{k-1} \Delta M}{\Delta M} \leq 33 k \cdot n_k^2 \cdot r_{k-1}.
\end{equation}
By Lemma~\ref{lem:main_gen}, the cost of $\ALG$ during the last group of phases is at most $ 4 n_k \cdot r_{k-1} \cdot w_k + r_{k-1} \cdot \Delta M $.
Overall, we get that $\cost(\ALG) \leq r_k \cdot M_T + 4 n_k \cdot r_{k-1} \cdot w_k$, for some $r_k = O((n_k)^3 r_{k-1}) $, i.e. $\ALG$ is $r_k$-competitive.
{\em Strict Competitive Ratio:} In order to prove the strict competitive ratio, we need to remove the additive term due to the last group of phases. In case $h \geq 2$, we do that by considering the last two groups together. By a calculation similar to that in~\eqref{eq:ratio_group}, we get that during groups $h-1$ and $h$, the ratio between the cost of $\ALG$ and $\Delta M$ is at most $65k n_k^2 \cdot r_{k-1}$. For the $i$th group, $i \leq h-2$, we use inequality~\eqref{eq:ratio_group}. Thus, in case $h \geq 2$ we get that
\begin{equation}
\frac{\cost(\ALG)}{M_T} \leq 65k n_k^2 \cdot r_{k-1} = O(n_k^3 r_{k-1}).
\end{equation}
It remains to consider the case $h=1$, i.e.\ there are no more than $n_k+1$ phases. To this end, we distinguish between two cases.
\begin{enumerate}
\item $\OPT$ moves $s_k$: Then $\cost(\OPT) = M_T \geq w_k$ and by Lemma~\ref{lem:main_gen}, $\cost(\ALG) \leq 4 n_k \cdot r_{k-1} \cdot w_k + r_{k-1} \cdot M_T $. We get that
\begin{align*}
\frac{\cost(\ALG)}{\cost(\OPT)} &\leq \frac{4 n_k \cdot r_{k-1} \cdot
w_k}{M_T} + \frac{r_{k-1} \cdot M_T}{M_T} \leq \frac{4 n_k \cdot r_{k-1} \cdot
w_k}{w_k} + r_{k-1} \ll 65k n_k^2 \cdot r_{k-1}
\end{align*}
\item $\OPT$ does not move $s_k$: In this case, $M_T = \WF_T(p_1)$, where $p_1$ is the initial location of the heaviest server $s_k$. We consider two sub-cases.
\begin{enumerate}
\item First phase never ends: In this case, both $\ALG$ and $\OPT$ use $k-1$ servers and start from the same initial configuration, so by the inductive hypothesis $\cost(\ALG) \leq r_{k-1} \cdot \cost(\OPT)$.
\item First phase ends: By Lemma~\ref{cl:phase_start_end}, we have that for the
first phase $ \Delta \WF(p_1) \geq w_k/2 - 2W_{k-1} \geq w_k/4 $. Thus we get
that $\WF_T(p_1) \geq w_k/4$, which by a calculation similar
to~\eqref{eq:ratio_group} gives that $ \cost(\ALG) / \cost(\OPT) \leq 17 n_k
r_{k-1} \ll 65k n_k^2 \cdot r_{k-1}$.
\end{enumerate}
\end{enumerate}
We conclude that for any request sequence
\begin{equation}
\frac{\cost(\ALG)}{M_T} \leq 65k n_k^2 \cdot r_{k-1},
\end{equation}
and hence we may take $r_k = 65k n_k^2 \cdot r_{k-1}$.
{\em Calculating the Recurrence.}
Assuming that $r_{k-1} \leq 2^{2^{k+5\log k }}$, and as $n_k = 2^{2^{k+3 \log k}}$ and $\log 65k < 2^{k + 3\log k}$, it follows that
\[
\pushQED{\qed}
\log r_k \leq \log (65k) + 2^{k+ 3 \log k +1} + 2^{k+5\log k } \leq 2^{k +1 + 5
\log (k+1)}.\qedhere
\popQED
\]
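The arithmetic behind this recurrence can also be checked numerically. The Python sketch below (ours, not part of the paper) works with $\log_2 r_k$ to avoid doubly exponential numbers, using $\log_2 n_k = 2^{k+3\log k}$ and the claimed bound $B(k) = 2^{k+1+5\log(k+1)}$ on $\log_2 r_k$, and verifies one inductive step for a range of $k$.

```python
from math import log2

def B(k):
    """Claimed bound on log2(r_k): 2^(k+1+5*log2(k+1))."""
    return 2 ** (k + 1 + 5 * log2(k + 1))

for k in range(2, 50):
    log_nk = 2 ** (k + 3 * log2(k))      # log2 of n_k = 2^(2^(k+3 log k))
    # one step of the recurrence: log r_k <= log(65k) + 2 log n_k + log r_{k-1}
    lhs = log2(65 * k) + 2 * log_nk + B(k - 1)
    assert lhs <= B(k), k
```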
We now focus on proving Lemma \ref{lem:main_gen}.
The crucial part is to lower bound the increase $\Delta M$ during the $m$ phases. Let $t_1$ and $t_2$ denote the start and end times of this sequence of $m$ phases. We will show that for all points $p$, $\WF_{t_2}(p) \geq M_{t_1} + w_k/(8 k \cdot n_k)$.
To do this, we upper bound the number of points $p$ for which the increase in $\WF(p)$ could be very small in the first phase (Lemma~\ref{lem:egalitarians_cost}). Then, using Lemma~\ref{cl:phase_start_end}, we show that during the subsequent $m$ phases, $s_k^{\ALG}$ visits all such points $p$, increasing $\WF(p)$ significantly for each of them.
We now give the details.
Call a point $q$ {\em lucky} during a phase if its static work function increases by $\Delta \SWF(q) < w_k/(4kn_k)$ during that phase. The next lemma shows that there cannot be too many lucky points during a phase.
\begin{lemma}\label{lem:egalitarians_cost}
Let $L$ be the set of lucky points during some phase. Then, $|L| \leq n_k$.
\end{lemma}
\begin{proof}
For the sake of contradiction, suppose that $|L| > n_k$.
Let $Q$ be an arbitrary subset of $L$ such that $|Q| = n_k+1 $. For each $q\in Q$, let $\ensuremath{\mathcal{I}}^q$ be the optimal service pattern for the
phase where $s_k$ remained at $q$ throughout. Clearly, $\cost (\ensuremath{\mathcal{I}}^q) \leq \Delta \SWF(q)$.
We create a new service pattern $\ensuremath{\mathcal{I}}$ that is a {\em refinement} of all $\ensuremath{\mathcal{I}}^q$, for $q \in Q$ as follows.
For each $\ell=1, \dotsc, k$, we set
$\ensuremath{\mathcal{I}}_\ell = \{ [t_i, t_{i+1})\;|\, \text{ for } i = 1, \dotsc, s-1\}$, where
$t_1 < \dotsb < t_s$ are the times when at least one interval from
$\ensuremath{\mathcal{I}}^1_\ell, \dotsc, \ensuremath{\mathcal{I}}^{|Q|}_\ell$ ends. This way, each interval $I\in \ensuremath{\mathcal{I}}^q_\ell$
is a union of some intervals from $\ensuremath{\mathcal{I}}_\ell$.
Let $\ensuremath{\mathcal{I}} = \ensuremath{\mathcal{I}}_1 \cup \dotsb \cup \ensuremath{\mathcal{I}}_k$. Note that any feasible labeling $\alpha$ for any $\ensuremath{\mathcal{I}}^q$, extends naturally to a feasible labeling for $\ensuremath{\mathcal{I}}$: If an interval $I \in \ensuremath{\mathcal{I}}^q $ is partitioned into smaller intervals, we label all of them with $\alpha(I)$.
We modify $\ensuremath{\mathcal{I}}$ to be hierarchical, which increases its cost at most by a factor of $k$.
By construction, we have
\begin{equation}\label{eq:cost_refinement}
\cost(\ensuremath{\mathcal{I}}) \leq k \cdot \sum_{q \in Q} \cost (\ensuremath{\mathcal{I}}^q)
\leq k \cdot \sum_{q \in Q} \Delta \SWF(q)
\leq k(n_k+1) \cdot \frac{w_k}{4kn_k} \leq \frac{w_k}{3}.
\end{equation}
Now the key point is that $\ensuremath{\mathcal{I}}$ has only one interval $I$ at level $k$, and all $q \in Q$ can be feasibly assigned to it.
But by the Dichotomy theorem~\ref{thm:dichotomy}, either the number of points which
can be feasibly assigned to $I$ is at most $n(k,1)$, or else
any point can be feasibly assigned there. As $|Q| > n_k \geq n(k,1)$, this implies that any point can be feasibly assigned to $I$. Let $p$ be the location of $s_k^{\ALG}$ during the phase. One possible way to serve all requests of this phase having $s_k$ at $p$ is to use $\ensuremath{\mathcal{I}}$ (with possibly some initial cost of at most $W_{k-1}$ to bring the lighter servers in the right configuration). This gives that,
\begin{equation}
\label{eq:delta_heav_loc}
\Delta \SWF(p) \leq \cost(\ensuremath{\mathcal{I}}) + W_{k-1} \leq w_k/3 + W_{k-1}.
\end{equation}
On the other hand, by Lemma~\ref{cl:phase_start_end}, during the phase
$\Delta\WF(p) \geq w_k/2 - 2W_{k-1}$, and by Lemma~\ref{lem:swf_wf_phase}, $ \Delta\SWF(p) \geq \Delta\WF(p) - W_{k-1} $. Together, this gives
$\Delta\SWF(p) \geq w_k/2 - 3W_{k-1}$
which contradicts \eqref{eq:delta_heav_loc}, as $W_{k-1} \ll w_k/40k$.
\end{proof}
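The refinement step in the proof above can be sketched in a few lines. The Python fragment below (a toy illustration under our own naming, not the paper's code) builds the common refinement of several interval partitions from the union of their breakpoints, so that each original interval becomes a union of consecutive refined intervals.

```python
# Toy sketch (our own naming, not the paper's code): the common refinement of
# several partitions of [0, T) into half-open intervals, built from the union
# of their breakpoints -- the construction behind the patterns I_ell above.
def refine(partitions, T):
    """Each partition is given by sorted breakpoints 0 = t_1 < ... < t_s = T."""
    times = sorted({t for p in partitions for t in p} | {0, T})
    return [(times[i], times[i + 1]) for i in range(len(times) - 1)]

P1 = [0, 4, 10]            # intervals [0,4), [4,10)
P2 = [0, 2, 7, 10]         # intervals [0,2), [2,7), [7,10)
R = refine([P1, P2], 10)
assert R == [(0, 2), (2, 4), (4, 7), (7, 10)]
# each original interval is a union of consecutive refined intervals
```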
The next simple observation shows that if a point is not lucky during a phase, its work function value must be non-trivially high at the end of the phase.
\begin{observation}
\label{obs:non-lucky}
Consider a phase that starts at time $t$ and ends at $t'$. Let $p$ be a point which is not lucky during that phase. Then, $ \WF_{t'}(p) \geq M_t + w_k/ (5k \cdot n_k) $.
\end{observation}
\begin{proof}
By Lemma~\ref{lem:WFvsSWF}, we have either $\WF_{t'}(p) \geq M_{t} + w_k$, in
which case the result trivially holds, or
\[\WF_{t'}(p) \geq \WF_{t}(p) + \Delta_{t}^{t'}\SWF(p) - W_{k-1}.\]
But as $p$ is not lucky, $\Delta \SWF(p) \geq w_k/(4kn_k)$, and as $W_{k-1} \leq w_k/(20 k \cdot n_k)$, together these give
$\WF_{t'}(p) \geq \WF_{t}(p) + w_k/ (5k \cdot n_k) \geq M_t + w_k/(5k \cdot n_k)$.
\end{proof}
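The arithmetic used in the last step can be verified exactly. The Python sketch below (ours, not part of the paper, with $w_k$ normalized to $1$) checks that $\tfrac{1}{4kn_k} - \tfrac{1}{20kn_k} = \tfrac{1}{5kn_k}$ for a range of values.

```python
from fractions import Fraction as F

# Exact arithmetic behind the observation (with w_k normalized to 1):
#   1/(4*k*n_k) - 1/(20*k*n_k) = 1/(5*k*n_k)
for k in range(1, 20):
    for n in (1, 2, 7, 10**6):
        assert F(1, 4*k*n) - F(1, 20*k*n) == F(1, 5*k*n)
```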
\paragraph*{Proof of Lemma \ref{lem:main_gen}} We first give the upper bound on cost of $\ALG$ and then the lower bound on $\Delta M$.
\vspace{2mm}
{\em Upper Bound on cost of $\ALG$:} We denote by $\cost_i(\ALG)$ the cost of $\ALG$ during the $i$th phase. Let $p_i$ be the location of $s_k^{\ALG}$, and $\Delta_i M$ the increase of $M$ during the $i$th phase. We will show that $\cost_i(\ALG) \leq 2 \cdot r_{k-1} \cdot w_k + r_{k-1} \cdot \Delta_i M$. By summing over all $n_k+1$ phases, we get the desired upper bound.
During the $i$th phase, $\ALG$ uses $k-1$ servers. Let $C_i^{k-1}$ denote the optimal cost to serve all requests of the $i$th phase starting at the same configuration as $\ALG$ and using only the $k-1$ lightest servers. By the inductive hypothesis of Theorem~\ref{thm:ub}, $\ALG$ using $k-1$ servers is strictly $r_{k-1}$-competitive, thus the cost incurred by $\ALG$ during the phase is at most $r_{k-1} \cdot C_i^{k-1} $.
Now we want to upper bound $C_i^{k-1}$. By definition of static work function, there exists a schedule $S$ of cost $ \Delta\SWF(p_i)$ that serves all requests of the phase with $s_k$ fixed at $p_i$. Thus, a possible offline schedule for the phase starting at the same configuration as $\ALG$ and using only the $k-1$ lightest servers, is to move them at the beginning of the phase to the same locations as they are in $S$ (which costs at most $W_{k-1}$) and then simulate $S$ at cost $ \Delta\SWF(p_i)$. We get that $C_i^{k-1} \leq \Delta\SWF(p_i) + W_{k-1} $.
Moreover, $\ALG$ incurs an additional cost of $w_k$ for the move of server $s_k$ at the end of the phase. We get that
\begin{equation}
\label{eq:alg_cost_phase}
\cost_i(\ALG) \leq w_k + r_{k-1} \cdot(\Delta \SWF(p_i) + W_{k-1}).
\end{equation}
Combining this with Lemmas~\ref{cl:phase_start_end} and~\ref{lem:swf_wf_phase}, and using that $W_{k-1} \leq w_k /(20k \cdot n_k) $, we get
\begin{align*}
\cost_i(\ALG) & \leq w_k + r_{k-1} \cdot(\Delta \SWF(p_i) + W_{k-1}) \leq w_k + r_{k-1} \cdot(\Delta \WF(p_i) + 2W_{k-1}) \\
& \leq w_k + r_{k-1} \cdot(\Delta_i M + w_k/2 + 4W_{k-1}) \leq w_k + r_{k-1} \cdot(\Delta_i M + w_k/2 + (4/20) \cdot w_k) \\
& \leq 2 w_k \cdot r_{k-1} + r_{k-1} \cdot \Delta_i M.
\end{align*}
{\em Lower bound on $\Delta M$:} Let $t_1$ and $t_2$ be the start and the end time of the $m = n_k+1$ phases. For the sake of contradiction, suppose that $M_t < M_{t_1} + w_k/(8 k \cdot n_k)$ for all $t \in [t_1,t_2]$.
By Lemma~\ref{lem:egalitarians_cost}, during the first phase there are at most $n_k$ lucky points.
We claim that $s_k^{\ALG}$ must necessarily visit some lucky point in each subsequent phase. For $1 \leq i \leq m$, let $Q_i$ denote the set of points that have been lucky during all the phases $1,\dotsc,i$.
For any $i \geq 2$, let $t$ denote the starting time of the $i$th phase and $p$ the location of $s_k^{\ALG}$ during this phase. By Lemma~\ref{cl:phase_start_end}, we have that
\begin{align*}
\WF_t(p) \leq M_t + W_{k-1} < M_{t_1} + w_k/(8 k \cdot n_k) + W_{k-1} \leq M_{t_1} + w_k / (5 k \cdot n_k).
\end{align*}
By Observation \ref{obs:non-lucky}, this condition can only be satisfied by
points $p \in Q_{i-1}$, and hence we get that $p$ was lucky in all previous
phases. Now, by Lemma~\ref{cl:phase_start_end}, $\WF(p)$
rises by at least $w_k/2 - 2W_{k-1}$ during the $i$th phase,
and hence $p$ is not lucky in this phase. Therefore, $p \in Q_{i-1}\setminus Q_i$,
and so $|Q_{i}| \leq |Q_{i-1}| - 1$.
Since $|Q_1| \leq n_k = m-1$, we get that $Q_m=\emptyset$, which gives the
desired contradiction.
\qed
\subsection{Upper bound for $d$ different weights}
\label{sec:ubd}
For the case of $d$ weight classes
we prove a more refined upper bound.
The general approach is quite similar to before.
However, the proof of the variant of Lemma~\ref{lem:egalitarians_cost}
for this case is more subtle, as the number of ``lucky''
locations for the heaviest servers can be infinite. We handle this
situation by maintaining posets of lucky tuples. We show that it suffices for
$\ALG$ to traverse all the minimal elements of this poset, and we use
Dichotomy theorem~\ref{thm:dichotomy-d} to bound the number of these minimal
elements.
\paragraph*{Definitions and Notation.}
First, we need to generalize a few definitions which were used until now.
Let $w_1 < \dotsb < w_d$ be the weights of the servers, where
$k_i$ is the number of servers of weight $w_i$, for $i=1, \dotsc, d$.
Henceforth, we assume that the values of $k_1, \dotsc, k_d$ are fixed,
as many constants and functions in this section will implicitly depend on
them. For example, $r_{d-1}$ denotes
the competitive ratio of $\ALG$ with servers of $d-1$ different weights, and
it depends on $k_1, \dotsc, k_{d-1}$.
We denote $W_i = \sum_{j=1}^i w_j k_j$,
and we assume $W_{d-1} \leq w_d/(20kn_d)^{k_d}$, where $n_d$ equals the value
of $n(d,k_d)$ from Dichotomy theorem~\ref{thm:dichotomy-d}.
This assumption cannot affect the competitive ratio by
more than a factor of $(20kn_d)^{dk_d}$, which is smaller than our target ratio.
We also assume that the universe of pages $U$ contains at least $k$ pages that
are never requested. This assumption is only for the purpose of the
analysis and can be easily satisfied by adding artificial pages to $U$, without
affecting the problem instance.
A configuration of servers is a function
$C\colon \{1, \dotsc, d\} \to 2^U$, such that
$|C(i)| = k_i$ for each $i$.
Servers with the same weight are not distinguishable and we manipulate them in
groups. Let $K_i$ denote the set of servers of weight $w_i$.
For a $k_d$-tuple $A_d$,
we define the minimum work function value over all configurations having the
servers of $K_d$ at $A_d$, i.e.~$\WF_t(A_d) = \min\{\WF_t(C) \;|\, C(d) = A_d\}$.
Similarly, we define the {\em static work function} $\SWF_t(A_d)$ at time $t$
as the optimal cost of serving the requests $\sigma_1,\dotsc, \sigma_t$ while
keeping the servers of $K_d$ fixed at $A_d$.
When calculating the value of the work function and the static work function,
we use the distance $k_i\,w_i$ whenever the optimal solution moves at least one server from
$K_i$.
This work function still estimates the offline
optimum within a factor of $k$, and is easier to work with.
As in the previous subsection, we use $\Delta_{t_1}^{t_2} X$ to denote the change in
quantity $X$ during time interval $[t_1,t_2]$. We also use the function
$n(d,t)$ from Theorem~\ref{thm:dichotomy-d}.
Observe that $n(d,t) \leq n_d$ for all $1\leq t \leq k_d$.
\paragraph*{Algorithm.}
We prove the bound for $\WFA_{0.5}$ with slightly
deformed distances between the configurations. More precisely,
we define $d(A,B) = \sum_{i=1}^d k_iw_i \mathbf{1}_{(A(i) \neq B(i))}$,
and denote by $\ALG$ the $\WFA_{0.5}$ with this distance function.
In particular, $\ALG$ chooses a new configuration for its heaviest
servers without distinguishing between those which differ only in a single
position and those which are completely different.
We call a \textit{phase} a maximal time interval during which $\ALG$
does not move \textit{any} server from $K_d$.
\paragraph*{Basic Properties of WFA.}
Here are a few simple properties whose straightforward proofs are
contained in Appendix~\ref{app_sec:ub}.
\begin{lemma}\label{lem:d_phase_start_end}
Consider a phase that starts at time $t_1$ and finishes at time $t_2$. Let $C_d$
be the $k_d$-tuple where the algorithm has its heaviest servers $K_d$ during the
phase. Then,
\vspace{-1ex}
\begin{enumerate}[(i)]
\item $M_{t_1}\leq \WF_{t_1}(C_d) \leq M_{t_1} + W_{d-1} $, and
\item $k_d\,w_d/2 - 2W_{d-1} \leq \Delta\WF(C_d)
\leq \Delta M + k_d\,w_d/2 + 2W_{d-1}$.
\end{enumerate}
\end{lemma}
\begin{lemma}
\label{lem:swf_wf_dlev}
For a phase where $\ALG$ has its servers from $K_d$ at a $k_d$-tuple $C_d$,
we have
\[ \Delta\WF(C_d) - W_{d-1} \leq \Delta\SWF(C_d) \leq \Delta\WF(C_d) + W_{d-1}.
\]
\end{lemma}
\begin{lemma}\label{lem:WFvsSWFd}
Let $M_t$ be the minimum value of work function at time $t$. For $t'>t$ and any
$k_d$-tuple $C_d$, we have the following:
\[ \WF_{t'}(C_d) \geq \min\{\WF_t(C_d) + \Delta_{t}^{t'}\SWF(C_d)
- W_{d-1}, M_t + w_d\}.
\]
\end{lemma}
\paragraph*{Main Lemma.}
The following lemma already implies a competitive ratio of order
$n_d^{O(k_d)}\,r_{d-1}$.
\begin{lemma}\label{lem:main-d}
Let us consider a group of $a_d = (k_d^3\,n_d)^{k_d}$ consecutive phases.
We have, $\Delta M \geq w_d/(10kn_d)^{k_d}$ and
$\cost(\ALG) \leq 2a_d\,r_{d-1} k_d\,w_d + r_{d-1} \Delta M$,
where $r_{d-1}$ is the strict competitive ratio of $\ALG$ with servers
$K_1, \dotsc, K_{d-1}$.
\end{lemma}
The bound for $\cost(\ALG)$ is easy and can be shown using a combination of the
basic properties mentioned above.
Therefore, most of this section focuses on lower bounding $\Delta M$.
Let $t_1$ and $t_2$ denote the beginning and the end of this group of phases.
At each time $t$, we maintain a structure containing
all configurations $C_d$ for the servers in $K_d$ such that
$\WF_t(C_d)$ could still be below $M_{t_1} + w_d/(10kn_d)^{k_d}$.
We call this structure a {\em poset of lucky tuples} and it is defined below.
Then, we show that this poset gets smaller with each phase until it becomes
empty before time $t_2$.
\paragraph{Poset of lucky tuples.}
Let us first consider a single phase. We call a $k_d$-tuple $C_d$ {\em lucky},
if we have $\Delta\SWF(C_d) < w_d/(4kn_d)^{k_d}$ during this phase.
A tuple $T$ of size $t<k_d$ is called lucky
if $\Delta\SWF(C_d) <w_d/(4kn_d)^t$ for
each $k_d$-tuple $C_d$ containing $T$.
Let $\ensuremath{\mathcal{Q}}_i$ be the set of tuples which were lucky during phase $i$.
We denote $(\ensuremath{\mathcal{L}}_i, \subseteq) = \bigcup_{T\in \ensuremath{\mathcal{Q}}_i} \cl(T)$ and call it
the poset of lucky tuples during phase $i$. Here, the closure $\cl(T)$ is the
set of all tuples of size at most $k_d$ which contain $T$ as a subset.
The following lemma bounds the number of its minimal
elements using Dichotomy theorem~\ref{thm:dichotomy-d}.
\begin{lemma}\label{lem:dipaying}
Let us consider a poset $\ensuremath{\mathcal{L}}$ of tuples which are lucky during one phase,
and let $\ensuremath{\mathcal{E}}_t$ be the set of its minimal elements of size $t$.
Then we have $|\ensuremath{\mathcal{E}}_t| \leq n(d,t)$.
\end{lemma}
The following observation shows that if a $k_d$-tuple was unlucky during at
least one of the phases, its work function value must already be
above the desired threshold.
\begin{observation}\label{obs:unluckyd}
Let us consider a phase between times $t$ and $t'$.
If a $k_d$-tuple $C_d$ was not lucky during this phase,
we have $\WF_{t'}(C_d) \geq M_{t} + w_d/(5kn_d)^{k_d}$.
\end{observation}
\begin{proof}
By Lemma~\ref{lem:WFvsSWFd}, we have either $\WF_{t'}(C_d) \geq M_{t} + w_d$,
in which case the result trivially holds, or
\[ \WF_{t'}(C_d) \geq \WF_{t}(C_d) + \Delta_{t}^{t'}\SWF(C_d) - W_{d-1}.\]
As $C_d$ was unlucky, we then have
$\WF_{t'}(C_d) \geq \WF_{t}(C_d) + w_d/(4kn_d)^{k_d} -W_{d-1}$,
and since $\WF_t(C_d) \geq M_t$ and $W_{d-1} \leq w_d/(20kn_d)^{k_d}$,
this is at least $M_{t} + w_d/(5kn_d)^{k_d}$.
\end{proof}
Therefore, we keep track of the tuples which were lucky in all the phases.
We denote by $\ensuremath{\mathcal{G}}_m = \bigcap_{i=1}^m \ensuremath{\mathcal{L}}_i$ the poset of tuples which were lucky
in each phase $1, \dotsc, m$.
Note that we can write $\ensuremath{\mathcal{G}}_m = \bigcup_{T\in\ensuremath{\mathcal{E}}} \cl(T)$, where $\ensuremath{\mathcal{E}}$ is the
set of the minimal elements of $\ensuremath{\mathcal{G}}_m$.
If, in phase $m+1$, we get $\ensuremath{\mathcal{L}}_{m+1}$ which does not contain some
$\cl(T)\subseteq \ensuremath{\mathcal{G}}_m$, then $\cl(T)$ might break into closures of some
supersets of $T$. This is a favourable situation for us because it makes
$\ensuremath{\mathcal{G}}_{m+1}$ smaller than $\ensuremath{\mathcal{G}}_m$.
The following lemma claims that $\cl(T)$ cannot break into too many
pieces.
\begin{lemma}\label{lem:Td_decompose}
Let $T$ of size $t$ be a fixed minimal tuple in $\ensuremath{\mathcal{G}}_{m}$.
If $\cl(T) \nsubseteq \ensuremath{\mathcal{L}}_{m+1}$, then
$\cl(T) \cap \ensuremath{\mathcal{L}}_{m+1}$ contains no tuple of size $t$
and, for $i=1, \dotsc, k_d-t$, it contains at most
$k_d\,n_d$ minimal tuples of size $t+i$.
\end{lemma}
\begin{proof}
Let $T'$ be some inclusion-wise minimal tuple from $\ensuremath{\mathcal{L}}_{m+1}$.
It is easy to see that $\cl(T)\cap\cl(T') = \cl(T\cup T')$,
and $T\cup T'$ is the new (potentially) minimal element.
Denoting by $\ensuremath{\mathcal{E}}'$ the set of minimal elements of $\ensuremath{\mathcal{L}}_{m+1}$, we have
$\cl(T) \cap \ensuremath{\mathcal{L}}_{m+1} = \bigcup_{T'\in \ensuremath{\mathcal{E}}'} \cl(T)\cap \cl(T')$.
Therefore, $\cl(T) \cap \ensuremath{\mathcal{L}}_{m+1}$ contains
at most one minimal element per minimal tuple of $\ensuremath{\mathcal{L}}_{m+1}$.
Let us now consider the resulting $T\cup T'$ according to its size.
The size of $T\cup T'$ can be $t+i$ only if the size of $T'$ is at least $i$ and at
most $t+i$. Therefore, by Lemma~\ref{lem:dipaying},
we have at most $\sum_{j=i}^{t+i} n(d,j) \leq k_d\,n_d$ minimal elements of size
$t+i$, as the sum has at most $t+1 \leq k_d$ terms, each bounded by $n_d$.
\end{proof}
\paragraph{Proof of the main lemma.}
First, let us bound the cost of the algorithm.
During phase $i$, when its heaviest servers reside in $C_d^i$, it
incurs cost $\cost_i(\ALG) \leq k_d\,w_d+ r_{d-1}(\Delta\SWF(C_d^i)+W_{d-1})$.
The first term, $k_d\,w_d$, is the cost of the single move of the servers in $K_d$ at the
end of the phase, and we claim that the second term accounts for the movement of
the servers $K_1, \dotsc, K_{d-1}$.
To show this, we use the assumption that $\ALG$ is strictly
$r_{d-1}$-competitive when using servers $K_1, \dotsc, K_{d-1}$.
Let us denote $C^i_1, \dotsc, C^i_{d-1}$ their configuration at the beginning of
the phase. The servers from $K_1, \dotsc, K_{d-1}$
have to serve the request sequence
$\bar{\sigma}^i$, consisting of all requests issued during the phase which do
not belong to $C_d^i$, starting at configuration $C^i_1, \dotsc, C^i_{d-1}$.
We claim that there is such an offline solution of cost at most
$\Delta\SWF(C_d^i)+W_{d-1}$: the solution certifying the value of $\SWF(C_d^i)$
has to serve the whole $\bar{\sigma}^i$ using only $K_1, \dotsc, K_{d-1}$,
although it might start from a different initial position, and switching the
initial position incurs additional cost at most $W_{d-1}$.
Therefore, the cost incurred by $\ALG$ during the phase $i$ is at most
$k_d\,w_d+ r_{d-1}(\Delta\SWF(C_d^i)+W_{d-1})$.
Combining lemmas \ref{lem:d_phase_start_end} and \ref{lem:swf_wf_dlev}, we get
$\Delta\SWF(C_d^i) \leq \Delta_i M + k_d\,w_d/2 + 3W_{d-1}$, and summing this
up over all phases, we get
\begin{align*}
\cost(\ALG)
&\leq a_d\,k_d\,w_d + r_{d-1}
(\Delta M + a_d\cdot k_d\,w_d/2 + a_d\cdot 4W_{d-1})\\
&\leq a_d\,k_d\,w_d + r_{d-1}\,a_d\,k_d\,w_d/2
+ r_{d-1}\,a_d\,4W_{d-1} + r_{d-1}\Delta M
\leq 2a_d\,r_{d-1}\,k_d\,w_d + r_{d-1}\Delta M,
\end{align*}
since $4W_{d-1} < w_d/2$.
Now we bound $\Delta M$.
Clearly, if $M_t \geq M_{t_1} + w_d/(10kn_d)^{k_d}$ for some $t\in [t_1, t_2]$,
we are done.
Otherwise, we claim that the posets $\ensuremath{\mathcal{G}}_i$ become smaller with each phase and
become empty before the last phase ends. We define a potential which captures
their size:
\[
\Phi(i) = \sum_{j=1}^{k_d} (2k_d)^{k_d-j} \cdot (k_d\,n_d)^{k_d-j}
\cdot L_j(i),
\]
where $L_j(i)$ is the number of minimal $j$-tuples in $\ensuremath{\mathcal{G}}_i$.
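For instance, for $k_d = 2$ the potential specializes to
\[
\Phi(i) = (2k_d)(k_d\,n_d)\cdot L_1(i) + L_2(i) = 8n_d\,L_1(i) + L_2(i),
\]
so each minimal singleton weighs $8n_d$ times as much as a minimal pair; this hierarchy of weights is what allows a single break of a small tuple to pay for all the pieces it creates.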
Let $t$ and $t'$ denote the beginning and the end of the $i$th phase,
and $A_d$ be the configuration of $K_d$ during this phase.
By Lemma~\ref{lem:d_phase_start_end},
we have $\WF_t(A_d) \leq M_t+W_{d-1} < M_{t_1} + w_d/(10kn_d)^{k_d} +W_{d-1}$
and $\Delta_t^{t'}\WF(A_d) \geq k_d\,w_d/2 - 2W_{d-1}$.
By Observation~\ref{obs:unluckyd}, this implies that $A_d$ belongs to
$\ensuremath{\mathcal{G}}_{i-1}$ and does not belong to $\ensuremath{\mathcal{G}}_i$.
Therefore, at least one $\cl(T) \subseteq \ensuremath{\mathcal{G}}_{i-1}$
(the one containing $A_d$) must have broken during phase $i$.
Each $\cl(T)$ that breaks into smaller pieces causes a change of the potential,
which we can bound using Lemma~\ref{lem:Td_decompose}. We have
\[ \Delta \Phi \leq
- (2k_d)^{k_d-|T|} \, (k_d\, n_d)^{k_d-|T|}
+ (2k_d)^{k_d-(|T|+1)} \,
(k_d\,n_d)^{k_d-(|T|+1)} \cdot k_d\cdot k_d\,n_d.
\]
The last term can be bounded by
$k_d (2k_d)^{k_d-(|T|+1)} (k_d\, n_d)^{k_d-|T|}$, which is strictly smaller than
$(2k_d)^{k_d-|T|} \cdot (k_d\, n_d)^{k_d-|T|}$.
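In fact the comparison is an exact halving:
\[
k_d\,(2k_d)^{k_d-(|T|+1)}\,(k_d\,n_d)^{k_d-|T|}
= \frac{k_d}{2k_d}\,(2k_d)^{k_d-|T|}\,(k_d\,n_d)^{k_d-|T|}
= \frac12\,(2k_d)^{k_d-|T|}\,(k_d\,n_d)^{k_d-|T|},
\]
so each break decreases $\Phi$ by at least half of the coefficient of the removed minimal tuple.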
So, we have $\Delta\Phi \leq -1$, since the value of $\Phi(i)$ is always
integral.
The value of $\Phi$ after the first phase is
$\Phi(1) \leq k_d \cdot \big((2k_d)^{k_d} (k_d\,n_d)^{k_d-1} \cdot n_d\big)
< a_d$,
by Lemma~\ref{lem:dipaying}, and $\Phi(i)$ becomes zero as soon as $\ensuremath{\mathcal{G}}_i$ is
empty. Therefore, no $k_d$-tuple can be lucky during the entire group of $a_d$ phases.
\qed
\paragraph{Proof of Lemma~\ref{lem:dipaying}.}
We proceed by contradiction.
If the lemma is not true for some $t$,
then there exists a set of $t$-tuples $\ensuremath{\mathcal{Q}}_t\subseteq \ensuremath{\mathcal{E}}_t$
of size $n(d,t)+1$.
For each $T\in \ensuremath{\mathcal{Q}}_t$, we consider a service pattern $\ensuremath{\mathcal{I}}^T$ which is
chosen as follows.
For a $k_d$-tuple $A_T$ containing $T$ and $k_d-t$ points which were not
requested during the phase, we have $\Delta\SWF(A_T) < w_d/(4kn_d)^t$.
Therefore there is a service pattern $\ensuremath{\mathcal{I}}^T$ of cost smaller than
$w_d/(4kn_d)^t$ such that $T$ is a feasible label for its top-level interval.
We consider a common refinement $\ensuremath{\mathcal{I}}$ of all service patterns $\ensuremath{\mathcal{I}}^T$, for $T \in \ensuremath{\mathcal{Q}}_t$.
Its cost is less than $k\sum_{T\in\ensuremath{\mathcal{Q}}_t} \cost(\ensuremath{\mathcal{I}}^T)$, and each
$T\in \ensuremath{\mathcal{Q}}_t$ is a feasible label for its single top-level interval $I$.
The common refinement $\ensuremath{\mathcal{I}}$ has more than $n(d,t)$ minimal feasible $t$-tuples,
so by Theorem~\ref{thm:dichotomy-d}, $Q_1=U$.
This implies that the configuration $A_d$ of the heaviest servers of $\ALG$
during this phase
is also a feasible label for $I$, and therefore
\[
\Delta\SWF(A_d) \leq \cost(\ensuremath{\mathcal{I}}) + W_{d-1}
< k(n_d+1) w_d/(4kn_d)^t + W_{d-1}
\leq \frac14 (1+1/n_d) \cdot \frac{w_d}{(4kn_d)^{t-1}} + W_{d-1}.
\]
This is smaller than $w_d/(4kn_d)^{t-1}$, because $W_{d-1}$ is less than
$w_d/(20kn_d)^{k_d}$.
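Indeed, $\frac14(1+1/n_d) \leq \frac12$ since $n_d \geq 1$, and, using $1 \leq t \leq k_d$,
\[
W_{d-1} \;\leq\; \frac{w_d}{(20kn_d)^{k_d}}
\;\leq\; \frac{w_d}{20\,(4kn_d)^{t-1}},
\]
so the two contributions sum to strictly less than $w_d/(4kn_d)^{t-1}$.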
However, lemmas \ref{lem:d_phase_start_end} and \ref{lem:swf_wf_dlev}
imply that
$\Delta\SWF(A_d) \geq w_d/2 - W_{d-1}$, which gives a contradiction.
\qed
\paragraph*{Proof of Theorem~\ref{thm:ubd}.}
We prove the theorem by induction on $d$. For $d=1$, we have the classical paging
problem, for which $\ALG$ is known to be
$O(k_1)$-competitive; see \cite{ST85}.
{\em Competitive ratio.}
Since $\cost(\OPT) = M_T/k$, it is enough to compute the ratio between
$\cost(\ALG)$ and $\Delta M$ during one group of $a_d$ phases, which accounts
for the factor $1/k$ in the competitive ratio.
The case where the last group contains fewer than $a_d$ phases can be handled similarly to the proof of Theorem~\ref{thm:ub}.
By the main lemma~\ref{lem:main-d}, we get the following recurrence.
\begin{equation}\label{eq:R_d-rec}
\frac1k r_d \leq \frac{2a_d\,r_{d-1}\,k_d\,w_d}{w_d/(10kn_d)^{k_d}}
+ \frac{r_{d-1} \Delta M}{\Delta M}
\leq a_d^3 r_{d-1}.
\end{equation}
{\em Strict competitive ratio.}
It is enough to perform the same case analysis as in the proof of Theorem~\ref{thm:ub}.
Applying the corresponding variants of the technical lemmas
(\ref{lem:d_phase_start_end}, \ref{lem:swf_wf_dlev}),
it can be shown that in all of those cases, the ratio between the cost of the
algorithm and the cost of the adversary is much smaller than $a_d^3\,r_{d-1}$.
{\em Calculating the recurrence.}
Let us assume that $r_{d-1} \leq 2^{12dk^3 \prod_{j=1}^{d-1}(k_j+1)}$,
and recall that $a_d = (k_d^3\,n_d)^{k_d}$, where
$n_d = n(d,k_d)$ satisfies
$\log n(d,k_d) \leq 4dk^2 k_d \prod_{j=1}^{d-1}(k_j+1)
\leq 4dk^3\prod_{j=1}^{d-1}(k_j+1)$.
Therefore, taking the logarithm of \eqref{eq:R_d-rec}, we get
\[ \log r_d \leq \log k + 9k_d \log k_d
+ 3k_d\cdot 4dk^3 \prod_{j=1}^{d-1}(k_j+1)
+ 12dk^3 \prod_{j=1}^{d-1}(k_j+1).
\]
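The middle terms arise from expanding $\log a_d^3$ using $a_d = (k_d^3\,n_d)^{k_d}$:
\[
\log a_d^3 = 3k_d\big(3\log k_d + \log n_d\big)
= 9k_d\log k_d + 3k_d\log n_d
\leq 9k_d\log k_d + 3k_d\cdot 4dk^3 \prod_{j=1}^{d-1}(k_j+1),
\]
while $\log k$ and the inductive bound $\log r_{d-1} \leq 12dk^3 \prod_{j=1}^{d-1}(k_j+1)$ contribute the remaining terms.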
The sum of the last two terms equals
$(k_d+1)\cdot 12dk^3 \prod_{j=1}^{d-1}(k_j+1)$.
Moreover, the sum of the first two terms is
smaller than $12k^3$. Therefore, we get the final bound
\[\pushQED{\qed}
r_d \leq 2^{(k_d+1)\cdot 12(d+1)k^3 \prod_{j=1}^{d-1}(k_j+1)}
\leq 2^{12(d+1)k^3 \prod_{j=1}^d(k_j+1)}.\qedhere
\popQED
\]
\section{Concluding Remarks}
\label{sec:conc}
There are several immediate and longer-term research directions.
First, it seems plausible that, using randomization, a singly exponential (i.e.,~logarithmic in the deterministic bound) competitive ratio against oblivious adversaries can be achieved. We are unable to show this, since our loss factor from Lemma~\ref{lem:egalitarians_cost} is much higher due to the refinement technique.
Another natural question is to consider weighted $k$-server for more general metrics. As discussed in Section~\ref{sec:rel_work}, nothing is known even for the line beyond $k=2$. Obtaining any upper bound that is only a function of $k$ would be very interesting, as it should lead to interesting new insights on the generalized work-function algorithm (which seems to be the only currently known candidate algorithm for this problem).
Finally, the generalized $k$-server problem, described in Section~\ref{sec:rel_work}, is a far-reaching generalization of the weighted $k$-server problem for which no upper bound is known beyond $k=2$, even for very special and seemingly easy cases. For example, when all metrics are uniform, Koutsoupias and Taylor \cite{KT04} showed a lower bound of $2^k-1$, but no upper bounds are known.
We feel that exploring this family of problems should lead to very interesting techniques for online algorithms.
\section*{Acknowledgments}
We are grateful to Ren\'e Sitters for first bringing the problem to our attention.
We would like to thank Niv Buchbinder, Ashish Chiplunkar and Janardhan Kulkarni for several useful discussions during the initial phases of this project.
Part of the work was done when NB and ME were visiting the Simons Institute at Berkeley and we thank them for their hospitality.